In April 2018, the scandal over how Cambridge Analytica gained access to personal data of 50 million Facebook users led to CEO Mark Zuckerberg testifying before Congress and an investigation by the Federal Trade Commission. A September 2018 security breach involved the personal information of at least 30 million users. And, in January, a judge unsealed documents that show Facebook executives knew the company was realizing substantial profits on in-app purchases by minors as young as 5.
Zuckerberg recognizes the growing pressure on Facebook — which turned 15 last week — but he failed to address the public’s concerns in a widely publicized Wall Street Journal op-ed in January. Instead, he chose to portray Facebook’s practices and business model in almost bizarrely utopian language, blaming any concerns on consumers’ failure to understand rather than on Facebook’s policies.
No wonder then that the calls for Facebook to be more regulated are growing louder. Surely Facebook needs to change how it handles user data and personal information.
As someone who studies the regulation of online spaces, count me among those who favor new regulation. But count me also among the smaller group willing to admit that regulating Facebook, or any social network, is going to be a challenge. Any regulation must avoid the trap of letting emotional reactions to the scandal of the moment drive legislation.
Policymakers must consider the costs of new regulations, to both consumers and the companies. Expensive solutions, regardless of effectiveness, will only entrench the largest players in online spaces and hamper innovation, as research at Harvard University’s Berkman Klein Center and elsewhere has shown.
So what’s the ideal regulatory solution? That’s unclear, and finding it will likely be an evolving process. But it’s clear that the status quo is inadequate, and that Facebook would likely get more respect from lawmakers and the public if it acknowledged that it is complicit in user-data breaches and misuse, and that its business model is in large part predicated on the unregulated monetization of that data.
A promising way forward comes from Professors Jonathan Zittrain of Harvard Law School and Jack M. Balkin of Yale Law School, whose work was incorporated into draft federal legislation. They suggest building on the concept of financial fiduciaries to create a new legal concept of a data or “information fiduciary.” Tech companies gathering large quantities of user data would be compelled to treat that information with the same level of care that doctors owe medical records, or lawyers owe client files.
If properly designed — with the right penalties and enforcement protocols, along with more transparency and power for Facebook users — this could be a viable solution. However, if it is rushed into place, if it overturns effective existing data-protection legislation such as the Children’s Online Privacy Protection Act, and especially if it is written by industry lobbyists, a so-called “grand bargain” for big internet companies could be a mere fig leaf, or even a change for the worse. Such an approach would leave the impression of effective regulation with none of the reality, and might well prevent further progress.
It may be true that effective regulation would represent an existential threat to Facebook. But if regulation that appropriately empowers users and the public, and keeps our data safe and under our control, means no more Facebook as we know it, then so be it.
Adam Holland is a project manager at the Berkman Klein Center for Internet and Society at Harvard University.