
The lawsuits against Facebook don't go far enough

Facebook CEO Mark Zuckerberg speaking at the Paley Center in New York on Oct. 25, 2019. Credit: AP/Mark Lennihan

This week's antitrust cases against Facebook by the Federal Trade Commission, 46 states and two other jurisdictions are welcome, but hardly surprising. They translate the clear findings of the recent report from the Democratic-led House antitrust subcommittee on the excessive monopoly power of tech behemoths Google, Amazon, Facebook and Apple into legal claims. (Amazon chief executive Jeff Bezos owns The Washington Post.) And they mirror the Justice Department's October lawsuit against Google. But this litigation rests on a narrow antitrust model of how digital platforms should be regulated, and so the complaints fail to address the most important problem: the negative social side effects of Big Tech's basic business model, whose design facilitates the spread of toxic content across its news feeds.

Meanwhile, in Europe, regulatory authorities are proposing a broader approach that would hold Facebook and other platforms accountable for failing to police illegal content, with the sanction of huge fines. This at least looks beyond economic regulation to raise the issue of social consequences. But if it is limited to punishing platforms for not rooting out illegal content, the proposed European legislation will still fall short of addressing the core issue.

(Facebook says people and businesses use the company's platforms because they "deliver the most value.")

Two decades ago, the United States, Great Britain and many other societies delegated to digital platforms the redesign of the spaces where human beings meet, and they ignored the possible social consequences. The result today is a media ecosystem where corporations like Facebook — increasingly dependent on engagement and outrage — shape how our ideas and information circulate.

No one exactly planned this. But when connected computer devices became embedded in daily life, and when the Internet shifted in the early 1990s from being a public infrastructure to a commercial one, the conditions already existed for a few hugely successful platforms to emerge as an anticompetitive force. In the 2010s, that manifested through Facebook's acquisitions of Instagram and WhatsApp, now under challenge in U.S. courts.

But what was not necessarily predictable, and what for now remains outside the sights of U.S. and European regulators, was the particular business model that came to dominate today's large digital platforms, such as Facebook. It is here we reach the true source of the damage to public and private life, both economic and social, that provoked these new legal actions.

In a recent report, we called that business model the "consumer Internet": It is what follows when the vast space of online human interaction becomes managed principally for profit. The model has three sides: data collection on the user to generate behavioral profiles; sophisticated algorithms that curate the content targeted at each user; and the encouragement of engaging — even addictive — content on platforms to hold the user's attention to the exclusion of rivals. The model is designed to do only one thing: maximize the profitable flow of content across platforms. Facebook, where one of us once worked, is its prime exponent.

The problem is not that Facebook makes profit — even large profits. The problem is that its increasingly interconnected apps and interfaces work to reconfigure everyday flows of social information to suit a business model that treats all content suppliers the same. For sure, Facebook did not intend this consequence. But when its business model sought to maximize content traffic by whatever means, and there happened to exist disinformation merchants who also just wanted to maximize traffic, those goals were destined to interact in a marriage made in hell.

And here is the paradox for contemporary societies and their economic regulators: Whatever Facebook's pro-social claims and however fairly it competes, its goals and those of bad social actors remain in deep and largely hidden alignment. And since there is no social world without some bad actors, this alignment has consequences. It is time that, as a society, we began to address them.

The risks of a computer-based infrastructure of social connection were predicted as long ago as 1948 by the founder of cybernetics, Norbert Wiener, who wrote: "It has long been clear to me that [with] the modern ultrarapid computing machine . . . we were here in the presence of [a] social potentiality of unheard-of importance for good and for evil." Wiener's unease was ignored in the headlong rush to develop the Internet commercially, but it is not too late, even now, to heed his warning.

Societies, through their regulators and lawmakers, must renegotiate the balance of power between the corporate platform and the consumer. A new digital realignment is needed. And Facebook is as good a place to start as any.

Whatever the outcomes of current lawsuits and the penalties that result from new European legislation, government must go further and consider radical reform of how the market behind digital media platforms such as Facebook operates.

First, consumers should be enabled, by law, to exercise real choice about how data that affects them is gathered, processed and used by platforms, including a real choice to use those services without data being gathered at all. Locking in this last point would strike at the heart of the privacy-undermining impacts of Facebook's business model: It should apply not just to the current Facebook, but to whatever smaller platforms result from its breakup.

In practice, this could mean giving users a clear list of the kinds of information held about them, both raw personal data and the algorithmic inferences Facebook draws about their individuality, with selection checkboxes and a deletion button. This functionality could be accompanied by clear information about what Facebook does with users' data and explicit options to opt out of having it processed by the company.

Second, Facebook should be required to uncover its business model's full workings, revealing exactly where that model creates advantages for bad social actors and how Facebook gains from this. Such a requirement could take the form of radical transparency over the algorithms Facebook uses to profile users, curate their social content and target ads at them.

Third, just as proposed in Europe, U.S. regulators should impose penalties on Facebook and other Big Tech platforms when they circulate criminal content. But regulators should go further, requiring Facebook, for example, to uncover the economic incentives that underlie its evolving relations with political actors, including bad actors at home and abroad, by publishing advertising revenue by client, along with specific information about ad-targeting parameters, target audiences, clickthrough rates and spending by each client.

The civic debate that will result is for the future, but a healthy democracy cannot do without it. Decades back, societal outrage led to chemical giants being held accountable for the toxic byproducts of their production processes; a similar debate is needed today for the digital environment. But we will never have that debate unless Facebook and other platforms are compelled to expose to public inspection the workings of their business model as it affects the social and political sphere.

Do nothing, and we risk turning a blind eye to an extraordinarily lucrative corporate world that incentivizes social harm. Yes, this will require government and regulators to revise their remits. But now that the dangers are becoming clear, why would any sane society want to do otherwise? Surely the possibility of salvaging a future citizens' Internet from the wreckage of today's consumer Internet is worth the effort.

Ghosh is author of "Terms of Disservice" and leads the Digital Platforms and Democracy Project at the Harvard Kennedy School. He is a former adviser at Facebook and in the Obama White House. Couldry is a professor of media, communications and social theory at the London School of Economics and a faculty associate at the Berkman Klein Center for Internet and Society at Harvard. This piece was written for The Washington Post.
