This month, one of the most important intelligence documents about Russian interference in the U.S. election emerged. But it didn’t come from the National Security Agency or the House Intelligence Committee. It was published by Facebook.
Facebook’s report on “Information Operations” was the company’s first public acknowledgment that political actors have been influencing public opinion through the social networking platform. The company says it will work to combat these information operations, and it has taken some positive steps. It removed some 30,000 fake accounts before the French election last month. It has purged thousands more ahead of the upcoming British election.
But more important, the report reveals that while we are all talking about “fake news,” we should also be talking about the algorithms and fake accounts that push bad information around.
Facebook deployed a “cross-functional team of engineers, analysts and data scientists” as part of a detailed investigation into possible foreign involvement in the U.S. election. They found fake groups, fake likes and comments, and automated posting across the network by unnamed malicious actors. The report’s authors claim that their investigation “does not contradict” the findings made in the U.S. Director of National Intelligence report published in January, which blamed Russia for a sweeping online influence campaign conducted in the lead-up to the election.
Essentially, this confirms what researchers have suspected for several years: Large numbers of fake accounts have been used to strategically disseminate political propaganda and mislead voters. These accounts draw everyday users into “astroturf” political groups disguised as legitimate grass-roots movements. Unfortunately, Facebook’s refusal to collaborate with scientists and share data has made it difficult to know how many voters are affected or where this election interference comes from.
It is incredibly hard to study the impact of fake news and algorithms on public life. Through our project at the University of Oxford, we have been able to demonstrate how similar campaigns of misinformation work on Twitter. We have also been able to compare the trends internationally. During the recent French election, we found that people interested in French politics were posting one fake news story for every two produced by a professional journalist. During an uncontroversial presidential election in Germany this year, German users were sharing one fake news story for every four credible stories. But when we looked back and investigated the content being shared by users in Michigan in the lead-up to the 2016 election, we found a one-to-one ratio: a junk news story for every reputable one.
Facebook, of course, does not have the same issues with data access. It has the metadata to identify precisely which accounts were created, where they operated and what kinds of things those users were up to during the U.S. election. Its data scientists could probably provide some insights that the intelligence services cannot.
The company argues that fake accounts have been participating in only a small amount of the overall activity around politics and public life in the United States. But even a small percentage of total Facebook activity, if concentrated strategically, could be influential. Was the activity mostly in swing states? Did it occur in the months of the Republican primaries and originate with accounts seeded from Russia? Or did fake-news and fake-account activity peak in the three days before the election?
If there was collusion between the Trump campaign and Russian influence operations, Facebook may be able to spot that, too. In many ways, massive coordinated propaganda campaigns are just another form of election interference. If Facebook has data on this, it needs to share it. The House Intelligence Committee should call Facebook to testify as part of its investigation.
While the outcome of the U.S. election is settled, major elections are coming up around the world. Facebook needs to tell us what it knows and demonstrate that it can prevent interference with democratic deliberation.
Philip N. Howard is a professor of internet studies at the Oxford Internet Institute and Balliol College at the University of Oxford. Robert Gorwa is a researcher with the Project on Computational Propaganda at the University of Oxford. They wrote this for The Washington Post.