
Facebook: We're better at policing nudity than hate speech

The logo for Facebook appears on screens at the Nasdaq MarketSite in Manhattan's Times Square on March 29. Photo Credit: AP / Richard Drew

Getting rid of racist, sexist and other hateful remarks on Facebook is more challenging than weeding out other types of unacceptable posts because computer programs still stumble over the nuances of human language, the company revealed Tuesday.

Facebook's self-assessment showed its policing system is far better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda. Automated tools detected 86 percent to 99.5 percent of the violations Facebook identified in those categories.

For hate speech, Facebook's human reviewers and computer algorithms identified just 38 percent of the violations. The rest came after Facebook users flagged the offending content for review.

Facebook also disclosed that it disabled nearly 1.3 billion fake accounts in the six months ending in March. Had the company failed to do so, its user base would have swelled beyond its current 2.2 billion. Fake accounts have gotten more attention in recent months after it was revealed that Russian agents used them to buy ads to try to influence the 2016 U.S. elections.

Even after all that disabling, though, Facebook has said that 3 percent to 4 percent of its monthly active users are fake, meaning up to 88 million fake accounts slip through.

The report was Facebook's first breakdown on how much material it removes for violating its policies. The statistics cover a relatively short period, from October 2017 through March of this year, and don't disclose how long it takes Facebook to remove material violating its standards. The report also doesn't cover how much inappropriate content Facebook missed.

"Even if they remove 100 million posts that are offensive, there will be one or two that have some really bad stuff, and those will be the ones everyone winds up talking about on the cable-TV news," said Timothy Carone, who teaches about technology at the University of Notre Dame.
The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users. The content screening has nothing to do with privacy protection, though, and is aimed at maintaining a family-friendly atmosphere for users and advertisers.