If authorities let it be known to Big Tech companies that their algorithms would be at risk if they misused customer data, privacy concerns would probably be taken more seriously. Credit: SOPA Images / LightRocket via Getty Images

If you haven't heard about the rise and fall of the photo app Ever, I'd suggest paying attention. Its story illustrates how the government, if it wanted to, could compel Big Tech companies such as Google and Facebook to respect people's privacy.

Like many cloud services, Ever offered users a place to store their photos. It then went a step further, using those photos to train a facial-recognition algorithm, which it marketed to law enforcement agencies and other potential clients. Some Ever users felt that their privacy had been violated, and the Federal Trade Commission alleged that the company, Everalbum, had acted deceptively by employing face recognition without customers' knowledge and by failing to delete their photos when they deactivated their accounts.

What's really interesting are the terms of the settlement reached last week. The settlement doesn't just require Everalbum to delete the photos in question and obtain consumers' consent before using face recognition. The company must also delete any algorithms that it developed with the photos and videos it obtained through the app (which was shut down last year).

The FTC's focus on the algorithms could set a powerful precedent. In the world of artificial intelligence, people's data are just the raw material: for Google, search terms and ad clicks; for Facebook, the posts people read and how long they're engaged; for Amazon, what people buy and how they find it. The companies then use that data to update their algorithms — daily, hourly or even every minute — to attract and generate profit from ever more people. The algorithms are the core of the product. They contain the full accumulated knowledge, including the newest links, the latest viral videos and the hottest new products.

So when the FTC fines Facebook $5 billion for misusing user data, as it did in 2019, that's maybe expensive but far from fatal. The most valuable assets — the algorithms that Facebook developed from the misappropriated data — remain intact. Like the bodies of euthanasia patients in the dystopian thriller "Soylent Green," people's information has already been processed into the final product, ready to be fed to the next in line.

But what if authorities required Facebook to delete the offending parts of the algorithm? What if the company had to revert to an earlier version, before it started misusing the data? The AI would be completely out of touch: Imagine Facebook serving up articles from before the 2016 election. Retraining without the missing information would require a monumental effort, severely screwing up the business model for some time.

Therein lies a potent weapon. If authorities let it be known that they'll be coming after the algorithms the next time they catch someone misusing data, tech companies will probably take privacy concerns a lot more seriously.

Cathy O'Neil is a Bloomberg Opinion columnist.
