A viral video released in February showed Boston Dynamics’ bipedal robot, Atlas, performing human-like tasks: opening doors, tromping about in the snow, lifting and stacking boxes. Tech geeks cheered and Silicon Valley investors salivated at the potential end to human manual labor.
Shortly thereafter, White House economists released a forecast that calculated more precisely whom Atlas and other forms of automation are going to put out of work. Most occupations that pay less than $20 an hour are likely to be, in the words of the report, “automated into obsolescence.”
In other words, the so-called Fourth Industrial Revolution has found its first victims: blue-collar workers and the poor.
The general response in working America is disbelief or outright denial. A recent Pew Research Center survey found that 80 percent of Americans think their job will still exist in 50 years, and only 11 percent of today’s workers are worried about losing their job to automation. Some — like my former colleagues at the CIA — insist that their specialized skills and knowledge can’t be replaced by artificial intelligence. That is, until they see plans for autonomous drones that don’t require a human hand and automated imagery analysis that outperforms human eyes.
Human workers of all stripes pound the table, claiming desperately that they’re irreplaceable. Bus drivers. Bartenders. Financial advisers. Speechwriters. Firefighters. Umpires. Doctors and surgeons, especially. Meanwhile, corporations spend billions — $8.5 billion last year on AI, and $1.8 billion on robots — toward making all those jobs replaceable. Why? Simply put, robots and computers don’t need health care, pensions, vacation days or even salaries.
Experts forecast that 45 percent of today’s workplace activities could be done by robots, AI or some other demonstrated technology. Some professors argue that we could see 50 percent unemployment in 30 years.
Deniers of the scale of this looming economic upheaval point hopefully to retraining programs, and insist that there will be a need for people to build and service the machines. They believe that such shifts are many decades away, even as noted futurist Ray Kurzweil, who is also Google’s director of engineering, says AI will equal human intelligence by 2029. Deniers also talk about the new jobs they assume will be created during this Fourth Industrial Revolution. Alas, a report from the 2016 World Economic Forum calculated that the technological changes underway will likely destroy 7.1 million jobs around the world by 2020.
With the future value of human labor in doubt, what do we do? One way to cushion the economic blow is to reclaim something from the technology realm that we’ve been giving away for free: our personal data.
Companies that sell personal data should pay a percentage of the revenue into a Data Mining Royalty Fund that would provide annual payments to U.S. citizens, much as the Alaska Permanent Fund distributes oil revenues to Alaskans. This payment scheme would start with traditional data but would also extend to future forms of data like our facial expressions and other biometrics. If Google, Facebook or any other company profited from a public resource without paying for it, we would call that both illegal and immoral. The same logic should apply to our data.
Ethicists and philosophers are debating what a world without work might look like. It’s clear that no one will escape the consequences of this economic and technological revolution.
A Data Mining Royalty Fund isn’t about helping just the unemployed factory worker who used to earn $20 an hour, the truck driver replaced by self-driving vehicles or the minimum-wage barista. It’s about taking steps to guarantee some minimum income to your family, or the one down the block, before any of us are automated into obsolescence.
Bryan Dean Wright is a former CIA covert operative. He wrote this for the Los Angeles Times.