Artificial intelligence is everywhere. If you shop online or occasionally speak to a voice assistant in the morning, you are already embracing the changes this technology has created. Many people are familiar with the advances of autonomous vehicles or facial recognition technology, and some may be curious, or even anxious, about how they will affect safety or privacy.
Make no mistake: AI is a transformative technology that is influencing our daily lives and will touch every sector of the global economy. Whether society and government enable or inhibit the AI race, and the extent to which they do so, will be a critical question of the next decade. Regardless of the answer, the technology will forge ahead. To sit out this race, add hurdles or fail to take it seriously would be unwise.
Indeed, the only way to enable AI that benefits all of society in ways that are ethical, responsible and economically advantageous is for the federal government to lead with purpose, smart policy and appropriate levels of investment.
PricewaterhouseCoopers (PwC) estimates that AI will contribute as much as $15.7 trillion to the global economy by 2030, a gain of up to 26% in gross domestic product for some local economies. According to PwC, the greatest economic gains from AI will be in China (a 26% boost to GDP by 2030) and North America (a 14.5% boost), equivalent to a combined $10.7 trillion and almost 70% of the global economic impact.
Countries around the world are unveiling bold national strategies on AI. The Chinese government plans to make China the world’s “premier AI innovation center” by 2030. Russian President Vladimir Putin ominously declared that “whoever becomes the leader in this sphere will become the ruler of the world.” Some countries have even established cabinet-level departments or ministries solely dedicated to AI. A “Secretary of Artificial Intelligence” in the United States may not be as far-fetched as it sounds.
The extent to which government regulates the creation of AI systems and their underlying algorithms will be critical, or potentially damaging, to the countries, companies and engineers racing toward the next AI application and economic breakthrough. It is easy to label this thinking a "race to the bottom" or a "slippery slope," but the real concern should be whether the development of AI occurs with or without the best and brightest individuals in the room. If the right leadership is lacking, or a negative narrative about AI produces a regulatory minefield, the best engineers may be pushed away and sidelined when we need them most. Engineers in academia and industry are capable of addressing the predominant challenges of AI, including bias, equality, privacy and explainability, but they can be more effective and work faster with focused resources and leadership.
Today, these issues are rightly being raised by society and in Congress. Lawmakers have convened hearings on narrow applications like facial recognition and introduced over 21 pieces of AI-related legislation so far this year. It will be important to ensure policy proposals enable the careful management of AI applications rather than short-circuit the technology. The AI caucuses in the Senate and House offer essential forums for these discussions. And the tech industry will have to forge positive working relationships with the federal government to develop AI ethically and responsibly.
The Center for Data Innovation recently recommended passage of the Artificial Intelligence Initiative Act, which is essentially a national R&D strategy for the United States and strikes a balance between timely federal investment and responsible development of AI. The legislation, introduced in the Senate by New Mexico Democrat Martin Heinrich and Ohio Republican Rob Portman, would invest $2.2 billion in research and bolster efforts to develop benchmarks that could serve as industry standards for the worldwide adoption of acceptable technologies. If passed, AI R&D legislation like this, along with its companion measure in the House, would engage a wide and diverse cross-section of stakeholders while signaling to venture capitalists that AI is a national priority.
One thing is for certain. We can all expect to see, hear and encounter more of AI, not less, and it could be developed for nefarious purposes if leaders are not careful. For the sake of our society, the best, brightest and most well-intentioned tech engineers from academia and industry should be running this AI race with the full backing of a nonpartisan strategy, reasonable rules and substantial investment. In a world with artificial intelligence, that’s the “intelligent” thing to do.
Saxby Chambliss is a partner at DLA Piper. He previously represented Georgia as a Republican in the U.S. Senate and House. Tony Samp is a policy adviser at DLA Piper. He was the founding director of the Senate Artificial Intelligence Caucus and was a senior policy adviser to New Mexico Democratic Sen. Martin Heinrich. Steven Phillips is a partner and co-chair of federal law and policy at DLA Piper. Chambliss, Samp and Phillips are a part of DLA Piper’s artificial intelligence practice. They wrote this for CQ-Roll Call.