ChatGPT is one example of how artificial intelligence has advanced, raising concerns about its regulation. Credit: TNS/Dreamstime

Artificial intelligence experts on Long Island and elsewhere in New York are urging regulation of the rapidly emerging technology, days after a statement by industry leaders warned it could pose a “risk of extinction” for the human race.

The statement, released last week, outlines no specific doomsday scenarios or remedies. But it signals that experts’ anxieties over AI’s emergence have galloped past recent popular discourse about worker displacement and student cheating.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.

Those words, posted to the website of the nonprofit Center for AI Safety, appear alongside the names of hundreds of scientists, academics and executives from tech companies such as Google, Microsoft and OpenAI, which have in recent months released online chatbots capable of conversing and generating readable text, even images and video. In March, technology leaders including Elon Musk had called for a pause in the development of some advanced AI systems.

WHAT TO KNOW

  • Hundreds of academics and tech executives signed a statement last week warning that AI poses a “risk of extinction.”
  • Risks include weaponization and massive-scale misinformation.
  • Experts say the technology is already in use and is developing rapidly.

AI is already used in a range of industrial applications, such as predictive maintenance and quality control, and in medical applications, including diagnosis and imaging.

Areas of concern, also listed on the Center for AI Safety website, include obvious ones, such as weaponization of the technology, and others that might strike a nonspecialist as belonging in the realm of science fiction. Take the risk of “emergent goals”: AI models demonstrate “unexpected, qualitatively different behavior as they become more competent.” Goals such as self-preservation may arise, along with latent capabilities that surface only after deployment.

For now, though, “the bigger question … is how the technology could be used by people with bad intentions,” said Steven Skiena, director of Stony Brook University’s Institute for AI-Driven Discovery and Innovation, who was not among the signatories. “This is where you get into concerns about global risks.”

The bigger question now "is how the technology could be used by people with bad intentions,” said Steven Skiena, director of AI-Driven Discovery & Innovation at Stony Brook University, shown in 2019. Credit: Barry Sloan

AI could be used to disseminate misinformation and disinformation at massive scale, he said. “Imagine what happens to the populace if it doesn’t know it can trust the news … Imagine the risks of targeting people with false information through all the communication channels.”

ChatGPT “is not going to make a nuclear device,” said Skiena, who specializes in natural language processing, the branch of computer science that gives the bot and others like it their human-seeming ability to understand text and spoken words.

But he was struck by the pace of development, which is far beyond what he saw 10 or even three years ago. “If you have learning systems” that train themselves, he said, “there suddenly becomes a point where they can get much better than you imagined, and this can happen pretty quickly … We should be thinking of ‘rules and guardrails.’ ”

Humans have adapted themselves to other technologies that may have seemed disturbingly disruptive when they were introduced, such as the power loom in 19th-century Britain or the 20th-century nuclear bomb. But our experience with AI may be different because “we don’t know what we don’t know,” said Sarah Kreps, a political scientist and director of the Cornell Tech Policy Institute at Cornell University, who signed the statement.

“The worry that some people have now is how quickly exponential the growth is,” Kreps said. “At this rate, there are scenarios that could happen in five or 10 years that we haven’t thought about.”

Chief among them, for Kreps, is weaponization of the tech. Already, the Center for AI Safety notes on its website, researchers have repurposed an AI model trained for drug development to design potential biochemical weapons. A separate experiment suggested that not much human guidance might be needed; researchers prompted an AI program “to autonomously conduct experiments and synthesize chemicals in a real-world lab.”

An artificial intelligence program producing artwork is demonstrated at the Micro-Star International Co. booth during the Taipei Computex expo in Taipei, Taiwan, on May 30. Credit: Bloomberg/I-Hwa Cheng

Automating kinetic weapons systems, such as nuclear weapons or targeting systems that could override human decision-making, could open a new set of perils. “Now we’re automating the same weapons we spent decades trying to regulate with arms control,” she said.

Whatever policies the United States and its allies might make about keeping a human hand “in the loop” might be insufficient, she said. “Not everyone plays by the same rules, and if other countries play by different rules, then we need to imagine a world where these harms can come to pass.” 

In a May 30 article in the magazine “Foreign Affairs,” Bill Drexel and Hannah Kelley warned that the use of AI by China and other authoritarian nations already raises military and ethical concerns. State systems of “surveillance, suppression and control” have been used against targets including China’s Uyghur minority, they write, but AI failure is also a worrisome possibility. Partly because authoritarian states are engineered to protect the government from public criticism, they are “far more prone to systemic missteps that exacerbate an initial mistake or accident.” That puts China, which invests tens of billions of dollars annually in the technology, on a “collision course with the escalating dangers of AI,” the authors write.

Other global concerns, like atomic power and climate change, have been met if not mastered by an array of international partnerships, treaties and regulations. But “there isn’t a similar entity advising the world’s nations on the hazards, risks and solutions attending explosive changes in the Earth’s information environment,” Andrew Revkin, another signer and director of the Initiative on Communication Innovation and Impact at the Columbia Climate School, wrote on his Substack newsletter last week.

“It takes time to build the diplomatic and advisory tools that society will need to intelligently navigate” the technology and its implications, he said in an interview. 

Xingzhi Guo, who did doctoral work in AI under Skiena but is soon bound for Seattle to work for Amazon, said that much of his daily interaction with the technology was mundane but powerful: “It can help me finish some mechanical things, such as plotting graphs, using a programming language or adjusting document format,” in seconds. 

Xingzhi Guo worries about misinformation. Credit: Xigao Li

Guo said he spends little time worrying about “some robot killer walking around,” à la the “Terminator” movies, but does share Skiena’s worries about misinformation and the need for regulation. “Large-scale scams, or intended or unintended false information — these do have long-term consequences, undermining our trust,” he said, “and our trust is a fundamental thing to continue our civilization.”
