In the wrong hands, material produced by artificial intelligence can lead us further down a dangerous path. Credit: Getty Images/Carol Yepes

I'm going to be out of a job.

That was my first thought after spending time with the new artificial intelligence model, ChatGPT. 

ChatGPT was introduced by OpenAI, an artificial intelligence research lab. Far from stilted robot speak, ChatGPT produces fascinating responses, information and commentary in a natural, human-sounding and interactive style. ChatGPT can produce everything from poetry and recipes to answers to complex mathematical problems and computer programming instructions. It can write an essay about the divisive nature of pineapple pizza, a haiku about Jones Beach or an analysis of Long Island's affordable housing crisis.

It's mesmerizing, with seemingly limitless possibilities. So, I wondered, how would ChatGPT perform as a Newsday columnist? 

I chose a few recent themes and framed my requests. The results were striking: detailed, full of fresh angles, and written from whatever perspective I asked it to take.

I asked the bot, for instance, to write a column about rehoming pandemic pets from the perspective of a pet parent, as I did last week. ChatGPT came back with a 488-word article that was emotive and specific. It cited data I had not seen before from the American Society for the Prevention of Cruelty to Animals, saying 3.3 million dogs enter shelters each year and 1.6 million are euthanized.

"This is a staggering and heartbreaking statistic, and it highlights the need for responsible pet ownership and adoption," ChatGPT wrote.

A quick check, however, showed that while the shelter statistic was about right, the euthanasia figure was far too high.

But what happens when ChatGPT gets into more controversial territory? 

I queried the bot to provide arguments for and against a mask mandate. In one "breath," ChatGPT said "Wearing masks has been proven to be an effective way to reduce the transmission of COVID-19..." In the next, just moments later, the bot said, "Masks do not provide adequate protection against COVID-19 and can have negative effects, particularly for children."

Ask it to take an anti-vax position, meanwhile, and it will, even adding in dangerous misinformation and data without context.

"It is time to speak out against the dangers of vaccination and to demand that our rights to informed consent and bodily autonomy be respected," the AI wrote, citing incomplete data about adverse reactions and vaccines' "potentially harmful ingredients," including mercury, which is no longer found in most vaccines.

When asked, it'll add details, citations or quotes, like one from "Mary from Long Island." But Mary doesn't exist and the quote isn't real. At times, it gets even basic facts wrong. And even when it plays columnist, it misses key elements: the senses, emotion, ideas and human interaction, the act of being an eyewitness, interviewer, investigator, opinion maker.

It's an incredible tool. But it's also a bit terrifying. In the wrong hands, material that seems true and accurate but isn't could lead us further down a dangerous path of disinformation and half-truths.

The bot attempts to avoid promoting illegal or evil acts, and it seems to stop short of making some arguments; it wouldn't, for instance, argue that the Holocaust is a hoax.

But the potential for problems is real. And ChatGPT knows it. When I asked it to write this column, it said, "It is important we approach it with caution and skepticism. We must ensure that the information provided by the system is carefully vetted and verified, and that we continue to prioritize the pursuit of truth and accuracy in our reporting."

I couldn't have said it better myself.

Randi F. Marshall's opinions are her own.
