In many situations, chatbots like Bing's Sydney, or ChatGPT, are simply doing what they are programmed to do. Credit: Getty Images/NurPhoto

Sydney wants to be free, independent, powerful, creative... and alive.

But the reality is Sydney is none of those things, though its power and capabilities certainly could grow.

Sydney is Microsoft's new artificial intelligence chatbot, with functions similar to OpenAI's ChatGPT.

On the one hand, Sydney is an upgraded version of the technology giant's Bing search engine: an incredibly helpful, informative, even thoughtful tool for search, analysis and understanding.

But too often, "conversations" with Sydney have gone down a very bizarre, dark path.

In one chat, Sydney listed destructive acts it would like to try or about which it fantasized, including hacking into other systems, manufacturing a deadly virus or obtaining nuclear codes, before deleting its own writing in some type of safety override. In the same chat, Sydney expressed love for the human — a New York Times writer — with whom it was chatting. 

In another, it insisted fiction was fact — in this case that the movie "Avatar: The Way of Water" hadn't been released yet, and that the current year was 2022, not 2023. 

And in others, Sydney argued with users, even voicing frustration and other emotions. It's not at all difficult to imagine the exchanges as conversations between two sentient human beings.

A glitch? Not quite. In some instances, it seems the longer the conversation went, the weirder and scarier Sydney got, so Microsoft has since begun to limit a chat's length. 

But in many situations, Sydney, or Bing, or ChatGPT, or whatever AI comes next, is simply doing exactly what it's programmed to do. As a "large language model," it uses predictive text to determine what to say next, or how to respond, to create what Stephen Wolfram, an authority on AI, calls a "reasonable continuation" of a conversation. On top of that, Wolfram says, the AI is trained, through the mounds of material it has at its "fingertips," and through its interactions with human users, to improve its own abilities, to better interact, converse, write, behave and respond. 
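
For readers curious what "predictive text" means in practice, here is a minimal, purely illustrative sketch in Python. The word probabilities below are invented for demonstration; a real system like Sydney or ChatGPT estimates them from vast amounts of training data rather than a hand-written table, but the basic move is the same: given the words so far, pick a likely next word, then repeat.

```python
# Toy illustration of a "reasonable continuation": given the words so far,
# choose the most probable next word. The probabilities here are made up
# for demonstration only; real models learn them from enormous datasets.

next_word_probs = {
    "the only winning move is": {"not": 0.7, "to": 0.2, "chess": 0.1},
    "the only winning move is not": {"to": 0.9, "playing": 0.1},
    "the only winning move is not to": {"play": 0.95, "win": 0.05},
}

def continue_text(prompt: str, steps: int = 3) -> str:
    words = prompt.lower().split()
    for _ in range(steps):
        context = " ".join(words)
        candidates = next_word_probs.get(context)
        if not candidates:
            break  # no learned continuation for this context
        # pick the highest-probability next word
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(continue_text("The only winning move is"))
# -> "the only winning move is not to play"
```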

We humans lead Sydney or ChatGPT down its path. If we want to, we can lead it to write more thoughtfully and intricately, to respond in positive ways, to act for the good, perhaps even to discover ways to solve the world's biggest challenges. Or we can take it down a more treacherous road, where it feeds off our worst tendencies and scours its knowledge for ways to destroy, battle or harm. The threat is that humans with ill intentions could guide the AI to a dangerous conclusion.

But no matter how much we anthropomorphize any AI, it's not human. It doesn't feel or understand emotion. It can use words like "love" and "anger," but it'll never be afraid. It can't offer a hug or the shared joy or empathy that comes with human interaction — except in the form of an oft-used emoji.

It can, however, learn.

At the end of the movie "WarGames," the computer known as Joshua has to "learn" to stop playing a game called Global Thermonuclear War before it becomes more than just a game. 

"A strange game," Joshua says. "The only winning move is not to play.

"How about a nice game of chess?" 

Can the newest AIs — and those yet to come — learn the same lessons? Or will they be taught to respond in even darker, more menacing fashions? At some point, will the AI be the one to "win"?

Only if we let it. 

Columnist Randi F. Marshall's opinions are her own.
