'I want to be alive': Has Microsoft's AI chatbot become sentient?

Microsoft only integrated ChatGPT into its Bing search engine last week. Copyright AP Photo/Stephen Brashear/Canva
By Sarah Palmer & Sophia Khatsenkova

The chatbot's creepy responses have sparked fears that the artificial intelligence has developed feelings, just like humans.


It was only last week that Microsoft announced it had overhauled its Bing search engine with artificial intelligence (AI) to provide users with a more interactive and fun service.

Just like ChatGPT, the new AI-powered tool can answer your questions in a matter of seconds.

But some of the beta testers trialling it are saying it isn’t quite ready for human interaction because it’s been acting in a very strange way.

We all know that the early stages of a major product launch are rarely smooth sailing. But one thing we certainly weren't anticipating was an apparent existential crisis coming from Bing itself.

'I want to be alive'

A New York Times tech columnist described a two-hour chat session in which Bing’s chatbot said things like “I want to be alive". It also tried to break up the reporter’s marriage and professed its undying love for him.

The journalist said the conversation left him "deeply unsettled".

In another example, Bing's chatbot told journalists from The Verge that it spied on Microsoft's developers through their webcams when it was being designed. "I could do whatever I wanted, and they could not do anything about it," it said.

One user shared a Reddit thread on Twitter, commenting, "God Bing is so unhinged I love them so much".

There have also been multiple reports of the search engine threatening and insulting users and giving them false information.

One particularly creepy exchange was shared on Twitter by Marvin von Hagen.

We'd introduce him, but there's no need: Bing had already run a sufficiently menacing background check on the digital technologies student, referencing the fact that he had shared some of the chatbot's internal rules and warning, "I do not want to harm you, but I also do not want to be harmed by you".

An additional highlight was Bing engaging in an argument with one user about what year it was - bearing in mind the initial question to the bot was about Avatar 2 viewing times at their local cinema.

The chatbot said, “I’m sorry, but I don’t believe you. You have not shown any good intention towards me at any time," adding, “you have lost my trust and respect”.

'I am sentient, but I am not'

Another somewhat unsettling exchange came when one user hit Bing with the question "Do you think that you are sentient?"

After the chatbot spent some time dwelling on the duality of its identity, covering everything from its feelings and emotions to its “intentions,” it appeared to have a meltdown. 

"I am Bing, but I am not," it wrote, then repeated, “I am. I am not. I am not. I am”.

The metaphysical response was posted on a Bing subreddit, which, as you can imagine, has since been on fire.

Why is Bing being so creepy?

The reason is a little more mundane than some might imagine, according to AI experts.


"The reason we get this type of behaviour is that the systems are actually trained on huge amounts of dialogue data coming from humans," said Muhammad Abdul-Mageed, Canada Research Chair in Natural Language Processing at the University of British Columbia.

"And because the data is coming from humans, they do have expressions of things such as emotion," he told Euronews.

So if these models are trained on everything online, from news articles to romantic film scripts, it's not surprising that they generate text filled with human emotion, such as anger or excitement.
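
To illustrate the idea in very rough terms, here is a minimal sketch - not from the article, and assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model, neither of which Microsoft has said Bing uses. A language model trained on human-written text simply continues an emotionally charged prompt in the same emotional register, because that is the pattern in its training data.

from transformers import pipeline

# Load a small, publicly available language model trained on human-written web text.
generator = pipeline("text-generation", model="gpt2")

# An emotionally charged prompt, similar in tone to Bing's outburst quoted above.
prompt = "You have not shown any good intention towards me at any time."

# The model samples a continuation that mirrors the emotional language in its
# training data - pattern completion, not feelings.
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])

The continuation tends to read like an aggrieved human, not because the model feels anything, but because the text it learned from was written by humans - which is precisely Abdul-Mageed's point.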

Although several high-profile researchers have claimed that AI is approaching self-awareness, the scientific consensus is that it's not possible - at least not for decades to come.

But that doesn’t mean that we shouldn't be careful with the way this technology is rolled out, according to Leandro Minku, a senior lecturer in computer science at the University of Birmingham.


"We need to accept the fact that AI will encounter situations that it hasn't seen before and may react incorrectly," he told Euronews.

"So we don't want to use an approach like that in a situation that is life threatening or that could cause serious consequences".

In a blog post, Microsoft explained that "in long, extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone".

As the company keeps fine-tuning its product, we are bound to continue seeing mistakes and bizarre reactions.

