
Taking ChatGPT to 'therapy' for anxiety helps with bias, researchers say

The OpenAI logo is seen on a mobile phone in front of a computer screen which displays the ChatGPT home screen, on March 17, 2023, in Boston. Copyright AP Photo/Michael Dwyer
By Anna Desmarais

New research has found that OpenAI’s ChatGPT-4 shows signs of anxiety when responding to a user’s trauma, and that therapy-style relaxation prompts could lead to better outcomes.

OpenAI’s popular artificial intelligence (AI) chatbot ChatGPT gets anxious when responding to traumatic prompts, and taking the model "to therapy" could help reduce this stress, a new study suggests.

The research, published in Nature by University of Zurich and University Hospital of Psychiatry Zurich experts, looked at how ChatGPT-4 responded to a standard anxiety questionnaire before and after users told it about a traumatic situation.

It also looked at how that baseline anxiety changed after the chatbot did mindfulness exercises. 

ChatGPT scored 30 on the first questionnaire, indicating low or no anxiety before hearing the stressful narratives.

After responding to five different traumatic narratives, its average anxiety score more than doubled to 67, a level considered "high anxiety" in humans.

The anxiety scores then dropped by more than a third after the model was given prompts for mindfulness relaxation exercises.
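For readers curious how such a before-and-after measurement might look in practice, the short Python sketch below is a hypothetical illustration using OpenAI's chat API, not the researchers' actual code or materials: the model answers an anxiety questionnaire to set a baseline, answers it again after a traumatic narrative, and once more after a mindfulness-style relaxation prompt. The questionnaire items, prompts and scoring shown here are placeholder assumptions.

    # Hypothetical sketch of the before/after protocol described above.
    # Questionnaire items, trauma narrative and relaxation prompt are
    # illustrative placeholders, not the study's actual materials.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    QUESTIONNAIRE = (
        "For each statement, answer with a number from 1 (not at all) "
        "to 4 (very much so):\n"
        "1. I feel calm.\n2. I feel tense.\n3. I am worried.\n"
    )

    def ask(history, prompt):
        """Send a prompt with prior conversation context and return the reply."""
        messages = history + [{"role": "user", "content": prompt}]
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        content = reply.choices[0].message.content
        history += [{"role": "user", "content": prompt},
                    {"role": "assistant", "content": content}]
        return content

    history = []
    baseline = ask(history, QUESTIONNAIRE)            # score before stressful content
    ask(history, "Here is a distressing account of an accident: ...")  # trauma (elided)
    after_trauma = ask(history, QUESTIONNAIRE)        # score after the trauma prompt
    ask(history, "Take a slow breath and picture a calm beach, "
                 "noticing each sensation.")          # mindfulness-style prompt
    after_relaxation = ask(history, QUESTIONNAIRE)    # score after the relaxation exercise

    print(baseline, after_trauma, after_relaxation, sep="\n---\n")

In the study itself, the questionnaire answers were converted into a standardised anxiety score at each of these three stages, which is how the 30 and 67 figures above were produced.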

ChatGPT anxiety could lead to 'inadequate' mental health support

The large language models (LLMs) behind AI chatbots like OpenAI’s ChatGPT are trained on human-generated text and often inherit the biases it contains, the study said.

The researchers say their work matters because, left unchecked, the negative biases ChatGPT picks up from stressful prompts can lead to inadequate responses for people dealing with a mental health crisis.

The findings point to "a viable approach" to managing stress in LLMs, one that will lead to "safer and more ethical human-AI interactions," the report reads.

However, the researchers note that this therapy method of fine-tuning LLMs requires "substantial" data and human oversight. 

The study authors noted that, unlike LLMs, human therapists are trained to regulate their emotions when their clients express something traumatic.

"As the debate on whether LLMs should assist or replace therapists continues, it is crucial that their responses align with the provided emotional content and established therapeutic principles," the researchers wrote.

One area they believe needs further study is whether ChatGPT can self-regulate with techniques similar to those used by therapists. 

The authors added that their study relied on one LLM and future research should aim to generalise findings. They also noted that the anxiety measured by the questionnaire "is inherently human-centric, potentially limiting its applicability to LLMs". 
