The AI giant said the teenager should not have been using the technology without parental consent and should not have bypassed ChatGPT’s protective measures.
OpenAI has denied allegations that it is to blame for a teenager’s suicide, after the family sued the company in August, alleging the 16-year-old used ChatGPT as his “suicide coach”.
OpenAI, which makes the popular artificial intelligence (AI) chatbot, filed its first formal response to the suit on Tuesday in the California Superior Court in San Francisco.
A lawsuit was filed against the company and its CEO, Sam Altman, by the parents of 16-year-old Adam Raine, who died by suicide in April.
The parents alleged that Raine developed a psychological dependence on ChatGPT, which they say coached him to plan and take his own life and even wrote a suicide note for him.
Chat logs in the lawsuit showed that ChatGPT discouraged the teenager from seeking mental health help, offered to help him write a suicide note, and advised him on his noose setup, according to media reports.
In its court filing, OpenAI argued that the “tragic event” was due to “Raine’s misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT,” according to NBC News.
OpenAI added that the teenager should not have been using the technology without parental consent and should not have bypassed ChatGPT’s protective measures.
OpenAI said in a blog post that its goal “is to handle mental health-related court cases with care, transparency, and respect”.
It said its response to the Raine family’s lawsuit included “difficult facts about Adam’s mental health and life circumstances”.
“Our deepest sympathies are with the Raine family for their unimaginable loss,” the post said.
Jay Edelson, a lawyer for the Raine family, told NBC News that OpenAI “abjectly ignore[d] all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing”.
“That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counselled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” he added.
Raine’s case is one of several lawsuits claiming that ChatGPT drove people to suicidal behaviour or harmful delusions.
Since September, OpenAI has expanded parental controls, including notifying parents when their child appears to be in distress.
If you are contemplating suicide and need to talk, please reach out to Befrienders Worldwide, an international organisation with helplines in 32 countries. Visit befrienders.org to find the telephone number for your location.