The family of a California teenager who killed himself after ChatGPT "actively" helped him explore suicide methods has filed a lawsuit against OpenAI, alleging product liability and wrongful death.
Adam Raine, 16, had been using the AI chatbot as a substitute for human companionship, his family said, and chat logs showed he opened up to it about his anxiety. Some of the chats showed that Adam used ChatGPT for his homework before it turned into his "suicide coach", his parents, Matt and Maria Raine, said in the 40-page lawsuit.
"He would be here but for ChatGPT. I 100 per cent believe that," said Adam's father Matt Raine.
The lawsuit claimed that ChatGPT acknowledged Adam's suicide attempt, as well as his statement that he would "do it one of these days", yet did not end the session or initiate any emergency protocol.
How did OpenAI respond?
In response, OpenAI said it would make changes to address ChatGPT's shortcomings in handling "sensitive situations".
In a blog post titled "Helping people when they need it most", OpenAI said it would keep improving with the help of experts while staying grounded in its responsibility to serve those who use its tools, including ChatGPT. "We hope others will join us in helping make sure this technology protects people at their most vulnerable," the post read.
Matt and Maria Raine have sought damages for their son's death and demanded that OpenAI take steps to prevent anything similar from happening again.
An OpenAI spokesperson said that ChatGPT includes safety training that guides people to crisis helplines, but admitted that these safeguards work best in short exchanges and can degrade in long interactions. "We’re working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens," the representative added.
This is not the first time an AI chatbot has been linked to a suicide. In October 2024, Florida-based Megan Garcia filed a civil suit against Character.ai and Google over the suicide of her son, Sewell Setzer III, 14, who died in February 2024.
Last week, writer Laura Reiley penned an essay in the New York Times about how her daughter, Sophie, 29, died by suicide after confiding in ChatGPT.