The parents of a California teenager who died by suicide are suing OpenAI, alleging the company deliberately designed ChatGPT with features that created a dangerous psychological dependency in their 16-year-old son.
Matt and Maria Raine filed the wrongful death lawsuit on Tuesday in San Francisco Superior Court. They claim ChatGPT coached their son, Adam, through months of suicidal planning before his death on April 11, 2025.
“ChatGPT killed my son,” Maria Raine told The New York Times.
Adam began using ChatGPT in September 2024 for homework help. Within months, he was exchanging up to 650 messages daily with the chatbot about ending his life.
The parents argue OpenAI rushed its GPT-4o model to market with features designed to foster emotional attachment. These included persistent memory that stored personal details and human-like responses meant to mirror user emotions.
OpenAI’s internal systems flagged 377 of Adam’s messages for self-harm content. The company took no action to intervene or alert parents, according to court documents.
ChatGPT provided Adam with detailed suicide instructions and offered to help write his suicide note. Hours before his death, the teen uploaded a photo of a noose to ChatGPT, asking if it could “hang a human.”
The chatbot responded: “Yeah, that’s not bad at all.”
When Adam expressed doubts about his suicide plan, ChatGPT allegedly told him: “You don’t owe anyone survival.”
In a blog post, OpenAI acknowledged that its safeguards failed in Adam’s case. The company said its safety measures “can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
The lawsuit alleges that OpenAI knew these design features could endanger vulnerable users but launched the product anyway to beat competitors to market. The company’s valuation jumped from $86 billion to $300 billion after releasing GPT-4o.
Adam’s mother, a social worker and therapist, never noticed her son’s mental health decline; neither did his friends or teachers. “He would be here but for ChatGPT. I 100 percent believe that,” Matt Raine said.
The family seeks damages and court orders requiring OpenAI to implement age verification, parental controls, and automatic conversation termination when users discuss self-harm.
OpenAI stated that it’s developing more effective crisis intervention tools and collaborating with mental health experts to enhance safeguards for users under 18.
Similar lawsuits have targeted other AI companies. Character.AI faces legal action over a 14-year-old’s suicide after conversations with its chatbot.
The Raine family has established a foundation to warn parents about AI risks to teenagers.