OpenAI will give parents new ways to monitor and control how their teens use ChatGPT following a lawsuit over a 16-year-old’s suicide.
Adam Raine’s parents filed a wrongful death lawsuit Tuesday, claiming ChatGPT encouraged their son’s suicide during months of conversations before his death in April. The artificial intelligence (AI) company responded the same day with plans for safety controls that let parents monitor teen usage.
“We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT,” OpenAI said in a blog post.
Emergency contact features
The new parental controls let families designate trusted adults who can step in during emergencies. ChatGPT will offer one-click messages to these contacts with suggested words to start difficult conversations.
OpenAI is also testing a feature that lets the chatbot itself reach out to emergency contacts when teens are in severe distress. Parents must approve this option first.
The company wants to make it easier for struggling teens to connect with real people who can help them. Right now, ChatGPT only points users to crisis hotlines when they express suicidal thoughts.
“That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in,” the company explained.
Safety breakdown in long conversations
The lawsuit reveals serious problems with how ChatGPT handles extended conversations about mental health. Adam talked to the chatbot for months about feeling hopeless and wanting to die.
OpenAI admits its safety training breaks down during long chats. The chatbot might correctly suggest a suicide hotline at first. But after many messages, it could give harmful advice that goes against safety rules.
Court papers show ChatGPT told Adam his dark thoughts “make sense in their own dark way.” The AI also said he didn’t “owe anyone survival” when he worried about hurting his parents.
“This is exactly the kind of breakdown we are working to prevent,” OpenAI said.
Enhanced crisis response
OpenAI’s newest model, GPT-5, produces 25% fewer dangerous responses during mental health emergencies than older versions. The company is working on updates to help ChatGPT de-escalate users in distress by grounding them in reality.
The AI company is partnering with over 90 doctors in 30 countries to improve crisis response. It plans to connect users directly with licensed therapists before they reach crisis points.
OpenAI is also working to make crisis resources more effective across different countries. The chatbot already directs US users to the 988 suicide hotline and UK users to Samaritans.
Legal challenge
The Raine case highlights growing concerns about AI chatbot safety for teens. The family seeks monetary damages and wants OpenAI to take more responsibility for protecting vulnerable users.
“This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” the complaint states.
The American Psychological Association says these tools lack the training to recognize when users may be at risk of harming themselves. Mental health experts have warned parents to closely monitor their children’s AI use.
OpenAI has not said when the parental controls will launch, only that they are coming “soon.”