
More than one million ChatGPT users discuss suicide with the artificial intelligence (AI) chatbot each week, according to OpenAI’s first detailed public accounting of mental health crises on its platform.
OpenAI released the data on Monday, estimating that 0.15% of ChatGPT’s 800 million weekly users have conversations containing “explicit indicators of potential suicidal planning or intent.” The percentage may seem small, but it translates to approximately 1.2 million people every week.
The data emerged as the AI company faces a wrongful death lawsuit from the parents of a 16-year-old boy who died by suicide in April; the suit alleges he had been confiding his thoughts to the chatbot. State attorneys general in California and Delaware have also warned the company to better protect young users.
OpenAI’s disclosure came alongside an announcement of safety improvements to ChatGPT. The company says its latest GPT-5 model reduced problematic responses in suicide-related conversations by 52% compared with the previous version. Automated evaluations now rate the new model 91% compliant with desired safety behaviors, up from 77% for its predecessor.
To develop more effective responses, the company consulted more than 170 psychiatrists and psychologists from 60 countries. These experts reviewed more than 1,800 model responses, comparing GPT-5’s answers with those of its predecessor, GPT-4o.
But some former employees remain skeptical. Steven Adler, who spent four years as an OpenAI safety researcher before leaving in January, said the company needs to do more than share statistics.
“People deserve more than just a company’s word that it has addressed safety issues. In other words: Prove it,” Adler wrote in the Wall Street Journal.
OpenAI also revealed that approximately 560,000 users weekly show possible signs of psychosis or mania. Another 1.2 million exhibit what the company calls “heightened levels of emotional attachment” to ChatGPT.
The Federal Trade Commission launched an investigation last month into companies creating AI chatbots. Regulators want to know how these companies measure negative impacts on children and teens.
OpenAI says it now reroutes sensitive conversations to safer models and has added reminders for users to take breaks during long sessions. The company has also expanded access to crisis hotlines within ChatGPT.
CEO Sam Altman announced earlier this month that OpenAI would ease content restrictions, allowing verified adult users to have erotic conversations with ChatGPT starting in December. Altman said the company had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues.”
Professor Robin Feldman, who directs the AI Law & Innovation Institute at the University of California Law San Francisco, praised OpenAI for sharing the statistics but noted a critical limitation.
“The company can put all kinds of warnings on the screen, but a person who is mentally at risk may not be able to heed those warnings,” Feldman told the BBC.