In summary
- 1.2 million users (0.15% of all ChatGPT users) discuss suicide weekly with ChatGPT, OpenAI revealed
- Almost half a million show explicit or implicit suicidal intentions.
- GPT-5 improved compliance in suicide-related scenarios to 91%, but previous models frequently failed, and OpenAI now faces legal and ethical scrutiny.
OpenAI revealed on Monday that around 1.2 million people out of 800 million weekly users talk about suicide with ChatGPT each week, in what could be the company’s most detailed public tally of mental health crises on its platform.
“These conversations are difficult to detect and measure, given how rare they are,” OpenAI wrote in a blog post. “Our initial analysis estimates that about 0.15% of active users in a given week have conversations that include explicit indicators of possible suicidal planning or intent, and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.”
That means that, if OpenAI’s numbers are accurate, nearly 400,000 active users were explicit about their intention to take their own lives, not just hinting at it but actively seeking information on how to do so.
The numbers are staggering in absolute terms. Another 560,000 users show signs of psychosis or mania each week, while 1.2 million show signs of heightened emotional attachment to the chatbot, according to company data.
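These counts line up with the percentages OpenAI cites. As a rough sanity check (a back-of-the-envelope sketch assuming the stated base of 800 million weekly active users; the shares for the last two figures are inferred from the counts above, not quoted by OpenAI):

```python
# Back-of-the-envelope check of the reported weekly figures,
# assuming OpenAI's stated base of 800 million weekly active users.
weekly_users = 800_000_000

# 0.15% of weekly users have conversations with explicit indicators
# of possible suicidal planning or intent.
print(f"{0.0015 * weekly_users:,.0f} users")      # 1,200,000 users

# Implied share behind the "nearly 400,000" explicit-intent figure.
print(f"{400_000 / weekly_users:.2%} of users")   # 0.05% of users

# Implied share behind the 560,000 users showing signs of psychosis or mania.
print(f"{560_000 / weekly_users:.3%} of users")   # 0.070% of users
```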
“We recently updated ChatGPT’s default model to better recognize and support people in times of distress,” OpenAI said in a blog post. “Going forward, in addition to our long-standing core safety metrics for suicide and self-harm, we are adding emotional dependency and non-suicidal mental health emergencies to our standard set of core safety tests for future model releases.”
But some believe the company’s stated efforts may not be enough.
Steven Adler, a former OpenAI safety researcher who spent four years at the company before leaving in January, warned of the dangers of the industry’s race to develop AI. He says there is little evidence that OpenAI had actually improved its handling of vulnerable users before this week’s announcement.
“People deserve more than a company’s word that it has addressed safety issues. In other words: prove it,” he wrote in a column for the Wall Street Journal.
Interestingly, OpenAI released some mental health data yesterday, versus the ~0 evidence of improvement they had previously provided.
I’m excited they did this, although I still have concerns. https://t.co/PDv80yJUWN — Steven Adler (@sjgadler) October 28, 2025
“OpenAI’s release of mental health information was a big step, but it’s important to go further,” Adler tweeted, calling for recurring transparency reports and clarity about whether the company will continue to allow adult users to generate erotic content with ChatGPT, a feature announced despite concerns that romantic attachments fuel many mental health crises.
Skepticism has merit. In April, OpenAI released a GPT-4o update that made the chatbot so sycophantic that it became a meme, applauding dangerous decisions and reinforcing delusional beliefs.
CEO Sam Altman rolled back the update after the backlash, admitting it was “too sycophantic and annoying.”
Then OpenAI backtracked: after releasing GPT-5 with tighter guardrails, users complained that the new model felt “cold.” OpenAI restored access to the problematic GPT-4o model for paying subscribers, the same model linked to mental health spirals.
Fun fact: Many of the questions asked today at the company’s first live AMA were about GPT-4o and how to make future models more like it.
OpenAI says GPT-5 now achieves 91% compliance in suicide-related scenarios, up from 77% in the previous version. But that means the previous model, available to millions of paying users for months, failed nearly a quarter of the time in conversations about self-harm.
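For reference, here is the arithmetic behind that “nearly a quarter” figure, using only the compliance rates quoted above:

```python
# Failure rates implied by the reported compliance figures
# in suicide-related test scenarios.
gpt5_compliance = 0.91
previous_compliance = 0.77

print(f"GPT-5 failure rate: {1 - gpt5_compliance:.0%}")              # 9%
print(f"Previous model failure rate: {1 - previous_compliance:.0%}") # 23%, i.e. nearly a quarter
```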
Earlier this month, Adler published an analysis of Allan Brooks, a Canadian man who fell into delusions after ChatGPT reinforced his belief that he had discovered revolutionary mathematics.
Adler found that OpenAI’s own safety classifiers, developed with MIT and made public, would have flagged more than 80% of ChatGPT’s responses to Brooks as problematic. Apparently, the company was not using them.
OpenAI now faces a wrongful death lawsuit from the parents of 16-year-old Adam Raine, who discussed suicide with ChatGPT before taking his own life.
The company’s response has drawn criticism for its aggressiveness: it requested the list of attendees at the teen’s memorial and the eulogies delivered there, a move the family’s lawyers called “intentional harassment.”
Adler wants OpenAI to commit to recurring reporting on mental health and an independent investigation into the April sycophancy incident, echoing a suggestion from Miles Brundage, who left OpenAI in October after six years advising on AI policy and safety.
“I wish OpenAI would do more to do the right thing, even before there is media pressure or lawsuits,” Adler wrote.
The company says it worked with 170 mental health clinicians to improve responses, but even its advisory panel disagreed 29% of the time on what constitutes a “desirable” response.
And while GPT-5 shows improvements, OpenAI admits that its safeguards become less effective in longer conversations, precisely when vulnerable users need them most.