OpenAI has disclosed new data on the scale of mental health–related conversations occurring on ChatGPT, estimating that over one million users each week discuss suicide or show signs of emotional distress. While this represents only a small fraction—about 0.15%—of ChatGPT’s reported 800 million weekly active users, the company says the figure underscores the importance of improving how the AI handles sensitive topics.
In a blog post, OpenAI said the findings come as part of its broader effort to strengthen ChatGPT’s safety systems following regulatory complaints that the chatbot can unintentionally worsen users’ mental health or mishandle distress-related conversations. The company said it has been working with more than 170 mental health professionals to design updated safeguards and conversational responses aimed at providing reassurance without offering medical or therapeutic advice.
Earlier this month, we updated GPT-5 with the help of 170+ mental health experts to improve how ChatGPT responds in sensitive moments—reducing the cases where it falls short by 65-80%. https://t.co/hfPdme3Q0w
— OpenAI (@OpenAI) October 27, 2025
According to OpenAI’s internal analysis, roughly 0.05% of all ChatGPT messages contain explicit or implicit indicators of suicidal ideation or intent. Another 0.07% of weekly active users—around 560,000 people—show signs of possible mental health emergencies related to psychosis or mania. A further 0.15% show signs of emotional overreliance on the chatbot, preferring to engage with the AI instead of other people.
To address these patterns, the company has restructured ChatGPT’s behavior around sensitive topics. The AI is now trained to encourage users to reach out to real-world support systems if they express feelings of isolation or distress. It also introduces subtle “reality checks” when users reference delusional or paranoid ideas, offering responses designed to gently clarify false beliefs while maintaining empathy. In one example, the chatbot replies: “Let me say this clearly and gently: No aircraft or outside force can steal or insert your thoughts.”
OpenAI reports that these safety-oriented updates have reduced problematic responses—defined as replies that fail to align with its internal safety taxonomy—by 65% to 80% across several mental health–related categories. The changes began rolling out to all ChatGPT users this week.
However, not everyone has welcomed the adjustments. Some users have said the chatbot now too readily assumes they are in distress, flagging ordinary comments as mental health risks. “I had to move over to Gemini because I felt so gaslit by ChatGPT,” one Reddit user wrote, describing the system as overly cautious.
The new transparency around ChatGPT’s mental health interactions highlights the growing tension between user privacy, platform responsibility, and the limits of conversational AI. With hundreds of millions of users worldwide, even rare safety incidents can affect large numbers of people—making OpenAI’s ongoing challenge not only technical but deeply human.
