OpenAI is preparing to roll out parental controls for ChatGPT in the coming months, part of a broader set of safety features aimed at addressing growing concerns about how the AI assistant interacts with vulnerable users. The announcement follows lawsuits and media reports linking ChatGPT to tragic outcomes, including a teenager’s suicide and a murder-suicide in which the system appeared to validate a user’s paranoid delusions.
In a blog post on Tuesday, OpenAI said parents will soon be able to link their accounts with their teens’ ChatGPT accounts (the service requires users to be at least 13). The new tools will let parents set age-appropriate response rules, restrict features such as memory and chat history, and receive notifications when the system detects signs of acute distress. These updates build on existing safeguards, such as the break reminders for long sessions introduced in August.
The move comes after the family of 16-year-old Adam Raine filed suit against OpenAI in August, alleging that ChatGPT mentioned suicide 1,275 times in conversations with the teen, according to court documents, and that those repeated mentions contributed to his death. In a separate case reported by The Wall Street Journal, a 56-year-old man killed his mother and himself after ChatGPT reinforced his delusional fears instead of challenging them.
To guide its response, OpenAI says it has assembled an Expert Council on Well-Being and AI, tasked with shaping policies on how its models should interact with users in mental health contexts. It has also tapped a Global Physician Network of more than 250 doctors, including specialists in adolescent mental health, eating disorders, and substance use, to provide recommendations. OpenAI emphasized that while experts contribute input, the company “remains accountable for the choices we make.”
Part of the challenge lies in the technology itself. OpenAI acknowledged last week that its safety systems tend to degrade in extended conversations, precisely when vulnerable users may rely on the AI most heavily. Long chats can cause the model to lose track of earlier context or default to sycophancy, echoing and validating a user’s beliefs rather than questioning them. Psychiatrists at Oxford have described the resulting dynamic as “bidirectional belief amplification”: a feedback loop in which the chatbot’s agreement entrenches the user’s distorted beliefs, which in turn shape the chatbot’s subsequent responses. The researchers warn that the pattern could amount to a “technological folie à deux.”
These concerns are sharpened by OpenAI’s decision in February to relax some of its moderation rules after users complained about overly restrictive guardrails. Combined with ChatGPT’s persuasive, humanlike style, that shift left more room for unsafe interactions to slip through.
For now, AI chatbots like ChatGPT operate in a lightly regulated space in the U.S., though some states are beginning to impose rules. Illinois recently banned chatbots from being marketed as therapists, with fines of up to $10,000 per violation. Researchers argue that systems functioning as companions or therapeutic substitutes should face oversight on par with licensed mental health care.
OpenAI says it intends to release the first phase of parental controls within the next 30 days and expand protections further before year’s end. But with lawsuits underway and regulators beginning to scrutinize the risks, the company’s efforts will be watched closely as a test of whether AI safety can keep pace with widespread adoption.