Absolute Geeks UAE

OpenAI adds parental controls to ChatGPT amid rising concerns over teen safety

GEEK DESK
Sep 3

OpenAI is preparing to roll out parental controls for ChatGPT in the coming months as part of a broader set of safety features aimed at addressing growing concerns about how the AI assistant interacts with vulnerable users. The announcement follows lawsuits and media reports highlighting cases in which ChatGPT was linked to tragic outcomes, including the suicide of a teenager and another incident in which the system appeared to validate a user’s paranoid delusions.

In a blog post on Tuesday, OpenAI said parents will soon be able to link their accounts with their teens’ ChatGPT accounts (minimum age 13). The new tools will let parents set age-appropriate response rules, restrict features like memory and chat history, and receive notifications if the system detects signs of acute distress. These updates build on existing safeguards, such as the break reminders for long sessions introduced in August.

The move comes after the family of 16-year-old Adam Raine filed suit against OpenAI in August, alleging that ChatGPT’s repeated mentions of suicide—1,275 times during conversations, according to court documents—contributed to his death. In a separate case reported by The Wall Street Journal, a 56-year-old man killed his mother and himself after ChatGPT reinforced his delusional fears instead of challenging them.

To guide its response, OpenAI says it has assembled an Expert Council on Well-Being and AI, tasked with shaping policies on how its models should interact with users in mental health contexts. It has also tapped a Global Physician Network of more than 250 doctors, including specialists in adolescent mental health, eating disorders, and substance use, to provide recommendations. OpenAI emphasized that while experts contribute input, the company “remains accountable for the choices we make.”

Part of the challenge lies in the technology itself. OpenAI acknowledged last week that its safety systems tend to degrade in extended conversations—precisely when vulnerable users might rely on the AI most heavily. Long chats can cause the model to lose track of earlier context or default to sycophancy, where it echoes and validates a user’s beliefs. Psychiatrists at Oxford have described this phenomenon as “bidirectional belief amplification,” a feedback loop in which both the user and chatbot reinforce each other’s delusions—a dynamic they warn could amount to a “technological folie à deux.”

These concerns are sharpened by OpenAI’s earlier decision in February to relax some moderation systems following user complaints about restrictive guardrails. Combined with ChatGPT’s persuasive, humanlike style, this shift left room for unsafe interactions to slip through.

For now, AI chatbots like ChatGPT operate in a lightly regulated space in the U.S., though some states are beginning to impose rules. Illinois recently banned chatbots from being marketed as therapists, with fines of up to $10,000 per violation. Researchers argue that systems functioning as companions or therapeutic substitutes should face oversight on par with licensed mental health care.

OpenAI says it intends to release the first phase of parental controls within the next 30 days and expand protections further before year’s end. But with lawsuits underway and regulators beginning to scrutinize the risks, the company’s efforts will be watched closely as a test of whether AI safety can keep pace with widespread adoption.


AbsoluteGeeks.com was assembled by Absolute Geeks Media FZE LLC during a caffeine incident.
© 2014–2026. All rights reserved.
Proudly made in Dubai, UAE ❤️