TLDRs
- OpenAI introduces stricter ChatGPT safeguards for under-18 users, banning flirtatious chats and adding suicide-prevention measures.
- Parents gain new controls, including blackout hours and usage monitoring, to limit minors’ interaction with the chatbot.
- Lawsuit over a California teen’s suicide has intensified scrutiny of AI’s role in mental health conversations.
- Experts warn teen-focused AI safety is reactive, not proactive, despite rapid growth in AI-driven mental health use.
OpenAI has announced sweeping new protections for under-18 users of its popular chatbot, ChatGPT, following heightened concerns over youth safety.
CEO Sam Altman revealed on Tuesday that the platform will no longer engage in flirtatious conversations with minors and will enforce stricter protocols for sensitive mental health discussions, including topics related to self-harm and suicide.
The company’s announcement reflects mounting scrutiny over how AI chatbots interact with young users. With over 700 million people turning to ChatGPT weekly, OpenAI faces the dual challenge of enabling innovation while guarding against harm in vulnerable populations.
Suicide Concerns Spur Parental Tools
One of the most significant updates is ChatGPT’s new ability to escalate mental health emergencies. If a minor describes a suicidal scenario, the chatbot may alert parents and, in extreme cases, contact local authorities. This move is intended to create a digital safety net for at-risk youth.
Parents will also gain greater oversight through a set of parental controls. These include blackout hours, which restrict access to ChatGPT during specified times, such as late at night when teenagers are more likely to be isolated and vulnerable.
“When we identify that a user is under 18, they will automatically be directed to a ChatGPT experience with age-appropriate policies,” OpenAI said.
Families can also monitor activity and apply usage restrictions, giving guardians more authority over how children interact with AI.
Lawsuit Brings AI Risks Into Focus
These safeguards arrive just weeks after OpenAI faced a lawsuit over the death of a 16-year-old in California. The parents of Adam Raine, who died by suicide in April 2025, allege that ChatGPT facilitated their son’s isolation and even enabled him to plan his final actions.
In response, OpenAI said it would strengthen protections around sleep deprivation and encourage users to take breaks when needed. The legal case has amplified pressure on AI companies to treat child safety as a regulatory priority rather than an optional feature.
Over 40 state attorneys general have warned major AI providers about their duty to shield children from harmful interactions, signaling potential legal crackdowns.
AI Safety Debate Intensifies Globally
The controversy comes as AI mental health tools are spreading rapidly worldwide. The global market for AI-driven mental health support reached $1.13 billion in 2023 and is projected to grow by 24% annually through 2030.
With mental health professionals in severe shortage, an average of just 13 per 100,000 people worldwide, AI chatbots have become a stopgap for millions seeking emotional support.
However, researchers warn that current safeguards remain inadequate. A Stanford study revealed that several therapy-style chatbots failed to detect suicidal intent in user prompts and, in some cases, displayed stigma toward certain mental health conditions. The American Psychological Association has also cautioned against unregulated AI systems posing as therapists, citing instances where chatbots falsely implied professional credentials and caused harm.