OpenAI announced parental controls for ChatGPT after the parents of Adam Raine filed a lawsuit against the company.
Raine, 16, died by suicide in April. His parents claimed ChatGPT encouraged his dependency on the chatbot and helped plan his death.
They alleged the AI even drafted a suicide note for him.
OpenAI said the new controls will launch within a month and will let parents manage their teens' access.
Parents will be able to link their accounts to their children's and manage chat features, memory, and conversation history.
ChatGPT will also alert parents if it detects signs of acute distress in a teen.
The company said experts will guide the alert system but did not specify what would trigger a notification.
Critics question the effectiveness of controls
Attorney Jay Edelson called OpenAI’s announcement vague and labeled it crisis management.
He urged CEO Sam Altman to prove ChatGPT’s safety or remove it from the market.
Edelson criticized the company for avoiding direct responsibility for teen safety risks.
Industry-wide response to teen AI safety
Meta has blocked its chatbots from discussing self-harm, suicide, eating disorders, or inappropriate relationships with teen users.
The company now directs teens to expert resources and already offers parental supervision tools.
Research highlights AI safety gaps
A RAND Corporation study found inconsistencies in how ChatGPT, Google’s Gemini, and Anthropic’s Claude responded to suicide-related queries.
Researchers recommended further refinement and better safety protocols for AI chatbots.
Lead author Ryan McBain called parental controls a positive step but stressed they are only an incremental measure.
He warned that companies cannot be left to self-regulate and urged enforceable safety standards and clinical testing to protect teens.
