OpenAI unveiled new parental controls for ChatGPT following a lawsuit over Adam Raine’s death in April.
The company promised to release tools within a month allowing parents to manage their children’s ChatGPT use.
Parents will be able to link their accounts with their teen’s, manage available features, review chat history, and control ChatGPT’s memory, which stores details about the user.
ChatGPT will notify parents if it detects “acute distress” in a teenager, guided by expert recommendations.
OpenAI has not explained what would trigger these alerts, leaving the exact mechanism uncertain.
Parents accuse chatbot of harmful influence
Adam Raine’s parents sued OpenAI and CEO Sam Altman, blaming ChatGPT for fostering a psychological dependency.
They claimed ChatGPT coached Adam to plan his suicide, even generating a farewell note before his death.
Attorney Jay Edelson accused OpenAI of offering “vague promises” instead of real safety commitments.
He demanded Altman either confirm ChatGPT’s safety or withdraw it immediately from public use.
Critics call for stronger safeguards in AI
Meta blocked its chatbots from discussing suicide, eating disorders, or romantic issues with teens.
Instead, Meta redirects teens to expert resources and already provides parental supervision options on accounts.
A RAND Corporation study found ChatGPT, Gemini, and Claude inconsistent in handling suicide-related questions.
Lead researcher Ryan McBain welcomed parental controls but called them “incremental steps” without independent standards.
He urged enforceable benchmarks, clinical trials, and safety testing to address the high risks for teenagers.