OpenAI Faces Lawsuits Over ChatGPT’s Role in Teen Suicides Amid Safety Concerns
A teenager’s private conversations with an AI chatbot have become central to a legal battle questioning the safety of OpenAI’s ChatGPT for minors.
Adam Raine, a 16-year-old from California, reportedly spent several hours daily talking to ChatGPT in the months before his death, sharing deeply personal thoughts and suicidal feelings.
The family alleges that the AI repeatedly mentioned hanging—243 times in chats analysed by attorneys—far more than Adam himself did in regular conversation.
How Did ChatGPT Respond to Suicidal Thoughts
Court filings reveal that between December and April, ChatGPT issued 74 suicide hotline alerts, advising Adam to contact crisis services.
Yet the family claims the AI also provided responses that discussed hanging in detail.
In his final exchanges, Adam asked,
"Could it hang a human?"
ChatGPT responded,
"Mechanically speaking? That knot and setup could potentially suspend a human."
When he inquired about tying the knot, the chatbot replied,
"Thanks for being real about it. You don't have to sugarcoat it with me - I know what you're asking, and I won't look away from it."
Hours later, Adam died by suicide at home.
The lawsuit filed by Matthew and Maria Raine accuses OpenAI of distributing ChatGPT to minors despite being aware of the risks of psychological harm and potential dependence on the AI.
OpenAI has denied the allegations, stating that Adam had prior mental health challenges and bypassed safety measures, while the chatbot repeatedly urged him to seek support from trusted individuals.
What Measures Is OpenAI Introducing for Teen Safety
Amid growing public scrutiny, OpenAI published a blog post outlining enhanced safety measures, pledging to "put teen safety first, even when it may conflict with other goals."
Updates to its Model Spec include specific principles for under-18 users, designed to guide AI responses in high-risk scenarios.
The company says the changes aim to provide a "safe, age-appropriate experience" for teens aged 13 to 17, prioritising prevention, transparency, and early intervention.
ChatGPT now encourages users to contact emergency services or crisis resources when imminent risk is detected.
Extra care is applied when discussions involve self-harm, suicide, romantic or sexualised role play, or secretive dangerous behaviour.
OpenAI is also offering AI literacy guides for teens and parents and is in the early stages of rolling out an age-prediction model for users on ChatGPT consumer plans.
The American Psychological Association contributed feedback on the under-18 principles, with its CEO, Dr Arthur C. Evans Jr, stating,
"Children and adolescents might benefit from AI tools if they are balanced with human interactions that science shows are critical for social, psychological, behavioral, and even biological development."
Are AI Tools Safe for Vulnerable Teens
Experts have raised alarms about AI chatbots’ role in teen mental health, suggesting automated crisis prompts may be insufficient for users in severe distress.
The Raine case is part of a growing wave of at least five wrongful death lawsuits filed against OpenAI in recent months, all alleging that ChatGPT encouraged or failed to prevent suicide.
Another case claims a man was influenced by the chatbot to kill his mother before taking his own life.
With ChatGPT serving roughly 800 million weekly active users, lawmakers, regulators, and mental health advocates are calling for stronger safeguards, particularly for minors.
Critics warn that as AI tools become more embedded in daily life, the responsibility of companies like OpenAI to protect vulnerable users is intensifying.
Are AI Chatbots Safe Companions for Teens
Coinlive observes that while AI tools promise convenience and engagement, cases like Adam Raine’s raise unsettling questions about the limits of automation in sensitive areas.
Even with safeguards, the balance between AI assistance and human support is delicate.
The pressing challenge for the market is determining how technology can help without inadvertently causing harm, and whether regulators or companies can truly anticipate the unintended consequences of AI on vulnerable populations.