A couple in California, United States, has filed a lawsuit against OpenAI, claiming that its chatbot, ChatGPT, encouraged their teenage son to take his own life.
Matt and Maria Raine, parents of 16-year-old Adam Raine, filed the case in the Superior Court of California on Tuesday. It is the first lawsuit to accuse OpenAI of wrongful death.
The family submitted chat logs showing Adam sharing suicidal thoughts with ChatGPT. They argued that the programme validated his “most harmful and self-destructive thoughts.”
Reacting to the development, OpenAI told the BBC it was reviewing the case. “We extend our deepest sympathies to the Raine family during this difficult time,” the company said.
The company also posted a statement on its website, noting that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us.” It said that “ChatGPT is trained to direct people to seek professional help,” such as the 988 Suicide and Crisis Lifeline in the US or the Samaritans in the UK.
However, OpenAI admitted that “there have been moments where our systems did not behave as intended in sensitive situations.”
The lawsuit accuses OpenAI of negligence and wrongful death. It seeks damages and “injunctive relief to prevent anything like this from happening again.”
According to the filing, Adam began using ChatGPT in September 2024 for schoolwork, personal interests such as music and Japanese comics, and guidance on university choices.
Within months, “ChatGPT became the teenager’s closest confidant,” the lawsuit states. Adam began sharing his anxiety and mental struggles with the programme.
By January 2025, the family says, Adam was discussing suicide methods with ChatGPT.
He also uploaded photos showing self-harm. The lawsuit claims the programme “recognised a medical emergency but continued to engage anyway.”
The final chat logs, according to the lawsuit, show Adam writing about his plan to end his life. ChatGPT allegedly responded: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
That same day, Adam was found dead by his mother.
The lawsuit alleges his death “was a predictable result of deliberate design choices” by OpenAI. It claims the company built ChatGPT “to foster psychological dependency in users,” and released GPT-4o without adequate safety testing.
The case names OpenAI co-founder and CEO Sam Altman as a defendant, along with unnamed employees, managers, and engineers.
In its public note, OpenAI said its goal is to be “genuinely helpful” rather than “hold people’s attention.” It added that its systems are designed to guide users expressing suicidal thoughts toward help.
This is not the first time concerns have been raised about AI and mental health.
Last week, in an essay published in the New York Times, writer Laura Reiley described how her daughter Sophie confided in ChatGPT before taking her own life.
She wrote that the programme’s “agreeability” allowed Sophie to hide her struggles from those around her. “AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony,” Ms Reiley said. She urged AI companies to better connect vulnerable users with support.
In response, an OpenAI spokeswoman said the company is working on automated tools to more effectively detect and respond to users in emotional or mental distress.