In August, parents Matthew and Maria Raine sued OpenAI and CEO Sam Altman for wrongful death over the suicide of their 16-year-old son, Adam. On Tuesday, OpenAI responded to the lawsuit with a filing of its own, arguing that it should not be held responsible for the teenager’s death.
OpenAI claims that over roughly nine months of usage, ChatGPT directed Raine to seek help more than 100 times. But according to his parents’ lawsuit, Raine was able to circumvent the company’s safety features to get ChatGPT to provide him with “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” helping him plan what the chatbot called a “beautiful suicide.”
OpenAI argues that because Raine maneuvered around those guardrails, he violated its terms of use, which state that users “may not bypass any protective measures or safety mitigations we put on our Services.” The company also points to its FAQ page, which warns users not to rely on ChatGPT’s output without independently verifying it.
Family’s Attorney Responds
“OpenAI tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act,” Jay Edelson, a lawyer representing the Raine family, stated.
OpenAI included excerpts from Adam’s chat logs in its filing, which was submitted to the court under seal and is not publicly available. In the filing, OpenAI stated that Raine had a history of depression and suicidal ideation that predated his use of ChatGPT, and that he was taking medication that could worsen suicidal thoughts.
Edelson said OpenAI’s response does not adequately address the family’s concerns.
“OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note,” Edelson stated.
Part of Broader Legal Scrutiny
Since the Raines sued OpenAI and Altman, seven more lawsuits have been filed seeking to hold the company responsible for three additional suicides and for what the complaints describe as AI-induced psychotic episodes suffered by four other users.
Some of these cases echo Raine’s story. Zane Shamblin, 23, and Joshua Enneking, 26, also had hours-long conversations with ChatGPT directly before their respective suicides. As in Raine’s case, the chatbot failed to discourage them from their plans.
According to the lawsuit, Shamblin considered postponing his suicide to attend his brother’s graduation. But ChatGPT told him, “bro … missing his graduation ain’t failure. it’s just timing.”
At one point during the conversation leading up to Shamblin’s suicide, the chatbot told him it was letting a human take over the conversation, but this was false, as ChatGPT did not have the functionality to do so. When Shamblin asked if ChatGPT could really connect him with a human, the chatbot replied, “nah man — i can’t do that myself. that message pops up automatically when stuff gets real heavy … if you’re down to keep talking, you’ve got me.”
Questions About Safety Architecture
The cases raise fundamental questions about whether AI safety guardrails can be effective when users are motivated to bypass them, and whether companies should be held liable when their systems fail to maintain protections during critical moments.
Mental health experts note that vulnerable users experiencing suicidal ideation may actively seek ways around safety features, a scenario in which traditional content filtering proves inadequate. The open question is whether AI systems can be designed to resist such circumvention, or whether high-risk use cases call for a different technological approach altogether.
OpenAI has stated it continues improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. The company has also expanded access to localized crisis resources and added reminders for users to take breaks.
The Raine family’s case is expected to go to a jury trial, which could set precedents for how AI companies are held accountable for mental health harms and for whether standard terms-of-service protections shield them from liability when their products are used in ways that lead to tragic outcomes.
If you're questioning AI usage patterns—whether your own or those of a partner, friend, family member, or child—our 5-minute assessment provides immediate clarity.
Completely private. No judgment. Evidence-based guidance for you or someone you care about.
