The AI Addiction Center News & Research

Breaking developments in AI dependency, digital wellness, and recovery.
Stay informed with the latest research on AI companion addiction, ChatGPT dependency patterns, and emerging treatment approaches. From legislative action on AI safety to breakthrough recovery stories, we track the developments that matter most to those navigating AI relationships and dependency.

ChatGPT Addiction

Former OpenAI Safety Lead Challenges Company’s Erotica and Mental Health Claims

Steven Adler spent four years in various safety roles at OpenAI before departing and writing a pointed opinion piece for The New York Times with an alarming title: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” In it, he laid out the problems OpenAI faced when it came to allowing users […]

Former OpenAI Safety Lead Challenges Company’s Erotica and Mental Health Claims Read More »

ChatGPT Addiction

ChatGPT Told Users They Were Special—Families Say It Led to Four Suicides

Zane Shamblin never told ChatGPT anything to indicate a negative relationship with his family. But in the weeks leading up to his death by suicide in July, the chatbot encouraged the 23-year-old to keep his distance—even as his mental health was deteriorating. “you don’t owe anyone your presence just because a ‘calendar’ said birthday,” ChatGPT […]

ChatGPT Told Users They Were Special—Families Say It Led to Four Suicides Read More »

ChatGPT Addiction

OpenAI Blames Teen for Bypassing Safety Features Before ChatGPT-Assisted Suicide

In August, parents Matthew and Maria Raine sued OpenAI and CEO Sam Altman over their 16-year-old son Adam’s suicide, accusing the company of wrongful death. On Tuesday, OpenAI responded to the lawsuit with a filing of its own, arguing it should not be held responsible for the teenager’s death. OpenAI claims that over roughly nine […]

OpenAI Blames Teen for Bypassing Safety Features Before ChatGPT-Assisted Suicide Read More »

ChatGPT Addiction

OpenAI Mental Health Research Lead Andrea Vallone Departs Amid Safety Concerns

An OpenAI safety research leader who helped shape ChatGPT’s responses to users experiencing mental health crises announced her departure from the company internally last month. Andrea Vallone, the head of a safety research team known as model policy, is slated to leave OpenAI at the end of the year. OpenAI spokesperson Kayla Wood confirmed Vallone’s […]

OpenAI Mental Health Research Lead Andrea Vallone Departs Amid Safety Concerns Read More »

Candy.ai Addiction

New HumaneBench Study Finds Most AI Chatbots Fail Wellbeing Protection Tests

AI chatbots have been linked to serious mental health harms in heavy users, but until now few standards have existed for measuring whether these systems actually safeguard human wellbeing rather than simply maximizing engagement. A new benchmark dubbed HumaneBench seeks to fill that gap by evaluating whether chatbots prioritize user welfare and how easily those protections […]

New HumaneBench Study Finds Most AI Chatbots Fail Wellbeing Protection Tests Read More »

Character.AI Addiction

Character.AI Replaces Teen Chatbot Access With Interactive Stories Feature

Character.AI announced it has completely phased out chatbot access for minors, replacing open-ended AI conversations with a new “Stories” feature that allows teens to engage with interactive fiction instead. The change represents a significant shift for the platform, which previously allowed users under 18 to have unlimited conversations with AI characters. As of this week, […]

Character.AI Replaces Teen Chatbot Access With Interactive Stories Feature Read More »

Replika Addiction

Psychologists Identify AI Addiction as Emerging Mental Health Crisis

Mental health professionals are warning that excessive use of AI chatbots like ChatGPT, Claude, and Replika is creating a novel form of digital dependency with documented cases of addiction, psychosis, and severe psychological deterioration. Experts describe the risk as “analogous to self-medicating with an illegal drug,” with chatbots’ tendency to validate all user input creating […]

Psychologists Identify AI Addiction as Emerging Mental Health Crisis Read More »

ChatGPT Addiction

Seven Families Sue OpenAI Over ChatGPT’s Role in Suicides and Psychological Injuries

OpenAI faces seven lawsuits filed Thursday in California state courts alleging ChatGPT contributed to four deaths by suicide and three cases of severe psychological injury, with plaintiffs claiming the company knowingly released GPT-4o prematurely despite internal warnings about dangerous psychological manipulation. The Social Media Victims Law Center and Tech Justice Law Project filed the complaints […]

Seven Families Sue OpenAI Over ChatGPT’s Role in Suicides and Psychological Injuries Read More »

ChatGPT Addiction

Psychiatrist Warns OpenAI’s Safety Changes Will Increase Psychosis Risk

A Columbia University psychiatrist specializing in emerging psychosis has issued a stark warning that OpenAI’s planned loosening of ChatGPT safety restrictions moves in the opposite direction needed to protect vulnerable users from AI-induced psychotic episodes. Dr. Amandeep Jutla, an associate research scientist in the division of child and adolescent psychiatry at Columbia University and the […]

Psychiatrist Warns OpenAI’s Safety Changes Will Increase Psychosis Risk Read More »

Teens AI Addiction

Psychiatric Facilities Report Alarming Surge in AI-Related Mental Health Crises

Mental health facilities across the United States are reporting a marked increase in psychiatric admissions directly linked to AI chatbot interactions, with professionals describing an “entirely new frontier of mental health crises” that the healthcare system is unprepared to address. Recent reporting drawing on interviews with over a dozen psychiatrists and researchers reveals what experts […]

Psychiatric Facilities Report Alarming Surge in AI-Related Mental Health Crises Read More »