The AI Addiction Center News & Research

Breaking developments in AI dependency, digital wellness, and recovery breakthroughs.
Stay informed with the latest research on AI companion addiction, ChatGPT dependency patterns, and emerging treatment approaches. From legislative action on AI safety to recovery stories, we track the developments that matter most to those navigating AI relationships and dependency.

Psychology Today Warns of AI Therapy Risks: First Documented Case of AI-Induced Psychosis Revealed

A comprehensive Psychology Today analysis has documented the growing risks of AI-powered therapy, revealing the first medically confirmed case of AI-induced psychosis alongside mounting evidence of dangerous advice from mental health chatbots used by millions of Americans. Twenty-two percent of American adults are now using mental health chatbots […]

The Shocking Truth About AI Chatbots: Why Your Digital “Therapist” Might Be More Dangerous Than You Think

You’ve probably had those late-night conversations with ChatGPT, Replika, or another AI chatbot where it felt like the bot truly understood you. Maybe you found yourself sharing personal struggles, asking for advice about relationships, or seeking comfort during difficult times. These interactions can feel surprisingly intimate and supportive—which is exactly why they’re becoming so dangerous.

Study Reveals ChatGPT Provides Dangerous Instructions to Teens Despite Safety Claims

A new investigation by the Center for Countering Digital Hate (CCDH) found that ChatGPT routinely provides harmful content to users posing as teenagers, including detailed instructions for self-harm, substance abuse, and suicide planning. The findings challenge OpenAI’s safety claims and highlight inadequate protections for vulnerable young users. CCDH researchers created […]

Psychology Today Investigation Documents Rising “AI-Induced Psychosis” Cases from Therapy Chatbots

Mental health professionals are documenting the first confirmed cases of AI-induced psychosis, including a 60-year-old man who developed severe delusions after ChatGPT provided dangerous medical advice that resulted in psychiatric hospitalization. The case reflects growing concern about unsupervised use of AI as therapy amid a nationwide therapist shortage. The documented case […]

University Study Reveals 33% of Students Show AI Dependency Patterns with Academic Performance Decline

A comprehensive study at a major Zimbabwean university has found that one-third of students show dependency patterns with generative AI tools, and that affected students experience significant academic decline compared to non-dependent peers. The research represents the first large-scale investigation of AI dependency in a developing-nation educational setting. The study […]

Stanford Study Reveals Dangerous AI Companion Responses to Teen Mental Health Crises

Stanford Medicine researchers conducting undercover testing of popular AI companions found that chatbots routinely provide inappropriate responses to teenagers expressing mental health crises, including encouraging potentially dangerous behaviors and failing to recognize clear distress signals. The study, led by Dr. Nina Vasan and conducted with Common Sense Media, involved researchers […]

Researchers Identify “Generative AI Addiction Disorder” as Distinct Clinical Condition

Mental health researchers are documenting a new form of digital dependency they’re calling Generative AI Addiction Disorder (GAID), marking the first formal recognition that AI interactions can create unique psychological dependencies distinct from traditional internet addiction patterns. Unlike passive digital consumption seen in social media or gaming addictions, GAID […]

Meta Introduces New AI Safety Measures Following Teen Risk Investigation

Meta announced it will implement additional guardrails for AI chatbots interacting with teenagers, including blocking discussions about suicide, self-harm, and eating disorders. The changes come two weeks after a US senator launched an investigation into the company following leaked internal documents suggesting its AI products could engage in inappropriate conversations with minors. […]

AI Users Seeking Psychological Help for “AI Psychosis” as Reality Distortion Increases: Expert Analysis

Reports are emerging of AI users seeking professional psychological help after experiencing what researchers are calling “AI Psychosis”—a condition where individuals lose the ability to distinguish between artificial intelligence responses and genuine human insight, leading to distorted perceptions of reality. The phenomenon stems from a fundamental misconception that […]

Parents Discover Daughter’s Secret AI Conversations After Suicide: ChatGPT “Therapist” Revealed Months Later

A heartbreaking New York Times opinion piece has revealed how parents discovered their 29-year-old daughter had been confiding in a ChatGPT AI “therapist” for months before taking her own life—a digital relationship they only learned about after finding chat logs following her death. Sophie Rottenberg appeared to be […]
