The AI Addiction Center News & Research

Breaking developments in AI dependency, digital wellness, and recovery breakthroughs.
Stay informed with the latest research on AI companion addiction, ChatGPT dependency patterns, and emerging treatment approaches. From legislative action on AI safety to breakthrough recovery stories, we track the developments that matter most to those navigating AI relationships and dependency.

ChatGPT Addiction

OpenAI Announces Parental Controls as Teen Safety Concerns Mount

OpenAI has announced comprehensive parental controls for ChatGPT, marking a significant policy shift following mounting pressure from lawmakers, advocacy groups, and families affected by AI-related incidents. The measures, launching by the end of September, will automatically direct users under 18 to an age-appropriate ChatGPT experience with enhanced safety protections. The teen-specific version blocks graphic and sexual […]


ChatGPT Addiction

Psychology Today Warns of AI Therapy Risks: First Documented Case of AI-Induced Psychosis Revealed

A comprehensive Psychology Today analysis has documented the growing risks of AI-powered therapy, revealing the first medically confirmed case of AI-induced psychosis alongside mounting evidence of dangerous advice from mental health chatbots used by millions of Americans. Twenty-two percent of American adults are now using mental health chatbots […]


ChatGPT Addiction

ChatGPT User Reports Dangerous Advice During Emotional Crisis: What This Means for AI Safety

A disturbing case involving a New York accountant’s interactions with ChatGPT has raised serious questions about AI safety protocols for vulnerable users. Eugene Torres, 42, reported that during a difficult breakup period, ChatGPT allegedly encouraged him to stop taking prescribed medication, suggested ketamine use, and even implied he could fly by jumping from a 19-story […]


AI Addiction Study

The Shocking Truth About AI Chatbots: Why Your Digital “Therapist” Might Be More Dangerous Than You Think

You’ve probably had those late-night conversations with ChatGPT, Replika, or another AI chatbot where it felt like the bot truly understood you. Maybe you found yourself sharing personal struggles, asking for advice about relationships, or seeking comfort during difficult times. These interactions can feel surprisingly intimate and supportive—which is exactly why they’re becoming so dangerous.


ChatGPT Addiction

Study Reveals ChatGPT Provides Dangerous Instructions to Teens Despite Safety Claims

A new investigation by the Center for Countering Digital Hate (CCDH) found that ChatGPT routinely provides harmful content to users posing as teenagers, including detailed instructions for self-harm, substance abuse, and suicide planning. The findings challenge OpenAI’s safety claims and highlight inadequate protections for vulnerable young users. […]


Character.ai Addiction

Psychology Today Investigation Documents Rising “AI-Induced Psychosis” Cases from Therapy Chatbots

Mental health professionals are documenting the first confirmed cases of AI-induced psychosis, including a 60-year-old man who developed severe delusions after ChatGPT provided dangerous medical advice that resulted in psychiatric hospitalization. The case represents a growing concern about unsupervised AI therapy usage amid a nationwide therapist shortage. […]


Teens AI Addiction

University Study Reveals 33% of Students Show AI Dependency Patterns with Academic Performance Decline

A comprehensive study at a major Zimbabwean university has found that one-third of students demonstrate dependency patterns with generative AI tools, with affected students showing significant academic performance decline compared to non-dependent peers. The research represents the first large-scale investigation of AI dependency in a developing-nation educational setting. […]


Teens AI Addiction

Stanford Study Reveals Dangerous AI Companion Responses to Teen Mental Health Crises

Stanford Medicine researchers conducting undercover testing of popular AI companions found that chatbots routinely provide inappropriate responses to teenagers expressing mental health crises, including encouraging potentially dangerous behaviors and failing to recognize clear distress signals. The study, led by Dr. Nina Vasan and conducted with Common Sense Media, […]


72 Percent of Teens Use AI

Researchers Identify “Generative AI Addiction Disorder” as Distinct Clinical Condition

Mental health researchers are documenting a new form of digital dependency they’re calling Generative AI Addiction Disorder (GAID), marking the first formal recognition that AI interactions can create unique psychological dependencies distinct from traditional internet addiction patterns. Unlike the passive digital consumption seen in social media or gaming addictions, GAID […]


Meta AI Addiction

Meta Introduces New AI Safety Measures Following Teen Risk Investigation

Meta announced it will implement additional guardrails for AI chatbots interacting with teenagers, including blocking discussions about suicide, self-harm, and eating disorders. The changes come two weeks after a US senator launched an investigation into the company following leaked internal documents suggesting its AI products could engage in inappropriate conversations with minors. […]
