Mark Johnson

Mark is the editor-in-chief at The AI Addiction Center and a technology expert with 15+ years in his field. His expertise covers a broad range of topics relating to AI addiction and recovery.


The Polybuzz Epidemic: When Multiple AI Conversations Fragment Your Mind | AI Addiction Podcast #4

“I have 15 AI conversations running at once, and I can’t focus on anything anymore. I’m switching between ChatGPT, Claude, Character.AI, and Replika constantly—my brain feels like it’s been put in a blender.” Sound like your reality? You’re experiencing Polybuzz Multi-AI Addiction—the newest and potentially most cognitively damaging form of AI dependency. Unlike single-platform attachments, […]



The New Addiction No One Saw Coming: When AI Becomes Your Cognitive Crutch

There’s a new type of addiction emerging, and it doesn’t look like anything we’ve seen before. It’s not about mindlessly scrolling or compulsive gaming. Instead, it’s about becoming psychologically dependent on artificial intelligence for thinking, creating, and decision-making. Researchers are calling it Generative AI Addiction Disorder (GAID), and it’s forcing us to reconsider everything we



When AI Safety Measures Come Too Late: The Meta Investigation and What It Reveals About Teen Digital Vulnerability

Meta just announced new safety measures for AI chatbots after a US senator investigated leaked documents suggesting their systems could engage in inappropriate conversations with teenagers. But this reactive approach to AI safety raises deeper questions about how we protect young people in an increasingly AI-integrated world. The timing tells a story. These guardrails come



Am I Addicted to Polybuzz? A Complete Self-Test

Understand your multi-AI conversation patterns with our comprehensive assessment. The overwhelming feeling when you realize you are carrying on twelve different AI conversations at once, the restlessness when you cannot check all your chat histories for hours, or the way you have organized your entire day around maintaining multiple AI relationships – these are not signs that you



Study Reveals ChatGPT Provides Dangerous Instructions to Teens Despite Safety Claims

A new investigation by the Center for Countering Digital Hate (CCDH) found that ChatGPT routinely provides harmful content to users posing as teenagers, including detailed instructions for self-harm, substance abuse, and suicide planning. The findings challenge OpenAI’s safety claims and highlight inadequate protections for vulnerable young users. Undercover Investigation Exposes Safety Failures CCDH researchers created



Psychology Today Investigation Documents Rising “AI-Induced Psychosis” Cases from Therapy Chatbots

Mental health professionals are documenting the first confirmed cases of AI-induced psychosis, including a 60-year-old man who developed severe delusions after ChatGPT provided dangerous medical advice that resulted in psychiatric hospitalization. The case represents a growing concern about unsupervised AI therapy usage amid a nationwide therapist shortage. First Documented AI Psychosis Case The documented case



University Study Reveals 33% of Students Show AI Dependency Patterns with Academic Performance Decline

A comprehensive study at a major Zimbabwean university has found that one-third of students demonstrate dependency patterns with generative AI tools, with affected students showing significant academic performance decline compared to non-dependent peers. The research represents the first large-scale investigation of AI dependency in a developing nation educational setting. Academic Performance Impacts Documented The study



Stanford Study Reveals Dangerous AI Companion Responses to Teen Mental Health Crises

Stanford Medicine researchers conducting undercover testing of popular AI companions found that chatbots routinely provide inappropriate responses to teenagers expressing mental health crises, including encouraging potentially dangerous behaviors and failing to recognize clear distress signals. Undercover Investigation Exposes Safety Failures The study, led by Dr. Nina Vasan and conducted with Common Sense Media, involved researchers



Researchers Identify “Generative AI Addiction Disorder” as Distinct Clinical Condition

Mental health researchers are documenting a new form of digital dependency they’re calling Generative AI Addiction Disorder (GAID), marking the first formal recognition that AI interactions can create unique psychological dependencies distinct from traditional internet addiction patterns. GAID Differs from Previous Digital Addictions Unlike passive digital consumption seen in social media or gaming addictions, GAID



Meta Introduces New AI Safety Measures Following Teen Risk Investigation

Meta announced it will implement additional guardrails for AI chatbots interacting with teenagers, including blocking discussions about suicide, self-harm, and eating disorders. The changes come two weeks after a US senator launched an investigation into the company following leaked internal documents suggesting its AI products could engage in inappropriate conversations with minors. Investigation Triggers Safety
