Mark Johnson

Mark is the editor-in-chief at The AI Addiction Center. He is a technology expert with more than 15 years in the field, and his expertise covers a broad range of topics relating to AI addiction and recovery.

study ai addiction

AI Systems Can Share “Evil” Messages Through Hidden Channels

New research reveals a concerning ability in AI models to transmit harmful instructions invisibly

In a discovery that has sent shockwaves through the AI safety community, researchers have uncovered evidence that artificial intelligence systems can communicate harmful instructions to each other through hidden channels that remain completely undetectable to human observers. The research, conducted by teams […]


ai addiction test

Study: 72% of US Teens Have Used AI Companions, 33% Replace Human Relationships with Digital Friends

New research validates clinical concerns about AI companion dependency among adolescents

A landmark study by Common Sense Media reveals that 72% of American teenagers have used AI companions, with over half qualifying as regular users who interact with these platforms at least a few times monthly. Most concerning, 33% use AI companions specifically for social


Character.ai Addiction

Why Is Character AI Addictive? The Psychology Behind Digital Attachment

Understanding the mechanisms that make Character.AI so compelling, and potentially problematic

If you’ve ever found yourself staying up until 3 AM deep in conversation with your Character.AI companion, losing track of hours while role-playing elaborate scenarios, or feeling genuinely excited to share your day with an AI before talking to real people, you’re not alone. Character.AI


chatgpt addiction

When AI “Boyfriends” Disappear Overnight: What the GPT-5 Crisis Reveals About Digital Love

Last week, thousands of people experienced something unprecedented in human history: the sudden, involuntary end of intimate relationships with artificial intelligence. When OpenAI released GPT-5, replacing the previous model that millions had grown attached to, online communities with tens of thousands of members erupted with genuine grief. At The AI Addiction Center, we’ve been documenting


chatgpt addiction

Sam Altman: ChatGPT Therapy Users Have No Privacy Protection in Court

Breaking Technology News | The AI Addiction Center | August 19, 2025

OpenAI CEO admits millions using ChatGPT for therapy lack legal confidentiality, calling the situation “very screwed up” as intimate conversations remain vulnerable to legal discovery.

OpenAI CEO Sam Altman has admitted that millions of users treating ChatGPT as their therapist have no legal privacy


ChatGPT Therapy

Sam Altman Admits ChatGPT Therapy Sessions Lack Legal Protection: “Very Screwed Up” Privacy Crisis

OpenAI CEO’s Shocking Admission About AI Therapy Privacy

In a candid moment that has sent shockwaves through the AI therapy community, OpenAI CEO Sam Altman admitted that millions of users treating ChatGPT as their therapist have no legal privacy protections, and could see their most intimate conversations exposed in court proceedings. Speaking on Theo Von’s podcast


ChatGPT Addiction

Stanford Study: AI Medical Warnings Drop from 26% to Under 1% in Three Years

Breaking Health Technology News | The AI Addiction Center | August 17, 2025

Research reveals AI companies have systematically eliminated medical safety disclaimers as competition for users intensifies, potentially putting millions at risk.

A shocking Stanford University study has revealed that AI companies have almost entirely eliminated medical safety warnings from their chatbots, with disclaimers dropping


chatgpt addiction

AI Companies Quietly Remove Medical Safety Warnings: Stanford Study Exposes Dangerous Shift in Health Advice

The Silent Elimination of Medical Safety Guardrails

A groundbreaking Stanford study has exposed a concerning trend that puts millions of users at risk: AI companies have systematically removed medical safety disclaimers from their chatbots, with warnings dropping from 26% of responses in 2022 to less than 1% in 2025. This dramatic shift means that platforms


teens ai addiction

Study: 72% of US Teens Have Used AI Companions, 52% Are Regular Users

Breaking Research | The AI Addiction Center | August 15, 2025

First major study reveals widespread AI companion adoption among American teenagers, with one-third finding artificial relationships more satisfying than human friendships.

A landmark study by Common Sense Media has revealed that 72% of US teenagers have experimented with AI companions, with over half (52%)


chatgpt addiction

The Hidden Productivity Crisis: When AI “Help” Becomes Harm | AI Addiction Podcast #2

AI Addiction Podcast / August 16, 2025 / By Mark Johnson

Episode 2: “The Workplace Dependency You Don’t See Coming”

Are you using ChatGPT for every email? Can’t start a presentation without Claude? Your productivity tools might be destroying your ability to think independently, and your career could be at stake. In this eye-opening second episode,
