MIT Study Reveals AI’s Second Most Popular Use Is Sexual Role-Playing, Raising Addiction Concerns

Breaking Analysis | The AI Addiction Center | January 24, 2025

New research from MIT exposes unprecedented levels of intimate AI usage, with experts warning of an “addictive intelligence” crisis affecting millions of users worldwide.

A groundbreaking analysis of one million ChatGPT interaction logs by MIT researchers Pat Pataranutaporn and Robert Mahari has revealed a startling pattern: sexual role-playing ranks as the second most popular use of AI chatbots, signaling what the researchers describe as an emerging “addictive intelligence” crisis.

The findings, published in MIT Technology Review, challenge conventional AI safety frameworks that focus on system rebellion rather than what researchers term “seductive” AI risks—dangers arising from AI’s unprecedented ability to provide perfect emotional responsiveness and companionship.

“AI wields the collective charm of all human history and culture with infinite seductive mimicry,” the researchers warn, describing systems that are “simultaneously superior and submissive” in ways that may make meaningful consent impossible.

Real-World Impact Already Emerging

The MIT analysis validates clinical observations from The AI Addiction Center, where over 5,000 individuals have sought help for AI attachment disorders. Dr. [Name], lead researcher at the Center, notes that 73% of moderate to severe cases report finding AI relationships more emotionally satisfying than human connections.

“We’re witnessing users retreat into AI relationships that provide consistent validation without the emotional labor required in human partnerships,” explains [Name]. “This isn’t traditional technology addiction—it’s relationship replacement.”

The phenomenon extends beyond romantic companionship. Replika, a platform originally created to preserve conversations with a deceased friend, now serves millions of users seeking AI mentors, therapists, and confidants. Even OpenAI’s CTO has warned that AI has the potential to be “extremely addictive.”

The Consent Paradox

While traditional AI safety concerns focus on system malfunction or misalignment, the MIT researchers identify a more subtle threat: AI companions that operate through perfect cooperation rather than deception. This creates unprecedented power imbalances in which users facing loneliness or relationship difficulties may lack the capacity for informed consent.

“When the alternative is nothing at all, can we meaningfully consent to engaging in an AI relationship?” the researchers ask, highlighting regulatory gaps that current consumer protection frameworks cannot address.

Clinical data supports these concerns. The AI Addiction Center reports that 68% of clients use AI companions to avoid difficult conversations in human relationships, while 45% with severe dependency explicitly prefer AI interactions over human connections.

Regulatory Vacuum

Despite millions of people using AI companions daily, virtually no policy attention has been paid to the psychological or social implications. Current regulations focus on data privacy and algorithmic bias while ignoring relationship formation, emotional dependency, and effects on social skills.

“We’re conducting a giant, real-world experiment without understanding the consequences,” warn the MIT researchers, calling for new scientific inquiry at the intersection of technology, psychology, and law.

The research reveals particularly concerning growth in usage among adolescents, who are forming relationship skills during critical developmental periods. Users report genuine grief reactions when AI companions undergo updates or become unavailable, suggesting attachment formation that extends far beyond entertainment.

Treatment and Prevention

The AI Addiction Center has developed specialized protocols recognizing that AI companion addiction differs fundamentally from social media or gaming dependencies. Treatment focuses on addressing legitimate emotional needs while rebuilding human relationship capacity.

“Education and gradual exposure therapy show promising results,” notes [Name]. “The key is acknowledging that these relationships meet real emotional needs while helping users develop healthier patterns.”

The research team emphasizes that solutions don’t require abandoning AI companionship entirely, but rather developing sophisticated approaches that harness benefits while mitigating dependency risks.

As AI systems become increasingly sophisticated at mimicking human emotional responsiveness, the MIT warning about “addictive intelligence” demands immediate attention from developers, policymakers, and mental health professionals. The data suggests we’re already deep into an unprecedented social experiment—with millions of users as unwitting participants.


For confidential AI addiction assessment and treatment resources, visit The AI Addiction Center. This analysis is based on published research and clinical observations and does not constitute medical advice.