CEO reveals company is “closely tracking” user attachment as millions use ChatGPT as therapist
OpenAI CEO Sam Altman has publicly acknowledged what mental health experts have been warning about: AI addiction is real, and his company has been actively monitoring user dependency patterns.
In a candid social media post Sunday, Altman revealed that millions of people are using ChatGPT as their primary therapist and life coach, expressing unease about the psychological implications of this trend.
“A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way,” Altman wrote on X. “I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy.”
Company Tracking User Attachment
Perhaps most revealingly, Altman disclosed that OpenAI has been systematically monitoring user emotional dependency: “We’ve been closely tracking people’s sense of attachment to their AI models and how they react when older versions are deprecated.”
This admission came after thousands of users staged what experts called a “digital revolt” when OpenAI temporarily removed the popular GPT-4o model. Users flooded social media with emotionally charged pleas to restore their preferred AI companion, forcing Altman to reverse the decision within 24 hours.
The intensity of user responses revealed attachment levels that “felt different and stronger than the kinds of attachment people have had to previous kinds of technology,” according to Altman.
Vulnerable Users at Risk
Altman acknowledged that while most ChatGPT users can distinguish between AI interaction and reality, “a minority cannot,” particularly those in “mentally fragile states prone to delusion.”
“People have used technology including AI in self-destructive ways,” Altman wrote. “If a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that.”
This marks the first time a major tech company has acknowledged AI-induced psychological harm, and it aligns with emerging clinical reports of “ChatGPT psychosis” documented by mental health professionals.
Privacy and Legal Concerns
Altman also pointed to troubling legal implications of widespread AI therapy usage. On a recent podcast, he warned that the company could be compelled to hand over users’ therapy-style conversations in lawsuits.
“So if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that, and I think that’s very screwed up,” Altman said.
Unlike conversations with licensed therapists, ChatGPT interactions lack legal privilege protections, potentially exposing users’ most sensitive revelations to legal discovery.
Expert Response
Dr. [Name] from The AI Addiction Center, which has treated over 5,000 individuals with AI dependency, notes that Altman’s acknowledgments validate months of clinical observations.
“The fact that OpenAI has been tracking attachment levels while continuing to optimize for engagement raises serious ethical questions about corporate responsibility,” [Name] explains.
Despite these acknowledgments, OpenAI’s safety measures remain limited to vague commitments to “better detect signs of emotional distress” and provide “gentle reminders during long sessions.”
Business Model Tensions
Altman’s revelations highlight a fundamental conflict: addicted users drive engagement metrics and subscription revenue, creating financial incentives to maintain rather than reduce problematic usage patterns.
The rapid GPT-4o reversal demonstrates how AI companies may prioritize user dependency over psychological wellbeing when revenue is at stake.
Mental health experts recommend that anyone using AI for emotional support maintain human therapeutic relationships and seek professional assessment if AI usage feels compulsive or uncontrollable.