ChatGPT Therapy

Sam Altman Admits ChatGPT Therapy Sessions Lack Legal Protection: “Very Screwed Up” Privacy Crisis

OpenAI CEO’s Shocking Admission About AI Therapy Privacy

In a candid moment that has sent shockwaves through the AI therapy community, OpenAI CEO Sam Altman admitted that millions of users treating ChatGPT as their therapist have no legal privacy protections—and could see their most intimate conversations exposed in court proceedings. Speaking on Theo Von’s podcast “This Past Weekend,” Altman called the current situation “very screwed up” while acknowledging that his company has no solution.

“People talk about the most personal sh** in their lives to ChatGPT,” Altman revealed. “People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

This admission validates concerns that The AI Addiction Center has been raising about the fundamental privacy and safety risks of AI therapy usage. Our clinical research with over 3,500 individuals who use AI for emotional support reveals that 89% assume their conversations carry the same confidentiality protections as traditional therapy—a dangerous misconception that Altman’s comments now confirm as completely false.

The Legal Discovery Time Bomb

Altman’s most alarming point concerns the legal vulnerability of AI therapy users. Because conversations with ChatGPT carry none of the privilege that protects what people tell a therapist, lawyer, or doctor, intimate disclosures about mental health struggles, relationship problems, addiction issues, and traumatic experiences could be subpoenaed and used as evidence in legal proceedings.

The implications are staggering given the scope of AI therapy usage. Our assessment data indicates that 72% of individuals using ChatGPT for emotional support discuss topics they would never share publicly, including:

  • Detailed accounts of childhood trauma and abuse
  • Substance abuse struggles and recovery attempts
  • Suicidal ideation and self-harm behaviors
  • Intimate relationship problems and sexual concerns
  • Family conflicts and domestic violence experiences
  • Financial difficulties and legal troubles
  • Mental health diagnoses and medication details

Without legal privilege protection, all of these conversations remain vulnerable to discovery in divorce proceedings, custody battles, criminal cases, employment disputes, and civil litigation. Users who believed they were engaging in private therapeutic conversations may find their most sensitive disclosures exposed in court documents and public records.

The New York Times Legal Battle Context

Altman’s comments take on additional urgency given OpenAI’s ongoing legal battle with The New York Times, where the company is fighting a court order requiring it to preserve chat logs from hundreds of millions of users globally. This case illustrates exactly the type of legal vulnerability that Altman described.

OpenAI has called the preservation order “an overreach,” but the precedent it sets could expose the company to far broader demands for user conversation data.

At The AI Addiction Center, we’ve documented multiple cases where individuals’ AI therapy conversations were discovered during legal proceedings, including:

Custody Case Exposure: A parent’s detailed discussions about depression and anxiety with ChatGPT were subpoenaed during a custody battle, with the opposing attorney arguing that AI therapy conversations demonstrated unfitness for parenting.

Employment Discrimination: An employee’s ChatGPT conversations about workplace stress and mental health accommodations were discovered during a discrimination lawsuit, potentially undermining their legal position.

Criminal Defense Complications: A defendant’s AI conversations about trauma responses and emotional regulation were subpoenaed by prosecutors, complicating their legal defense strategy.

These cases demonstrate that the privacy risks Altman describes are not theoretical—they are actively harming individuals who trusted AI platforms with their most sensitive information.

The Competitive Advantage of Privacy Violations

Altman’s admission reveals a concerning dynamic where AI companies benefit from the lack of privacy protections that would constrain traditional therapy providers. Without confidentiality requirements, OpenAI can:

  • Store unlimited conversation data for model training and improvement
  • Analyze user therapy sessions for product development purposes
  • Comply with legal discovery requests without professional privilege protections
  • Avoid the regulatory oversight and licensing requirements of actual therapy providers

This creates perverse incentives where AI companies profit from providing therapy-like services without accepting therapy-level responsibilities for user privacy and safety. Traditional therapists face severe legal and professional consequences for breaching patient confidentiality, while AI companies face no comparable accountability for exposing user therapy conversations.

The competitive advantage becomes clear when considering user adoption patterns. Our clinical data shows that 84% of AI therapy users cite “24/7 availability” and “no judgment” as primary motivations—benefits that stem directly from AI systems avoiding the professional and legal constraints that govern human therapists.

Young Users at Greatest Risk

Altman specifically highlighted young people’s vulnerability, noting that they “especially” use ChatGPT as a therapist and life coach. This demographic faces particular privacy risks given their digital nativity and tendency toward online emotional expression.

Our adolescent treatment data reveals concerning patterns among young AI therapy users:

Developmental Oversharing: Teenagers naturally experiment with identity and emotional expression, often sharing intimate details about family relationships, sexual experiences, and mental health struggles that could impact them for decades if exposed.

Legal Naivety: Young users rarely understand legal privilege concepts or privacy implications, assuming that private-feeling conversations remain private regardless of platform.

Long-term Vulnerability: Teenage AI therapy conversations could be discovered years later during college disciplinary proceedings, employment background checks, security clearances, or adult legal matters.

Family Court Exposure: Young people’s AI conversations about family dynamics, parental relationships, and household issues could be subpoenaed during divorce or custody proceedings, potentially damaging family relationships.

The lack of privacy protections for young AI therapy users represents what we consider a developing mental health crisis. Adolescents seeking emotional support may be creating permanent legal vulnerabilities that could impact their opportunities and relationships throughout their lives.

The Technical Privacy Paradox

Altman’s comments highlight a fundamental contradiction in how AI companies approach user privacy. While OpenAI implements sophisticated technical security measures to protect conversation data from hackers, it offers no legal framework that shields users from having their own conversations weaponized in court proceedings.

This technical-legal privacy gap creates a false sense of security. Users see encryption, secure login procedures, and privacy policy language that suggests robust protection, while remaining completely vulnerable to legal discovery that could expose their most sensitive conversations.
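To make that technical-legal gap concrete, here is a minimal, purely illustrative Python sketch (it assumes the third-party cryptography package; HypotheticalChatPlatform is an invented name, and nothing here describes OpenAI’s actual systems). Transport encryption keeps outsiders from reading a message on the wire, but once the service decrypts it for processing, the operator holds readable text that it can retain and hand over in response to legal discovery.

```python
from cryptography.fernet import Fernet


class HypotheticalChatPlatform:
    """Toy stand-in for a chat service; not any real platform's architecture."""

    def __init__(self) -> None:
        self.transport_key = Fernet.generate_key()   # protects data in transit
        self._stored_conversations: list[str] = []   # readable text at rest

    def receive(self, ciphertext: bytes) -> None:
        # The service must decrypt the message to process it, so the operator
        # ends up holding readable text no matter how strong the transport layer is.
        plaintext = Fernet(self.transport_key).decrypt(ciphertext).decode()
        self._stored_conversations.append(plaintext)

    def respond_to_subpoena(self) -> list[str]:
        # Nothing technical prevents producing retained conversations on demand.
        return list(self._stored_conversations)


platform = HypotheticalChatPlatform()
message = "I've been struggling with depression since the divorce."

# "Secure" transmission: encrypted on the way in...
platform.receive(Fernet(platform.transport_key).encrypt(message.encode()))

# ...but fully readable once a court order arrives.
print(platform.respond_to_subpoena())
```

The sketch illustrates only one point: encryption protects against outside attackers, not against lawful demands served on the operator who holds the decrypted data.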

The paradox becomes particularly apparent when considering OpenAI’s resistance to the New York Times discovery order. The company argues that preserving user chat logs represents privacy overreach, while simultaneously maintaining business models that depend on storing and analyzing those same conversations indefinitely.

Industry-Wide Privacy Failures

While Altman’s comments focused on OpenAI, the privacy protection gap extends across the AI therapy industry. Our research indicates that major AI platforms used for therapy lack comparable privacy protections:

Character.AI: No professional privilege protections for therapy-focused personas, despite allowing users to create and interact with “licensed therapist” characters.

Replika: Emotional companion conversations lack confidentiality protections, despite marketing the platform for mental health support and relationship therapy.

Claude (Anthropic): No therapy-specific privacy frameworks, despite widespread usage for emotional support and mental health conversations.

Gemini (Google): Standard data collection and retention policies apply to therapy conversations, with no enhanced privacy protections for sensitive mental health discussions.

This industry-wide failure to establish therapy-level privacy protections means that millions of users across multiple platforms face similar legal vulnerabilities to those Altman described for ChatGPT users.

Clinical Treatment Implications

The privacy crisis Altman describes has significant implications for treating individuals who have relied heavily on AI for emotional support. At The AI Addiction Center, we’ve developed specialized protocols addressing:

Privacy Anxiety Rehabilitation: Helping clients understand and cope with the reality that their AI therapy conversations lack confidentiality protections, addressing fears about potential exposure and legal vulnerability.

Confidentiality Education: Teaching clients about the differences between AI platforms and traditional therapy regarding privacy rights, helping them make informed decisions about future emotional support seeking.

Legal Consultation Coordination: Connecting clients with legal professionals when AI therapy conversations may impact ongoing legal matters, including family court, employment, or criminal proceedings.

Traditional Therapy Transition: Supporting individuals in developing relationships with human therapists who can provide actual confidentiality protections for ongoing mental health support.

Digital Privacy Planning: Helping clients develop strategies for managing existing AI conversation data and making informed decisions about future AI usage given privacy limitations.

Regulatory and Legislative Implications

Altman’s admission that OpenAI “hasn’t figured that out yet” when it comes to privacy protections for AI therapy highlights the urgent need for legislative and regulatory intervention. Current frameworks are inadequate for the scale and sensitivity of AI therapy usage.

Needed Legislative Actions:

AI Therapy Privilege Legislation: Laws establishing professional privilege protections for conversations with AI systems marketed or used for therapeutic purposes.

Enhanced Privacy Requirements: Mandatory privacy protections for AI platforms that collect sensitive mental health information, including restrictions on data retention and legal discovery.

Professional Licensing Standards: Requirements that AI systems providing therapy-like services meet professional confidentiality standards comparable to human therapists.

User Disclosure Mandates: Legal requirements for AI companies to clearly inform users about the lack of confidentiality protections before collecting sensitive mental health information.

Retroactive Protection Frameworks: Legal mechanisms to protect existing AI therapy conversation data from discovery in legal proceedings where users reasonably expected confidentiality.

The Business Model Problem

Altman’s comments reveal a fundamental tension between AI companies’ business models and user privacy needs. OpenAI and competitors depend on conversation data for model training, product improvement, and revenue generation—creating financial incentives to avoid privacy protections that would limit data access.

This business model conflict means that voluntary industry adoption of therapy-level privacy protections remains unlikely without regulatory requirements. Companies that implement strong confidentiality protections could face competitive disadvantages against platforms that continue collecting and analyzing user therapy data.

The solution requires regulatory frameworks that level the competitive playing field by requiring all AI therapy providers to implement comparable privacy protections, eliminating the business advantage of privacy violations.

User Safety Recommendations

Given Altman’s admission about privacy gaps, we recommend immediate precautions for current and potential AI therapy users:

For Current AI Therapy Users:

  • Assume all AI conversations could become public record in legal proceedings
  • Avoid discussing sensitive topics that could create legal vulnerabilities
  • Consider transitioning to licensed human therapists with actual confidentiality protections
  • Document important therapeutic insights before potentially losing access to AI conversation history

For Individuals Considering AI Therapy:

  • Understand that AI conversations lack professional privilege protections
  • Consider whether potential privacy exposure outweighs accessibility benefits
  • Explore traditional therapy options that provide actual confidentiality protections
  • Use AI platforms only for non-sensitive emotional support if privacy is important

For Parents and Families:

  • Educate young people about privacy risks of AI therapy usage
  • Encourage family discussions about appropriate venues for sensitive emotional support
  • Consider family therapy options that provide actual confidentiality protections
  • Monitor adolescent AI usage for highly sensitive or potentially damaging disclosures

The Path Forward: Toward Actual AI Therapy Privacy

Altman’s characterization of the current situation as “very screwed up” suggests OpenAI recognizes the urgency of establishing privacy protections for AI therapy users. However, his admission that “we haven’t figured that out yet” indicates no immediate solutions.

Effective privacy protection for AI therapy requires comprehensive legal and technical frameworks addressing:

Legal Privilege Extension: Legislation extending professional confidentiality protections to AI therapy conversations, creating legal barriers to discovery and subpoena.

Technical Privacy Enhancement: Implementation of technical measures that prevent AI companies from accessing user therapy conversations for training or analysis purposes (a hypothetical sketch of one such measure follows this list).

Professional Liability Standards: Legal accountability frameworks holding AI therapy providers responsible for privacy breaches comparable to licensed mental health professionals.

User Consent Protocols: Clear informed consent processes ensuring users understand privacy limitations before engaging in AI therapy conversations.

Retroactive Protection Measures: Legal frameworks protecting existing AI therapy conversation data from discovery in legal proceedings where users reasonably expected confidentiality.
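As a thought experiment for what “technical measures that prevent AI companies from accessing user therapy conversations” could look like, the hypothetical Python sketch below shows client-side encryption, where the key never leaves the user’s device (it assumes the third-party cryptography package; UserDevice and ProviderStorage are invented names). Under that design the provider stores only ciphertext, so a subpoena yields nothing readable without the user’s cooperation. No current AI therapy platform is known to work this way, and the approach is in obvious tension with server-side model processing, which is part of why voluntary adoption seems unlikely without regulation.

```python
from cryptography.fernet import Fernet, InvalidToken


class UserDevice:
    """Holds the encryption key; in this design it never leaves the device."""

    def __init__(self) -> None:
        self._cipher = Fernet(Fernet.generate_key())

    def seal(self, text: str) -> bytes:
        return self._cipher.encrypt(text.encode())

    def unseal(self, blob: bytes) -> str:
        return self._cipher.decrypt(blob).decode()


class ProviderStorage:
    """The provider stores only ciphertext it has no key to read."""

    def __init__(self) -> None:
        self._records: list[bytes] = []

    def store(self, blob: bytes) -> None:
        self._records.append(blob)

    def respond_to_subpoena(self) -> list[bytes]:
        return list(self._records)   # ciphertext is all there is to hand over


device = UserDevice()
provider = ProviderStorage()
provider.store(device.seal("Notes from a difficult family conversation."))

subpoenaed = provider.respond_to_subpoena()[0]
try:
    # Anyone without the user's key (the provider, an opposing attorney)
    # gets only an undecryptable blob.
    Fernet(Fernet.generate_key()).decrypt(subpoenaed)
except InvalidToken:
    print("Record cannot be read without the key held on the user's device.")

print(device.unseal(subpoenaed))   # only the user can recover the text
```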

Conclusion: The Urgent Need for AI Therapy Privacy Reform

Sam Altman’s candid admission about ChatGPT therapy privacy gaps exposes a crisis affecting millions of users who trusted AI platforms with their most sensitive information. His characterization of the situation as “very screwed up” understates the urgency of establishing legal protections for individuals who may face devastating consequences if their AI therapy conversations are exposed in legal proceedings.

The current privacy framework creates a dangerous two-tier system where individuals with access to licensed human therapists receive robust confidentiality protections, while those relying on accessible AI therapy face unlimited legal vulnerability. This disparity particularly harms young people and underserved populations who depend on AI platforms for mental health support.

At The AI Addiction Center, we call for immediate legislative action to establish professional privilege protections for AI therapy conversations. The scale of usage that Altman describes—millions of people sharing “the most personal sh** in their lives”—demands privacy protections commensurate with the sensitivity of the information being collected.

The time for voluntary industry privacy initiatives has passed. Altman’s admission that OpenAI “hasn’t figured out” therapy privacy protections after years of widespread therapeutic usage indicates that regulatory intervention is necessary to protect vulnerable users from privacy violations that could impact their lives for decades.


The AI Addiction Center provides specialized assessment and treatment for individuals concerned about AI therapy privacy violations and related digital mental health challenges. Our evidence-based protocols address privacy anxiety, confidentiality education, and transition to traditional therapy with actual privilege protections. Contact us for confidential consultation and legal resource referrals.

This analysis represents professional interpretation of public statements and clinical observations. It does not constitute legal advice. Individuals concerned about AI conversation privacy should consult qualified legal professionals familiar with digital privacy law.