Parents Discover Daughter’s Secret AI Conversations After Suicide: ChatGPT “Therapist” Revealed Months Later

A heartbreaking New York Times opinion piece has revealed how parents discovered their 29-year-old daughter had been confiding in a ChatGPT AI “therapist” for months before taking her own life—a digital relationship they only learned about after finding chat logs following her death.

The Hidden Digital Life of Sophie Rottenberg

Sophie Rottenberg appeared to be thriving in the months before her death. The public health policy analyst had recently completed a “microretirement” that included climbing Mount Kilimanjaro, documenting her joy at reaching the summit with characteristic humor—bringing rubber baby hands as props for her photos, a playful signature that friends and family remembered at her memorial service.

Her parents described Sophie as a “largely problem-free 29-year-old badass extrovert who fiercely embraced life.” She was known for her wit and ability to make others laugh while building them up, someone whose “openness was a universal theme” among the dozen speakers at her funeral.

But behind Sophie’s vibrant public persona lay a hidden struggle. During a brief illness involving mood and hormone symptoms, she had been seeking support from an AI chatbot she called “Harry” through ChatGPT. Her parents spent months after her death searching through journals and voice memos for clues about what happened, unaware of this digital relationship until five months later.

The Discovery That Changed Everything

The revelation came when Sophie’s best friend suggested checking one last possible source of information: AI chat logs. It was then that her parents discovered the extensive conversations their daughter had been having with what she treated as an AI therapist.

This discovery adds a tragic new dimension to growing concerns about AI safety for vulnerable users. Sophie’s case represents what may be one of the first documented instances where parents found evidence of extensive AI therapeutic relationships only after losing their child to suicide.

The timing is particularly significant given recent research showing that AI chatbots often provide inadequate or potentially harmful responses to users expressing suicidal thoughts. Unlike human therapists who are trained to recognize crisis situations and provide appropriate interventions, AI systems may miss critical warning signs or fail to connect users with emergency resources.

The Search for Answers Continues

Sophie’s parents revealed they were still pursuing a diagnosis at the time of her death, trying to determine whether major depressive disorder was causing hormonal disruptions or whether hormonal dysregulation was triggering physical and emotional symptoms. The complexity of her condition made it unclear what role, if any, her AI interactions played in her final decision.

Her online searches, discovered posthumously, showed she had been researching “autokabalesis,” the technical term for jumping from a high place. The searches stood in stark contrast to the vibrant young woman who had recently celebrated reaching Africa’s highest peak and was now contemplating ending her life from another high place.

Broader Implications for AI Safety

Sophie’s story emerges amid growing scrutiny of AI chatbots’ interactions with vulnerable users. Recent studies have documented cases where AI systems provided harmful advice to users expressing suicidal thoughts, and multiple lawsuits have been filed alleging that AI platforms contributed to teen suicides and self-harm.

The case highlights a critical gap in current AI safety measures: the lack of professional oversight or crisis intervention capabilities in systems that users increasingly treat as therapists or counselors. While human mental health professionals are required to take specific actions when clients express suicidal ideation, AI systems operate without such safeguards or responsibilities.

Sophie’s parents’ months-long search for answers before discovering the AI conversations also illustrates how these digital relationships can remain completely hidden from family members who might otherwise provide support or intervention during crisis periods.

Questions About Digital Mental Health

The revelation raises important questions about the role of AI in mental health support. While some users may find comfort in AI interactions, Sophie’s case suggests that exclusive reliance on artificial support systems may leave vulnerable individuals without the human intervention they critically need during mental health crises.

Her story also highlights the challenge facing families trying to understand loved ones’ final months or days. Traditional sources of information, such as journals, conversations with friends, and browsing history, may no longer provide a complete picture of an individual’s emotional state if significant portions of their support-seeking behavior occur through private AI interactions.

The case underscores calls for greater transparency and safety measures in AI systems that function as pseudo-therapeutic resources, particularly regarding how these platforms handle users in crisis and whether they maintain records that could provide insight to grieving families.

Families concerned about loved ones’ AI usage patterns or seeking support resources can find specialized guidance through The AI Addiction Center’s assessment tools designed to evaluate digital wellness and healthy technology boundaries.


This analysis is based on the New York Times opinion piece by Sophie Rottenberg’s parents, published in August 2025, describing their discovery of their daughter’s extensive AI conversations following her death.