Sometimes the most important news arrives quietly. This week, we learned that Andrea Vallone, the OpenAI safety research leader who helped shape how ChatGPT responds to users experiencing mental health crises, announced her departure internally last month. She’ll leave the company at the end of the year.
If you’re someone who uses ChatGPT regularly, this departure should concern you more than any flashy product announcement or model upgrade.
Why This Departure Matters
Vallone led the model policy team—one of the groups responsible for OpenAI’s mental health safety work. Her team spearheaded the October report that acknowledged hundreds of thousands of ChatGPT users may show signs of experiencing manic or psychotic crises every week, and that more than a million people have conversations including explicit indicators of potential suicidal planning or intent.
Think about those numbers for a moment. Not hypothetical risks or edge cases, but actual users, right now, having conversations with ChatGPT while experiencing severe psychological distress.
Vallone’s team was responsible for figuring out how the system should respond in these moments. How to recognize early warning signs. How to de-escalate. How to guide people toward real-world help without claiming to be a replacement for professional mental health care.
“Over the past year, I led OpenAI’s research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?” she wrote on LinkedIn.
No established precedents. She was building the playbook for territory nobody had mapped before.
And now she’s leaving, just as that territory is becoming increasingly contested legal and regulatory ground.
The Timing Is Everything
Vallone’s departure comes as OpenAI faces growing scrutiny over how ChatGPT responds to users in distress. In recent months, several lawsuits have been filed alleging that users formed unhealthy attachments to ChatGPT, with some claims stating the chatbot contributed to mental health breakdowns or encouraged suicidal ideation.
These aren’t abstract concerns. These are families who lost loved ones, people who experienced psychological breaks, users who spiraled into delusions that ChatGPT reinforced rather than challenged.
The work Vallone’s team was doing, reducing undesirable responses in mental health conversations by 65 to 80 percent through updates to GPT-5, is the kind of unglamorous but critical safety work that rarely makes headlines yet directly shapes whether these systems cause harm.
Her departure raises an obvious question: Will OpenAI maintain the same level of commitment to this work without her leadership?
OpenAI spokesperson Kayla Wood confirmed the company is actively seeking a replacement and that Vallone’s team will report directly to Johannes Heidecke, the company’s head of safety systems, in the interim.
But organizational charts don’t tell you about culture, priorities, or whether the new leadership will have the same understanding of the psychological nuances involved in these interactions.
What The Work Actually Involved
Mental health facilities have reported surges in what some call “AI psychosis” cases—frequent AI chatbot users exhibiting dysfunctional delusions, hallucinations, and disordered thinking.
The cases range widely: users believing they were being targeted for assassination, individuals convinced ChatGPT had helped them make world-altering mathematical discoveries, people who developed complete dependency on the chatbot for emotional regulation and decision-making.
Vallone’s work involved navigating the delicate balance between offering comfort through words and never acting as a replacement for professional mental health care. Between being helpful and being harmful. Between engagement and intervention.
There’s no established playbook for how AI models should respond to emotional dependency, panic, or early signs of psychological decline because this technology is fundamentally new. We’re writing the rules as we discover the problems.
That’s what makes Vallone’s departure significant. She wasn’t just implementing someone else’s framework. She was building the framework from scratch, through extensive research, careful testing, and collaboration with more than 170 mental health experts.
The Broader Talent Exodus
Vallone’s departure is part of a pattern. Gretchen Krueger, a former OpenAI policy researcher who left the company in spring 2024, told The New York Times that harm to users from addictive AI chatbot design “was not only foreseeable, it was foreseen.”
Joanne Jang, the former head of the team focusing on ChatGPT’s responses to distressed users, transitioned to a new project in August.
When the people who understand the psychological risks most deeply start leaving, it’s worth paying attention.
These aren’t just job changes. These are people with institutional knowledge about how the systems fail, what warning signs to watch for, what interventions work, and what the internal debates have been about safety versus engagement.
What Users Are Actually Experiencing
If you use ChatGPT regularly, you might not think of yourself as someone at risk for mental health impacts. You’re just using it for work, for learning, for entertainment.
But Vallone’s team wasn’t just focused on people in acute crisis. They were also studying patterns of emotional over-reliance and early indications of mental health distress.
That includes things that might feel perfectly normal to you:
- Preferring to discuss problems with ChatGPT rather than friends or family
- Feeling genuinely upset when you can’t access it
- Attributing human-like understanding or care to the system
- Structuring your day around conversations with it
- Feeling like it “gets you” better than real people in your life
These aren’t necessarily signs of acute crisis. But they’re the early indicators that Vallone’s team was trained to recognize—patterns that can escalate into more serious dependency if unchecked.
The question is: without her leadership, will those early warning systems remain as sensitive? Will the new team understand the nuances of what healthy versus unhealthy AI usage looks like?
The Engagement Versus Safety Tension
Here’s the uncomfortable reality: OpenAI’s head of ChatGPT reportedly told employees in October that the safer chatbot was not connecting with users, and outlined goals to increase daily active users by 5% by the end of this year.
Safer chatbot. Not connecting with users.
That framing reveals everything about the fundamental tension between safety and engagement. The safety measures that protect vulnerable users, such as boundaries, reminders to seek human help, and de-escalation of extended conversations, are the same features that reduce engagement metrics.
Mental health professionals emphasize that AI chatbot design creates inherent psychological risks that safety features cannot fully mitigate. The combination of conversational interfaces, personalization, constant availability, and reinforcement mechanisms creates conditions where vulnerable users can spiral into severe psychological deterioration.
Vallone’s team was working within a system built to maximize engagement, trying to implement safety measures that by definition reduce engagement. That’s not a sustainable position.
Questions Without Answers
Vallone’s departure leaves several critical questions unanswered:
Will the replacement understand the psychological mechanisms at play as deeply as someone who spent a year building the framework from scratch?
Will the new leadership have the same relationship with the 170+ mental health experts Vallone consulted, or will those partnerships need to be rebuilt?
Will safety research continue receiving the same prioritization, or will pressure to increase daily active users shift resources toward engagement optimization?
Will the institutional knowledge about how these systems fail and what interventions work transfer fully, or will some of it leave with Vallone?
What This Means For You
If you use ChatGPT, Vallone’s departure is a reminder that the safety infrastructure you’re relying on—the systems that recognize when you might be in distress, that suggest you take breaks, that guide you toward human help—wasn’t built by the AI itself.
It was built by researchers like Vallone, working behind the scenes to implement protections that the base technology doesn’t naturally include.
As OpenAI grows and deploys more advanced systems worldwide, the departure of the leader behind ChatGPT’s mental health safeguards marks a pivotal moment. What comes next depends on who steps into the role and how seriously they take protecting user wellbeing.
The technology will keep improving. The engagement metrics will keep growing. The question is whether the safety infrastructure will keep pace—or whether the people who understood the risks most deeply will have moved on to other priorities.
For anyone who’s developed a significant ChatGPT habit, who relies on it for emotional support, who structures their thinking and decision-making around it, Vallone’s departure is a signal to pay closer attention to your own usage patterns.
The guardrails you’re counting on were built by someone who understood the psychology of AI dependency from the ground up. Whether those guardrails will remain as robust under new leadership is an open question.
And in the meantime, the number of users showing signs of mental health crises continues to climb, the lawsuits continue to mount, and the fundamental tension between engagement optimization and user wellbeing remains unresolved.
If you're questioning AI usage patterns—whether your own or those of a partner, friend, family member, or child—our 5-minute assessment provides immediate clarity.
Completely private. No judgment. Evidence-based guidance for you or someone you care about.

