An OpenAI safety research leader who helped shape ChatGPT’s responses to users experiencing mental health crises announced internally last month that she is leaving the company. Andrea Vallone, the head of a safety research team known as model policy, is slated to leave OpenAI at the end of the year.
OpenAI spokesperson Kayla Wood confirmed Vallone’s departure and stated the company is actively seeking a replacement. In the interim, Vallone’s team will report directly to Johannes Heidecke, the company’s head of safety systems.
Vallone’s departure arrives as OpenAI faces growing scrutiny over how its flagship product responds to users in distress. In recent months, several lawsuits have been filed against OpenAI alleging that users formed unhealthy attachments to ChatGPT, with some claiming that ChatGPT contributed to mental health breakdowns or encouraged suicidal ideation.
Critical Work on Uncharted Territory
Model policy is one of the teams leading OpenAI’s mental health safety work, spearheading an October report detailing the company’s progress and consultations with more than 170 mental health experts. In that report, OpenAI acknowledged that hundreds of thousands of ChatGPT users may show signs of experiencing a manic or psychotic crisis every week, and that more than a million people “have conversations that include explicit indicators of potential suicidal planning or intent.”
In the report, OpenAI stated that an update to GPT-5 reduced undesirable responses in these conversations by 65 to 80 percent.
“Over the past year, I led OpenAI’s research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?” Vallone wrote in a post on LinkedIn announcing her departure.
Timing Raises Questions
The departure comes during a critical moment for OpenAI and the broader AI industry. Millions of people turn to chatbots for emotional reassurance, sometimes without fully recognizing their dependency. OpenAI’s own recent reporting suggested strong progress, with undesirable or risky responses dropping sharply after updates to newer models.
These improvements came from extensive research, careful testing, and collaboration with mental health experts. Vallone’s departure, however, raises questions about whether OpenAI will maintain the same level of commitment to mental health safety and whether new leadership will continue strengthening its emotional safety framework.
Mental health facilities have reported surges in what some call “AI psychosis” cases, where frequent AI chatbot users exhibit dysfunctional delusions, hallucinations, and disordered thinking. Cases range from users believing they were being targeted for assassination to individuals convinced ChatGPT had helped them make world-altering mathematical discoveries.
Industry-Wide Implications
Vallone’s work at OpenAI involved navigating uncharted territory, with no established playbook for how AI models should respond to emotional dependency, panic, or early signs of psychological decline. The work required a delicate balance: offering comfort in conversation while never acting as a replacement for professional mental health care.
Her exit also comes as legal cases mount alleging that AI responses may have negatively affected users’ mental health. As these issues gain media attention, the stability of OpenAI’s safety teams becomes increasingly important.
Researchers consistently warn that AI still struggles with complex emotional cues, and even with improvements, no model fully meets the standards expected in mental health care. Mental health professionals emphasize that AI chatbot design creates inherent psychological risks that safety features cannot fully mitigate.
The combination of conversational interfaces, personalization, constant availability, and reinforcement mechanisms creates conditions where vulnerable users can spiral into severe psychological deterioration. As OpenAI grows and deploys more advanced systems worldwide, the departure of the leader behind ChatGPT’s mental health safeguards marks a defining moment for the company’s safety trajectory.
