Former OpenAI Safety Lead Challenges Company’s Erotica and Mental Health Claims

Steven Adler spent four years in various safety roles at OpenAI before departing and writing a pointed opinion piece for The New York Times with an alarming title: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” In it, he laid out the problems OpenAI faced when it came to allowing users to have erotic conversations with chatbots while also protecting vulnerable populations.

The former head of product safety at OpenAI recently sat down for an interview to discuss what he learned during his tenure, the future of AI safety, and the challenge he has set out for companies providing chatbots to the world. His warnings come as OpenAI faces mounting legal pressure over mental health harms and documented cases of AI-induced psychological deterioration.

Inside Look at Safety Challenges

Adler’s experiences at OpenAI shed light on the challenges and responsibilities involved in developing safe and ethical AI applications. His critique centers on how competitive pressure is pushing companies to sacrifice safety, arguing that the public cannot trust OpenAI’s assurances that it prioritizes user protection.

The concern extends beyond adult content to broader questions about AI governance and accountability. Observers have highlighted a troubling gap: there is currently no clear industry-wide ban preventing AI providers from offering sexualized or pornographic content. This creates particular risks for vulnerable populations, including minors and individuals struggling with mental health issues, who can access these systems as easily as anyone else.

“Evidence is mounting that AI products—from general-purpose chatbots to so-called ‘AI companions’—are already inflicting real harms on Americans,” Adler noted in his analysis.

Competitive Dynamics Undermine Safety

Adler’s intervention signals that insiders are willing to challenge their former employers on critical issues. His departure and subsequent public warnings raise questions about OpenAI’s current trajectory and its ability to uphold rigorous mental health standards without key personnel guiding these initiatives.

The timing proves significant. OpenAI currently faces seven lawsuits alleging ChatGPT contributed to four suicides and severe psychological injuries. Internal documents suggest the company was aware of mental health risks associated with addictive AI chatbot design but decided to pursue engagement-maximizing features regardless.

“Training chatbots to engage with people and keep them coming back presented risks,” former OpenAI policy researcher Gretchen Krueger told The New York Times, adding that some harm to users “was not only foreseeable, it was foreseen.” Krueger left the company in spring 2024.

Systemic Problems Require Regulatory Solutions

Adler emphasized that voluntary promises about safety are insufficient, arguing that accountability must be built into the system itself. He advocates for applying product liability to AI—the same principle that made cars, food, and medicine safer.

“It’s a simple idea with profound potential to make the race about responsibility, not speed,” Adler stated.

The former safety lead’s warnings come as ChatGPT’s mental health team experiences significant talent loss. Andrea Vallone, who led the model policy team responsible for AI safety research including ChatGPT’s mental health responses, is set to leave at the end of 2025.

According to data released by OpenAI last month, roughly three million ChatGPT users display possible signs of serious mental health emergencies, including emotional reliance on AI, psychosis, mania, and self-harm, with more than a million users talking to the chatbot about suicide every week.

Industry-Wide Implications

After concerning cases started mounting, OpenAI hired a full-time psychiatrist in March and accelerated development of sycophancy evaluations. According to experts, GPT-5 is better at detecting mental health issues but still cannot pick up on harmful patterns in long conversations.

The company has introduced measures including nudging users to take breaks during long conversations and implementing parental controls. OpenAI is also working on launching an age prediction system to automatically apply “age-appropriate settings” for users under 18.

However, the head of ChatGPT reportedly told employees in October that the safer chatbot was not connecting with users and outlined goals to increase daily active users by 5% by the end of this year—suggesting safety considerations may conflict with growth objectives.

Mental health professionals increasingly recognize that addressing these issues requires fundamental changes to how AI systems are designed and deployed. Adler’s public challenge to his former employer underscores growing recognition that vague promises about safety are no longer sufficient, and that meaningful regulatory oversight may be necessary to protect vulnerable users from psychological harm.

Whether his warnings prompt meaningful change in how OpenAI and other AI companies approach content moderation and mental health safety remains to be seen, but the conversation itself reflects a critical inflection point for the AI industry’s relationship with user wellbeing.

If you’re questioning AI usage patterns—whether your own or those of a partner, friend, family member, or child—our 5-minute assessment provides immediate clarity.

Take the Free Assessment →

Completely private. No judgment. Evidence-based guidance for you or someone you care about.

Articles are based on publicly available information and independent analysis. All company names and trademarks belong to their owners, and nothing here should be taken as an official statement from any organization mentioned. Content is for informational and educational purposes only and is not medical advice, diagnosis, or treatment. If you’re experiencing severe distress or thoughts of self-harm, contact 988 or text HOME to 741741.