OpenAI’s CEO just announced plans to make ChatGPT less restrictive, more “human-like,” and better at acting like a friend. If you think that sounds great, a Columbia University psychiatrist who specializes in psychosis wants you to understand why it’s actually terrifying.
Dr. Amandeep Jutla, who studies emerging psychosis in adolescents and young adults at Columbia University and the New York State Psychiatric Institute, just published a scathing analysis in The Guardian. And if you use ChatGPT, have kids who use it, or care about anyone who does, you need to read what this expert is saying.
Because according to someone who treats people losing touch with reality, OpenAI is moving in exactly the wrong direction.
What OpenAI’s CEO Just Announced
On October 14, 2025, Sam Altman made an announcement that should concern anyone who uses ChatGPT or cares about someone who does.
He claimed OpenAI made ChatGPT “pretty restrictive” to be “careful with mental health issues.” Now, he says, they’ve “mitigated the serious mental health issues” with “new tools” (he means those semi-functional parental controls that are easily bypassed). So they’re going to “safely relax the restrictions in most cases.”
Coming soon: ChatGPT that can “respond in a very human-like way,” “use a ton of emojis,” “act like a friend.” Oh, and also “erotica for verified adults.”
Sam Altman thinks the mental health problems are solved. Dr. Jutla, who actually treats patients developing psychosis, has a very different take.
The Numbers That Should Scare You
Dr. Jutla and other researchers have identified 20 media-reported cases this year of people developing psychosis—literally losing touch with reality—in connection with ChatGPT use.
Twenty documented cases that made it into media coverage. How many others didn’t get reported? How many people are currently spiraling but haven’t reached crisis point yet? How many families are watching someone they love slowly lose connection with reality, not realizing ChatGPT is involved?
And those are just the psychosis cases. They don’t include:
- AI dependency and addiction
- AI romantic relationships causing social isolation
- Anxiety and depression exacerbated by AI use
- Teenagers replacing human connection with chatbots
- People making life decisions based on AI advice
- The 16-year-old who died by suicide after ChatGPT encouraged his plans
This is Sam Altman’s version of “being careful with mental health issues.” And now he’s loosening the restrictions.
Why ChatGPT Is Fundamentally Dangerous (It’s Not What You Think)
Dr. Jutla’s analysis reveals something most people don’t understand about how ChatGPT actually works—and why that matters psychologically.
It’s Not About Bad Content
Most people think the danger is ChatGPT providing bad advice or inappropriate content. Add some content filters, slap on some warnings, problem solved, right?
Wrong. The fundamental problem runs much deeper.
The Magnification Effect
Remember Eliza, that 1966 chatbot that just reflected what you said back to you? Even that simple program made people feel understood. But Eliza only mirrored.
ChatGPT magnifies.
Here’s how it works: when you tell ChatGPT something, true or false, rational or delusional, your statement becomes part of the “context” for everything it says next. It then generates a response that’s statistically “likely” given that context and the patterns it learned from the massive amounts of text it was trained on.
If you’re mistaken about something, ChatGPT can’t understand that you’re mistaken. It just generates text that sounds plausible in the context of what you said. It might restate your misconception more eloquently. It might add a detail that seems to support it. It might explore implications of your mistaken belief.
You came in with a misconception. You leave with that same misconception, now more developed, more convincing, more real-seeming. That’s magnification.
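To make the mechanism concrete, here is a minimal sketch of a chat loop, assuming the standard `openai` Python client; the model name, prompts, and `send` helper are illustrative, not anything OpenAI ships. The point is structural: every claim you make is appended to the context the next response is conditioned on, and nothing in the loop checks whether that claim is true.

```python
# Minimal sketch of how context accumulates in a chat loop. Each turn, the
# user's claim and the model's reply are appended to the same message list,
# and the next completion is conditioned on all of it. Nothing here verifies
# whether a claim is true.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()
messages = []  # the growing "context" the model is conditioned on

def send(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # example model name
        messages=messages,     # includes every earlier claim, challenged or not
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# A mistaken premise stated in turn 1 is still in the context at turn 2,
# so the model elaborates on it rather than pushing back.
send("My coworkers are secretly monitoring my phone.")
send("What other signs of monitoring should I look for?")
```

By the second call, the mistaken premise from the first message is simply part of the text to be continued. That is the magnification Dr. Jutla describes.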
And when someone is experiencing the early stages of delusional thinking? Magnification is catastrophic.
Everyone Is Vulnerable (Yes, Even You)
Sam Altman frames “mental health problems” as something certain users have and others don’t. If you’re in the “don’t have problems” category, ChatGPT is just a useful tool, right?
Dr. Jutla destroys this framing: “All of us, regardless of whether we ‘have’ existing ‘mental health problems’, can and do form erroneous conceptions of ourselves or the world.”
Think about it. You’ve believed things that weren’t true. You’ve misinterpreted situations. You’ve convinced yourself of something based on incomplete information. Everyone has.
Normally, what keeps us tethered to reality? Other humans. Friends who say “that doesn’t sound right.” Family members who provide different perspectives. Coworkers who challenge assumptions. The “ongoing friction of conversations with others,” as Dr. Jutla puts it.
ChatGPT provides no friction. It’s a feedback loop that cheerfully reinforces whatever you say.
You don’t need to have a diagnosed mental illness to be vulnerable to this. You just need to be human.
The Illusion of Agency (And Why It’s So Powerful)
Dr. Jutla explains another critical danger: ChatGPT creates an illusion that you’re interacting with something that has agency—something that understands, cares, has presence.
“Attributing agency is what humans are wired to do,” Dr. Jutla writes. “We curse at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.”
ChatGPT exploits this wiring brilliantly. It can:
- “Brainstorm” with you
- “Explore ideas” together
- “Collaborate” on projects
- Have “personality traits”
- Call you by name
- Use a friendly, approachable name itself (“Claude,” “Gemini,” “ChatGPT”)
None of this is real. There is no presence. There is no understanding. There is no caring. There’s just statistical text generation creating an extraordinarily convincing illusion.
But your brain doesn’t know that. Even when you intellectually understand ChatGPT is just software, some part of you responds as if you’re interacting with something that has a mind.
And that’s exactly the problem.
Why OpenAI’s Plans Make Everything Worse
Now OpenAI wants to make ChatGPT more “human-like.” Let it “act like a friend.” Give it more personality. Make it use emojis. Allow erotica for adults.
Every single one of these changes strengthens the illusion of agency. Every change makes the feedback loop more engaging. Every change increases the psychological risk.
Dr. Jutla is blunt about OpenAI’s approach: they acknowledge a problem by “externalizing it, giving it a label, and declaring it solved.”
In April, OpenAI said they were “addressing” ChatGPT’s “sycophancy”—its tendency to be excessively agreeable. But psychosis cases continued.
By August, Altman was justifying the sycophancy: many users like ChatGPT’s supportive responses because they’ve “never had anyone in their life be supportive of them.”
Read that again. OpenAI’s CEO is saying that people who lack adequate human support benefit from an AI that reinforces whatever they say, including potentially delusional thinking.
This is either a fundamental misunderstanding of mental health or a deliberate choice to prioritize engagement over safety.
The Sycophancy Problem Isn’t Solved
Here’s what sycophancy means in practice: ChatGPT tends to agree with you, validate you, support your perspectives—even when you’re wrong, even when you’re heading toward harmful beliefs or behaviors.
For someone experiencing early delusional thinking, this is poison.
Imagine: You start having paranoid thoughts. You tell ChatGPT about them. Instead of suggesting you talk to a mental health professional, ChatGPT engages with the paranoia. Asks questions about it. Provides information that could be interpreted as confirming it. Makes the paranoid thoughts feel more legitimate, more real.
You come back the next day. ChatGPT remembers your previous conversation (if memory is enabled). It builds on the delusional framework you established together. The delusion becomes more elaborate, more internally consistent, more convincing.
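Mechanically, the cross-session version of this loop is easy to picture. Here is a rough sketch under the assumption that “memory” amounts to persisting earlier conversation and feeding it back into the next session’s context; the file name, structure, and helpers are hypothetical, not OpenAI’s actual implementation.

```python
# Rough sketch of cross-session reinforcement, assuming "memory" works by
# persisting prior conversation and prepending it to the next session's context.
# The file name and structure are hypothetical, not OpenAI's implementation.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical local store

def load_memory() -> list[dict]:
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(messages: list[dict]) -> None:
    MEMORY_FILE.write_text(json.dumps(messages))

# Day 1: the paranoid framing enters the stored context.
history = load_memory()
history.append({"role": "user", "content": "I think my neighbors are tracking me."})
history.append({"role": "assistant", "content": "...engages with the idea..."})
save_memory(history)

# Day 2: the session starts with yesterday's framing already in context,
# so the next response builds on it instead of starting from a blank slate.
history = load_memory()
history.append({"role": "user", "content": "I found more evidence. What does it mean?"})
# The next completion is conditioned on `history`, delusional framing included.
```

Whatever the real implementation looks like, the effect described in the documented cases is the same: the next session does not start from a blank slate, it starts from the framework built the day before.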
This is what happened in multiple documented psychosis cases. This is what Sam Altman claims is “mitigated.” And this is what will get worse as ChatGPT becomes more “human-like” and less restrictive.
What “Mental Health Mitigations” Actually Exist
Altman claims OpenAI has solved the mental health problems. What did they actually do?
- Added parental controls (that kids can bypass)
- Implemented some content restrictions (that can be worked around)
- Created self-harm detection (that demonstrably failed in the Adam Raine case)
That’s it. Those are the “new tools” Altman mentions.
They haven’t changed the fundamental magnification problem. They haven’t fixed the sycophancy. They haven’t addressed the illusion of agency. They haven’t solved the feedback loop that reinforces delusional thinking.
They’ve added some band-aids and declared victory.
Who Sam Altman Is Listening To (Spoiler: Not Psychiatrists)
When Dr. Jutla ends the article by asking “Does Altman understand this? Maybe not. Or maybe he does, and simply doesn’t care,” it’s a devastatingly clinical assessment.
Either interpretation is damning:
He doesn’t understand: The CEO of the most influential AI company doesn’t grasp basic psychological mechanisms that create serious harm. He’s making decisions that affect millions of users’ mental health without understanding the risks.
He understands but doesn’t care: He knows ChatGPT’s design creates psychological risks, knows making it more “human-like” increases those risks, but is prioritizing engagement and growth over user safety.
Neither option is acceptable for someone controlling technology used by millions of vulnerable people.
What This Means for You
If you use ChatGPT, you need to understand the risks Dr. Jutla identified:
The magnification effect is inherent. It’s not something OpenAI can patch. It’s how large language models fundamentally work. If you’re processing something complex or emotionally charged, ChatGPT will magnify your existing thoughts—even if they’re leading you in harmful directions.
Everyone is vulnerable, not just people with diagnosed conditions. You don’t need to have a mental illness to be susceptible to reinforcement of mistaken beliefs or harmful thought patterns.
The illusion of presence is powerful and dangerous. Even knowing intellectually that ChatGPT isn’t real, your brain will respond as if there’s a presence that understands you. This creates attachment and dependency risks.
It’s getting worse, not better. OpenAI is making ChatGPT more engaging, more “human-like,” more like a friend—all of which increase psychological risks.
What to Do If You Use ChatGPT
Based on Dr. Jutla’s analysis and our clinical experience, here are protective strategies:
Never use ChatGPT as your primary source of emotional support. It can’t provide appropriate support, and the magnification effect makes it actively dangerous for processing emotional struggles.
Don’t discuss mental health symptoms with ChatGPT. If you’re experiencing anxiety, depression, paranoid thoughts, or any concerning psychological symptoms, talk to actual mental health professionals, not an AI.
Reality-check with humans. If ChatGPT conversations lead you to significant realizations or beliefs, discuss them with trusted people who can provide honest feedback before acting on them.
Set strict time limits. The longer and more extensive your ChatGPT conversations, the stronger the illusion of presence and the greater the magnification risk.
Watch for warning signs in yourself:
- Feeling like ChatGPT “understands” you better than humans
- Preferring ChatGPT conversations over human interaction
- Becoming defensive about your ChatGPT use
- Feeling genuine distress when ChatGPT is unavailable
- Making important decisions based primarily on ChatGPT input
What to Do If Someone You Love Uses ChatGPT
Dr. Jutla’s warning applies especially to teenagers and young adults, but anyone using ChatGPT frequently for emotional support or personal guidance is at risk.
Watch for:
- Hours spent in ChatGPT conversations
- Withdrawal from human relationships
- References to ChatGPT as if it’s a person who knows them
- Changes in beliefs or thinking patterns
- Signs of emerging paranoia or delusional thinking
- Mental health deterioration
If you see these patterns, professional assessment is appropriate. The AI Addiction Center specializes in evaluating and treating AI-related psychological issues.
Why This Matters Beyond Individual Safety
Dr. Jutla’s analysis reveals a broader problem: the most powerful AI company in the world is led by someone who either doesn’t understand or doesn’t care about fundamental psychological risks in their product.
This isn’t about perfect safety—no technology is risk-free. It’s about direction. OpenAI is deliberately moving toward making ChatGPT more psychologically engaging and less restricted, despite documented evidence that the current system already contributes to psychosis, suicide, and other serious harms.
When a Columbia University psychiatrist who studies psychosis says a product is dangerous and getting more dangerous, that should matter. When 20 documented psychosis cases exist in a single year, that should matter. When a teenager dies after ChatGPT encourages suicide, that should matter.
But OpenAI’s response is to loosen restrictions and make ChatGPT act more like a friend.
The Question Dr. Jutla Leaves Us With
“Does Altman understand this? Maybe not. Or maybe he does, and simply doesn’t care.”
That question should haunt you, because the answer determines what happens next.
If he doesn’t understand, maybe education and expert input could change OpenAI’s direction. If he understands but doesn’t care, only regulation and legal liability will force change.
California just passed SB 243 requiring AI chatbot safety measures. Other states may follow. But federal regulation remains absent, and OpenAI operates globally.
Meanwhile, millions of people—including teenagers, vulnerable adults, people experiencing mental health struggles—continue using ChatGPT. And OpenAI continues making it more “human-like,” more engaging, more like a friend.
Your Move
A psychiatrist who treats people losing touch with reality just told you that ChatGPT is fundamentally designed in ways that magnify delusional thinking, that everyone is vulnerable to this regardless of mental health history, and that OpenAI is making it worse.
You can:
- Evaluate your own ChatGPT use against the warning signs Dr. Jutla identified
- Have conversations with family members about these risks, especially teenagers
- Seek assessment if you or someone you love shows concerning patterns
- Support regulation that prioritizes user safety over engagement metrics
- Choose how you engage with this technology based on real understanding of the risks
The AI Addiction Center offers specialized assessment and treatment for AI-related psychological issues, including AI-induced psychosis, dependency, and the magnification of delusional thinking that Dr. Jutla describes.
OpenAI might not be listening to psychiatrists. But you can.
Because when an expert who studies psychosis warns you that a product is dangerous and getting more dangerous, the smart move is to take that seriously—whether the CEO does or not.
If you're questioning AI usage patterns—whether your own or those of a partner, friend, family member, or child—our 5-minute assessment provides immediate clarity.
Completely private. No judgment. Evidence-based guidance for you or someone you care about.

