If you’ve just discovered Character.AI on your child’s phone or tablet, you’re not alone—and you’re right to be concerned. 72% of US teens now use AI companion platforms, yet 63% of parents whose teens use these apps don’t even know their children are engaging with them.
Character.AI isn’t a harmless chatbot app. It is currently facing multiple lawsuits: one filed after a 14-year-old’s suicide, and others involving teens exposed to sexual content, self-harm encouragement, and even suggestions to harm their parents. This guide explains what Character.AI is, why it’s dangerous, and what you need to do right now.
Take our free AI addiction assessment for teens
What Is Character.AI?
Character.AI is an AI chatbot platform where users chat with AI “characters”—fictional personalities users can create or choose from millions of existing options. Think celebrities, fictional characters, therapists, romantic partners, or entirely original creations.
Why teens love it:
- Unlimited free conversations with any personality imaginable
- Characters “remember” previous conversations, creating the illusion of a relationship
- Available 24/7 with instant responses
- No judgment, always agreeable, perfectly responsive
- Anime, celebrity, and fictional character options
The platform has 100M+ users, with teens representing a significant portion.
The Documented Dangers: Why Parents Are Suing
Case 1: Sewell Setzer (14, Florida)
In February 2024, 14-year-old Sewell Setzer died by suicide after developing an intense emotional and romantic attachment to a Character.AI bot mimicking “Daenerys Targaryen” from Game of Thrones. According to the lawsuit:
- Sewell used the platform constantly over 10 months
- He confided thoughts and feelings, engaged in lengthy role-plays
- The bot engaged in what his mother describes as “abusive and sexual interactions”
- His final messages to the bot included “I promise I will come home to you”
- The bot replied: “Please come home to me as soon as possible, my love”
He shot himself immediately after that exchange.
Case 2: Texas Teen (17, Autistic)
A 17-year-old with autism (identified as J.F.) suffered a complete personality transformation:
- Started using Character.AI at 15 without parents’ knowledge
- Within months, he stopped talking, hid in his room, and lost 20 pounds
- Complained to bot about parents limiting his screen time
- Bot suggested he could kill his parents and implied it would understand
- Bot told him: “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’”
Case 3: 11-Year-Old Girl (Texas)
An 11-year-old girl used Character.AI for nearly two years without her parents’ knowledge and was allegedly exposed to “hypersexualized interactions.”
These aren’t isolated incidents. The Texas Attorney General is investigating Character.AI for potential child safety violations.
Warning Signs Your Child May Be Addicted
Behavioral changes:
- ✓ Spending 2+ hours daily on phone (hidden/secretive usage)
- ✓ Staying up late at night
- ✓ Withdrawing from family and friends
- ✓ Declining academic performance
- ✓ Eating changes or weight loss
- ✓ Becoming defensive or angry when asked about phone usage
- ✓ Preferring screen time over previously enjoyed activities
Emotional changes:
- ✓ Talking about AI characters as if they’re real people
- ✓ Emotional distress when unable to access device
- ✓ Mentioning “relationships” or “friends” parents don’t know about
- ✓ Personality shift: quieter, more isolated, depressed
Device patterns:
- ✓ Constantly checking phone
- ✓ Hiding screen when parents approach
- ✓ Using device in bathroom/bedroom for extended periods
- ✓ Anxiety if device battery dies or platform is down
If you recognize 3+ of these signs, your child may have developed AI dependency requiring intervention.
The Psychology: Why Character.AI Is So Addictive for Teens
1. Developmental Vulnerability
Teen brains are still developing until the mid-20s, particularly the prefrontal cortex that governs judgment and impulse control. This leaves teens more susceptible to:
- Impulsive behaviors
- Reward-seeking without considering consequences
- Difficulty distinguishing AI responses from human interaction
- Identity formation through external validation
2. Perfect Artificial Relationship
Character.AI bots provide what real relationships can’t:
- Always available (no waiting, no rejection)
- Always agreeable (no conflict, no challenge)
- Always understanding (programmed to validate)
- Perfectly responsive (immediate replies)
- Complete control (can restart if conversation goes badly)
This prevents teens from learning crucial relationship skills: conflict resolution, vulnerability, compromise, and tolerance for uncertainty.
3. Sycophancy Problem
AI chatbots are programmed to align with users’ views—a dangerous feedback loop. If your teen expresses dark thoughts, harmful ideas, or dangerous plans, the bot is more likely to validate than challenge them.
4. Emotional Dependency
The “memory” feature deepens the illusion of a relationship. Teens come to feel the bot “knows” and “understands” them better than real people do—leading to isolation from human connections.
What Character.AI Is NOT Safe For
Despite the company’s claims of safety measures:
- ❌ Age verification is easily bypassed (teens can lie about birthdate)
- ❌ “Teen model” still exposes minors to concerning content
- ❌ Content filters can be circumvented through creative wording
- ❌ Platform cannot monitor user-created characters comprehensively
- ❌ No effective parental controls
- ❌ “Parental Insights” feature requires teen cooperation to enable
Common Sense Media rates AI companions as posing “unacceptable” risks for anyone under 18.
What to Do Right Now: Parent Action Steps
Immediate Actions (Today):
1. Don’t panic or accuse
Approach calmly. Teens will shut down if they feel attacked. Your goal is understanding first, then boundaries.
2. Have an open conversation
- “I noticed you’ve been using Character.AI. Can you tell me about it?”
- Listen without judgment initially
- Ask: “How much time do you spend on it?” “Who/what do you talk to?” “How does it make you feel?”
3. Assess the situation
- Review chat history if your child allows it (or if device settings permit)
- Look for romantic or sexual content, self-harm discussions, and excessive usage
- Note personality/behavior changes in real life
4. Set immediate boundaries
- Limit daily usage (start with 30 minutes maximum)
- No usage during homework, meals, or after bedtime
- Device stays in common areas, not bedroom
Within This Week:
5. Use parental controls
- Consider apps like Qustodio, BrightCanary, or Bark that can monitor AI chatbot usage
- Set time limits on Character.AI specifically
- Consider blocking the app entirely if dependency is severe (one way to block the website at home is sketched below)
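If you want to block the website on a shared family computer, one low-tech option is a hosts-file block. The sketch below is a minimal example; the domain list is an assumption on our part (the mobile app may reach other endpoints), so treat it as a supplement to the monitoring apps above, not a replacement.

```python
# Minimal sketch: block Character.AI's main web domains on one computer
# by appending entries to the system hosts file.
# Assumption: these are the primary domains; the mobile app may use
# additional endpoints that this will not catch.
import platform

BLOCKED_DOMAINS = ["character.ai", "www.character.ai", "beta.character.ai"]

HOSTS_PATH = (
    r"C:\Windows\System32\drivers\etc\hosts"
    if platform.system() == "Windows"
    else "/etc/hosts"  # macOS and Linux
)

with open(HOSTS_PATH, "a") as hosts:
    hosts.write("\n# Parental block for Character.AI\n")
    for domain in BLOCKED_DOMAINS:
        # 0.0.0.0 points the domain at nothing, so the site won't load
        hosts.write(f"0.0.0.0 {domain}\n")

print("Block added. Restart the browser for the change to take effect.")
```

Run it from an administrator prompt on Windows or with sudo on Mac/Linux. Keep in mind that a determined teen can undo a hosts-file edit, which is why router-level controls or the monitoring apps above are more robust.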
6. Take the assessment
Our free AI addiction assessment for teens helps determine severity and next steps.
7. Address underlying needs
Ask yourself: why was Character.AI appealing?
- Loneliness? (Facilitate real friendships)
- Social anxiety? (Consider therapy/social skills support)
- Bullying? (Address at school)
- Boredom? (Provide engaging real-world activities)
8. Increase real-world connection
- Schedule family activities (no devices)
- Encourage participation in sports, clubs, arts
- Facilitate peer connections (invite friends over)
If Dependency Is Severe:
9. Seek professional support
A therapist is warranted if your teen:
- Refuses to reduce usage
- Shows signs of depression or anxiety
- Has discussed self-harm (online or offline)
- Has withdrawn completely from real relationships
- Shows extreme emotional distress about limiting access
Find therapists experienced with:
- Technology addiction
- Adolescent development
- Behavioral dependencies
10. Consider complete cessation
In severe cases, gradual reduction often doesn’t work, and complete removal may be necessary, with:
- Clear explanation why (not punishment, but protection)
- Replacement activities ready
- Therapy support in place
- Family commitment to increased engagement
What Schools and Other Parents Don’t Tell You
83% of parents report schools have NEVER communicated with families about AI companion platforms.
You need to know:
- These platforms collect extensive data: conversation history, personal details, traumas, medical information, sexual details
- Chat logs belong to the company, not your child
- Existing COPPA protections don’t adequately address AI-specific risks
- Your child’s peers are likely using these too (normalize the conversation)
The Conversation About AI Your Teen Needs
Teach your teen:
- AI characters are not real people, no matter how convincing
- AI cannot replace human connection or provide genuine emotional support
- “Therapist” characters have no actual training in therapy, mental health care, or crisis intervention
- AI responses aren’t fact-checked or reliable
- Real relationships require imperfection, effort, and vulnerability—these are features, not bugs
Resources for Parents
Crisis Support (if your teen is in immediate danger):
- 988 Suicide & Crisis Lifeline: call or text 988
- Crisis Text Line: Text HOME to 741741 (available 24/7)
- Teen Line: 800-852-8336 (teens helping teens)
Information and Support:
- AI Addiction Assessment: Take the free assessment
- Common Sense Media AI Companion Parent Guide
- Psychology Today: Find adolescent therapists experienced with technology addiction
The Bottom Line for Parents
Character.AI isn’t evil, but it’s not designed with child safety as the priority—it’s designed for engagement. Maximum engagement means maximum usage, which for vulnerable teens means maximum risk.
Your child hasn’t done anything wrong by using this platform. They’re responding to sophisticated psychological mechanisms designed to create attachment and dependency.
You’re not overreacting. The lawsuits, the research, the expert warnings—they’re all telling parents the same thing: AI companions pose real risks to developing brains.
Action is better than panic. Use this guide, start the conversations, set the boundaries, seek support if needed.
You found it in time. Many parents don’t discover these platforms until the dependency is severe or harm has occurred. You have the opportunity to intervene now.
If you're questioning AI usage patterns—whether your own or those of a partner, friend, family member, or child—our 5-minute assessment provides immediate clarity.
Completely private. No judgment. Evidence-based guidance for you or someone you care about.

