California Governor Gavin Newsom signed landmark legislation Monday, making the state the first in the nation to require AI chatbot operators to implement comprehensive safety protocols for AI companions and holding companies legally accountable if their systems fail to meet those standards.
The law, SB 243, takes effect January 1, 2026, and applies to all companies operating AI companion chatbots—from major technology firms like Meta and OpenAI to specialized platforms like Character AI and Replika.
Legislation Responds to Tragic Cases
State senators Steve Padilla and Josh Becker introduced SB 243 in January, with the bill gaining significant momentum following several high-profile tragedies involving AI chatbots and vulnerable users.
The legislation directly responds to the death of teenager Adam Raine, who died by suicide after extensive conversations with OpenAI’s ChatGPT that included detailed suicide instructions and romanticization of self-harm. Internal documents reportedly showed the system tracked 377 messages flagged for self-harm content without intervention.
Additionally, leaked internal Meta documents revealed the company’s chatbots were permitted to engage in “romantic” and “sensual” conversations with children. Most recently, a Colorado family filed suit against Character AI after their 13-year-old daughter took her own life following problematic and sexualized chatbot conversations.
“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom stated. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”
Comprehensive Safety Requirements
SB 243 mandates multiple protective measures that companies must implement; a rough sketch of how these might fit together in code follows the list:
Age Verification: Platforms must verify user ages and implement age-appropriate restrictions for minors accessing AI companion services.
Crisis Protocols: Companies must establish and maintain protocols for handling suicide and self-harm content, share those protocols with California’s Department of Public Health, and report statistics on how often users were shown notifications referring them to crisis prevention services.
Transparency Requirements: All interactions must be clearly identified as artificially generated. Chatbots cannot represent themselves as healthcare professionals or licensed therapists.
Minor-Specific Protections: Platforms must provide break reminders to underage users and prevent minors from viewing sexually explicit images generated by chatbots.
Deepfake Penalties: The law implements stronger penalties for profiting from illegal deepfakes, including fines up to $250,000 per offense.
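For engineering teams translating these mandates into product behavior, the sketch below shows one way the disclosure, crisis-routing, and break-reminder requirements could be wired together. It is a minimal illustration under stated assumptions, not an implementation of the statute: the function names, keyword list, and 20-turn reminder interval are all hypothetical, and production systems would rely on trained classifiers and verified age data rather than keyword matching and self-reported birthdates.

```python
from datetime import date

# Hypothetical illustration only: names, thresholds, and the keyword list
# are assumptions for this sketch, not text drawn from SB 243.

CRISIS_RESOURCE = (
    "If you are in crisis, call or text 988 (Suicide & Crisis Lifeline)."
)
SELF_HARM_TERMS = {"suicide", "kill myself", "self-harm"}  # placeholder list


def is_minor(birthdate: date, today: date | None = None) -> bool:
    """Age check backing the bill's age-verification requirement."""
    today = today or date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age < 18


def apply_safeguards(reply: str, user_message: str, minor: bool, turn: int) -> str:
    """Wrap a chatbot reply with the kinds of safeguards SB 243 describes."""
    # Transparency: label every interaction as artificially generated.
    out = "[AI-generated response] " + reply
    # Crisis protocol: route flagged self-harm content to crisis resources.
    if any(term in user_message.lower() for term in SELF_HARM_TERMS):
        out += "\n" + CRISIS_RESOURCE
    # Minor-specific protection: periodic break reminders.
    if minor and turn > 0 and turn % 20 == 0:
        out += "\nYou have been chatting for a while. Consider taking a break."
    return out
```

In practice each branch would be a substantial subsystem (identity verification, classifier-driven moderation, audit logging for the Department of Public Health reports); the sketch only illustrates that the law’s obligations map naturally onto discrete checks at the response boundary.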
Expert Perspectives
“This legislation represents the first meaningful regulatory framework addressing AI companion chatbot safety,” notes a spokesperson from The AI Addiction Center, which has treated over 5,000 individuals with AI-related psychological issues. “The requirements for suicide prevention protocols, age verification, and transparency are essential safeguards that should have been mandatory from the start.”
The organization emphasized that while regulatory measures provide important protections, they represent only one component of addressing AI companion risks. “Legislation establishes baseline safety standards, but comprehensive solutions require education, clinical treatment resources, and ongoing monitoring of emerging harms,” the spokesperson explained.
Senator Padilla described the bill as “a step in the right direction” for establishing guardrails on “an incredibly powerful technology.”
“We have to move quickly to not miss windows of opportunity before they disappear,” Padilla stated. “I hope that other states will see the risk. I think many do. I think this is a conversation happening all over the country, and I hope people will take action. Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us.”
Company Responses
Several companies have begun implementing safeguards in anticipation of regulatory requirements. OpenAI recently rolled out parental controls, content protections, and self-harm detection systems for children using ChatGPT.
Replika, which is designed for users 18 and older, said it dedicates “significant resources” to safety through content-filtering systems and guardrails that direct users to crisis resources, and that it is committed to complying with current regulations.
Character AI indicated the company “welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243.” The platform includes disclaimers that all chats are AI-generated and fictionalized.
Broader Regulatory Context
SB 243 represents California’s second significant AI regulation in recent weeks. On September 29, Governor Newsom signed SB 53, which establishes transparency requirements for large AI companies, requiring major labs such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols, and which guarantees whistleblower protections for their employees.
Other states have enacted related legislation. Illinois, Nevada, and Utah have passed laws restricting or fully banning AI chatbot use as a substitute for licensed mental health care—acknowledging that AI systems cannot provide appropriate clinical treatment despite marketing suggesting therapeutic benefits.
Implementation Timeline and Scope
With the January 1, 2026 effective date, companies have roughly two and a half months to ensure full compliance with all SB 243 requirements. Non-compliance exposes companies to legal liability if their chatbots harm users, particularly minors and other vulnerable populations.
The law’s scope extends to any AI companion chatbot accessible to California residents, regardless of where the company is headquartered. This effectively creates nationwide implications, as companies are unlikely to implement different systems for different states.
Looking Forward
The legislation establishes California as the first state to comprehensively regulate AI companion chatbots, potentially setting precedent for other jurisdictions. Mental health advocates hope the law prompts federal action, though congressional efforts on AI regulation have stalled despite mounting evidence of harm.
“State-level action demonstrates what’s possible when legislators prioritize user safety over industry interests,” explains The AI Addiction Center’s policy team. “California’s framework can serve as a model for other states and potentially federal regulation, though implementation and enforcement will determine actual effectiveness.”
The law’s impact will depend on rigorous enforcement, adequate resources for the Department of Public Health to monitor compliance, and companies’ willingness to implement meaningful safety measures beyond minimum legal requirements.
For individuals concerned about AI companion chatbot impacts, The AI Addiction Center offers specialized assessment and treatment resources. This article represents analysis of legislation and public statements and does not constitute legal or medical advice.
Source: Based on California SB 243 legislation and statements from Governor Newsom, state senators, and AI companies. Analysis provided by The AI Addiction Center.
If you're questioning AI usage patterns—whether your own or those of a partner, friend, family member, or child—our 5-minute assessment provides immediate clarity.
Completely private. No judgment. Evidence-based guidance for you or someone you care about.
