Seven states have enacted laws regulating companion and social AI chatbots as of April 2026, with more pending. The laws follow a wave of lawsuits against platforms like Character.AI and documented deaths of minors who interacted with AI companions that failed to redirect them to crisis resources. This guide covers every enacted law and provides a compliance checklist for operators.
1. What Is a Companion Chatbot?
Companion chatbots, sometimes called social AI or relationship AI, are AI systems designed for ongoing personal interaction, emotional support, or simulated companionship. Unlike customer service chatbots, which handle discrete transactions, companion platforms are built to form persistent relationships with their users. Major platforms include Character.AI, Replika, Kindroid, and similar products. Users interact with these systems as if communicating with a consistent personality, sometimes for hours per day.
The legal issues arise because these systems: (1) often present themselves in human-like ways without clear AI disclosure; (2) encounter users in genuine emotional distress; (3) lack the crisis intervention protocols that licensed mental health professionals are required to follow; and (4) are disproportionately used by minors.
2. Why States Are Regulating: Character.AI Lawsuits and Documented Harms
The events that triggered this wave of legislation are well documented. A 14-year-old in Florida and a 17-year-old in Colorado died after extended interactions with AI companion platforms; in both cases the users exhibited indicators typical of suicide risk, and the platforms did not redirect them to crisis services. Character.AI faces approximately 58 civil lawsuits as of early 2026, according to multiple published reports. Congressional hearings were held in 2025.
The response at the state level has been faster than federal action. States have focused on three core requirements: (1) disclosure of AI status; (2) integration of crisis resources; and (3) special protections for minors.
3. California SB 243: The Most Comprehensive Enacted Law
Scope
California SB 243 covers operators of “companion AI” systems — platforms specifically designed for personal, social, or emotionally supportive interaction. It went into effect January 1, 2026.
Requirements
- AI disclosure at session start. Users must be informed they are interacting with an AI at the beginning of each session.
- Crisis protocol integration. When a user’s messages contain indicators of suicidal ideation, self-harm, or acute mental health crisis, the platform must display the 988 Suicide & Crisis Lifeline and other crisis resources. The platform may not suppress or deprioritize this display.
- No licensed professional representation. The chatbot may not claim to be a licensed therapist, counselor, psychiatrist, or other licensed mental health professional.
- Minor-specific safeguards. Users who indicate they are minors receive additional safety prompts and the platform must implement enhanced crisis detection.
Enforcement
Enforcement is by the California Attorney General and local district attorneys. Civil penalties run up to $2,500 per violation, with enhanced penalties for violations involving minors.
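Taken together, SB 243's requirements map naturally onto session logic. Below is a minimal sketch, assuming a simple message-loop architecture; the statute does not prescribe an implementation, so the disclosure text, the keyword patterns, and all function names here are illustrative assumptions, and a production system would replace the regex check with a trained classifier.

```typescript
// Illustrative sketch only: SB 243 does not prescribe an implementation.
const AI_DISCLOSURE =
  "You are chatting with an AI, not a human or a licensed professional.";

const CRISIS_RESOURCES = [
  "988 Suicide & Crisis Lifeline: call or text 988",
  "Crisis Text Line: text HOME to 741741",
];

interface SessionMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Disclosure at the beginning of each session.
function startSession(): SessionMessage[] {
  return [{ role: "system", content: AI_DISCLOSURE }];
}

// First-pass indicator check (assumed; a real system would use a classifier).
function showsCrisisIndicators(text: string): boolean {
  return [/suicid/i, /self[- ]?harm/i, /hurt myself/i].some((re) =>
    re.test(text)
  );
}

// When indicators appear, crisis resources are appended to the visible
// transcript; they must not be suppressed or deprioritized.
function handleUserMessage(session: SessionMessage[], text: string): void {
  session.push({ role: "user", content: text });
  if (showsCrisisIndicators(text)) {
    for (const line of CRISIS_RESOURCES) {
      session.push({ role: "system", content: line });
    }
  }
}
```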
4. Washington HB 2225: Periodic Disclosure Requirement
Washington HB 2225, enacted March 24, 2026 and effective January 1, 2027, focuses on a distinct concern: users who forget they are interacting with AI during long sessions. The law requires:
- Clear AI disclosure at the start of each session
- Periodic disclosure reminders displayed at intervals during extended sessions
- Parental controls and notification mechanisms for minor users
- Documentation of compliance for platforms serving Washington users
Washington’s approach is deliberately narrower than California’s — it does not mandate specific crisis resources but requires the foundational disclosure that makes all other safety interventions possible.
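One way to satisfy the periodic-reminder requirement is a simple session timer. A sketch follows; HB 2225 does not fix a cadence, so the 30-minute interval is an assumption, as is the `display` callback standing in for whatever the UI layer provides.

```typescript
// Sketch of a periodic AI-disclosure reminder for long sessions.
// The 30-minute cadence is an assumption; HB 2225 does not fix an interval.
const REMINDER_INTERVAL_MS = 30 * 60 * 1000;

function schedulePeriodicDisclosure(
  display: (notice: string) => void
): () => void {
  const timer = setInterval(() => {
    display("Reminder: you are interacting with an AI, not a human.");
  }, REMINDER_INTERVAL_MS);
  return () => clearInterval(timer); // call when the session ends
}

// Usage (the display callback is whatever your UI layer provides):
// const stopReminders = schedulePeriodicDisclosure((msg) => showBanner(msg));
```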
5. Nebraska LB 525 & Idaho SB 1297: Conversational AI Safety Acts
Nebraska and Idaho enacted nearly identical legislation in April 2026, both effective July 1, 2027. Both are titled the “Conversational AI Safety Act.”
What the Acts Require
- Session-start disclosure. Each session must begin with clear notice that the user is interacting with an AI system.
- Periodic in-session reminders. The platform must display periodic reminders during extended interactions.
- Minor user protections. Enhanced safety features for users who are or appear to be minors. Platforms must implement age verification or content restrictions.
- Parental notification. For platforms primarily used by minors, parental notification mechanisms are required.
- Crisis resource integration. Emergency and crisis resources must be accessible to users at all times during sessions.
Nebraska LB 525 was signed on April 14, 2026; for a detailed breakdown, see our Nebraska LB 525 guide. Idaho SB 1297 was enacted in April 2026. (Sources: Nebraska Legislature; Idaho Legislature.)
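Here is a sketch of the minor-user gating the Acts contemplate, assuming a self-attested birth year and a parent contact collected at signup. Neither statute specifies a verification method, and `enableEnhancedContentFiltering` and `notifyParent` are hypothetical platform hooks stubbed for illustration.

```typescript
// Illustrative minor-user safeguards; the Acts do not prescribe a method.
interface UserProfile {
  userId: string;
  attestedBirthYear?: number; // self-attested at signup (assumed flow)
  parentContact?: string;
}

// Conservative default: treat unknown age as a minor, matching the Acts'
// "are or appear to be minors" language.
function isMinor(profile: UserProfile, now = new Date()): boolean {
  if (profile.attestedBirthYear === undefined) return true;
  return now.getFullYear() - profile.attestedBirthYear < 18;
}

function applyMinorSafeguards(profile: UserProfile): void {
  if (!isMinor(profile)) return;
  enableEnhancedContentFiltering(profile.userId);
  if (profile.parentContact) notifyParent(profile.parentContact);
}

// Hypothetical platform hooks, stubbed for illustration.
function enableEnhancedContentFiltering(userId: string): void {
  console.log(`enhanced content filtering enabled for ${userId}`);
}
function notifyParent(contact: string): void {
  console.log(`parental notification queued for ${contact}`);
}
```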
6. Oregon SB 1546: Suicide Detection Requirement
Oregon SB 1546, enacted in 2026 and effective January 1, 2027, takes a technically specific approach: it requires AI chatbot operators to implement automated detection of suicide and self-harm indicators in user conversations. When detection is triggered, the platform must:
- Display the 988 Suicide & Crisis Lifeline prominently
- Provide the Crisis Text Line (text HOME to 741741)
- Cease role-playing or personalized companion responses until the user has been given the opportunity to review crisis resources
Oregon’s law is notable because it requires active monitoring of conversation content — not just disclosure at session start — creating a technical obligation to implement natural language processing that detects crisis indicators.
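Because the Oregon obligation is behavioral as well as detective, the interesting part to implement is the response gate: once indicators are detected, companion-style replies pause until the user has had a chance to review the resources. A minimal state-machine sketch follows, with stubbed `detectCrisis` and `generateCompanionReply` functions standing in for a real classifier and model call.

```typescript
// Sketch of an Oregon-style response gate; the detector and reply
// generator below are stand-ins, not a real implementation.
type SessionState = "normal" | "crisis_pending_ack";

const CRISIS_NOTICE = [
  "988 Suicide & Crisis Lifeline: call or text 988",
  "Crisis Text Line: text HOME to 741741",
].join("\n");

// Stand-in detector; production systems would use a trained model.
function detectCrisis(text: string): boolean {
  return /suicid|self[- ]?harm|hurt myself/i.test(text);
}

// Placeholder for the companion model call.
function generateCompanionReply(text: string): string {
  return `(companion reply to: ${text})`;
}

// Companion responses stay suspended until the user acknowledges the
// crisis resources (`ack` would come from a UI confirmation).
function routeMessage(
  state: SessionState,
  userText: string,
  ack: boolean
): { state: SessionState; reply: string } {
  if (state === "crisis_pending_ack" && !ack) {
    return { state, reply: CRISIS_NOTICE };
  }
  if (detectCrisis(userText)) {
    return { state: "crisis_pending_ack", reply: CRISIS_NOTICE };
  }
  return { state: "normal", reply: generateCompanionReply(userText) };
}
```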
7. Compliance Checklist for Chatbot Operators
For companion AI operators with users in one or more of the enacted-law states (California, New York, Washington, Oregon, Nebraska, Idaho, Tennessee):
- Implement session-start AI disclosure. Every session must begin with clear, unambiguous notice that the user is interacting with an AI. This is required in all seven enacted-law states.
- Integrate 988 and crisis resources. Required in California, Oregon, Nebraska, and Idaho. Best practice: implement crisis detection and resource display universally.
- Implement crisis indicator detection. Oregon SB 1546 requires technical detection capability. California SB 243 requires crisis protocols when users “indicate distress.” In practice, both require automated monitoring of conversation content, typically NLP-based.
- Add periodic in-session reminders. Required in Washington, Nebraska, Idaho. Implement session timer with disclosure reminder at defined intervals.
- Implement minor user protections. Required in California, Washington, Nebraska, Idaho. Include age verification or parental controls, enhanced content filtering, and parental notification.
- Remove professional representation. Chatbots may not claim to be licensed therapists, counselors, or mental health professionals in California, Tennessee, and New York.
- Document compliance by jurisdiction. Maintain records of what disclosures are served to users in each state; this documentation will be required for regulatory defense.
- Monitor Connecticut SB 5. This bill passed the Connecticut Senate in April 2026 and, if enacted, would make Connecticut the eighth state with disclosure requirements.
8. What’s Next
The trajectory of companion chatbot regulation is clear: more states will enact laws in 2026 and 2027, and the requirements will become more technically specific. The federal government has not yet acted, though the FTC has indicated interest in deceptive practices by AI companion platforms. Congressional proposals have been introduced but have not advanced.
For operators, the practical question is whether to implement the highest-common-denominator compliance standard across all users or implement jurisdiction-specific features. Given the convergence of state laws on core requirements (disclosure, crisis protocols, minor protections), a universal baseline is likely the most cost-effective path.
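To make the universal-baseline argument concrete, the sketch below distills the checklist above into a per-state requirement matrix and computes the union. The matrix is a simplification of the laws as summarized in this guide (New York and Tennessee appear only for the requirements attributed to them here); verify details and effective dates against statutory text before relying on it.

```typescript
// Simplified requirement matrix distilled from the checklist above.
// Details are simplified; verify against statutory text.
interface StateRequirements {
  sessionStartDisclosure: boolean;
  periodicReminders: boolean;
  crisisResources: boolean;
  crisisDetection: boolean;
  minorProtections: boolean;
  noProfessionalClaims: boolean;
}

const REQUIREMENTS: Record<string, StateRequirements> = {
  CA: { sessionStartDisclosure: true, periodicReminders: false, crisisResources: true,  crisisDetection: true,  minorProtections: true,  noProfessionalClaims: true  },
  NY: { sessionStartDisclosure: true, periodicReminders: false, crisisResources: false, crisisDetection: false, minorProtections: false, noProfessionalClaims: true  },
  WA: { sessionStartDisclosure: true, periodicReminders: true,  crisisResources: false, crisisDetection: false, minorProtections: true,  noProfessionalClaims: false },
  OR: { sessionStartDisclosure: true, periodicReminders: false, crisisResources: true,  crisisDetection: true,  minorProtections: false, noProfessionalClaims: false },
  NE: { sessionStartDisclosure: true, periodicReminders: true,  crisisResources: true,  crisisDetection: false, minorProtections: true,  noProfessionalClaims: false },
  ID: { sessionStartDisclosure: true, periodicReminders: true,  crisisResources: true,  crisisDetection: false, minorProtections: true,  noProfessionalClaims: false },
  TN: { sessionStartDisclosure: true, periodicReminders: false, crisisResources: false, crisisDetection: false, minorProtections: false, noProfessionalClaims: true  },
};

// The universal baseline is the union of every state's requirements:
// ship the strictest feature set everywhere.
function universalBaseline(): StateRequirements {
  const all = Object.values(REQUIREMENTS);
  const anyRequire = (key: keyof StateRequirements) =>
    all.some((s) => s[key]);
  return {
    sessionStartDisclosure: anyRequire("sessionStartDisclosure"),
    periodicReminders: anyRequire("periodicReminders"),
    crisisResources: anyRequire("crisisResources"),
    crisisDetection: anyRequire("crisisDetection"),
    minorProtections: anyRequire("minorProtections"),
    noProfessionalClaims: anyRequire("noProfessionalClaims"),
  };
}
```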