
Companion Chatbot Laws: State Compliance Guide 2026

AI Laws by State Team • April 30, 2026 • 13 min read

Table of Contents

  1. What Is a Companion Chatbot?
  2. Why States Are Regulating: Character.AI Lawsuits and Documented Harms
  3. California SB 243: The Most Comprehensive Enacted Law
  4. Washington HB 2225: Periodic Disclosure Requirement
  5. Nebraska LB 525 & Idaho SB 1297: Conversational AI Safety Acts
  6. Oregon SB 1546: Suicide Detection Requirement
  7. Compliance Checklist for Chatbot Operators
  8. What’s Next

Seven states have enacted laws regulating companion and social AI chatbots as of April 2026, with more pending. The laws follow a wave of lawsuits against platforms like Character.AI and documented deaths of minors who interacted with AI companions that failed to redirect them to crisis resources. This guide covers every enacted law and provides a compliance checklist for operators.


1. What Is a Companion Chatbot?

Companion chatbots — sometimes called social AI or relationship AI — are AI systems designed for ongoing personal interaction, emotional support, or simulated companionship. Unlike customer service chatbots, companion platforms are designed to form persistent relationships with users. Major platforms include Character.AI, Replika, Kindroid, and similar products. Users interact with these systems as if communicating with a persistent personality — sometimes for hours per day.

The legal issues arise because these systems: (1) often present themselves in human-like ways without clear AI disclosure; (2) encounter users in genuine emotional distress; (3) lack the crisis intervention protocols that licensed mental health professionals are required to follow; and (4) are disproportionately used by minors.


2. Why States Are Regulating: Character.AI Lawsuits and Documented Harms

The events that triggered this wave of legislation are well documented. A 14-year-old in Florida and a 17-year-old in Colorado died after extended interactions with AI companion platforms; in both cases the conversations showed indicators typical of suicide risk, yet the platforms did not redirect the users to crisis services. Character.AI faces approximately 58 civil lawsuits as of early 2026, according to multiple published reports. Congressional hearings were held in 2025.

The response at the state level has been faster than federal action. States have focused on three core requirements: (1) disclosure of AI status; (2) integration of crisis resources; and (3) special protections for minors.


3. California SB 243: The Most Comprehensive Enacted Law

Scope

California SB 243 covers operators of “companion AI” systems — platforms specifically designed for personal, social, or emotionally supportive interaction. It went into effect January 1, 2026.

Requirements

Under SB 243, operators must:

  1. Disclose clearly at the start of each session that the user is interacting with an AI
  2. Activate crisis protocols, including referral to the 988 Suicide & Crisis Lifeline, when users indicate distress
  3. Provide enhanced protections for users who are minors
  4. Refrain from representing the chatbot as a licensed therapist, counselor, or other mental health professional

Enforcement

Enforcement is by the California Attorney General and local district attorneys. Civil penalties run up to $2,500 per violation, with enhanced penalties for violations involving minors.
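A session-start AI disclosure is the baseline obligation across these laws. A minimal sketch of a hook that runs before any model output (the disclosure wording, the `user_is_minor` flag, and the minor-protection notice are illustrative assumptions, not statutory text):

```python
AI_DISCLOSURE = (
    "You are chatting with an AI companion. Responses are generated by "
    "a computer program, not a human."
)

def start_session(send, user_is_minor: bool) -> None:
    # Disclosure is delivered before any model-generated output.
    send(AI_DISCLOSURE)
    if user_is_minor:
        # Illustrative placeholder for enhanced minor protections.
        send("Parental controls and enhanced content filtering are active.")
```

Here `send` is whatever function delivers a message to the user, so the same hook works for web, mobile, or API surfaces.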


4. Washington HB 2225: Periodic Disclosure Requirement

Washington HB 2225, enacted March 24, 2026, and effective January 1, 2027, focuses on a distinct concern: users who forget they are interacting with AI during long sessions. The law requires:

  1. Clear AI disclosure at the start of each session
  2. Periodic in-session reminders of the chatbot's AI status at defined intervals during extended sessions
  3. Protections for users who are minors

Washington’s approach is deliberately narrower than California’s — it does not mandate specific crisis resources but requires the foundational disclosure that makes all other safety interventions possible.
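A periodic in-session reminder can be implemented with a simple session timer, as the compliance checklist later suggests. A minimal sketch, assuming a reminder interval and message text that are illustrative only and not taken from the statute:

```python
import time

# Illustrative interval only; HB 2225's actual reminder cadence should be
# confirmed against the enacted statute.
REMINDER_INTERVAL_SECONDS = 3 * 60 * 60

class DisclosureTimer:
    """Tracks elapsed time since the last AI-status disclosure and
    signals when an in-session reminder is due."""

    def __init__(self, interval=REMINDER_INTERVAL_SECONDS, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.last_disclosure = clock()  # session start counts as first disclosure

    def reminder_due(self):
        return self.clock() - self.last_disclosure >= self.interval

    def mark_disclosed(self):
        self.last_disclosure = self.clock()

def maybe_remind(timer, send):
    # Call on every message turn; emits a reminder only when the
    # interval has elapsed since the last disclosure.
    if timer.reminder_due():
        send("Reminder: you are chatting with an AI, not a human.")
        timer.mark_disclosed()
```

Injecting the clock makes the timer testable and keeps the reminder logic independent of the chat transport.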


5. Nebraska LB 525 & Idaho SB 1297: Conversational AI Safety Acts

Nebraska and Idaho enacted nearly identical legislation in April 2026, both effective July 1, 2027. Both are titled the “Conversational AI Safety Act.”

What the Acts Require

Both Acts require session-start AI disclosure, integration of 988 and crisis resources, periodic in-session disclosure reminders at defined intervals, and protections for users who are minors.

Nebraska LB 525 was signed on April 14, 2026. For a detailed breakdown, see our Nebraska LB 525 guide.

Idaho SB 1297 was enacted in April 2026. Sources: Idaho Legislature; Nebraska Legislature.


6. Oregon SB 1546: Suicide Detection Requirement

Oregon SB 1546, enacted in 2026 and effective January 1, 2027, takes a technically specific approach: it requires AI chatbot operators to implement automated detection of suicide and self-harm indicators in user conversations. When detection is triggered, the platform must surface crisis resources, including the 988 Suicide & Crisis Lifeline, to the user.

Oregon’s law is notable because it requires active monitoring of conversation content — not just disclosure at session start — creating a technical obligation to implement natural language processing that detects crisis indicators.
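The detect-then-respond flow the law contemplates can be sketched with pattern matching. This is a toy illustration only: the phrase list, resource wording, and function names are assumptions, and a real deployment would need a trained classifier with clinically reviewed lexicons rather than a handful of regexes:

```python
import re

# Toy phrase list for illustration only; a production system would use a
# trained classifier and clinically reviewed lexicons.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicide\b",
    r"\bself[- ]harm\b",
]
_CRISIS_RE = re.compile("|".join(CRISIS_PATTERNS), re.IGNORECASE)

CRISIS_RESOURCE = (
    "If you are in crisis, call or text 988 to reach the "
    "988 Suicide & Crisis Lifeline."
)

def detect_crisis_indicators(message: str) -> bool:
    """Return True when the message matches any crisis-indicator pattern."""
    return _CRISIS_RE.search(message) is not None

def handle_user_message(message: str, send) -> None:
    # Scan every user turn; surface crisis resources on a hit, per the
    # law's detection-then-respond structure.
    if detect_crisis_indicators(message):
        send(CRISIS_RESOURCE)
```

The key structural point is that detection runs on every turn of the conversation, not only at session start.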


7. Compliance Checklist for Chatbot Operators

For companion AI operators with users in any of the enacted-law states (California, New York, Washington, Oregon, Nebraska, Idaho, and Tennessee):

  1. Implement session-start AI disclosure. Every session must begin with clear, unambiguous notice that the user is interacting with an AI. This is required in every enacted-law state.
  2. Integrate 988 and crisis resources. Required in California, Oregon, Nebraska, and Idaho. Best practice: implement crisis detection and resource display universally.
  3. Implement crisis indicator detection. Oregon SB 1546 requires technical detection capability. California SB 243 requires crisis protocols when users “indicate distress.” Both effectively require automated monitoring of conversation content.
  4. Add periodic in-session reminders. Required in Washington, Nebraska, Idaho. Implement session timer with disclosure reminder at defined intervals.
  5. Implement minor user protections. Required in California, Washington, Nebraska, Idaho. Include age verification or parental controls, enhanced content filtering, and parental notification.
  6. Remove professional representation. Chatbots may not claim to be licensed therapists, counselors, or mental health professionals in California, Tennessee, and New York.
  7. Document compliance by jurisdiction. Maintain records of what disclosures are served to users in each state; this documentation will be required for regulatory defense.
  8. Monitor Connecticut SB 5. This bill passed the Connecticut Senate in April 2026 and would add an eighth state with disclosure requirements.
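The per-state obligations in this checklist can be captured as a small requirements matrix. A sketch, with the flags transcribed from the checklist items above; each should be verified against the statute's text before being relied on:

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class StateRequirements:
    session_start_disclosure: bool
    crisis_resources_988: bool
    periodic_reminders: bool
    minor_protections: bool
    no_professional_claims: bool

# Flags transcribed from the checklist; verify against each statute.
REQUIREMENTS = {
    "CA": StateRequirements(True, True, False, True, True),
    "WA": StateRequirements(True, False, True, True, False),
    "OR": StateRequirements(True, True, False, False, False),
    "NE": StateRequirements(True, True, True, True, False),
    "ID": StateRequirements(True, True, True, True, False),
    "TN": StateRequirements(True, False, False, False, True),
    "NY": StateRequirements(True, False, False, False, True),
}

def universal_baseline() -> dict:
    """Union of all state requirements: the highest-common-denominator
    feature set to ship to every user regardless of jurisdiction."""
    return {
        f.name: any(getattr(r, f.name) for r in REQUIREMENTS.values())
        for f in fields(StateRequirements)
    }
```

Computing the union rather than hand-picking features means the baseline updates automatically as new states are added to the matrix.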

8. What’s Next

The trajectory of companion chatbot regulation is clear: more states will enact laws in 2026 and 2027, and the requirements will become more technically specific. The federal government has not yet acted, though the FTC has indicated interest in deceptive practices by AI companion platforms. Congressional proposals have been introduced but have not advanced.

For operators, the practical question is whether to implement the highest-common-denominator compliance standard across all users or implement jurisdiction-specific features. Given the convergence of state laws on core requirements (disclosure, crisis protocols, minor protections), a universal baseline is likely the most cost-effective path.

Track Companion Chatbot Laws Across All 50 States

Our Companion Chatbot Tracker monitors every companion AI and conversational AI safety bill as it moves through state legislatures.


Sources & References

All claims are sourced from primary government, academic, and standards-body materials.

  1. National Conference of State Legislatures — Artificial Intelligence in the States — nonpartisan aggregator of state AI legislation
  2. NIST AI Risk Management Framework (AI RMF 1.0) — federal standard referenced by many state AI laws
  3. U.S. Department of Health and Human Services — Artificial Intelligence at HHS — federal AI policy in healthcare
  4. LegiScan — Bill Tracking and Aggregation — nonpartisan legislative tracking database
  5. Congress.gov — federal legislation and committee reports — official federal legislative information

See our methodology for how we source, verify, and update this content.