Nebraska has enacted a chatbot safety law. Governor Jim Pillen signed Legislative Bill 525 on April 14, 2026, making Nebraska the fourth state to enact a chatbot law in 2026. The bill’s second part — the Conversational Artificial Intelligence Safety Act — requires operators of conversational AI services to disclose their AI nature to all users and to implement additional safeguards for minor users. It also bars any chatbot from claiming to provide professional mental or behavioral health care. Compliance is required by July 1, 2027.
Key Takeaways
- Signed: April 14, 2026, by Governor Jim Pillen; effective July 1, 2027 for the Conversational AI Safety Act provisions.
- Who must comply: Operators of “conversational AI services” — consumer-facing apps that primarily simulate human conversation — serving Nebraska users.
- Core obligation (all users): If a reasonable person would be misled into thinking they are interacting with a human, the operator must clearly and conspicuously disclose that the service is AI.
- Minor-specific obligations: Either a persistent AI disclaimer or a session-start disclosure repeated at least every three hours; no unpredictable reward schemes; reasonable measures to block sexually explicit content; no statements simulating emotional dependence or claiming a romantic or human identity.
- Mental health prohibition: Operators cannot knowingly program a chatbot to represent that it provides professional mental or behavioral health care.
- Penalties: Civil penalties of at least $1,000 per violation, up to $500,000 per operator per enforcement action; enforced by the Nebraska Attorney General.
- No private right of action.
- Bill structure: Part 1 is the Agricultural Data Privacy Act (Sections 1–10); Part 2 is the Conversational AI Safety Act (Sections 12–18).
What the Law Says
The Conversational Artificial Intelligence Safety Act (Sections 12–18)
LB 525 was introduced by Senator Mike Jacobson of District 42, at the request of Governor Pillen. The Act creates a tiered framework: baseline obligations for all users, heightened rules for minors, a mandatory self-harm protocol, and a categorical prohibition on mental health therapy claims.
Who Qualifies as a “Conversational AI Service”?
The law defines a conversational artificial intelligence service as an AI application or interface that is accessible to the general public and primarily simulates human conversation through textual, visual, or aural communications. Expressly excluded:
- Applications not accessible to the general public (enterprise or internal tools)
- Applications primarily designed and marketed for commercial use by business entities
- Applications designed for a narrow and discrete topic
- Customer service chatbots and similar narrow-function tools
Obligations Toward All Users (Section 15)
If a reasonable person would be misled into believing they are speaking with a human, the operator must clearly and conspicuously disclose that the service is artificial intelligence. This is a perception-based trigger that will apply broadly to AI services designed to exhibit humanlike social or personal characteristics.
Obligations Toward Minor Account Holders (Section 14)
For known minor users, operators must disclose AI status either through a persistent visible disclaimer or a disclosure at the beginning of each session and at least once every three hours during a continuous session. Beyond disclosure, operators must:
- Not provide minor users with unpredictable reward schemes designed to encourage increased engagement (gamification prohibition).
- Institute reasonable measures to prevent sexually explicit content generation toward minors.
- Institute reasonable measures to prevent statements leading a reasonable person to believe the minor is interacting with a human — including sentience claims, simulated emotional dependence, or adult-minor romantic roleplay.
- Offer tools for minor account holders (and parents of children under 13) to manage account privacy and settings.
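The disclosure-timing rule for minors (session-start disclosure repeated at least every three hours of continuous use) can be sketched as a simple scheduler. This is an illustrative sketch only, not statutory language: the class and constant names are ours, and the three-hour interval is the statutory minimum. Operators using a persistent visible disclaimer would not need timed re-disclosure at all.

```python
from datetime import datetime, timedelta

# Statutory minimum from Section 14: re-disclose at least once every
# 3 hours during a continuous session (constant name is illustrative).
REDISCLOSURE_INTERVAL = timedelta(hours=3)

class MinorDisclosureScheduler:
    """Tracks when an AI-status disclosure is due for a known minor user.

    Hypothetical helper for the session-start + 3-hour option; not a
    compliance guarantee.
    """

    def __init__(self):
        self.last_disclosed = None  # None => session not yet started

    def disclosure_due(self, now: datetime) -> bool:
        # Due at session start, and again once 3 hours have elapsed.
        if self.last_disclosed is None:
            return True
        return now - self.last_disclosed >= REDISCLOSURE_INTERVAL

    def record_disclosure(self, now: datetime) -> None:
        self.last_disclosed = now
```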
Self-Harm and Suicidal Ideation Protocol (Section 16)
All operators must adopt a protocol to respond to user prompts regarding suicidal ideation or self-harm, including reasonable efforts to refer users to crisis service providers such as a suicide hotline or crisis text line.
Mental Health Prohibition (Section 17)
Operators may not knowingly and intentionally cause or program a conversational AI service to make any representation that explicitly indicates it is designed to provide professional mental or behavioral health care. Any operator whose chatbot is marketed as a mental health tool or therapy companion should audit its prompts, marketing, and system outputs before July 1, 2027.
Who It Applies To
The Act imposes obligations on operators — the person or entity making the conversational AI service available to Nebraska users. The law explicitly insulates model developers from liability for violations by third-party operators:
“The Conversational Artificial Intelligence Act shall not create liability for the developer of an artificial intelligence model for any violation of the act by a conversational artificial intelligence system developed by a third-party operator.”
This developer carve-out mirrors the structure of California’s SB 243 and is significant for foundation model companies whose APIs are used by third-party app builders. The law applies to operators serving Nebraska users — it is not limited to Nebraska-headquartered businesses.
Penalties and Enforcement
The Nebraska Attorney General holds exclusive enforcement authority. The AG may bring a civil action on behalf of the State or any aggrieved person. Available remedies:
- Injunctive or declaratory relief
- Actual damages
- Civil penalties of at least $1,000 per violation, up to $500,000 per operator per enforcement action
- Reasonable expenses including attorney’s fees
There is no private right of action. This is consistent with Idaho’s S 1297 and Utah’s AI Policy Act, and differs from California’s SB 243 ($1,000 statutory damages per violation, private right of action).
Compliance Timeline: What to Do Now
Nebraska’s Conversational AI Safety Act becomes operative on July 1, 2027, giving operators roughly 14½ months from the law’s signing to achieve compliance.
- Assess scope. Determine whether your consumer-facing AI products are accessible to the general public and primarily simulate human conversation.
- Audit minor user flows. Review session start flows, persistent UI elements, reward mechanics, content guardrails, and parental tools.
- Review system prompts and marketing. Check for language suggesting the service provides therapy, mental health counseling, or behavioral health care.
- Implement self-harm protocol. If not already done, implement a crisis referral protocol for users expressing suicidal ideation or self-harm intent.
- Update operator agreements. If you are a model developer, review your API terms of service to pass compliance obligations downstream and protect your developer-carve-out status.
- Document your disclosure mechanisms. The law requires “clear and conspicuous” disclosure — retain evidence of UI design decisions and session-start flows.
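To support the evidence-retention step above, an operator might record each disclosure shown to a user as an append-only audit log entry. The schema below is entirely hypothetical; the statute requires “clear and conspicuous” disclosure, not any particular logging format:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DisclosureEvent:
    """One AI-status disclosure shown to a user (hypothetical schema)."""
    user_id: str
    session_id: str
    shown_at: str   # ISO 8601 UTC timestamp
    mechanism: str  # e.g. "session_start_banner", "persistent_badge"

def log_disclosure(user_id: str, session_id: str, mechanism: str) -> str:
    """Serialize a disclosure event as one JSON line for an audit log."""
    event = DisclosureEvent(
        user_id=user_id,
        session_id=session_id,
        shown_at=datetime.now(timezone.utc).isoformat(),
        mechanism=mechanism,
    )
    return json.dumps(asdict(event))
```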
How This Compares to Other State Chatbot Laws
Nebraska joins Washington (HB 2225, signed March 24), Oregon (SB 1546, signed April 1), and Idaho (S 1297, signed approximately April 2–5) in enacting a chatbot law in 2026. California’s SB 243 had taken effect January 1, 2026.
| Feature | Nebraska LB 525 | California SB 243 | Oregon SB 1546 | Idaho S 1297 | Utah AI Policy Act |
|---|---|---|---|---|---|
| Law type | Conversational AI | Companion chatbot | AI companion | Conversational AI | Mental health chatbot |
| Effective date | July 1, 2027 | Jan. 1, 2026 | Jan. 1, 2027 | July 1, 2027 | In effect (2025) |
| All-user disclosure | Perception-based | Perception-based | Perception-based | Yes | Yes |
| Minor-specific rules | Yes | Yes | Yes (expanded) | Yes | No |
| Mental health ban | Yes | No | No | No | Disclosure + safe harbor |
| Private right of action | No | Yes ($1,000/violation) | Yes ($1,000/violation) | No | No |
| Enforcement | AG only | AG + private | AG + private | AG only | AG + fines up to $2,500/violation |
| Gamification ban (minors) | Yes | No | Yes | Yes | No |
| Self-harm protocol | Yes | Yes | Yes | Yes | Yes |
| Developer carve-out | Yes | N/A | Yes | Yes | N/A |
For a full breakdown of the evolving AI law landscape, see AI Laws by State’s methodology page and the Nebraska state overview.
Frequently Asked Questions
Does Nebraska LB 525 apply to my customer service chatbot?
The law excludes applications “primarily designed and marketed for commercial use by business entities” and those designed for “narrow and discrete” topics. A standard customer service chatbot handling order inquiries or IT helpdesk tickets is likely outside scope. A consumer-facing AI assistant engaging in broad, open-ended social conversation would likely be covered.
We are a model developer (API provider). Are we liable if one of our API customers violates this law?
No — provided the violation is committed by a third-party operator building on your model. The law explicitly exempts model developers from liability for third-party violations. Review your API terms of service to contractually require downstream compliance.
The law says we must disclose when a “reasonable person” would be misled. How do we assess that?
Courts and regulators typically evaluate the totality of the user experience: the chatbot’s name, avatar, conversational style, and whether it proactively denies being AI. If your chatbot has a human name and persona and never identifies itself as AI, a reasonable person would likely be misled. Clear labeling at session start is the safest approach.
Does the federal AI preemption executive order affect LB 525?
The Trump administration’s EO 14365 calls for challenging “onerous” state AI laws, citing Colorado’s algorithmic discrimination law by name. Nebraska’s law — focused on disclosure and minor safety — falls closer to the child safety carve-out the EO expressly preserves. Preemption risk for LB 525 is assessed as low.
Related Resources
- Nebraska AI Laws Overview
- California SB 243 — Companion Chatbots Act
- Colorado SB 24-205 — Colorado AI Act
- Illinois HB 3773 — AI in Employment
- EU AI Act vs. U.S. State AI Laws
- AI Laws by State Methodology
AI Laws by State tracks state AI legislation across all 50 states. This post reflects the law as enacted; it is not legal advice. Source: LB 525 bill text (Nebraska Legislature). For jurisdiction-specific guidance, consult qualified legal counsel.