State Spotlight

Nebraska Chatbot Law (LB 525): What the Conversational AI Safety Act Means for Your Business

AI Laws by State Team · April 23, 2026

Nebraska has enacted a chatbot safety law. Governor Jim Pillen signed Legislative Bill 525 on April 14, 2026, making Nebraska the fourth state to enact a chatbot law in 2026. The bill’s second part — the Conversational Artificial Intelligence Safety Act — requires operators of conversational AI services to disclose their AI nature to all users and to implement additional safeguards for minor users. It also bars any chatbot from claiming to provide professional mental or behavioral health care. Compliance is required by July 1, 2027.



Key Takeaways

  - Governor Jim Pillen signed LB 525 on April 14, 2026, making Nebraska the fourth state to enact a chatbot law in 2026.
  - The Conversational Artificial Intelligence Safety Act requires operators to disclose a chatbot's AI nature to all users, with heightened rules for known minors.
  - Every operator must adopt a protocol for responding to suicidal ideation and self-harm, including crisis referrals.
  - Chatbots may not represent that they provide professional mental or behavioral health care.
  - Enforcement rests exclusively with the Nebraska Attorney General; there is no private right of action.
  - Compliance is required by July 1, 2027.

What the Law Says

The Conversational Artificial Intelligence Safety Act (Sections 12–18)

LB 525 was introduced by Senator Mike Jacobson of District 42, at the request of Governor Pillen. The Act creates a tiered framework: baseline obligations for all users, heightened rules for minors, a mandatory self-harm protocol, and a categorical prohibition on mental health therapy claims.

Who Qualifies as a “Conversational AI Service”?

The law defines a conversational artificial intelligence service as an AI application or interface that is accessible to the general public and primarily simulates human conversation through textual, visual, or aural communications. Expressly excluded:

  - Applications primarily designed and marketed for commercial use by business entities
  - Applications designed to address only narrow and discrete topics, such as customer service or IT helpdesk bots

Obligations Toward All Users (Section 15)

If a reasonable person would be misled into believing they are speaking with a human, the operator must clearly and conspicuously disclose that the service is artificial intelligence. This is a perception-based trigger that will apply broadly to AI designed with social or personal characteristics.

Obligations Toward Minor Account Holders (Section 14)

For known minor users, operators must disclose AI status either through a persistent visible disclaimer or through a disclosure at the beginning of each session and at least once every three hours during a continuous session. Beyond disclosure, operators must:

  - Refrain from reward mechanics or other gamification features designed to prolong minors' engagement
  - Maintain content guardrails appropriate for minor users
  - Provide parental tools for minor accounts

Self-Harm and Suicidal Ideation Protocol (Section 16)

All operators must adopt a protocol to respond to user prompts regarding suicidal ideation or self-harm, including reasonable efforts to refer users to crisis service providers such as a suicide hotline or crisis text line.
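At its simplest, such a protocol is a pre-response check that routes crisis-signaling messages to a referral. This is a hypothetical sketch: the keyword list stands in for a real self-harm classifier, and the referral text is an illustrative assumption (988 is the U.S. Suicide & Crisis Lifeline):

```python
# Hypothetical sketch of a crisis referral protocol. A production system
# would use a trained classifier, not this stand-in keyword check.
CRISIS_REFERRAL = (
    "You're not alone. If you are thinking about self-harm, please call or "
    "text 988 to reach the Suicide & Crisis Lifeline, or contact a local "
    "crisis service provider."
)

CRISIS_SIGNALS = ("suicide", "kill myself", "self-harm", "end my life")

def route_message(user_message: str, generate_reply) -> str:
    """Refer crisis-signaling prompts to crisis services; otherwise reply normally."""
    lowered = user_message.lower()
    if any(signal in lowered for signal in CRISIS_SIGNALS):
        return CRISIS_REFERRAL
    return generate_reply(user_message)
```

Running the check before the model generates a reply ensures the referral cannot be preempted by an off-policy model response.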

Mental Health Prohibition (Section 17)

Operators may not knowingly and intentionally cause or program a conversational AI service to make any representation that explicitly indicates it is designed to provide professional mental or behavioral health care. Any operator whose chatbot is marketed as a mental health tool or therapy companion must audit their prompts, marketing, and system outputs before July 1, 2027.


Who It Applies To

The Act imposes obligations on operators — the person or entity making the conversational AI service available to Nebraska users. The law explicitly insulates model developers from liability for violations by third-party operators:

“The Conversational Artificial Intelligence Act shall not create liability for the developer of an artificial intelligence model for any violation of the act by a conversational artificial intelligence system developed by a third-party operator.”

This developer carve-out mirrors the structure of California’s SB 243 and is significant for foundation model companies whose APIs are used by third-party app builders. The law applies to operators serving Nebraska users — it is not limited to Nebraska-headquartered businesses.


Penalties and Enforcement

The Nebraska Attorney General holds exclusive enforcement authority. The AG may bring a civil action on behalf of the State or any aggrieved person to obtain the remedies available under the act.

There is no private right of action. This is consistent with Idaho’s S 1297 and Utah’s AI Policy Act, and differs from California’s SB 243 ($1,000 statutory damages per violation, private right of action).


Compliance Timeline: What to Do Now

Nebraska’s Conversational AI Safety Act becomes operative on July 1, 2027, giving operators approximately 14 months from the law’s signing to achieve compliance.

  1. Assess scope. Determine whether your consumer-facing AI products are accessible to the general public and primarily simulate human conversation.
  2. Audit minor user flows. Review session start flows, persistent UI elements, reward mechanics, content guardrails, and parental tools.
  3. Review system prompts and marketing. Check for language suggesting the service provides therapy, mental health counseling, or behavioral health care.
  4. Implement self-harm protocol. If not already done, implement a crisis referral protocol for users expressing suicidal ideation or self-harm intent.
  5. Update operator agreements. If you are a model developer, review your API terms of service to pass compliance obligations downstream and protect your developer-carve-out status.
  6. Document your disclosure mechanisms. The law requires “clear and conspicuous” disclosure — retain evidence of UI design decisions and session-start flows.

How This Compares to Other State Chatbot Laws

Nebraska joins Washington (HB 2225, signed March 24), Oregon (SB 1546, signed April 1), and Idaho (S 1297, signed approximately April 2–5) in enacting a chatbot law in 2026. California’s SB 243 had taken effect January 1, 2026.

| Feature | Nebraska LB 525 | California SB 243 | Oregon SB 1546 | Idaho S 1297 | Utah AI Policy Act |
|---|---|---|---|---|---|
| Law type | Conversational AI | Companion chatbot | AI companion | Conversational AI | Mental health chatbot |
| Effective date | July 1, 2027 | Jan. 1, 2026 | Jan. 1, 2027 | July 1, 2027 | In effect (2025) |
| All-user disclosure | Perception-based | Perception-based | Perception-based | Yes | Yes |
| Minor-specific rules | Yes | Yes | Yes (expanded) | Yes | No |
| Mental health ban | Yes | No | No | No | Disclosure + safe harbor |
| Private right of action | No | Yes ($1,000/violation) | Yes ($1,000/violation) | No | No |
| Enforcement | AG only | AG + private | AG + private | AG only | AG + fines up to $2,500/violation |
| Gamification ban (minors) | Yes | No | Yes | Yes | No |
| Self-harm protocol | Yes | Yes | Yes | Yes | Yes |
| Developer carve-out | Yes | N/A | Yes | Yes | N/A |

For a full breakdown of the evolving AI law landscape, see AI Laws by State’s methodology page and the Nebraska state overview.


Frequently Asked Questions

Does Nebraska LB 525 apply to my customer service chatbot?

The law excludes applications “primarily designed and marketed for commercial use by business entities” and those designed for “narrow and discrete” topics. A standard customer service chatbot handling order inquiries or IT helpdesk tickets is likely outside scope. A consumer-facing AI assistant engaging in broad, open-ended social conversation would likely be covered.

We are a model developer (API provider). Are we liable if one of our API customers violates this law?

No — provided the violation is committed by a third-party operator building on your model. The law explicitly exempts model developers from liability for third-party violations. Review your API terms of service to contractually require downstream compliance.

The law says we must disclose when a “reasonable person” would be misled. How do we assess that?

Courts and regulators typically evaluate the totality of the user experience: the chatbot’s name, avatar, conversational style, and whether it proactively denies being AI. If your chatbot has a human name and persona and never identifies itself as AI, a reasonable person would likely be misled. Clear labeling at session start is the safest approach.

Does the federal AI preemption executive order affect LB 525?

The Trump administration’s EO 14365 calls for challenging “onerous” state AI laws, citing Colorado’s algorithmic discrimination law by name. Nebraska’s law — focused on disclosure and minor safety — falls closer to the child safety carve-out the EO expressly preserves. Preemption risk for LB 525 is assessed as low.


Related Resources

AI Laws by State tracks state AI legislation across all 50 states. This post reflects the law as enacted; it is not legal advice. Source: LB 525 bill text (Nebraska Legislature). For jurisdiction-specific guidance, consult qualified legal counsel.
