The federal government is now actively targeting state AI laws. Since January 2025, the Trump administration has issued two executive orders on artificial intelligence, established a dedicated litigation task force at the Department of Justice, released a White House legislative framework calling for federal preemption, and directed the Department of Commerce to identify which state AI statutes it views as “onerous.” The result is the most significant federal challenge to state AI regulation since legislatures began enacting AI-specific statutes in 2023.
For compliance officers, in-house counsel, and legal teams managing AI governance programs across jurisdictions, the central question is: which state AI laws are most at risk, and which are likely to survive? This analysis reviews the full federal action timeline and provides a preemption risk assessment for the major state AI frameworks.
Key Takeaways
- EO 14179 (Jan. 23, 2025): Revoked Biden AI orders; directed an AI Action Plan; does not target state laws.
- EO 14365 (Dec. 11, 2025): The operative preemption order — creates DOJ task force, directs Commerce to identify onerous state laws, conditions Broadband Equity, Access, and Deployment (BEAD) broadband funding on state AI policy compliance.
- DOJ AI Litigation Task Force (Jan. 9, 2026): Established by AG Pam Bondi; tasked to challenge state AI laws on constitutional and preemption grounds.
- Commerce report deadline (Mar. 11, 2026): Required evaluation of onerous state AI laws; as of late April 2026, not publicly released.
- White House National Policy Framework (Mar. 20, 2026): Non-binding legislative recommendations preserving child safety, consumer protection, and state procurement rules.
- Only state law named in EO 14365: Colorado’s AI Act and its “algorithmic discrimination” provisions.
- Carve-outs not targeted: Child safety laws, AI compute/data center infrastructure, and state government procurement and use of AI.
- What companies should do now: Continue complying with state AI laws — no state law has been invalidated.
The Federal AI Regulatory Timeline
The administration has acted through a deliberate sequence of executive actions over 15 months.
January 23, 2025 — EO 14179: EO 14179 revoked Biden’s October 2023 AI executive order (EO 14110), rescinded OMB memoranda on federal AI governance, and directed development of an AI Action Plan. EO 14179 did not target state laws.
December 11, 2025 — EO 14365: The operative preemption order, EO 14365, signed after Congress twice failed to enact a statutory moratorium on state AI regulation, operates through four mechanisms: (1) a DOJ AI Litigation Task Force tasked to challenge state AI laws; (2) a Commerce Department evaluation of onerous state laws within 90 days; (3) BEAD broadband funding conditionality for states with identified onerous AI laws; and (4) FCC and FTC preemption proceedings to issue federal AI standards. The order explicitly names the Colorado AI Act — the only state law called out — as an example: “a new Colorado law banning ‘algorithmic discrimination’ may even force AI models to produce false results.”
January 9, 2026 — DOJ AI Litigation Task Force: AG Pam Bondi established the task force within 30 days of EO 14365's signing. As of publication, no lawsuits have been filed.
March 6, 2026 — GSA AI Procurement Clause: The General Services Administration released a draft contract clause, GSAR 552.239-7001, for inclusion in all GSA Schedule contracts for AI capabilities. This governs federal procurement, not state regulation.
March 11, 2026 — Commerce Report Deadline: As of late April 2026, the Commerce Department’s evaluation, due March 11, 2026, has not been publicly released. Organizations should treat its eventual publication as the trigger for federal preemption challenges.
March 20, 2026 — White House National Policy Framework: The White House released its National Policy Framework for Artificial Intelligence — non-binding legislative recommendations. The framework preserves “traditional state police powers, particularly laws of general applicability that protect children, prevent fraud, and protect consumers,” and state zoning laws for AI infrastructure.
What EO 14365 does not do: It does not directly invalidate any state law. All state AI laws remain in full force until a federal court grants an injunction or a federal statute is enacted. The Section 8(b) carve-outs instruct that proposed federal legislation should not preempt state laws relating to: (i) child safety protections; (ii) AI compute and data center infrastructure; (iii) state government procurement and use of AI; and (iv) other topics as shall be determined.
Which State AI Laws Are at Risk?
| State Law | Effective Date | Key Features at Risk | Preemption Risk |
|---|---|---|---|
| Colorado SB 24-205 — Colorado AI Act | June 30, 2026 | Algorithmic discrimination ban; developer/deployer liability; impact assessments | High — only law named in EO 14365 |
| California SB 53 — TFAIA | Effective 2026 | Frontier model safety reporting; safety framework publication | Medium-High — compelled disclosure in EO’s crosshairs |
| Texas HB 149 — TRAIGA | January 1, 2026 | Risk assessments; high-risk AI controls; deployer obligations | Medium — Republican state complicates political calculus |
| Illinois HB 3773 — AI in Employment | January 1, 2026 | Algorithmic bias in employment; disclosure requirements | Medium — mirrors Colorado concern; narrower employment scope |
| State chatbot disclosure laws (CA SB 243, NE LB 525, OR SB 1546, etc.) | Various | User disclosure; minor safety; mental health bans | Low — fall within child safety carve-out |
| State deepfake laws | Various | Content attribution; political advertising disclosure | Low-Medium — narrowly scoped |
Colorado AI Act: Highest Risk
Colorado’s SB 24-205 is the administration’s stated target. The law requires developers and deployers of “high-risk AI systems” to exercise reasonable care to protect consumers from “algorithmic discrimination” and imposes impact assessment and disclosure requirements. Its effective date has been pushed to June 30, 2026 amid legislative efforts to amend it. The DOJ task force is most likely to challenge this law first.
California TFAIA: High Risk with First Amendment Vector
California’s TFAIA (SB 53) requires developers of powerful AI models to publish safety frameworks and report safety incidents to California regulators. The administration’s FTC preemption theory — that compelled disclosure constitutes impermissible compelled speech — may be deployed against this law.
Texas TRAIGA: Medium Risk, Politically Complicated
Texas’s TRAIGA (HB 149) requires risk assessments and deployer controls for high-risk AI systems. The political calculus is complicated: challenging a law enacted by Texas’s Republican-dominated government would cut against the administration’s political alignment. Compliance teams should treat TRAIGA as enforceable and continue implementation.
Illinois HB 3773: Medium Risk
Illinois HB 3773 prohibits algorithmic bias in employment decisions. Its employment-specific scope may reduce the attack surface, though the algorithmic discrimination framing mirrors EO 14365’s stated concern.
What Companies Should Do Now
The critical operational point: no state AI law has been invalidated. State AI laws remain fully enforceable.
- Continue state AI law compliance. Do not pause compliance work based on executive order activity. The risk of non-compliance is more immediate than the risk of a federal preemption order.
- Map your exposure. Focus near-term attention on Colorado (effective June 30, 2026), California TFAIA (in effect), and Texas TRAIGA (in effect since January 1, 2026).
- Monitor three specific developments: (a) Publication of the Commerce Department’s evaluation; (b) the FTC’s policy statement on Section 5 preemption of AI disclosure requirements; (c) the first DOJ task force lawsuit.
- Build contingency into compliance programs. Design frameworks so that impact assessments, transparency disclosures, and risk management documentation serve compliance purposes under multiple possible legal regimes.
- Assess BEAD program exposure. Evaluate whether federal identification of your state’s AI laws as onerous could affect its BEAD broadband funding, and what that would mean for your regulatory relationships.
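For teams tracking these compliance dates programmatically, the exposure-mapping step above can be sketched as a minimal watchlist. This is an illustrative helper, not part of any compliance product; the structure and function names are hypothetical, and the dates are taken from this article (California SB 53 is modeled as already in effect, with an assumed January 1, 2026 date for illustration only).

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrackedLaw:
    name: str
    effective: date        # effective date as stated in this article
    preemption_risk: str   # qualitative rating from the risk table above

# Illustrative watchlist drawn from the article's risk table.
TRACKED = [
    TrackedLaw("Colorado SB 24-205 (AI Act)", date(2026, 6, 30), "high"),
    TrackedLaw("Texas HB 149 (TRAIGA)", date(2026, 1, 1), "medium"),
    # Assumed date: the article says only "in effect" for SB 53.
    TrackedLaw("California SB 53 (TFAIA)", date(2026, 1, 1), "medium-high"),
]

def upcoming_deadlines(laws, today, horizon_days=90):
    """Return laws whose effective dates fall within the next horizon_days."""
    return [law for law in laws
            if 0 <= (law.effective - today).days <= horizon_days]
```

Run against the article's publication date of April 22, 2026, only the Colorado AI Act falls inside a 90-day horizon; the Texas and California laws are already in effect and drop out of the forward-looking view.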
For comparison of how U.S. state AI laws relate to the EU AI Act, see our EU AI Act vs. U.S. State AI Laws comparison. For our methodology on assessing preemption risk, see the AI Laws by State methodology page.
How Courts Are Likely to View These Challenges
Legal experts have identified several threshold obstacles to the administration’s preemption strategy (Alston & Bird, Sidley Austin, Ropes & Gray, Baker Botts):
- No federal AI statute exists to serve as the preemptive framework. Conflict preemption typically requires a federal law with which the state law directly conflicts.
- Executive orders do not have the force of federal statutes. Preemption through executive order alone faces significant constitutional obstacles.
- The Major Questions Doctrine may limit FCC and FTC authority to issue preemptive AI regulations without specific congressional authorization.
- First Amendment compelled speech doctrine could cut both ways: the administration argues that disclosure mandates are unconstitutional compelled speech, while states could counter that regulating AI system outputs governs commercial conduct rather than protected speech.
These obstacles help explain why the task force has not yet filed suit despite being established more than three months ago.
Frequently Asked Questions
Is the Colorado AI Act still enforceable?
Yes. As of publication, no court has enjoined the Colorado AI Act. Its effective date is June 30, 2026. Companies subject to the Act should continue compliance preparations unless and until a court grants a preliminary injunction.
Can the Trump administration preempt state AI laws through an executive order alone?
No — not directly. An executive order applies to federal agencies, not state governments. Only a federal statute or binding agency regulation backed by statutory authority can directly preempt state law.
What does the DOJ task force actually do?
It is empowered to challenge state AI laws in federal court — arguing they unconstitutionally burden interstate commerce, are preempted by existing federal regulations, or are otherwise unlawful. No lawsuits have been filed as of publication.
Which states are most likely to be sued first?
Colorado is the most likely initial target — the only state named in the EO, with an imminent effective date of June 30, 2026. California is a close second given the TFAIA’s frontier model reporting requirements.
Do child safety chatbot laws face preemption risk?
Low. EO 14365 and the White House Framework both explicitly carve out child safety protections. See our analysis of the Nebraska LB 525 Conversational AI Safety Act.
Related Resources
- Colorado SB 24-205 — Colorado AI Act
- California SB 53 — Transparency in Frontier AI Act
- Texas HB 149 — TRAIGA
- Illinois HB 3773 — AI in Employment
- EU AI Act vs. U.S. State AI Laws Comparison
- AI Laws by State Methodology
AI Laws by State tracks state AI legislation across all 50 states. This post reflects publicly available information as of April 22, 2026, and is not legal advice. Sources: EO 14179 (White House); EO 14365 (White House); DOJ Task Force (Broadband Breakfast); White House National Policy Framework (WilmerHale); GSA GSAR 552.239-7001 (Holland & Knight). For jurisdiction-specific guidance, consult qualified legal counsel.