
Trump’s AI Executive Order and DOJ Task Force: Which State AI Laws Survive?

AI Laws by State Team · April 28, 2026 · 12 min read

The federal government is now actively targeting state AI laws. Since January 2025, the Trump administration has issued two executive orders on artificial intelligence, established a dedicated litigation task force at the Department of Justice, released a White House legislative framework calling for federal preemption, and directed the Department of Commerce to identify which state AI statutes it views as “onerous.” The result is the most significant federal challenge to state AI regulation since legislatures began enacting AI-specific statutes in 2023.

For compliance officers, in-house counsel, and legal teams managing AI governance programs across jurisdictions, the central question is: which state AI laws are most at risk, and which are likely to survive? This analysis reviews the full federal action timeline and provides a preemption risk assessment for the major state AI frameworks.


Key Takeaways

- EO 14365 (December 11, 2025) directs the DOJ, Commerce Department, FCC, and FTC to challenge state AI laws, but it does not itself invalidate any state statute.
- The Colorado AI Act is the only state law named in the order and is the most likely first litigation target.
- Child safety, AI infrastructure, and state procurement laws fall within explicit carve-outs and face low preemption risk.
- All state AI laws remain enforceable today. Companies should continue compliance work while monitoring the Commerce Department report, FTC preemption activity, and the first DOJ task force lawsuit.

The Federal AI Regulatory Timeline

The administration has acted through a sequence of executive actions spanning 15 months.

January 23, 2025 — EO 14179: EO 14179 revoked Biden’s October 2023 AI executive order (EO 14110), rescinded OMB memoranda on federal AI governance, and directed development of an AI Action Plan. EO 14179 did not target state laws.

December 11, 2025 — EO 14365: The operative preemption order. Signed after Congress twice failed to enact a statutory moratorium on state AI regulation, EO 14365 operates through four mechanisms: (1) a DOJ AI Litigation Task Force charged with challenging state AI laws in court; (2) a Commerce Department evaluation, due within 90 days, identifying “onerous” state AI laws; (3) conditioning BEAD broadband funding for states with identified onerous AI laws; and (4) FCC and FTC preemption proceedings to issue federal AI standards. The order names only one state law, the Colorado AI Act: “a new Colorado law banning ‘algorithmic discrimination’ may even force AI models to produce false results.”

January 9, 2026 — DOJ AI Litigation Task Force: AG Pam Bondi established the task force less than a month after EO 14365 was signed. As of publication, no lawsuits have been filed.

March 6, 2026 — GSA AI Procurement Clause: The General Services Administration released a draft contract clause, GSAR 552.239-7001, for inclusion in all GSA Schedule contracts for AI capabilities. This governs federal procurement, not state regulation.

March 11, 2026 — Commerce Report Deadline: As of late April 2026, the Commerce Department’s evaluation, due March 11, 2026, has not been publicly released. Organizations should treat its eventual publication as the trigger for federal preemption challenges.

March 20, 2026 — White House National Policy Framework: The White House released its National Policy Framework for Artificial Intelligence — non-binding legislative recommendations. The framework preserves “traditional state police powers, particularly laws of general applicability that protect children, prevent fraud, and protect consumers,” and state zoning laws for AI infrastructure.

What EO 14365 does not do: It does not directly invalidate any state law. All state AI laws remain in full force until a federal court grants an injunction or a federal statute is enacted. The Section 8(b) carve-outs instruct that proposed federal legislation should not preempt state laws relating to: (i) child safety protections; (ii) AI compute and data center infrastructure; (iii) state government procurement and use of AI; and (iv) other topics as shall be determined.


Which State AI Laws Are at Risk?

| State Law | Effective Date | Key Features at Risk | Preemption Risk |
|---|---|---|---|
| Colorado SB 24-205 — Colorado AI Act | June 30, 2026 | Algorithmic discrimination ban; developer/deployer liability; impact assessments | High — only law named in EO 14365 |
| California SB 53 — TFAIA | Effective 2026 | Frontier model safety reporting; safety framework publication | Medium-High — compelled disclosure in EO’s crosshairs |
| Texas HB 149 — TRAIGA | January 1, 2026 | Risk assessments; high-risk AI controls; deployer obligations | Medium — Republican state complicates political calculus |
| Illinois HB 3773 — AI in Employment | Effective 2025 | Algorithmic bias in employment; disclosure requirements | Medium — mirrors Colorado concern; narrower employment scope |
| State chatbot disclosure laws (CA SB 243, NE LB 525, OR SB 1546, etc.) | Various | User disclosure; minor safety; mental health bans | Low — fall within child safety carve-out |
| State deepfake laws | Various | Content attribution; political advertising disclosure | Low-Medium — narrowly scoped |

Colorado AI Act: Highest Risk

Colorado’s SB 24-205 is the administration’s stated target. The law requires developers and deployers of “high-risk AI systems” to exercise reasonable care to protect consumers from “algorithmic discrimination” and imposes impact assessment and disclosure requirements. Its effective date has been pushed to June 30, 2026 amid legislative efforts to amend it. The DOJ task force is most likely to challenge this law first.

California TFAIA: High Risk with First Amendment Vector

California’s TFAIA (SB 53) requires developers of powerful AI models to publish safety frameworks and report safety incidents to California regulators. The administration’s FTC preemption theory — that compelled disclosure constitutes impermissible compelled speech — may be deployed against this law.

Texas TRAIGA: Medium Risk, Politically Complicated

Texas’s TRAIGA (HB 149) requires risk assessments and deployer controls for high-risk AI systems. The political calculus is complicated by Texas’s Republican-dominated government. Compliance teams should treat TRAIGA as enforceable and continue implementation.

Illinois HB 3773: Medium Risk

Illinois HB 3773 prohibits algorithmic bias in employment decisions. Its employment-specific scope may reduce the attack surface, though the algorithmic discrimination framing mirrors EO 14365’s stated concern.


What Companies Should Do Now

The critical operational point: no state AI law has been invalidated. State AI laws remain fully enforceable.

  1. Continue state AI law compliance. Do not pause compliance work based on executive order activity. The risk of non-compliance is more immediate than the risk of a federal preemption order.
  2. Map your exposure. Focus near-term attention on Colorado (effective June 30, 2026), California TFAIA (in effect), and Texas TRAIGA (in effect since January 1, 2026).
  3. Monitor three specific developments: (a) Publication of the Commerce Department’s evaluation; (b) the FTC’s policy statement on Section 5 preemption of AI disclosure requirements; (c) the first DOJ task force lawsuit.
  4. Build contingency into compliance programs. Design frameworks so that impact assessments, transparency disclosures, and risk management documentation serve compliance purposes under multiple possible legal regimes.
  5. Assess BEAD program exposure. Evaluate whether your state’s receipt of BEAD broadband funding is relevant to your regulatory relationships.

For comparison of how U.S. state AI laws relate to the EU AI Act, see our EU AI Act vs. U.S. State AI Laws comparison. For our methodology on assessing preemption risk, see the AI Laws by State methodology page.


How Courts Are Likely to View These Challenges

Legal experts at Alston & Bird, Sidley Austin, Ropes & Gray, and Baker Botts have identified several threshold obstacles to the administration’s preemption strategy: an executive order cannot itself preempt state law; FCC and FTC preemption rules must rest on statutory authority that courts will scrutinize closely; and litigation arguing that state AI laws unconstitutionally burden interstate commerce is fact-intensive and uncertain. These obstacles help explain why the task force has not yet filed suit despite being established more than three months ago.


Frequently Asked Questions

Is the Colorado AI Act still enforceable?

Yes. As of publication, no court has enjoined the Colorado AI Act. Its effective date is June 30, 2026. Companies subject to the Act should continue compliance preparations unless and until a court grants a preliminary injunction.

Can the Trump administration preempt state AI laws through an executive order alone?

No — not directly. An executive order applies to federal agencies, not state governments. Only a federal statute or binding agency regulation backed by statutory authority can directly preempt state law.

What does the DOJ task force actually do?

It is empowered to challenge state AI laws in federal court — arguing they unconstitutionally burden interstate commerce, are preempted by existing federal regulations, or are otherwise unlawful. No lawsuits have been filed as of publication.

Which states are most likely to be sued first?

Colorado is the most likely initial target — the only state named in the EO, with an imminent effective date of June 30, 2026. California is a close second given the TFAIA’s frontier model reporting requirements.

Do child safety chatbot laws face preemption risk?

Low. EO 14365 and the White House Framework both explicitly carve out child safety protections. See our analysis of the Nebraska LB 525 Conversational AI Safety Act.


Related Resources

AI Laws by State tracks state AI legislation across all 50 states. This post reflects publicly available information as of April 22, 2026, and is not legal advice. Sources: EO 14179 (White House); EO 14365 (White House); DOJ Task Force (Broadband Breakfast); White House National Policy Framework (WilmerHale); GSA GSAR 552.239-7001 (Holland & Knight). For jurisdiction-specific guidance, consult qualified legal counsel.
