What It Does
Colorado SB 24-205, the Colorado Artificial Intelligence Act, is the most comprehensive state-level AI regulation enacted in the United States. The law requires developers and deployers of high-risk AI systems to exercise “reasonable care” to prevent algorithmic discrimination, and it mandates governance frameworks, impact assessments, technical documentation, risk management policies, and consumer disclosures. Governor Polis signed the bill while expressing reservations about its breadth; SB 25B-004 later delayed the effective date to June 30, 2026. The Colorado Attorney General has exclusive enforcement authority.
Who It Applies To
The law applies to two categories: developers (companies that build or substantially modify AI systems) and deployers (companies that use AI systems to make or substantially inform “consequential decisions”). Consequential decisions include determinations in education, employment, financial services, government services, healthcare, housing, insurance, and legal services. Both Colorado-based companies and out-of-state companies whose AI systems affect Colorado consumers are covered. Small businesses are not exempt, though the law’s “reasonable care” standard may be applied proportionally.
Key Provisions
- Reasonable care standard: Developers and deployers must use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination.
- Developer obligations: Must provide technical documentation including high-level summaries of training data, known limitations, intended use cases, and evaluation results for algorithmic discrimination.
- Deployer obligations: Must adopt a risk management policy, conduct annual impact assessments, provide notice to consumers, and allow consumers to appeal consequential decisions made by AI.
- Impact assessments: Deployers must complete impact assessments before deploying high-risk AI and annually thereafter, analyzing the purpose, intended benefits, potential risks, and steps to mitigate algorithmic discrimination.
- AG enforcement: The Colorado Attorney General has exclusive enforcement authority. There is no private right of action. The AG may consider a deployer’s compliance with recognized frameworks (e.g., NIST AI RMF) as evidence of reasonable care.
Compliance Checklist
If you develop or deploy high-risk AI systems affecting Colorado consumers, you should complete the following before June 30, 2026:
- Classify your AI systems to determine which qualify as “high-risk” under the law’s consequential-decision framework.
- Adopt a risk management policy that governs the use of high-risk AI systems, including accountability structures and monitoring procedures.
- Complete initial impact assessments for all high-risk AI systems, documenting purpose, risks, mitigation measures, and data practices.
- Build consumer notification and appeal mechanisms so consumers know when AI is used in consequential decisions and can challenge outcomes.
- If you are a developer: Prepare technical documentation packages for deployers, including training data summaries, known limitations, and evaluation results.
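The first checklist item, classifying an AI inventory against the law’s consequential-decision categories, can be sketched as a simple triage script. This is a hypothetical illustration: the data model, field names, and `is_high_risk` helper are assumptions for inventory bookkeeping, not legal definitions, and a real high-risk determination requires counsel.

```python
# Consequential-decision domains as summarized in this article.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial services", "government services",
    "healthcare", "housing", "insurance", "legal services",
}

def is_high_risk(system: dict) -> bool:
    """Flag a system as potentially high-risk if it makes or substantially
    informs a decision in one of the listed domains. This is only an
    inventory triage aid, not a legal determination."""
    return bool(system.get("informs_consequential_decision")) and \
        system.get("decision_domain") in CONSEQUENTIAL_DOMAINS

# Hypothetical inventory entries for illustration.
inventory = [
    {"name": "resume-screener", "decision_domain": "employment",
     "informs_consequential_decision": True},
    {"name": "marketing-copy-generator", "decision_domain": None,
     "informs_consequential_decision": False},
]

high_risk = [s["name"] for s in inventory if is_high_risk(s)]
print(high_risk)  # ['resume-screener']
```

A triage pass like this only narrows the list of systems needing full impact assessments; borderline cases (e.g., tools that indirectly inform a covered decision) still need individual review.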
How This Compares
Colorado SB 24-205 is the broadest state AI law in the U.S. as of April 2026. It goes further than NYC Local Law 144, which covers only hiring AI. It also exceeds New York S 8828, which targets model developers but not deployers. The law’s structure is explicitly modeled on the EU AI Act’s risk-based framework, though Colorado applies a negligence-style “reasonable care” standard rather than prescriptive rules. The AG’s willingness to credit compliance with NIST AI RMF gives deployers a concrete compliance roadmap.
Effective Date Countdown
Compliance deadline: June 30, 2026. As of April 2026, covered companies have roughly two months to finalize impact assessments, risk management policies, and consumer notification systems. This is the nearest major compliance deadline in U.S. AI regulation.
Read the Bill
Author: AI Laws by State. This is not legal advice. For compliance questions specific to your operation, consult an attorney licensed in Colorado.