Every US jurisdiction that requires independent bias audits or impact assessments for AI systems used in employment, housing, and other high-risk decisions.
AI bias audits are becoming a baseline compliance requirement for companies deploying automated decision tools in hiring, lending, and insurance. Below are the US jurisdictions with enacted laws or finalized regulations that mandate bias audits or algorithmic impact assessments. Each entry links to the official primary source and, where available, our detailed analysis.
| Jurisdiction | Law / Bill # | What Requires an Audit | Audit Frequency | Deadline | Penalty | Source |
|---|---|---|---|---|---|---|
| New York City | Local Law 144 | Automated employment decision tools (AEDTs) used to screen or evaluate candidates for employment or promotion | Annual independent bias audit | Effective July 5, 2023 (DCWP enforcement) | $500 for a first violation (and each additional violation the same day); $500–$1,500 for each subsequent violation; each day of use counts separately | Official source |
| Colorado | SB 24-205 (Colorado AI Act) | High-risk AI systems used in consequential decisions (employment, education, financial services, healthcare, housing, insurance, legal services) | Impact assessment before deployment and after significant updates | Effective June 30, 2026 | AG enforcement; up to $20,000 per violation under the Colorado Consumer Protection Act | Official source |
| Illinois | AIVIA (820 ILCS 42) | AI analysis of video interviews for employment; requires notice, consent, and data handling obligations | Consent required for each use; no fixed audit cadence | Effective January 1, 2020 | No express penalty or enforcement provision in the Act | Official source |
| Illinois | HB 3773 (Human Rights Act amendment) | AI used in employment decisions including hiring, promotion, discipline, and termination | Notice to applicants/employees; impact assessment provisions | Effective January 1, 2026 | Enforced by Illinois Department of Human Rights; civil penalties under IHRA | Official source |
| California | FEHA Automated Decision Regulations (CRD) | Automated decision systems used in employment decisions (hiring, promotion, termination) | Ongoing monitoring for adverse impact required | Effective October 1, 2025 (adopted by the California Civil Rights Council) | FEHA enforcement: CRD investigation, civil penalties, damages | Official source |
Sources: Official state legislature websites, NYC DCWP, Colorado General Assembly, Illinois General Assembly, California CRD. Last updated April 2026.
An AI bias audit is an independent evaluation of an automated decision system to assess whether it produces discriminatory outcomes based on protected characteristics such as race, gender, or age. NYC Local Law 144, for example, requires an "independent auditor" to test automated employment decision tools (AEDTs) for adverse impact using the EEOC's four-fifths rule. Colorado's SB 24-205 requires broader "impact assessments" covering any high-risk AI system. See our AI hiring laws guide for a state-by-state breakdown.
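The four-fifths rule compares each group's selection rate to the rate of the most-selected group; an impact ratio below 0.8 is the conventional flag for adverse impact. A minimal sketch of the calculation in Python (group names and counts here are illustrative, not drawn from any real audit):

```python
def impact_ratios(selected, total):
    """Compute selection rates and impact ratios per group.

    selected/total: dicts mapping group name -> applicant counts.
    Impact ratio = group selection rate / highest group selection rate.
    """
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: (rates[g], rates[g] / top) for g in rates}

# Hypothetical applicant pool (illustrative numbers only)
selected = {"group_a": 48, "group_b": 30}
total = {"group_a": 100, "group_b": 100}

for group, (rate, ratio) in impact_ratios(selected, total).items():
    flag = " (below 4/5 threshold)" if ratio < 0.8 else ""
    print(f"{group}: selection rate={rate:.2f}, impact ratio={ratio:.2f}{flag}")
```

In this hypothetical pool, group_b's impact ratio is 0.30 / 0.48 ≈ 0.63, below the 0.8 threshold, so an auditor would flag it for further review. Note that NYC's rules require reporting the impact ratios themselves; the four-fifths threshold is an EEOC screening convention, not a pass/fail line in the statute.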
Any employer or employment agency using automated tools to screen, evaluate, or select job candidates in New York City must conduct annual bias audits under Local Law 144. In Colorado, deployers and developers of "high-risk AI systems" used in employment, education, financial services, healthcare, housing, insurance, or legal services must complete impact assessments. Illinois requires notice and consent for AI video interviews under AIVIA (820 ILCS 42), and HB 3773 expands requirements to all AI-assisted employment decisions effective January 1, 2026.
Audit frequency varies by jurisdiction. NYC Local Law 144 requires audits at least annually, with results published on the employer's website. Colorado SB 24-205 requires impact assessments before deployment and when any "significant update" is made. Illinois HB 3773 requires notice to applicants and regular review but does not specify a fixed audit cadence. California FEHA regulations require ongoing monitoring of automated decision systems used in employment.
Documentation requirements include: (1) a description of the AI system's purpose and intended use, (2) the data used for training and testing, (3) the metrics evaluated (e.g., selection rates by demographic group), (4) the results of statistical tests for disparate impact, (5) any mitigation steps taken, and (6) a summary of findings published for public review (required by NYC LL 144). Colorado's impact assessment framework adds risk identification, data governance practices, and human oversight measures.
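As a sketch, the six documentation items above could be captured in a structured record like the following. The field names, values, and URL are hypothetical, for illustration only; no jurisdiction mandates this schema:

```python
import json

# Hypothetical audit documentation record covering the six items above.
# Field names are illustrative, not a schema required by any law.
audit_record = {
    "system_purpose": "Resume screening for software engineering roles",      # item 1
    "training_and_test_data": "2019-2023 applicant records, de-identified",   # item 2
    "metrics": ["selection rate by sex", "selection rate by race/ethnicity"], # item 3
    "disparate_impact_results": {"lowest_impact_ratio": 0.85,                 # item 4
                                 "four_fifths_pass": True},
    "mitigations": ["removed proxy features", "rebalanced training sample"],  # item 5
    "public_summary_url": "https://example.com/aedt-audit-summary",           # item 6
}

print(json.dumps(audit_record, indent=2))
```

Keeping the record machine-readable makes it straightforward to regenerate the public summary NYC LL 144 requires and to produce the documentation Colorado's Attorney General can request.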
Penalties vary: NYC imposes $500 for a first violation (and each additional violation the same day) and $500–$1,500 for each subsequent violation under Local Law 144, enforced by the Department of Consumer and Worker Protection; each day an AEDT is used in violation counts as a separate violation. Colorado's AI Act authorizes the Attorney General to bring enforcement actions under the Colorado Consumer Protection Act, with penalties up to $20,000 per violation. Illinois HB 3773 violations fall under the Human Rights Act, enforced by the Illinois Department of Human Rights; the AI Video Interview Act itself contains no express penalty provision. California FEHA violations can result in enforcement actions by the Civil Rights Department.
NYC Local Law 144 explicitly requires an "independent auditor" — someone who is not involved in the development or use of the AEDT being audited. Colorado SB 24-205 does not mandate external auditors but requires that impact assessments be made available to the Attorney General upon request. Illinois HB 3773 requires employers to conduct impact assessments but does not specify independence requirements. Best practice across all jurisdictions is to use a qualified third-party auditor to ensure objectivity.
The EU AI Act (Regulation (EU) 2024/1689) requires conformity assessments for high-risk AI systems before they can be placed on the EU market, including bias testing under Article 10 (data governance) and Article 9 (risk management). US state laws are narrower in scope: NYC covers only employment AEDTs, Colorado covers "high-risk" decisions across multiple domains, and Illinois focuses on video interviews and hiring. See our EU AI Act vs US State Laws crosswalk for a detailed side-by-side comparison.