The regulation of artificial intelligence in hiring has accelerated sharply in 2026. New York City’s Department of Consumer and Worker Protection is ramping up enforcement of Local Law 144, issuing its first round of penalties against employers that failed to complete required bias audits. Colorado’s AI Act (SB 24-205) takes effect on June 30, 2026, bringing impact assessment and risk management obligations to every employer using AI in consequential employment decisions for Colorado residents. And Illinois has expanded its AI hiring disclosure framework beyond video interviews, with new amendments to the Illinois Human Rights Act requiring employers to notify candidates whenever AI is used as a substantial factor in hiring or promotion decisions.
Across the country, at least 26 states have enacted, have proposed, or are actively advancing legislation that directly regulates how employers can use AI-powered tools in recruiting, screening, interviewing, and employment decisions. For HR leaders, legal teams, and hiring technology vendors, keeping pace with this fragmented landscape is no longer optional—it is a compliance imperative.
This guide breaks down the five core regulatory categories emerging across state AI hiring laws, identifies the bills to watch in the coming months, and links to tools that can help you stay compliant.
1. Bias Audit and Impact Assessment Requirements
The most rigorous category of AI hiring regulation is the mandatory bias audit. New York City’s Local Law 144, in effect since July 2023, remains the gold standard. It requires any employer using an Automated Employment Decision Tool (AEDT) for hiring or promotion decisions affecting NYC jobs to commission an independent bias audit at least once every 12 months. The audit must be conducted by a third party—not the employer and not the tool vendor—and must evaluate disparate impact across race, ethnicity, sex, and intersectional categories using impact ratio calculations.
Audit results must be publicly posted on the employer’s website for at least six months, including the date of the audit, the data sources used, applicant counts by demographic group, and the computed impact ratios. An impact ratio below 80 percent (the four-fifths rule) signals potential disparate impact and should trigger further review.
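To make the impact ratio calculation concrete, here is a minimal sketch in Python. The group names and counts are invented for illustration; real audits must use actual applicant and selection data broken out by the demographic and intersectional categories the law specifies.

```python
def impact_ratios(selected: dict, applied: dict) -> dict:
    """Compute each group's impact ratio: its selection rate divided by
    the selection rate of the most-selected group."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical audit data: candidates selected vs. total applicants per group.
selected = {"group_a": 120, "group_b": 45}
applied = {"group_a": 400, "group_b": 200}

ratios = impact_ratios(selected, applied)
# group_a rate: 120/400 = 0.30; group_b rate: 45/200 = 0.225
# group_b impact ratio: 0.225 / 0.30 = 0.75 — below the 0.80 (four-fifths) threshold
```

In this example, group_b's impact ratio of 0.75 falls below the four-fifths threshold, which would flag the tool for further review under the framework described above.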
Colorado’s SB 24-205 takes a broader approach. Rather than prescribing a specific audit methodology, it requires deployers of high-risk AI systems—including employment AI—to conduct annual impact assessments. These assessments must evaluate the system’s purpose, intended benefits, known limitations, and potential risks of algorithmic discrimination. Deployers must also maintain a risk management program aligned with recognized frameworks such as the NIST AI Risk Management Framework or ISO/IEC 42001. The Colorado Attorney General has enforcement authority, and the deadline is fast approaching: June 30, 2026. See our Colorado AI Act compliance guide for a step-by-step breakdown.
Several other states, including New Jersey and Washington, have pending bills that would impose similar audit or impact assessment obligations on employers using algorithmic hiring tools. Our Bias Audit Requirements Tracker provides a state-by-state comparison of audit mandates, timelines, and technical specifications.
2. Candidate Notice and Disclosure
Even in states that do not yet require formal bias audits, a growing number of jurisdictions mandate that employers disclose to candidates when AI is being used in the hiring process. This category of regulation is expanding rapidly.
Colorado SB 24-205 requires deployers to provide candidates with a clear, conspicuous notice before an AI system makes or materially influences a consequential employment decision. The notice must describe what the AI system is, the purpose for which it is being used, and the type of data it collects and processes. If an adverse decision results, the employer must provide an additional post-decision notice explaining how to contest the outcome.
The Illinois Artificial Intelligence Video Interview Act (AIVIA, 820 ILCS 42) has required disclosure and consent for AI-analyzed video interviews since 2020. Employers must notify applicants before the interview that AI will analyze the video, explain what characteristics the AI evaluates, and obtain written consent. Illinois has since expanded this framework: recent amendments to the Illinois Human Rights Act now require notice whenever AI serves as a substantial factor in hiring, not just in video interview contexts.
Maryland HB 1202, effective since October 2020, targets a narrower use case: employers cannot use facial recognition technology on job applicants without the applicant’s prior written consent. While limited to facial recognition, this law applies to any AI hiring tool with facial analysis components—a category that includes many modern video interview platforms.
The trend is clear. Transparency requirements are becoming table stakes. Employers should assume that any jurisdiction in which they hire will eventually require some form of pre-decision AI disclosure.
3. Right to Opt Out and Human Review
A second wave of AI hiring regulation goes beyond disclosure to grant candidates substantive rights when AI is involved in employment decisions. The most significant of these is the right to opt out of automated decision-making and request a human reviewer.
In Colorado, the right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects (a category that includes employment decisions) comes from the Colorado Privacy Act. SB 24-205 adds a complementary right for AI-driven decisions: a candidate who receives an adverse consequential decision may appeal it and, where technically feasible, have it reviewed by a human rather than the AI system alone.
NYC Local Law 144 includes a narrower version of this requirement: the mandatory candidate notice must include instructions for requesting an alternative selection process or a reasonable accommodation, though the DCWP has clarified that the law does not itself obligate employers to provide one. In practice, enforcement of this provision has been limited, but the DCWP’s recent enforcement push suggests increased scrutiny is coming.
Several pending state bills—including proposals in California, Washington, and Connecticut—would formalize a right to human review of any AI-driven adverse employment decision. This reflects a broader regulatory philosophy: AI can assist in hiring, but a human must remain accountable for final decisions that materially affect people’s livelihoods.
For employers, the practical implication is that hiring workflows must be designed with a human-in-the-loop fallback. AI systems that make fully automated hiring or rejection decisions without any human involvement are increasingly exposed to legal risk across multiple jurisdictions.
4. Discrimination and Disparate Impact
Even in the absence of AI-specific legislation, employers face liability for discriminatory outcomes produced by AI hiring tools under existing federal and state civil rights laws. The EEOC has made clear that Title VII of the Civil Rights Act applies to AI-driven employment decisions. In its 2023 technical assistance guidance, the EEOC confirmed that employers can be held liable for disparate impact caused by algorithmic hiring tools, regardless of whether the employer developed the tool or purchased it from a vendor.
The EEOC’s framework is straightforward: if an AI hiring tool produces selection rates that disproportionately disadvantage a protected group, the employer bears the burden of demonstrating that the tool is job-related and consistent with business necessity. The “I just used a vendor’s tool” defense does not relieve the employer of liability.
Several states have gone further by amending their own civil rights acts to explicitly address AI. Illinois amended its Human Rights Act to prohibit the use of AI that has the effect of subjecting employees to discrimination based on protected characteristics. Colorado’s AI Act creates a rebuttable presumption of reasonable care for deployers who comply with its impact assessment and risk management requirements—an incentive structure designed to reward proactive compliance.
The intersection of AI hiring tools and discrimination law also raises concerns in the election context. Government employers and political organizations using AI to screen election workers or campaign staff may trigger overlapping requirements under both employment discrimination laws and election integrity statutes. See our Election AI Tracker for more on this emerging intersection.
5. Recordkeeping and Retention
Compliance with AI hiring laws is not solely about what employers do in real time—it also requires maintaining auditable records of AI system use, decisions, and outcomes over defined retention periods.
Under NYC Local Law 144, employers must retain bias audit results and make them publicly available for at least six months. The DCWP has indicated that employers should also retain records of candidate notifications and any alternative assessment requests.
Colorado SB 24-205 imposes more comprehensive recordkeeping obligations. Deployers must maintain documentation of their impact assessments, risk management policies, and any instances of algorithmic discrimination detected or reported. These records must be available to the Colorado Attorney General upon request.
At the federal level, the EEOC’s existing recordkeeping rules under 29 CFR Part 1602 require employers to retain hiring records—including records related to AI tool use—for at least one year from the date of the hiring decision (or two years for federal contractors). When AI is involved, best practice is to retain the model version, input data, output scores, and the final decision for each candidate evaluated.
Pending legislation in several states would extend retention requirements to three years or longer, and some proposals would require employers to log every instance in which an AI system influenced a hiring decision, including cases where the AI recommendation was overridden by a human reviewer.
What to Watch: Bills in the Pipeline
The legislative landscape continues to evolve. The following bills and regulatory actions represent the most significant developments to watch in 2026 and beyond:
- California FEHA Final Rules: The California Civil Rights Department is expected to finalize regulations under the Fair Employment and Housing Act that will establish specific requirements for employers using AI in hiring. These rules could set a national benchmark given California’s market influence. See our California tracker.
- Texas HB 149: This bill would require employers using AI hiring tools in Texas to provide notice to applicants and conduct periodic validation studies to assess disparate impact. It is currently advancing through committee.
- Federal AEDT Legislation: Multiple federal proposals—including the Algorithmic Accountability Act and the No Robot Bosses Act—would establish nationwide bias audit requirements, disclosure obligations, and a right to human review for AI-driven employment decisions. While federal legislation faces an uncertain timeline, these bills signal the direction of future regulation.
- New Jersey and Washington: Both states have active bills that would impose bias audit requirements modeled on NYC Local Law 144 but expanded to cover statewide employment.
- EEOC Rulemaking: The EEOC has signaled interest in issuing formal guidance or rules specifically addressing AI in hiring. Any EEOC rulemaking would have nationwide applicability and would likely draw on the frameworks already established by state laws.
Track Every AI Hiring Law in One Place
Monitoring 26 states with enacted or pending AI hiring legislation—plus federal proposals—is not something that can be done manually. The Employment AI Laws Tracker provides a continuously updated view of every state and federal bill affecting AI in hiring, with effective dates, compliance deadlines, penalty structures, and amendment histories.
Use it alongside the Bias Audit Requirements Tracker to compare audit methodologies, disclosure templates, and recordkeeping timelines across jurisdictions. For employers operating across multiple states, these tools provide the cross-jurisdictional visibility needed to build a unified compliance program rather than managing requirements state by state.
Open the Employment AI Tracker →
Frequently Asked Questions
Which states require AI hiring tool bias audits?
As of April 2026, New York City requires independent bias audits under Local Law 144, and Colorado mandates impact assessments for high-risk AI systems (including employment AI) under SB 24-205, effective June 30, 2026. Several other states, including New Jersey and Washington, have pending bills that would impose similar audit requirements. Illinois requires demographic reporting when AI is the sole decision-maker in video interviews. Use the Bias Audit Requirements Tracker for a full state-by-state comparison.
What is NYC Local Law 144?
NYC Local Law 144 is a New York City ordinance that prohibits employers from using an Automated Employment Decision Tool (AEDT) for hiring or promotion decisions unless the employer has completed an independent bias audit within the past 12 months and published the results on its website. Employers must also notify candidates at least 10 business days before using an AEDT, including instructions for requesting an alternative selection process or accommodation. Penalties range from $500 for a first violation up to $1,500 for each subsequent violation. See our AI Hiring Laws compliance map for more detail.
Do employers need to disclose AI use in hiring?
Yes, and the number of states requiring disclosure is growing. Colorado, Illinois, New York City, and Maryland all currently require some form of candidate notification before AI is used in hiring. Colorado’s AI Act requires disclosure before any consequential AI-driven employment decision. Illinois requires notice and written consent for AI video interviews. Maryland requires consent before facial recognition is used on applicants. Even in states without specific AI hiring laws, EEOC guidance recommends disclosure as a best practice to mitigate disparate impact liability. Use the Employment AI Tracker to see which states currently require disclosure and which have pending bills.
This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your situation.