Facial recognition technology has become one of the most contested tools in American law enforcement. After a string of wrongful arrests linked to faulty algorithmic matches—including high-profile cases in Detroit, New Orleans, and New Jersey—civil liberties organizations, lawmakers, and the public have pushed back hard. The ACLU's sustained campaign against police facial recognition, combined with mounting research on racial and gender bias in the underlying algorithms, has transformed what was once a niche surveillance concern into a frontline legislative issue.
San Francisco made headlines in 2019 as the first major U.S. city to ban government use of facial recognition. Since then, the movement has spread to more than a dozen cities and multiple states. By early 2026, the regulatory landscape is a complex patchwork of outright bans, moratoriums, warrant requirements, and narrower restrictions on specific use cases like body-worn cameras and predictive policing. This guide maps the full scope of police-facing facial recognition and AI surveillance legislation across all 50 states.
For the full interactive dataset, visit the Facial Recognition Tracker, which covers every bill we track with filterable status, effective dates, and links to official legislature text.
State Bans & Moratoriums
The most aggressive legislative response to police facial recognition is an outright ban or moratorium on government use. These laws typically prohibit city or state agencies—including law enforcement—from purchasing, deploying, or using facial recognition systems, sometimes with narrow exceptions for specific federal investigations.
San Francisco and the City-Level Wave
San Francisco's 2019 Stop Secret Surveillance Ordinance was the catalyst. The ordinance banned all city departments, including the police, from using facial recognition technology. Oakland followed weeks later, and cities including Boston, Minneapolis, Portland (Oregon), and Cambridge (Massachusetts) enacted similar bans through 2020 and 2021. New Orleans banned police use of facial recognition in 2022 after revelations that the NOPD had been secretly using the technology for years without city council authorization.
Massachusetts: The First Statewide Moratorium
In December 2020, Massachusetts became the first state to enact a statewide moratorium on government use of facial recognition through its comprehensive police reform bill. The law prohibits state and local agencies from using facial recognition except through a narrow process requiring requests to be routed through the state Registry of Motor Vehicles or the FBI. Even then, results cannot serve as the sole basis for an arrest. The moratorium was designed as a temporary measure pending further study, but it remains in effect as of 2026 and has been cited as a model by legislators in other states.
Vermont, Maine, and Emerging State Action
Vermont enacted legislation in 2020 prohibiting law enforcement from using facial recognition technology, making it one of the earliest state-level bans. Maine's 2021 law restricts government use of facial recognition to specific circumstances and requires a court order before law enforcement can run a facial recognition search. Virginia enacted a moratorium on local law enforcement use of facial recognition in 2021, though the law includes exceptions for certain federal partnerships. Several additional states—including Washington, Oregon, and Maryland—have enacted more targeted restrictions that stop short of full bans but significantly limit police deployment.
| Jurisdiction | Type | Year | Key Provision |
|---|---|---|---|
| San Francisco, CA | City ban | 2019 | Prohibits all city department use |
| Oakland, CA | City ban | 2019 | Prohibits city use; surveillance oversight |
| Boston, MA | City ban | 2020 | Bans city government use of facial recognition |
| Massachusetts | Statewide moratorium | 2020 | Requires RMV/FBI routing; not sole basis for arrest |
| Vermont | Statewide ban | 2020 | Prohibits law enforcement use |
| Portland, OR | City ban | 2020 | Bans both government and private-sector use in public |
| Maine | Statewide restriction | 2021 | Court order required; limited exceptions |
| Virginia | Statewide moratorium | 2021 | Local police barred; federal partnership exceptions |
| New Orleans, LA | City ban | 2022 | Bans police use after secret deployment revealed |
Police Use Restrictions & Warrants
Even in states without full bans, a growing number of jurisdictions have enacted laws that require warrants or court authorization before law enforcement can use facial recognition. These laws treat facial recognition searches as analogous to wiretaps or other invasive surveillance techniques, requiring judicial oversight before police can submit a probe image to a facial recognition system.
Warrant Requirements
Maine's 2021 law is among the most restrictive, requiring a court order based on probable cause before any facial recognition search can be conducted. Washington state limits police use of facial recognition to serious offenses and mandates annual public reporting on how the technology is used. Illinois's Biometric Information Privacy Act (BIPA) does not regulate police facial recognition directly, since the statute governs private entities rather than government agencies, but it constrains the commercial vendors whose databases and tools law enforcement often relies on.
Limitations on Real-Time Surveillance
Real-time facial recognition surveillance—the use of cameras to scan faces in public spaces and match them against watchlists in real time—has drawn the strongest legislative pushback. Several states and cities have specifically targeted this use case. Portland, Oregon, is notable for banning not just government but also private-sector use of facial recognition in public accommodations. King County, Washington (encompassing Seattle), banned government use of facial recognition in 2021. New York's proposed legislation has repeatedly targeted real-time surveillance in public housing and transit systems, though comprehensive state-level legislation has stalled as of early 2026.
- Washington (SB 6280): Requires accountability reports, testing for bias, and meaningful human review before acting on facial recognition results. Applies to state and local agencies.
- Maryland (HB 1202): Restricts use of facial recognition during protests and political assemblies; prohibits use solely based on race, ethnicity, or religious practices.
- Colorado: Requires law enforcement agencies to adopt written policies governing facial recognition use and mandates annual public reporting.
- New York City (POST Act): Requires NYPD to disclose its use of surveillance technologies including facial recognition, though critics argue the disclosure requirements lack teeth.
Body Cam AI / Video Analytics
The integration of artificial intelligence into body-worn camera footage represents a newer frontier in surveillance regulation. As camera vendors increasingly offer AI-powered analytics—including facial recognition, emotion detection, and behavioral analysis applied to recorded footage—legislators are scrambling to define what is and is not permissible.
Illinois BIPA and Body Cameras
Illinois's Biometric Information Privacy Act (BIPA), enacted in 2008 and the strongest biometric privacy law in the country, has significant implications for body camera AI. BIPA requires informed consent before collecting biometric identifiers, including faceprint data. The law exempts government agencies, and contractors when working on a government agency's behalf, but the scope of that exemption is contested. Courts have read BIPA's private right of action, which allows individuals to sue for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation, broadly, creating substantial liability risk for vendors that apply facial recognition to body camera footage outside the scope of that exemption.
Washington and Other State Restrictions
Washington state's 2020 facial recognition law explicitly addresses body cameras, prohibiting the use of real-time facial recognition on body-worn camera feeds and requiring agencies to adopt accountability reports before deploying any facial recognition technology. Oregon's body camera policies similarly restrict post-hoc facial recognition analysis of recorded footage. Colorado requires written policies specifically addressing whether and how AI analytics may be applied to body camera recordings.
Federal Guidance
At the federal level, the Department of Justice issued guidance in 2023 urging caution in the use of facial recognition on body camera footage, particularly given documented accuracy disparities across racial demographics. The guidance recommends that agencies adopt policies requiring human review of any facial recognition match before investigative action and prohibiting facial recognition as the sole basis for probable cause. While not legally binding, the DOJ guidance has influenced state-level policy development and agency procurement decisions.
Predictive Policing & Risk Scoring
Predictive policing tools—algorithms that forecast where crimes are likely to occur or which individuals are likely to commit or be victims of crime—have drawn intense scrutiny over documented racial bias. These systems, which often rely on historical arrest data that reflects decades of racially disproportionate policing, have been shown to create feedback loops that concentrate enforcement in communities of color.
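The feedback-loop dynamic can be made concrete with a toy simulation. The sketch below is illustrative only, not any vendor's actual algorithm: two neighborhoods have the same true crime rate, but one starts with more recorded arrests, and patrols are allocated in proportion to the arrest record.

```python
import random

random.seed(0)

# Toy model of a predictive-policing feedback loop (illustrative only).
# Both neighborhoods have the SAME true crime rate, but neighborhood A
# starts with more historical arrests due to past over-policing.
TRUE_CRIME_RATE = 0.10          # identical in both neighborhoods
arrests = {"A": 60, "B": 40}    # skewed historical record
PATROLS_PER_DAY = 10

for day in range(365):
    total = arrests["A"] + arrests["B"]
    for hood in ("A", "B"):
        # Patrols are allocated proportionally to recorded arrests,
        # so the area with more past arrests gets more officers...
        patrols = round(PATROLS_PER_DAY * arrests[hood] / total)
        # ...and more patrols mean more incidents are observed and
        # recorded, even though the underlying crime rate is equal.
        arrests[hood] += sum(random.random() < TRUE_CRIME_RATE
                             for _ in range(patrols))

share_a = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Neighborhood A share of recorded arrests: {share_a:.0%}")
```

Because new arrests accrue in proportion to the existing record, the initial skew never washes out: after a simulated year, neighborhood A still accounts for roughly its original outsized share of arrests despite identical underlying crime, which is the self-reinforcing pattern researchers have documented.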
Bias Concerns and Research Findings
A 2021 study published in the journal Science found that predictive policing algorithms systematically over-predicted crime in Black and Latino neighborhoods, even after controlling for underlying crime rates. The RAND Corporation's evaluation of predictive policing in several cities found mixed results on effectiveness and raised concerns about civil liberties implications. These findings have fueled legislative action at both the state and city level.
State Restrictions
Several jurisdictions have moved to restrict or ban predictive policing. In 2020, Santa Cruz, California, became the first U.S. city to ban predictive policing outright. The Los Angeles Police Department discontinued its use of PredPol (now Geolitica) in 2020 after an inspector general report found the tool was deployed disproportionately in Black neighborhoods. Illinois enacted legislation requiring transparency reporting for any algorithmic tools used in policing decisions. New Jersey's Attorney General issued a directive in 2021 requiring law enforcement agencies to disclose their use of algorithmic decision-making tools and subjecting those tools to bias audits. Vermont and Washington have proposed legislation that would require impact assessments before deploying any AI-based policing tool.
For more on algorithmic bias requirements across sectors, see the Bias Audit Requirements tracker.
Court & Pretrial AI Tools
Artificial intelligence has made significant inroads into the court system, particularly through risk assessment tools used in bail, sentencing, and parole decisions. These tools generate scores that predict a defendant's likelihood of reoffending or failing to appear in court, and judges use these scores—alongside other factors—to inform pretrial detention and sentencing decisions.
The COMPAS Controversy
The most well-known pretrial risk assessment tool is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), developed by Equivant (formerly Northpointe). In 2016, a ProPublica investigation found that COMPAS was nearly twice as likely to incorrectly flag Black defendants as future criminals compared to white defendants, while white defendants were more likely to be incorrectly labeled as low risk. The investigation sparked a national debate about the use of algorithmic tools in criminal justice that continues to shape legislation today.
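The disparity ProPublica measured is a false positive rate computed separately per group: among defendants who did not reoffend, what share were flagged as high risk? A minimal sketch of that calculation, using small invented records rather than the actual COMPAS dataset:

```python
# Sketch of the metric at the center of the ProPublica analysis:
# the false positive rate (defendants labeled "high risk" who did NOT
# reoffend), computed per group. The records below are invented,
# illustrative data, not the real COMPAS dataset.
records = [
    # (group, predicted_high_risk, reoffended)
    ("black", True,  False), ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("black", False, False), ("black", False, True),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", False, True),  ("white", True,  True),  ("white", False, False),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` wrongly flagged as high risk."""
    non_reoffenders = [pred for g, pred, actual in records
                       if g == group and not actual]
    return sum(non_reoffenders) / len(non_reoffenders)

for group in ("black", "white"):
    print(f"{group}: FPR = {false_positive_rate(group):.0%}")
```

Northpointe's rebuttal relied on a different metric, calibration across groups, and later research showed that when base rates differ, a tool generally cannot satisfy both fairness definitions at once; that tension is part of why the debate remains unresolved.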
State Responses
Idaho became the first state to ban the use of algorithmic risk scores as the sole basis for pretrial detention in 2023. California's experience has been contentious: lawmakers passed SB 10 in 2018 to replace cash bail with risk assessment tools, but a referendum put the law on hold and voters rejected it via Proposition 25 in 2020 amid concerns about algorithmic bias. New Jersey's bail reform system uses a risk assessment tool called the Public Safety Assessment (PSA), but the state has implemented guardrails including judicial override authority and regular bias audits. Illinois's Pretrial Fairness Act, which took effect in 2023, eliminated cash bail and includes provisions requiring transparency in any algorithmic tools used in pretrial decisions.
Several states have enacted or proposed legislation requiring that pretrial risk assessment tools be independently validated for accuracy and bias, that defendants be informed when such tools are used, and that algorithmic scores never serve as the sole determinant of pretrial detention or sentencing.
What to Watch
The regulatory landscape for police facial recognition and AI surveillance is evolving rapidly. Several developments are likely to shape the next phase of legislation:
- Federal facial recognition legislation: Multiple bills have been introduced in Congress to regulate or ban federal law enforcement use of facial recognition. While no comprehensive federal law has passed as of early 2026, bipartisan momentum is growing. A federal standard would provide a floor for state laws and resolve the current patchwork. The Election AI Tracker covers federal legislative activity that intersects with surveillance and AI policy.
- Expanding city bans: The city-level ban movement shows no signs of slowing. As more municipalities pass facial recognition bans, the pressure on state legislatures to establish uniform standards increases. Cities that have not yet acted are watching outcomes in San Francisco, Boston, and Portland for evidence of whether bans create public safety gaps or successfully protect civil liberties without measurable harm.
- EU AI Act influence on U.S. policy: The European Union's AI Act, which took phased effect beginning in 2024, categorizes real-time biometric identification in public spaces as a prohibited practice with narrow exceptions. The EU approach has been cited by U.S. lawmakers as a model, and several state bills introduced in 2025 and 2026 borrow language and concepts directly from the EU framework. As multinational companies align their products with EU requirements, the practical baseline for U.S. law enforcement technology may shift regardless of domestic legislation.
- Deepfake and facial recognition overlap: As deepfake technology advances, the intersection with facial recognition raises new questions. Can facial recognition systems be fooled by deepfakes? Should anti-deepfake laws address the integrity of biometric databases? For the latest on this evolving intersection, see the Deepfake Penalty Tracker.
Explore the complete bill-level data on our Facial Recognition Tracker, which provides filterable access to every facial recognition and biometric surveillance bill we track across all 50 states. For related regulatory areas, see the Bias Audit Requirements tracker and the Election AI Tracker.
Subscribe to the daily AI law digest →