
Police Facial Recognition Laws: State Bans, Restrictions & What's Next (2026)

April 27, 2026

Facial recognition technology has become one of the most contested tools in American law enforcement. After a string of wrongful arrests linked to faulty algorithmic matches—including high-profile cases in Detroit, New Orleans, and New Jersey—civil liberties organizations, lawmakers, and the public have pushed back hard. The ACLU's sustained campaign against police facial recognition, combined with mounting research on racial and gender bias in the underlying algorithms, has transformed what was once a niche surveillance concern into a frontline legislative issue.

San Francisco made headlines in 2019 as the first major U.S. city to ban government use of facial recognition. Since then, the movement has spread to more than a dozen cities and multiple states. As of early 2026, the regulatory landscape is a complex patchwork of outright bans, moratoriums, warrant requirements, and narrower restrictions on specific use cases like body-worn cameras and predictive policing. This guide maps the full scope of police-facing facial recognition and AI surveillance legislation across all 50 states.

Legal Disclaimer: This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your situation.

For the full interactive dataset, visit the Facial Recognition Tracker, which covers every bill we track with filterable status, effective dates, and links to official legislature text.

State Bans & Moratoriums

The most aggressive legislative response to police facial recognition is an outright ban or moratorium on government use. These laws typically prohibit city or state agencies—including law enforcement—from purchasing, deploying, or using facial recognition systems, sometimes with narrow exceptions for specific federal investigations.

San Francisco and the City-Level Wave

San Francisco's 2019 Stop Secret Surveillance Ordinance was the catalyst. The ordinance banned all city departments, including the police, from using facial recognition technology. Oakland followed weeks later, and cities including Boston, Minneapolis, Portland (Oregon), and Cambridge (Massachusetts) enacted similar bans through 2020 and 2021. New Orleans banned police use of facial recognition in 2022 after revelations that the NOPD had been secretly using the technology for years without city council authorization.

Massachusetts: The First Statewide Moratorium

In December 2020, Massachusetts became the first state to enact a statewide moratorium on government use of facial recognition through its comprehensive police reform bill. The law prohibits state and local agencies from using facial recognition except through a narrow process requiring requests to be routed through the state Registry of Motor Vehicles or the FBI. Even then, results cannot serve as the sole basis for an arrest. The moratorium was designed as a temporary measure pending further study, but it remains in effect as of 2026 and has been cited as a model by legislators in other states.

Vermont, Maine, and Emerging State Action

Vermont enacted legislation in 2020 prohibiting law enforcement from using facial recognition technology, making it one of the earliest state-level bans. Maine's 2021 law restricts government use of facial recognition to specific circumstances and requires a court order before law enforcement can run a facial recognition search. Virginia enacted a moratorium on local law enforcement use of facial recognition in 2021, though the law includes exceptions for certain federal partnerships. Several additional states—including Washington, Oregon, and Maryland—have enacted more targeted restrictions that stop short of full bans but significantly limit police deployment.

Jurisdiction | Type | Year | Key Provision
San Francisco, CA | City ban | 2019 | Prohibits all city department use
Oakland, CA | City ban | 2019 | Prohibits city use; surveillance oversight
Boston, MA | City ban | 2020 | Bans city government use of facial recognition
Massachusetts | Statewide moratorium | 2020 | Requires RMV/FBI routing; not sole basis for arrest
Vermont | Statewide ban | 2020 | Prohibits law enforcement use
Portland, OR | City ban | 2020 | Bans both government and private-sector use in public
Maine | Statewide restriction | 2021 | Court order required; limited exceptions
Virginia | Statewide moratorium | 2021 | Local police barred; federal partnership exceptions
New Orleans, LA | City ban | 2022 | Bans police use after secret deployment revealed

Police Use Restrictions & Warrants

Even in states without full bans, a growing number of jurisdictions have enacted laws that require warrants or court authorization before law enforcement can use facial recognition. These laws treat facial recognition searches as analogous to wiretaps or other invasive surveillance techniques, requiring judicial oversight before police can submit a probe image to a facial recognition system.

Warrant Requirements

Maine's 2021 law is among the most restrictive, requiring a court order based on probable cause before any facial recognition search can be conducted. Washington state limits police use of facial recognition to serious offenses and mandates annual public reporting on how the technology is used. Illinois, through its Biometric Information Privacy Act (BIPA), does not specifically regulate police facial recognition but creates a broad framework that has been interpreted to restrict government biometric data collection in certain contexts.

Limitations on Real-Time Surveillance

Real-time facial recognition surveillance—the use of cameras to scan faces in public spaces and match them against watchlists in real time—has drawn the strongest legislative pushback. Several states and cities have specifically targeted this use case. Portland, Oregon, is notable for banning not just government but also private-sector use of facial recognition in public accommodations. King County, Washington (encompassing Seattle), banned government use of facial recognition in 2021. New York's proposed legislation has repeatedly targeted real-time surveillance in public housing and transit systems, though comprehensive state-level legislation has stalled as of early 2026.

Body Cam AI / Video Analytics

The integration of artificial intelligence into body-worn camera footage represents a newer frontier in surveillance regulation. As camera vendors increasingly offer AI-powered analytics—including facial recognition, emotion detection, and behavioral analysis applied to recorded footage—legislators are scrambling to define what is and is not permissible.

Illinois BIPA and Body Cameras

Illinois's Biometric Information Privacy Act (BIPA), enacted in 2008 and the strongest biometric privacy law in the country, has significant implications for body camera AI. BIPA requires informed consent before collecting biometric identifiers, including faceprint data. While BIPA contains a law enforcement exemption for certain investigative activities, the scope of that exemption is contested. Courts have ruled that BIPA's private right of action—which allows individuals to sue for statutory damages of $1,000 to $5,000 per violation—applies broadly, creating substantial liability risk for agencies or vendors that apply facial recognition to body camera footage without proper authorization.
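To see why that liability risk is described as substantial, a back-of-the-envelope sketch helps (hypothetical figures; how violations actually accrue under BIPA is itself litigated): statutory damages scale with every faceprint collected without consent.

```python
# Hypothetical illustration, not legal advice: why per-violation statutory
# damages add up quickly. Assumes each faceprint collected without consent
# counts as one violation; how violations accrue under BIPA is contested.

NEGLIGENT = 1_000    # statutory damages per negligent violation
RECKLESS = 5_000     # statutory damages per intentional or reckless violation

def bipa_exposure(num_faceprints: int, per_violation: int) -> int:
    """Back-of-the-envelope statutory damages total."""
    return num_faceprints * per_violation

# Example: facial recognition applied to 10,000 faces captured on body cameras.
print(f"Negligent: ${bipa_exposure(10_000, NEGLIGENT):,}")   # $10,000,000
print(f"Reckless:  ${bipa_exposure(10_000, RECKLESS):,}")    # $50,000,000
```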

Washington and Other State Restrictions

Washington state's 2020 facial recognition law explicitly addresses body cameras, prohibiting the use of real-time facial recognition on body-worn camera feeds and requiring agencies to adopt accountability reports before deploying any facial recognition technology. Oregon's body camera policies similarly restrict post-hoc facial recognition analysis of recorded footage. Colorado requires written policies specifically addressing whether and how AI analytics may be applied to body camera recordings.

Federal Guidance

At the federal level, the Department of Justice issued guidance in 2023 urging caution in the use of facial recognition on body camera footage, particularly given documented accuracy disparities across racial demographics. The guidance recommends that agencies adopt policies requiring human review of any facial recognition match before investigative action and prohibiting facial recognition as the sole basis for probable cause. While not legally binding, the DOJ guidance has influenced state-level policy development and agency procurement decisions.

Predictive Policing & Risk Scoring

Predictive policing tools—algorithms that forecast where crimes are likely to occur or which individuals are likely to commit or be victims of crime—have drawn intense scrutiny over documented racial bias. These systems, which often rely on historical arrest data that reflects decades of racially disproportionate policing, have been shown to create feedback loops that concentrate enforcement in communities of color.
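A minimal toy simulation (invented neighborhoods and numbers, not any deployed system) makes the feedback-loop mechanism concrete: when patrols follow historical arrest counts and arrests are only recorded where patrols go, an initial skew in the data reproduces itself even though the underlying rates are identical.

```python
import random

# Toy model of the feedback loop described above. All numbers are invented;
# this is not any vendor's algorithm. Two neighborhoods have identical true
# incident rates, but the historical arrest record is skewed toward A.
# Patrols are allocated in proportion to recorded arrests, and incidents are
# only recorded where patrols are present, so the skew reproduces itself.

random.seed(0)
TRUE_RATE = 0.10                  # same underlying rate in both neighborhoods
POPULATION = 10_000
recorded = {"A": 120, "B": 80}    # biased starting data

for year in range(1, 6):
    total = sum(recorded.values())
    patrol_share = {n: recorded[n] / total for n in recorded}
    for n in recorded:
        incidents = int(POPULATION * TRUE_RATE)
        # chance an incident ends up in the arrest data scales with patrol presence
        observed = sum(random.random() < patrol_share[n] for _ in range(incidents))
        recorded[n] += observed
    shares = {n: round(s, 2) for n, s in patrol_share.items()}
    print(f"year {year}: patrol share {shares}, recorded arrests {recorded}")
```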

Bias Concerns and Research Findings

A 2021 study published in the journal Science found that predictive policing algorithms systematically over-predicted crime in Black and Latino neighborhoods, even after controlling for underlying crime rates. The RAND Corporation's evaluation of predictive policing in several cities found mixed results on effectiveness and raised concerns about civil liberties implications. These findings have fueled legislative action at both the state and city level.

State Restrictions

Several jurisdictions have moved to restrict or ban predictive policing. In 2020, Santa Cruz, California, became the first U.S. city to ban predictive policing outright. The Los Angeles Police Department discontinued its use of PredPol (now Geolitica) in 2020 after an inspector general report found the tool was deployed disproportionately in Black neighborhoods. Illinois enacted legislation requiring transparency reporting for any algorithmic tools used in policing decisions. New Jersey's Attorney General issued a directive in 2021 requiring law enforcement agencies to disclose their use of algorithmic decision-making tools and subjecting those tools to bias audits. Vermont and Washington have proposed legislation that would require impact assessments before deploying any AI-based policing tool.

For more on algorithmic bias requirements across sectors, see the Bias Audit Requirements tracker.

Court & Pretrial AI Tools

Artificial intelligence has made significant inroads into the court system, particularly through risk assessment tools used in bail, sentencing, and parole decisions. These tools generate scores that predict a defendant's likelihood of reoffending or failing to appear in court, and judges use these scores—alongside other factors—to inform pretrial detention and sentencing decisions.

The COMPAS Controversy

The most well-known pretrial risk assessment tool is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), developed by Equivant (formerly Northpointe). In 2016, a ProPublica investigation found that COMPAS was nearly twice as likely to incorrectly flag Black defendants as future criminals compared to white defendants, while white defendants were more likely to be incorrectly labeled as low risk. The investigation sparked a national debate about the use of algorithmic tools in criminal justice that continues to shape legislation today.
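The disparity ProPublica described is a gap in false positive rates: among defendants who did not go on to reoffend, how often each group was nonetheless flagged as high risk. A minimal sketch with invented records shows how that comparison is computed.

```python
# Illustrative only: the error-rate comparison at the heart of the COMPAS debate.
# These records are invented toy data, not ProPublica's dataset.

records = [
    # (group, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  True),  ("black", True,  False),
    ("black", False, False), ("black", False, True),
    ("white", True,  True),  ("white", False, False), ("white", False, False),
    ("white", False, True),  ("white", True,  False),
]

def false_positive_rate(group: str) -> float:
    """Share of defendants in `group` who did NOT reoffend but were flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")
# In this toy data the rate for Black defendants (67%) is double the rate for
# white defendants (33%), mirroring the kind of gap ProPublica reported.
```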

State Responses

Idaho became the first state to ban the use of algorithmic risk scores as the sole basis for pretrial detention in 2023. California's experience has been contentious: the state enacted SB 10 in 2018 to eliminate cash bail and replace it with risk assessment tools, but voters overturned the law via Proposition 25 in 2020 amid concerns about algorithmic bias, and it never took effect. New Jersey's bail reform system uses a risk assessment tool called the Public Safety Assessment (PSA), but the state has implemented guardrails including judicial override authority and regular bias audits. Illinois's Pretrial Fairness Act, which took effect in 2023, eliminated cash bail and includes provisions requiring transparency in any algorithmic tools used in pretrial decisions.

Several states have enacted or proposed legislation requiring that pretrial risk assessment tools be independently validated for accuracy and bias, that defendants be informed when such tools are used, and that algorithmic scores never serve as the sole determinant of pretrial detention or sentencing.

What to Watch

The regulatory landscape for police facial recognition and AI surveillance is evolving rapidly, and new bans, warrant requirements, and oversight mandates introduced each legislative session are likely to shape the next phase of regulation.


Explore the complete bill-level data on our Facial Recognition Tracker, which provides filterable access to every facial recognition and biometric surveillance bill we track across all 50 states. For related regulatory areas, see the Bias Audit Requirements tracker and the Election AI Tracker.


Frequently Asked Questions

Which states ban police facial recognition?

As of early 2026, Vermont and Virginia have enacted statewide moratoriums or bans on law enforcement use of facial recognition. Massachusetts has a statewide moratorium that restricts government use to narrow channels through the RMV or FBI. Maine requires a court order before any law enforcement facial recognition search. At the city level, San Francisco, Oakland, Boston, Portland (Oregon), Minneapolis, Cambridge, and New Orleans have all banned police facial recognition. See our Facial Recognition Tracker for the most current list and bill details.

What states restrict predictive policing?

Several jurisdictions have acted to restrict or ban predictive policing algorithms. Santa Cruz, California, was the first city to ban predictive policing outright in 2020. Illinois requires transparency reporting for algorithmic policing tools. New Jersey's Attorney General mandates disclosure and bias audits for law enforcement algorithmic decision-making tools. Vermont and Washington have proposed impact assessment requirements for AI-based policing tools. At the federal level, there is growing pressure for standardized transparency requirements. Visit the Bias Audit Requirements tracker for details on algorithmic accountability laws by state.

Are body cam AI tools regulated?

Yes, and regulation is expanding. Washington state explicitly prohibits real-time facial recognition on body-worn camera feeds and requires accountability reports before agencies can deploy any facial recognition technology. Illinois BIPA creates significant liability risk for applying facial recognition to body camera footage, since faceprint data collection generally requires informed consent. Oregon restricts post-hoc facial recognition analysis of body camera recordings. Colorado requires written policies governing AI analytics on body camera data. At the federal level, the Department of Justice has issued guidance recommending human review requirements and prohibiting facial recognition as the sole basis for probable cause.
