
Political Deepfake Laws: 20+ States Regulate AI in Elections (2026 Guide)

AI Laws by State · April 27, 2026

The 2024 election cycle proved that AI-generated political content is no longer hypothetical. From an AI-generated robocall impersonating President Biden that targeted New Hampshire primary voters to a wave of synthetic campaign ads that blurred the line between reality and fabrication, AI-powered election interference moved from warning to reality almost overnight.

The fallout was swift. The FCC issued a landmark declaratory ruling confirming that AI-generated voices in robocalls violate the Telephone Consumer Protection Act (TCPA). State legislatures responded with a legislative tsunami: more than 30 states now have laws specifically addressing the use of artificial intelligence in elections, from mandatory disclosure labels on AI-generated political ads to outright bans on synthetic media near Election Day.

This guide maps every major category of election AI law now in effect or pending across the United States, explains the enforcement mechanisms behind them, and identifies the gaps that remain heading into the 2026 midterm cycle. For bill-level detail and live status tracking, use our Election AI Tracker.

Legal Disclaimer: This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your situation.

Synthetic Media / Deepfake Disclosure in Political Ads

The most common legislative approach to election AI is mandatory disclosure. These laws require any political advertisement, campaign communication, or electioneering material that uses AI-generated or AI-manipulated content to carry a clear, conspicuous label alerting viewers that the content was created or altered with artificial intelligence.

California AB 2655 (the Defending Democracy from Deepfake Deception Act) is among the most comprehensive disclosure statutes. Signed into law in September 2024, it requires large online platforms to label or remove deceptive AI-generated content related to elections. The law applies to audio, video, and images that are materially deceptive and distributed within 120 days of an election. Platforms that fail to act face civil penalties and injunctive relief.

Minnesota was an early mover, enacting a statute that requires political campaign material containing AI-generated synthetic media to include a disclosure statement. The disclosure must be clear and conspicuous—not buried in fine print or hidden behind a clickthrough. Violations are treated as deceptive campaign practices under existing election law, opening the door to both administrative penalties and civil action by opponents.

Wisconsin requires political communications using AI-generated content to include a prominent disclosure label. The law applies to television, radio, online, and print campaign materials. Candidates, PACs, and third-party groups that distribute unlabeled AI political content face campaign finance penalties and potential referral to election enforcement authorities.

Other states with active disclosure requirements include Michigan (HB 5144, requiring disclosure at all times, not just near elections), Colorado (HB 1147, 60-day window), and Florida (HB 919, 90-day window). The common thread: if AI touched the content and it relates to a candidate or ballot measure, voters must be told.

| State | Law | Disclosure Window | Applies To |
| --- | --- | --- | --- |
| California | AB 2655 | 120 days before election | Large online platforms |
| Minnesota | Campaign disclosure statute | Any time during campaign | All campaign materials |
| Wisconsin | AI disclosure law | Election period | TV, radio, online, print |
| Michigan | HB 5144 | All times | All political communications |
| Colorado | HB 1147 | 60 days | Campaign communications |
| Florida | HB 919 | 90 days | Political ads |
| Arizona | HB 2394 / SB 1359 | 90 days | Election-related content |
| New Mexico | HB 182 | Election period | Political communications |

Outright Bans Within X Days of an Election

A second, more aggressive category of election AI law goes beyond disclosure. Rather than simply requiring a label, these laws prohibit the distribution of materially deceptive AI-generated content about candidates or ballot measures within a defined window before an election—typically 60 or 90 days.

Texas SB 751 is the flagship example. Signed into law in 2019, it makes it a crime to publish a deepfake video intended to injure a political candidate or influence an election within 30 days of the election. Violations are a Class A misdemeanor, punishable by up to one year in jail and a $4,000 fine. The law applies to any person, not just campaigns or PACs, casting a wide net over individual social media users, political operatives, and outside groups.

Several states have adopted broader time windows. Mississippi (SB 2577) imposes a 90-day blackout period for materially deceptive AI-generated election content, with penalties escalating from misdemeanor fines to imprisonment for repeat offenders. Montana (SB 25) created a 60-day window with fines up to $5,000 and potential imprisonment of up to two years for a third violation. California AB 2839 attempted a 120-day prohibition but was partially enjoined by a federal court in August 2025 on First Amendment grounds—a ruling that has significant implications for similar statutes nationwide.
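As an illustration only (not legal advice), the blackout-window logic described above can be sketched in code. The window lengths are taken from the statutes named in this section; the date math is a deliberate simplification that ignores statutory nuances such as when a "distribution" legally occurs or how courts treat the enjoined California window:

```python
from datetime import date, timedelta

# Pre-election blackout windows (in days) for materially deceptive
# AI-generated content, per the statutes discussed above.
# Illustrative simplification only -- not a compliance tool.
BLACKOUT_WINDOWS = {
    "TX": 30,   # Texas SB 751
    "MT": 60,   # Montana SB 25
    "MS": 90,   # Mississippi SB 2577
    "CA": 120,  # California AB 2839 (partially enjoined)
}

def in_blackout(state: str, publish_date: date, election_date: date) -> bool:
    """True if publish_date falls inside the state's pre-election window."""
    days = BLACKOUT_WINDOWS.get(state)
    if days is None:
        return False  # no time-based ban modeled for this state
    window_start = election_date - timedelta(days=days)
    return window_start <= publish_date <= election_date

election = date(2026, 11, 3)
print(in_blackout("TX", date(2026, 10, 10), election))  # inside the 30-day window
print(in_blackout("TX", date(2026, 9, 1), election))    # outside the window
```

The point the sketch makes concrete: the same piece of content can be lawful in one state and criminal in a neighboring state on the same day, purely because of how far out each legislature drew the line.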

The constitutional tension is real. Time-based bans restrict speech about political candidates during the most politically relevant period. Courts will likely continue to scrutinize whether these bans are narrowly tailored or unconstitutionally overbroad. The AB 2839 injunction may be the first of several legal challenges that reshape this category of law before the 2026 midterms.

AI-Generated Robocall Regulations

The single event that did more to accelerate election AI regulation than any other was the New Hampshire primary deepfake incident in January 2024. Voters in the state received robocalls featuring an AI-generated clone of President Biden’s voice, urging them not to vote in the primary. The calls were traced to a political consultant using commercially available voice-cloning technology. The incident led to criminal charges, FCC enforcement action, and an immediate policy response at both the federal and state level.

The FCC’s February 2024 declaratory ruling confirmed what many had suspected but few had codified: AI-generated voices in robocalls constitute “artificial” voices under the Telephone Consumer Protection Act (TCPA). This means AI voice calls made without prior express consent are illegal under existing federal law, carrying penalties of $500 to $1,500 per call. The ruling did not require new legislation—it clarified that the TCPA, originally written in 1991, already covers AI-generated voice technology.

At the state level, New Hampshire itself moved quickly, strengthening its robocall statute to explicitly reference AI-generated and synthetic voice content. Several other states, including Oregon, Michigan, and Illinois, have added AI-specific provisions to their own robocall statutes.

The practical implication for political campaigns is straightforward: using AI-generated or AI-cloned voices in automated calls without explicit disclosure and prior consent is illegal under federal law and an expanding number of state laws. The penalties stack—a single campaign blast can trigger per-call TCPA liability plus state-level election law violations.

Voter Suppression & Impersonation

Beyond political advertising and robocalls, a growing number of states are targeting the use of AI to suppress voter turnout or impersonate candidates and election officials. These provisions address scenarios where AI-generated content is designed not to persuade but to deceive voters about fundamental election mechanics—where to vote, when polls close, or whether their registration is valid.

AI-powered voter suppression laws typically criminalize the use of synthetic media or AI-generated communications to provide materially false information about election procedures. If an AI-generated message tells voters the wrong polling location, fabricates a polling-place closure, or falsely claims that voting by mail has been canceled, these laws create specific criminal liability for the person who created or distributed the content.

Several states have incorporated AI-specific language into existing voter suppression and election fraud statutes, extending penalties that once applied only to traditional false communications to synthetic media as well.

Candidate impersonation is treated as a distinct harm. Where traditional fraud law requires proof that a person claimed to be someone else, AI deepfake impersonation laws recognize that a sufficiently realistic synthetic video or audio clip can impersonate a candidate without the creator ever explicitly claiming to be that person. States like Arizona, Michigan, and New York have adapted their fraud provisions to cover this AI-specific attack vector.

Civil & Criminal Penalties

The penalty landscape for election AI violations varies dramatically across jurisdictions. Some states treat violations as minor campaign finance infractions. Others have created felony-level criminal liability. The range reflects the lack of consensus on how severely election AI misuse should be punished.

| State | Law | Penalty Type | Maximum Penalty | Enforcement |
| --- | --- | --- | --- | --- |
| Texas | SB 751 | Criminal | 1 year jail + $4,000 fine | County DA |
| Mississippi | SB 2577 | Criminal + civil | Imprisonment; fines up to $10,000 | AG + private action |
| Montana | SB 25 | Criminal | $5,000 fine; up to 2 years (3rd offense) | County attorney |
| California | AB 2655 | Civil | Injunctive relief + civil penalties | AG + private action |
| New Mexico | HB 182 | Criminal | Misdemeanor (1st); felony (repeat) | DA + AG |
| Michigan | HB 5144 | Civil + criminal | Civil and criminal penalties | AG + Secretary of State |
| Arizona | HB 2394 | Civil | Injunctive relief + civil damages | Private action + AG |
| Colorado | HB 1147 | Civil | Civil action for damages | Private action |

Attorney General enforcement is the most common mechanism. In most states with election AI laws, the state AG has authority to investigate violations, issue cease-and-desist orders, and bring civil or criminal actions. Several states also allow private rights of action, enabling impersonated candidates or affected parties to sue directly for damages and injunctive relief.

The federal TCPA framework adds a separate layer of liability for AI robocalls. The FCC can impose forfeiture penalties of up to $23,727 per call (adjusted for inflation), and private plaintiffs can recover $500 to $1,500 per call in class actions. For a mass robocall campaign targeting thousands of voters, the aggregate exposure is significant.
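To make the aggregate-exposure point concrete, here is a rough back-of-the-envelope calculation using the per-call figures cited above. The call count is a hypothetical, and actual liability depends on willfulness findings, class certification, and court discretion:

```python
# Rough TCPA exposure estimate for a mass AI-voice robocall campaign.
# Per-call figures from the FCC framework described above: $500 statutory
# damages, trebled to $1,500 for willful violations, plus FCC forfeiture
# authority of up to $23,727 per call. Illustrative only.
calls = 5_000  # hypothetical campaign blast

statutory_min = calls * 500          # private action, per-call minimum
statutory_max = calls * 1_500        # trebled for willful/knowing violations
fcc_forfeiture_cap = calls * 23_727  # theoretical FCC forfeiture ceiling

print(f"Private statutory damages: ${statutory_min:,} - ${statutory_max:,}")
print(f"FCC forfeiture ceiling:    ${fcc_forfeiture_cap:,}")
```

Even a modest 5,000-call blast yields millions of dollars in private statutory exposure before any state-level election penalties are added, which is why the per-call structure of the TCPA dominates the risk calculus for campaigns.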

Enforcement remains the weak link. While the laws exist on paper, few states have dedicated enforcement resources for election AI violations. The speed at which AI content can be created and distributed—often going viral within hours—outpaces the capacity of AG offices and election commissions to respond before the damage is done. The 2026 midterms will be the first major test of whether these laws can be enforced in real time. For detailed penalty data across all states, see the Deepfake Penalty Tracker.

What to Watch: 2026 and Beyond

The 2026 midterm election cycle will be the first conducted under this new generation of election AI laws. Several developments bear watching: whether courts uphold time-based bans in the wake of the AB 2839 injunction, whether AG offices and election commissions can respond to viral AI content in real time, and whether more states follow Michigan in extending disclosure requirements beyond narrow pre-election windows.

For real-time tracking of every election AI bill across all 50 states, use the Election AI Tracker. Related tools that intersect with election AI regulation include the Deepfake Penalty Tracker (deepfake penalties across all categories), the Facial Recognition Tracker (voter surveillance and biometric data at polling places), and the Employment AI Tracker (AI use in government hiring and public-sector decision-making).



Frequently Asked Questions

Which states ban political deepfakes?

More than 30 states have enacted or introduced laws that either ban or require disclosure of AI-generated content in political advertising and election communications. States with outright bans near elections include Texas (SB 751, 30-day ban), Mississippi (SB 2577, 90-day ban), and Montana (SB 25, 60-day ban). California attempted a 120-day ban with AB 2839, but it was partially enjoined on First Amendment grounds in August 2025. Other states like Michigan, Minnesota, Wisconsin, Colorado, and Florida require mandatory disclosure labels rather than outright bans. The distinction between a ban and a disclosure requirement matters legally and practically—bans prohibit distribution entirely within the protected window, while disclosure laws allow the content to be distributed as long as it is clearly labeled as AI-generated. Use the Election AI Tracker for the full, current list of every state with an election AI law.

What disclosure is required for AI political ads?

Disclosure requirements vary by state but share common elements. Most states require a clear, conspicuous label stating that the content was generated or substantially altered using artificial intelligence. The label must be visible or audible to the average viewer or listener—it cannot be buried in fine print, hidden behind a link, or disclosed only in metadata. For video content, the disclosure typically must appear on-screen for the duration of the AI-generated segment or as a persistent watermark. For audio, a verbal disclosure at the beginning of the communication is standard. California AB 2655 requires large platforms to label or remove deceptive AI election content. Michigan HB 5144 requires disclosure at all times, not just near elections. Failure to include the required disclosure can result in campaign finance penalties, civil liability, and in some states criminal prosecution.

Are AI robocalls during elections legal?

No, in most cases AI-generated robocalls during elections are illegal under federal law. The FCC issued a declaratory ruling in February 2024 confirming that AI-generated voices qualify as “artificial” voices under the Telephone Consumer Protection Act (TCPA). This means automated calls using AI-cloned or AI-generated voices require prior express consent from the recipient, just like traditional robocalls. Without that consent, each call can trigger penalties of $500 to $1,500 under the TCPA, and the FCC can impose forfeiture penalties up to $23,727 per call. Several states have gone further by adding AI-specific provisions to their robocall statutes, including New Hampshire, Oregon, Michigan, and Illinois. The New Hampshire primary deepfake incident in January 2024—where an AI-generated Biden voice urged voters not to vote—led to criminal charges and remains the highest-profile enforcement case to date.