The 2024 election cycle proved that AI-generated political content is no longer hypothetical. From an AI-generated robocall impersonating President Biden that targeted New Hampshire primary voters to a wave of synthetic campaign ads that blurred the line between reality and fabrication, AI-powered election interference moved from warning to reality almost overnight.
The fallout was swift. The FCC issued a landmark declaratory ruling confirming that AI-generated voices in robocalls violate the Telephone Consumer Protection Act (TCPA). State legislatures responded with a legislative tsunami: more than 30 states now have laws specifically addressing the use of artificial intelligence in elections, from mandatory disclosure labels on AI-generated political ads to outright bans on synthetic media near Election Day.
This guide maps every major category of election AI law now in effect or pending across the United States, explains the enforcement mechanisms behind them, and identifies the gaps that remain heading into the 2026 midterm cycle. For bill-level detail and live status tracking, use our Election AI Tracker.
Synthetic Media / Deepfake Disclosure in Political Ads
The most common legislative approach to election AI is mandatory disclosure. These laws require any political advertisement, campaign communication, or electioneering material that uses AI-generated or AI-manipulated content to carry a clear, conspicuous label alerting viewers that the content was created or altered with artificial intelligence.
California AB 2655 (the Defending Democracy from Deepfake Deception Act) is among the most comprehensive disclosure statutes. Signed into law in September 2024, it requires large online platforms to label or remove deceptive AI-generated content related to elections. The law applies to audio, video, and images that are materially deceptive and distributed within 120 days of an election. Platforms that fail to act face civil penalties and injunctive relief.
Minnesota was an early mover, enacting a statute that requires political campaign material containing AI-generated synthetic media to include a disclosure statement. The disclosure must be clear and conspicuous—not buried in fine print or hidden behind a clickthrough. Violations are treated as deceptive campaign practices under existing election law, opening the door to both administrative penalties and civil action by opponents.
Wisconsin requires political communications using AI-generated content to include a prominent disclosure label. The law applies to television, radio, online, and print campaign materials. Candidates, PACs, and third-party groups that distribute unlabeled AI political content face campaign finance penalties and potential referral to election enforcement authorities.
Other states with active disclosure requirements include Michigan (HB 5144, requiring disclosure at all times, not just near elections), Colorado (HB 1147, 60-day window), and Florida (HB 919, 90-day window). The common thread: if AI touched the content and it relates to a candidate or ballot measure, voters must be told.
| State | Law | Disclosure Window | Applies To |
|---|---|---|---|
| California | AB 2655 | 120 days before election | Large online platforms |
| Minnesota | Campaign disclosure statute | Any time during campaign | All campaign materials |
| Wisconsin | AI disclosure law | Election period | TV, radio, online, print |
| Michigan | HB 5144 | All times | All political communications |
| Colorado | HB 1147 | 60 days | Campaign communications |
| Florida | HB 919 | 90 days | Political ads |
| Arizona | HB 2394 / SB 1359 | 90 days | Election-related content |
| New Mexico | HB 182 | Election period | Political communications |
Outright Bans Within a Set Window Before an Election
A second, more aggressive category of election AI law goes beyond disclosure. Rather than simply requiring a label, these laws prohibit the distribution of materially deceptive AI-generated content about candidates or ballot measures within a defined window before an election—typically 60 or 90 days.
Texas SB 751 is the flagship example. Signed into law in 2019, it makes it a crime to publish a deepfake video intended to injure a political candidate or influence an election within 30 days of the election. Violations are a Class A misdemeanor, punishable by up to one year in jail and a fine of up to $4,000. The law applies to any person, not just campaigns or PACs, casting a wide net over individual social media users, political operatives, and outside groups.
Several states have adopted broader time windows. Mississippi (SB 2577) imposes a 90-day blackout period for materially deceptive AI-generated election content, with penalties escalating from misdemeanor fines to imprisonment for repeat offenders. Montana (SB 25) created a 60-day window with fines up to $5,000 and potential imprisonment of up to two years for a third violation. California AB 2839 attempted a 120-day prohibition but was partially enjoined by a federal court in August 2025 on First Amendment grounds—a ruling that has significant implications for similar statutes nationwide.
The constitutional tension is real. Time-based bans restrict speech about political candidates during the most politically relevant period. Courts will likely continue to scrutinize whether these bans are narrowly tailored or unconstitutionally overbroad. The AB 2839 injunction may be the first of several legal challenges that reshape this category of law before the 2026 midterms.
- Texas SB 751: 30-day ban; Class A misdemeanor
- Mississippi SB 2577: 90-day ban; misdemeanor to felony on repeat
- Montana SB 25: 60-day ban; up to $5,000 fine, 2 years imprisonment
- California AB 2839: 120-day ban (partially enjoined)
- Indiana: 60-day ban on deceptive AI election content
- Washington: AI election content restrictions near election dates
AI-Generated Robocall Regulations
The single event that did more to accelerate election AI regulation than any other was the New Hampshire primary deepfake incident in January 2024. Voters in the state received robocalls featuring an AI-generated clone of President Biden’s voice, urging them not to vote in the primary. The calls were traced to a political consultant using commercially available voice-cloning technology. The incident led to criminal charges, FCC enforcement action, and an immediate policy response at both the federal and state level.
The FCC’s February 2024 declaratory ruling confirmed what many had suspected but few had codified: AI-generated voices in robocalls qualify as “artificial” voices under the TCPA. This means AI voice calls made without prior express consent are illegal under existing federal law, carrying statutory damages of $500 to $1,500 per call. The ruling required no new legislation; it clarified that the TCPA, originally written in 1991, already covers AI-generated voice technology.
At the state level, New Hampshire itself moved quickly, strengthening its robocall statute to explicitly reference AI-generated and synthetic voice content. Several other states have followed:
- Oregon: Expanded its robocall statute to specifically cover AI-generated and cloned voice calls used in election-related communications
- Michigan: Added AI voice provisions to its election communication laws, with violations treated as election fraud
- California: AB 2655 covers AI-generated audio content on platforms, complementing the TCPA framework for phone-based robocalls
- Illinois: Updated telemarketing and robocall laws to address AI-generated voice content, with penalties enforced by the Attorney General
The practical implication for political campaigns is straightforward: using AI-generated or AI-cloned voices in automated calls without explicit disclosure and prior consent is illegal under federal law and an expanding number of state laws. The penalties stack—a single campaign blast can trigger per-call TCPA liability plus state-level election law violations.
Voter Suppression & Impersonation
Beyond political advertising and robocalls, a growing number of states are targeting the use of AI to suppress voter turnout or impersonate candidates and election officials. These provisions address scenarios where AI-generated content is designed not to persuade but to deceive voters about fundamental election mechanics—where to vote, when polls close, or whether their registration is valid.
AI-powered voter suppression laws typically criminalize the use of synthetic media or AI-generated communications to provide materially false information about election procedures. If an AI-generated message tells voters the wrong polling location, fabricates a polling-place closure, or falsely claims that voting by mail has been canceled, these laws create specific criminal liability for the person who created or distributed the content.
Several states have incorporated AI-specific language into existing voter suppression and election fraud statutes:
- Michigan: Makes it illegal to use AI-generated content to mislead voters about voting times, places, or eligibility requirements. Violations are prosecutable as election fraud under state law.
- New York: Prohibits AI-generated impersonation of election officials or candidates in communications designed to influence voter behavior. Civil and criminal remedies are available.
- Washington: Includes AI-generated content in its voter suppression statute, covering synthetic media that misrepresents ballot measures, candidates, or election procedures.
- Arizona: HB 2394 creates a civil cause of action when AI-generated digital impersonation of a candidate causes election interference risk, allowing impersonated candidates to seek injunctive relief and damages.
Candidate impersonation is treated as a distinct harm. Where traditional fraud law requires proof that a person claimed to be someone else, AI deepfake impersonation laws recognize that a sufficiently realistic synthetic video or audio clip can impersonate a candidate without the creator ever explicitly claiming to be that person. States like Arizona, Michigan, and New York have adapted their fraud provisions to cover this AI-specific attack vector.
Civil & Criminal Penalties
The penalty landscape for election AI violations varies dramatically across jurisdictions. Some states treat violations as minor campaign finance infractions. Others have created felony-level criminal liability. The range reflects the lack of consensus on how severely election AI misuse should be punished.
| State | Law | Penalty Type | Maximum Penalty | Enforcement |
|---|---|---|---|---|
| Texas | SB 751 | Criminal | 1 year jail + $4,000 fine | County DA |
| Mississippi | SB 2577 | Criminal + civil | Imprisonment; fines up to $10,000 | AG + private action |
| Montana | SB 25 | Criminal | $5,000 fine; up to 2 years (3rd offense) | County attorney |
| California | AB 2655 | Civil | Injunctive relief + civil penalties | AG + private action |
| New Mexico | HB 182 | Criminal | Misdemeanor (1st); felony (repeat) | DA + AG |
| Michigan | HB 5144 | Civil + criminal | Civil and criminal penalties | AG + Secretary of State |
| Arizona | HB 2394 | Civil | Injunctive relief + civil damages | Private action + AG |
| Colorado | HB 1147 | Civil | Civil action for damages | Private action |
Attorney General enforcement is the most common mechanism. In most states with election AI laws, the state AG has authority to investigate violations, issue cease-and-desist orders, and bring civil or criminal actions. Several states also allow private rights of action, enabling impersonated candidates or affected parties to sue directly for damages and injunctive relief.
The federal TCPA framework adds a separate layer of liability for AI robocalls. The FCC can impose forfeiture penalties of up to $23,727 per call (adjusted for inflation), and private plaintiffs can recover $500 to $1,500 per call in class actions. For a mass robocall campaign targeting thousands of voters, the aggregate exposure is significant.
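To make the stacking concrete, the back-of-the-envelope math from the figures above can be sketched in a few lines of Python. This is an illustrative calculation only, not legal analysis: it assumes the $500 statutory damages per call (trebled to $1,500 for willful violations) and uses the $23,727 per-call FCC forfeiture cap cited in this article as an upper bound.

```python
# Hypothetical TCPA exposure estimate for a single AI robocall blast.
# Figures are illustrative, taken from the article's cited amounts.

def tcpa_private_exposure(calls: int, willful: bool = False) -> int:
    """Statutory damages recoverable by private plaintiffs:
    $500 per call, or $1,500 per call if the violation is willful."""
    per_call = 1_500 if willful else 500
    return calls * per_call

def fcc_forfeiture_cap(calls: int, per_call_cap: int = 23_727) -> int:
    """Maximum FCC forfeiture if every call drew the full penalty."""
    return calls * per_call_cap

# A single blast to 10,000 voters:
calls = 10_000
print(tcpa_private_exposure(calls))                # 5,000,000 (non-willful)
print(tcpa_private_exposure(calls, willful=True))  # 15,000,000 (willful)
print(fcc_forfeiture_cap(calls))                   # 237,270,000 (FCC ceiling)
```

Even the non-willful floor runs into the millions for a modest campaign blast, before any state-level election law penalties are added on top.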
Enforcement remains the weak link. While the laws exist on paper, few states have dedicated enforcement resources for election AI violations. The speed at which AI content can be created and distributed—often going viral within hours—outpaces the capacity of AG offices and election commissions to respond before the damage is done. The 2026 midterms will be the first major test of whether these laws can be enforced in real time. For detailed penalty data across all states, see the Deepfake Penalty Tracker.
What to Watch: 2026 and Beyond
The 2026 midterm election cycle will be the first conducted under this new generation of election AI laws. Several developments bear watching:
- First Amendment challenges: The partial injunction of California AB 2839 may be the beginning of a broader constitutional reckoning. If courts find that time-based bans on political deepfakes are facially overbroad, legislatures will need to redraft laws with narrower tailoring—potentially weakening their protective scope.
- Federal AI election legislation: Multiple bills have been introduced in Congress, including the REAL Political Advertisements Act and the AI Transparency in Elections Act. None have passed, but the 2024 deepfake incidents and the FCC’s TCPA ruling have increased momentum. A federal floor standard could preempt the current state-by-state patchwork. Track federal bill status on our Federal AI Bills Tracker.
- FEC rulemaking on AI in campaigns: The Federal Election Commission has opened a rulemaking proceeding on the use of AI in campaign advertising. If finalized, FEC rules could establish uniform disclosure requirements for AI-generated political ads at the federal level, supplementing state laws.
- Enforcement capacity: AG offices in most states lack dedicated election AI enforcement units. The 2026 cycle will test whether existing structures can handle complaints in real time, or whether a new enforcement model is needed.
- Technology evolution: Voice cloning, video synthesis, and real-time deepfake generation are improving faster than the legislative cycle. Laws written around 2024-era technology may be outdated by 2026. Watermarking and content provenance standards (C2PA) could play a complementary role, but adoption remains voluntary.
For real-time tracking of every election AI bill across all 50 states, use the Election AI Tracker. Related tools that intersect with election AI regulation include the Deepfake Penalty Tracker (deepfake penalties across all categories), the Facial Recognition Tracker (voter surveillance and biometric data at polling places), and the Employment AI Tracker (AI use in government hiring and public-sector decision-making).
This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your situation.