Education & AI Policy

AI in Schools: Student Privacy Laws Across All 50 States (2026 Guide)

AI Laws by State · April 27, 2026 · 12 min read

Key Takeaways

  • At least 25 states have enacted or introduced legislation specifically addressing AI use in K–12 education as of spring 2026.
  • Federal laws like FERPA and COPPA set the floor, but state-level student data privacy laws increasingly target AI and edtech specifically.
  • Generative AI classroom policies range from outright bans to managed-access frameworks with disclosure requirements.
  • A growing number of states now mandate AI literacy instruction as part of K–12 curricula.
Legal Disclaimer: This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your situation.

When California introduced AB 2071, the Digital Wellness Education Act, in March 2026, it joined a rapidly expanding list of states grappling with a fundamental question: how should schools handle artificial intelligence? The explosion of ChatGPT and other generative AI tools in classrooms has forced school districts across the country to scramble for policies—often with little guidance from state legislatures that are still catching up.

This guide tracks the current state of K–12 AI policy across all 50 states, covering student data privacy, generative AI classroom rules, AI literacy mandates, academic integrity frameworks, and teacher professional development requirements. For real-time tracking of education-specific AI bills, visit our Education AI Tracker.

Student Data Privacy & AI

Student data privacy is the foundation of every AI-in-education conversation. Before any school district can adopt an AI-powered edtech tool, it must navigate a layered set of federal and state privacy requirements.

Federal Baseline: FERPA and COPPA

The Family Educational Rights and Privacy Act (FERPA) governs how schools handle student education records. When an AI tool processes student work, grades, or behavioral data, that data typically qualifies as an education record under FERPA. Schools must ensure that any AI vendor receiving student data operates under the “school official” exception or has obtained proper parental consent.

The Children’s Online Privacy Protection Act (COPPA) adds another layer for students under 13. COPPA requires verifiable parental consent before collecting personal information from children online. AI edtech tools that interact with elementary school students must comply with COPPA’s notice-and-consent requirements—a challenge that has tripped up several major vendors in FTC enforcement actions.
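To make the layering concrete, here is a minimal sketch of how a district vetting tool might flag consent gaps under FERPA and COPPA. The data model, field names, and rules are simplified illustrative assumptions; real compliance turns on contracts and facts, not a boolean check:

```python
from dataclasses import dataclass

@dataclass
class EdtechDeployment:
    student_age: int                 # age of the student using the tool
    vendor_is_school_official: bool  # vendor operates under FERPA's "school official" exception
    has_parental_consent: bool       # verifiable parental consent on file

def consent_gaps(d: EdtechDeployment) -> list[str]:
    """Return the consent requirements this deployment has not yet satisfied."""
    gaps = []
    # FERPA: sharing education records requires the school-official exception
    # or parental consent.
    if not (d.vendor_is_school_official or d.has_parental_consent):
        gaps.append("FERPA: no school-official basis and no parental consent")
    # COPPA: collecting personal information online from children under 13
    # requires verifiable parental consent.
    if d.student_age < 13 and not d.has_parental_consent:
        gaps.append("COPPA: verifiable parental consent required for under-13 users")
    return gaps
```

A deployment for a 15-year-old with a school-official vendor returns no gaps; a tool for a 12-year-old with neither basis returns both.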

State SOPIPA-Style Laws

California’s Student Online Personal Information Protection Act (SOPIPA), enacted in 2014, became the model for a wave of state laws that specifically regulate how edtech companies handle student data. These laws typically prohibit:

  • Selling student data
  • Using student data for targeted advertising
  • Building profiles of students for non-educational purposes

As of 2026, more than 40 states have enacted student data privacy laws with provisions modeled on or inspired by SOPIPA. However, most of these laws were written before generative AI entered the picture. The result is a patchwork: some states have updated their student privacy statutes to explicitly address AI processing of student data, while others rely on older frameworks that leave significant gray areas.
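One way districts operationalize these prohibitions is a vendor-vetting checklist. The sketch below encodes the three prohibitions SOPIPA-style laws most commonly impose as simple labels; the names are illustrative shorthand, not statutory terms:

```python
# Common SOPIPA-style prohibitions on edtech vendors (illustrative labels).
SOPIPA_PROHIBITED = {
    "sell_student_data",
    "targeted_advertising",
    "non_educational_profiling",
}

def vet_vendor(declared_data_uses: set[str]) -> set[str]:
    """Return any of the vendor's declared data uses that a SOPIPA-style law prohibits."""
    return declared_data_uses & SOPIPA_PROHIBITED
```

A vendor declaring only adaptive-learning uses passes; one declaring targeted advertising gets flagged on that use.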

States leading on AI-specific student privacy protections include California, Colorado, Connecticut, Illinois, and Virginia, each of which has updated its student privacy statute to explicitly address AI and algorithmic processing of student data.

For a full breakdown by state, see the Education AI Tracker.

Generative AI in Classrooms

The release of ChatGPT in November 2022 sent shockwaves through American education. Within weeks, districts began issuing emergency bans. By 2026, the landscape has matured significantly, but approaches still vary widely.

Bans vs. Managed Access

The early wave of outright bans—most notably New York City’s January 2023 ban on ChatGPT across public school networks—has largely given way to more nuanced frameworks. NYC reversed its ban by May 2023, and most major districts have followed suit with managed-access policies rather than blanket prohibitions.

Current state-level approaches generally fall into three categories:

  • Managed access: the state provides guidance or requires districts to adopt AI-use policies; tools are allowed with guardrails (California, Oregon, North Carolina, Virginia)
  • Restrictive: generative AI tools are prohibited on school networks unless explicitly approved by the district (portions of Alabama and Mississippi)
  • No state guidance: the state has not issued AI-specific guidance; decisions are left entirely to individual districts (Wyoming, South Dakota, West Virginia)

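The three categories lend themselves to a small lookup table, of the kind a district-facing tool might maintain. The groupings below mirror the categories above and are a snapshot that will drift as states issue new guidance:

```python
# State-level posture toward generative AI in classrooms (snapshot, not authority).
STATE_APPROACH = {
    "managed_access": {"California", "Oregon", "North Carolina", "Virginia"},
    "restrictive": {"Alabama", "Mississippi"},  # applies only in portions of these states
    "no_state_guidance": {"Wyoming", "South Dakota", "West Virginia"},
}

def approach_for(state: str) -> str:
    """Return the state's posture category, or 'unlisted' if it isn't tracked here."""
    for approach, states in STATE_APPROACH.items():
        if state in states:
            return approach
    return "unlisted"
```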
Disclosure Requirements

A growing number of states now require disclosure when AI is used in educational contexts. These requirements take several forms:

AI Literacy & Curriculum Requirements

Beyond regulating AI as a tool, a growing number of states are mandating that students learn about AI as part of their education. This represents a significant shift from reactive policy (restricting AI use) to proactive policy (building AI literacy).

As of April 2026, at least 15 states have enacted or introduced legislation requiring AI literacy or digital literacy instruction that includes AI components:

These mandates typically cover how AI systems work, how algorithms influence content and decisions, how to evaluate AI-generated content critically, and the ethical implications of AI deployment.

Cheating & Academic Integrity Policies

AI-powered writing tools have disrupted academic integrity frameworks across American education. Schools face the dual challenge of defining what constitutes “cheating” when AI tools are readily available and determining how to detect AI-assisted work.

State and District Approaches

No state has enacted a law that explicitly criminalizes student use of AI for schoolwork. Instead, academic integrity policies remain primarily a district-level concern, with states providing varying levels of guidance:

AI Detection Tools

The reliability of AI detection tools remains a significant concern. Studies have shown high false-positive rates, particularly for non-native English speakers and students with certain learning disabilities. Several districts have scaled back reliance on detection tools after facing complaints, and the U.S. Department of Education has cautioned against using AI detection as the sole basis for academic integrity determinations.
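The sole-basis caution follows directly from base rates. With illustrative numbers (the three inputs below are assumptions chosen for the arithmetic, not measured rates for any product), Bayes' rule shows how often a flag lands on a student who did nothing wrong:

```python
# Bayes' rule: how likely is a flagged submission to actually be AI-assisted?
# All three inputs are illustrative assumptions, not measured detector rates.
p_ai = 0.10                # prior: share of submissions that are AI-assisted
p_flag_given_ai = 0.90     # detector true-positive rate
p_flag_given_human = 0.05  # detector false-positive rate

p_flag = p_flag_given_ai * p_ai + p_flag_given_human * (1 - p_ai)
p_ai_given_flag = (p_flag_given_ai * p_ai) / p_flag
print(f"P(AI-assisted | flagged) = {p_ai_given_flag:.0%}")  # prints "P(AI-assisted | flagged) = 67%"
```

Under these assumptions, roughly one flag in three hits an honest student: a detector that looks accurate in isolation still generates a large share of false accusations when most students are not using AI, which is why guidance treats detector output as one signal among several rather than proof.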

States that have addressed AI detection in policy guidance include:

Teacher Use of AI & Professional Development

Teachers are both users and gatekeepers of AI in education. States are increasingly recognizing that effective AI policy requires investing in teacher preparation.

Professional Development Requirements

A growing number of states now include AI-specific professional development in their teacher training requirements:

Teacher AI Tool Guidelines

States are also setting boundaries on how teachers themselves can use AI. Key concerns include:

What to Watch

The K–12 AI policy landscape is evolving rapidly. Here are the developments most likely to reshape the field in the coming months:


For real-time tracking of education-specific AI bills across all 50 states, visit the Education AI Tracker. You may also find these related tools useful:


K–12 AI laws are changing fast. Subscribe to AI Laws by State for weekly updates on new education AI bills, effective dates, and enforcement actions across all 50 states.

Subscribe to the weekly digest →

Frequently Asked Questions

Which states regulate AI in K–12 schools?

As of April 2026, at least 25 states have enacted or introduced legislation that specifically addresses AI use in K–12 education. States with the most comprehensive frameworks include California, Virginia, Colorado, Connecticut, Illinois, North Carolina, Oregon, and New York. These laws cover student data privacy, generative AI classroom policies, AI literacy mandates, and teacher professional development requirements. The regulatory picture is evolving quickly—visit the Education AI Tracker for real-time updates.

Is ChatGPT allowed in classrooms?

It depends on the state and district. The early wave of outright ChatGPT bans in 2023 has largely been reversed. Most major districts now follow a managed-access approach: generative AI tools like ChatGPT are allowed with guardrails, which may include age restrictions, teacher supervision requirements, acceptable-use policies, and student disclosure obligations. A handful of states still restrict generative AI on school networks unless explicitly approved. Check your state’s Department of Education guidance and your district’s acceptable-use policy for current rules.

What student data privacy laws cover AI?

At the federal level, FERPA (Family Educational Rights and Privacy Act) and COPPA (Children’s Online Privacy Protection Act) set the baseline for student data privacy and apply to AI tools that process student information. At the state level, more than 40 states have enacted student data privacy laws modeled on California’s SOPIPA (Student Online Personal Information Protection Act). These laws typically prohibit edtech companies from selling student data, using it for targeted advertising, or building non-educational profiles. States like California, Colorado, Connecticut, Illinois, and Virginia have updated their laws to specifically address AI and algorithmic processing of student data.
