Key Takeaways
- At least 25 states have enacted or introduced legislation specifically addressing AI use in K–12 education as of spring 2026.
- Federal laws like FERPA and COPPA set the floor, but state-level student data privacy laws increasingly target AI and edtech specifically.
- Generative AI classroom policies range from outright bans to managed-access frameworks with disclosure requirements.
- A growing number of states now mandate AI literacy instruction as part of K–12 curricula.
When California introduced AB 2071, the Digital Wellness Education Act, in March 2026, it joined a rapidly expanding list of states grappling with a fundamental question: how should schools handle artificial intelligence? The rapid spread of ChatGPT and other generative AI tools into classrooms has forced school districts across the country to scramble for policies, often with little guidance from state legislatures that are still catching up.
This guide tracks the current state of K–12 AI policy across all 50 states, covering student data privacy, generative AI classroom rules, AI literacy mandates, academic integrity frameworks, and teacher professional development requirements. For real-time tracking of education-specific AI bills, visit our Education AI Tracker.
Student Data Privacy & AI
Student data privacy is the foundation of every AI-in-education conversation. Before any school district can adopt an AI-powered edtech tool, it must navigate a layered set of federal and state privacy requirements.
Federal Baseline: FERPA and COPPA
The Family Educational Rights and Privacy Act (FERPA) governs how schools handle student education records. When an AI tool processes student work, grades, or behavioral data, that data typically qualifies as an education record under FERPA. Schools must ensure that any AI vendor receiving student data operates under the “school official” exception or has obtained proper parental consent.
The Children’s Online Privacy Protection Act (COPPA) adds another layer for students under 13. COPPA requires verifiable parental consent before collecting personal information from children online. AI edtech tools that interact with elementary school students must comply with COPPA’s notice-and-consent requirements—a challenge that has tripped up several major vendors in FTC enforcement actions.
State SOPIPA-Style Laws
California’s Student Online Personal Information Protection Act (SOPIPA), enacted in 2014, became the model for a wave of state laws that specifically regulate how edtech companies handle student data. These laws typically prohibit:
- Selling student data or using it for targeted advertising
- Building non-educational profiles of students
- Retaining student data beyond its educational purpose
As of 2026, more than 40 states have enacted student data privacy laws with provisions modeled on or inspired by SOPIPA. However, most of these laws were written before generative AI entered the picture. The result is a patchwork: some states have updated their student privacy statutes to explicitly address AI processing of student data, while others rely on older frameworks that leave significant gray areas.
States leading on AI-specific student privacy protections include:
- California — SOPIPA, plus the California Consumer Privacy Act (CCPA) and the Age-Appropriate Design Code Act (AADC), which collectively impose strict limits on how AI systems can process data belonging to minors.
- Colorado — The Student Data Transparency and Security Act requires edtech vendors to disclose AI and algorithmic processing of student data.
- Connecticut — Updated its student data privacy law in 2025 to include provisions specific to AI-driven adaptive learning platforms.
- Illinois — The Student Online Personal Protection Act (SOPPA) includes data governance requirements that apply to AI-powered tools, and Illinois’ Biometric Information Privacy Act (BIPA) applies when AI tools use facial recognition or voice data in school settings.
- Virginia — Requires school boards to adopt policies governing AI tools that access student data, including annual vendor audits.
For a full breakdown by state, see the Education AI Tracker.
Generative AI in Classrooms
The release of ChatGPT in November 2022 sent shockwaves through American education. Within weeks, districts began issuing emergency bans. By 2026, the landscape has matured significantly, but approaches still vary widely.
Bans vs. Managed Access
The early wave of outright bans—most notably New York City’s January 2023 ban on ChatGPT across public school networks—has largely given way to more nuanced frameworks. NYC reversed its ban by May 2023, and most major districts have followed suit with managed-access policies rather than blanket prohibitions.
Current state-level approaches generally fall into three categories:
| Approach | Description | Example States |
|---|---|---|
| Managed access | State provides guidance or requires districts to adopt AI-use policies; tools allowed with guardrails | California, Oregon, North Carolina, Virginia |
| Restrictive | Generative AI tools prohibited on school networks unless explicitly approved by the district | Alabama, Mississippi (varies by district) |
| No state guidance | State has not issued AI-specific guidance; decisions left entirely to individual districts | Wyoming, South Dakota, West Virginia |
Disclosure Requirements
A growing number of states now require disclosure when AI is used in educational contexts. These requirements take several forms:
- Student disclosure: Students must indicate when they have used AI tools in completing assignments. At least 12 states have issued guidance or enacted legislation encouraging or requiring this practice.
- Vendor disclosure: Edtech vendors must disclose whether their products use AI or algorithmic processing. Colorado and Connecticut require this as part of data transparency agreements.
- Institutional disclosure: Schools or districts must notify parents when AI tools are used in instruction or assessment. Virginia and California have requirements in this area.
AI Literacy & Curriculum Requirements
Beyond regulating AI as a tool, a growing number of states are mandating that students learn about AI as part of their education. This represents a significant shift from reactive policy (restricting AI use) to proactive policy (building AI literacy).
As of April 2026, at least 15 states have enacted or introduced legislation requiring AI literacy or digital literacy instruction that includes AI components:
- California (AB 2071) — The Digital Wellness Education Act would require digital wellness instruction—including AI literacy and algorithm awareness—in middle and high school health classes.
- Virginia — Mandates computer science instruction including AI concepts for all K–12 students, with standards updated in 2025 to include generative AI.
- North Carolina — Requires integration of AI literacy into existing technology education standards across all grade levels.
- Connecticut — Enacted AI literacy standards as part of its 2025 digital citizenship education requirements.
- Indiana — Added AI awareness modules to its required computer science curriculum beginning in the 2025–2026 school year.
- Oregon — Established a task force to develop AI literacy standards for K–12 integration by 2027.
These mandates typically cover how AI systems work, how algorithms influence content and decisions, how to evaluate AI-generated content critically, and the ethical implications of AI deployment.
Cheating & Academic Integrity Policies
AI-powered writing tools have disrupted academic integrity frameworks across American education. Schools face the dual challenge of defining what constitutes “cheating” when AI tools are readily available and determining how to detect AI-assisted work.
State and District Approaches
No state has enacted a law that explicitly criminalizes student use of AI for schoolwork. Instead, academic integrity policies remain primarily a district-level concern, with states providing varying levels of guidance:
- Clear state guidance issued: California, New York, Virginia, Oregon, and North Carolina have published detailed frameworks that help districts distinguish between prohibited AI use (e.g., submitting AI-generated work as one’s own) and permitted AI use (e.g., using AI as a brainstorming or revision tool with disclosure).
- Model policies provided: Texas, Illinois, and Massachusetts have developed model academic integrity policies that districts can adopt or adapt, including AI-specific provisions.
- No state guidance: Approximately 20 states have not issued AI-specific academic integrity guidance, leaving districts to develop policies independently.
AI Detection Tools
The reliability of AI detection tools remains a significant concern. Studies have shown high false-positive rates, particularly for non-native English speakers and students with certain learning disabilities. Several districts have scaled back reliance on detection tools after facing complaints, and the U.S. Department of Education has cautioned against using AI detection as the sole basis for academic integrity determinations.
States that have addressed AI detection in policy guidance include:
- New York — Advises districts that AI detection tools should not be used as the sole evidence of academic dishonesty.
- Oregon — Recommends that districts focus on process-based assessment (drafts, revision histories) rather than detection-based enforcement.
- California — The Department of Education has flagged equity concerns with AI detection tools and recommends multi-factor evaluation.
Teacher Use of AI & Professional Development
Teachers are both users and gatekeepers of AI in education. States are increasingly recognizing that effective AI policy requires investing in teacher preparation.
Professional Development Requirements
A growing number of states now include AI-specific professional development in their teacher training requirements:
- Virginia — Requires all public school teachers to complete AI awareness training as part of their annual professional development, effective 2025–2026.
- North Carolina — Allocated funding for a statewide AI-in-education training program for teachers and administrators.
- California — The State Department of Education is developing AI professional development resources as part of the broader digital wellness initiative tied to AB 2071.
- Massachusetts — Requires districts to provide AI training to teachers before deploying AI-powered tools in classrooms.
- Oregon — Established an AI-in-education fellowship program for teachers to develop best practices and share findings statewide.
Teacher AI Tool Guidelines
States are also setting boundaries on how teachers themselves can use AI. Key concerns include:
- Grading and assessment: Several states have issued guidance cautioning against using AI to make final grading decisions without human review. Virginia and Connecticut explicitly require human oversight of any AI-assisted grading.
- Student communication: Guidelines in California and New York discourage teachers from using generative AI to draft individualized student communications (such as IEP notes or parent conference summaries) without thorough human review, due to accuracy and privacy concerns.
- Lesson planning: Most states permit teachers to use AI tools for lesson planning and content creation, provided the resulting materials are reviewed for accuracy and appropriateness before classroom use.
What to Watch
The K–12 AI policy landscape is evolving rapidly. Here are the developments most likely to reshape the field in the coming months:
- California AB 2071 implementation: If signed, the Digital Wellness Education Act would make California the first state to require AI-inclusive digital wellness instruction in health classes. Its implementation plan, due by January 1, 2028, could set a template for other states. Track it on our AB 2071 explainer.
- Federal student privacy AI updates: The U.S. Department of Education has signaled interest in updating FERPA guidance to address AI processing of student data. Any formal rulemaking would significantly affect how edtech AI vendors operate nationwide.
- UNESCO AI education guidelines: The UNESCO Recommendation on the Ethics of Artificial Intelligence includes provisions on AI in education. As more countries adopt these guidelines, they may influence U.S. state-level policy development and create pressure for federal coordination.
- AI literacy as a graduation requirement: Multiple states are considering making AI literacy a graduation requirement, which would dramatically increase the urgency of curriculum development and teacher training.
For real-time tracking of education-specific AI bills across all 50 states, visit the Education AI Tracker. You may also find these related tools useful:
- Employment AI Tracker — Track AI hiring and workplace laws, many of which intersect with education policy when schools act as employers.
- Facial Recognition Tracker — Monitor biometric surveillance laws that affect school security systems and student monitoring tools.
Sources
- U.S. Department of Education, Student Privacy Policy Office — FERPA Guidance
- Federal Trade Commission, COPPA Rule
- California Legislature, AB 2071 — Digital Wellness Education Act
- National Conference of State Legislatures, Student Data Privacy Legislation
- CoSN (Consortium for School Networking), AI in Education Resource Center
- UNESCO, Recommendation on the Ethics of Artificial Intelligence
This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your situation.