What It Does
New York S 8828 establishes transparency and safety requirements for developers of frontier artificial intelligence models. The law mandates that AI model developers provide transparency reports detailing model capabilities, known risks, safety testing results, and mitigation measures. It also creates a new state office responsible for overseeing AI model development and enforcing compliance. Signed as Chapter 96 on March 27, 2026, S 8828 takes effect January 1, 2027. This is the first enacted state law in the U.S. specifically targeting the developers of large-scale frontier AI models.
Who It Applies To
S 8828 targets developers of frontier AI models — the companies building the largest and most capable general-purpose AI systems. While the precise thresholds depend on the statute's definitions and the oversight office's implementation, the law is understood to apply to companies that develop AI models above a defined compute or capability threshold. This includes major AI labs, large technology companies with foundation model programs, and potentially mid-tier model developers as thresholds are refined by the oversight office. Companies that merely deploy or fine-tune existing models are not the primary targets, but developers who train models from scratch for commercial distribution in New York are covered.
Key Provisions
- Transparency reporting: Developers must publish or file reports detailing model capabilities, training data characteristics, known limitations, and safety evaluations.
- Oversight office: Establishes a state office with authority to set reporting requirements, conduct investigations, and enforce compliance.
- Safety testing disclosure: Developers must disclose the results of safety and red-team evaluations conducted before and after model release.
- Risk documentation: Known risks, dual-use capabilities, and mitigation strategies must be documented and made available to the oversight office.
- Enforcement authority: The oversight office has the power to investigate noncompliance and refer cases for enforcement action.
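To make the disclosure categories above concrete, the sketch below organizes them into a structured filing. This schema is entirely hypothetical — the statute and the oversight office will define the actual required format and fields — but it illustrates one way a developer might internally model the capabilities, risks, mitigations, and safety-evaluation disclosures the law calls for.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical transparency-filing schema. Field names are illustrative
# placeholders, not the statutory disclosure format.
@dataclass
class SafetyEvaluation:
    name: str        # e.g. an internal red-team exercise or benchmark
    phase: str       # "pre-release" or "post-release"
    summary: str     # result summary suitable for disclosure

@dataclass
class TransparencyReport:
    developer: str
    model_name: str
    capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    training_data_characteristics: str = ""
    known_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    safety_evaluations: list[SafetyEvaluation] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize the filing for publication or submission.
        return json.dumps(asdict(self), indent=2)

report = TransparencyReport(
    developer="Example Labs",
    model_name="example-model-1",
    capabilities=["text generation"],
    known_risks=["generation of misleading content"],
    mitigations=["output filtering", "usage policy enforcement"],
    safety_evaluations=[
        SafetyEvaluation("internal red-team", "pre-release", "no critical findings")
    ],
)
print(report.to_json())
```

A structured internal format like this makes it easier to regenerate filings as the oversight office's actual reporting requirements take shape.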
Compliance Checklist
If you develop frontier AI models that are available in New York, you should do the following before January 1, 2027:
- Assess whether your models meet the threshold for frontier AI classification under the law’s definitions.
- Build transparency reporting infrastructure capable of producing the required disclosures about model capabilities, risks, and safety evaluations.
- Document safety testing processes including pre-release red-teaming, adversarial testing, and ongoing monitoring protocols.
- Establish a compliance liaison to coordinate with New York’s AI oversight office once it becomes operational.
- Review existing public disclosures (model cards, system cards) to identify gaps against the law’s requirements.
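The last step in the checklist above — reviewing existing model cards against the law's requirements — lends itself to a simple automated gap analysis. The required-topic list below is a hypothetical placeholder; substitute whatever disclosure categories the oversight office ultimately specifies.

```python
# Hypothetical gap analysis: compare the sections present in an existing
# model card against a placeholder list of required disclosure topics.
# REQUIRED_TOPICS is illustrative, not the statutory list.

REQUIRED_TOPICS = {
    "capabilities",
    "known_risks",
    "safety_evaluations",
    "training_data_characteristics",
    "mitigations",
}

def find_gaps(model_card_sections: set[str]) -> set[str]:
    """Return the required topics missing from an existing model card."""
    return REQUIRED_TOPICS - model_card_sections

# Example: a model card that already covers capabilities and known risks.
existing = {"capabilities", "known_risks", "intended_use"}
print(sorted(find_gaps(existing)))
# → ['mitigations', 'safety_evaluations', 'training_data_characteristics']
```

Running this kind of check across all published model and system cards gives a quick inventory of where new documentation work is needed.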
How This Compares
S 8828 is the most direct state-level regulation of frontier AI model developers in the U.S. California's SB 1047, a comparable frontier-model bill, was vetoed by Governor Newsom in 2024, leaving New York as the first state to enact frontier model legislation. The law also differs from Colorado's SB 24-205, which regulates developers and deployers of high-risk AI systems rather than frontier model developers specifically. S 8828 likewise contrasts with the EU AI Act's general-purpose AI model provisions, though both take a transparency-reporting approach.
Effective Date Countdown
Compliance deadline: January 1, 2027. As of April 2026, frontier AI model developers have approximately nine months to prepare. The oversight office is expected to publish guidance in the interim, but developers should not wait for that guidance to begin building compliance infrastructure.
Read the Bill
Author: AI Laws by State. This is not legal advice. For compliance questions specific to your operation, consult an attorney licensed in New York.