New York's RAISE Act Is Law - AI Labs Have Until 2027
New York's RAISE Act is now on the books, requiring frontier AI developers to publish safety protocols, report incidents within 72 hours, and submit to annual audits by January 2027.

New York's Responsible AI Safety and Education Act officially took effect on March 19, 2026. It's now the country's most demanding frontier AI safety law, and companies like OpenAI, Anthropic, Google, Meta, and Microsoft have until January 1, 2027 to meet its requirements - or face penalties of up to $3 million per violation.
The timing couldn't have been more pointed. One day later, on March 20, the White House released its National AI Legislative Framework calling on Congress to preempt exactly these kinds of state-level laws. New York's law is now in force. The federal government says states shouldn't be passing such laws at all. The legal collision is set.
TL;DR
- New York's RAISE Act took effect March 19, 2026; developer obligations kick in January 1, 2027
- Covers frontier models trained with more than 10^26 FLOPs at a compute cost above $100M, plus companies with annual revenue over $500M
- Requires published safety protocols, 72-hour incident reporting, and annual independent audits
- Penalties were negotiated down from $10M/$30M to $1M/$3M before Governor Hochul signed
- The White House's federal preemption push creates constitutional uncertainty about whether the law survives
What the Law Actually Demands
The RAISE Act isn't a principles document. It lists specific obligations, deadlines, and enforcement mechanisms.
Safety protocols that must go public
Before rolling out a covered frontier model, developers must write a detailed safety and security protocol. The protocol has to identify and describe steps to reduce risks of "critical harm," include cybersecurity controls, and lay out how the model was tested for unreasonable risk. A version of this document - with trade secrets redacted - must be published publicly. The unredacted copy goes to the New York Attorney General and the Division of Homeland Security and Emergency Services.
These protocols aren't one-and-done. The law requires annual reviews accounting for how the model's capabilities have changed and what current best practices look like. Records must be kept for the duration of deployment plus five years.
The 72-hour incident clock
The most operationally demanding requirement is the incident reporting window. When a developer determines - or "reasonably believes" - a safety incident has occurred, it has 72 hours to notify state officials. That's sharply stricter than California's SB-53, which gives developers 15 days.
The phrase "reasonably believes" matters. Developers don't get to wait until an incident is confirmed before the clock starts. Reasonable suspicion alone triggers the obligation.
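The arithmetic of the clock is simple but unforgiving: the window runs in calendar hours from the moment of reasonable belief, weekends included. A minimal sketch - the function name and trigger timestamp are illustrative, not from any official compliance tool:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the RAISE Act's 72-hour reporting clock. The statute starts
# the clock at "reasonable belief," not at confirmation, so the trigger
# timestamp is the moment suspicion arises - not when the incident is verified.
REPORTING_WINDOW = timedelta(hours=72)

def notification_deadline(reasonable_belief_at: datetime) -> datetime:
    """Latest moment state officials can be notified."""
    return reasonable_belief_at + REPORTING_WINDOW

# Example: suspicion logged at 09:00 UTC on a Friday puts the deadline at
# 09:00 UTC the following Monday - the weekend counts.
trigger = datetime(2027, 1, 8, 9, 0, tzinfo=timezone.utc)  # a Friday
print(notification_deadline(trigger))  # 2027-01-11 09:00:00+00:00
```

The contrast with California's 15-day window is stark: a Friday-afternoon incident under SB-53 leaves two full work weeks; under RAISE it leaves one business day.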
DIGIT - a new AI watchdog
Governor Hochul announced the creation of the Office of Digital Innovation, Governance, Integrity and Trust (DIGIT), sitting within the Department of Financial Services. DIGIT will administer developer registration, assess oversight fees, issue regulations, and publish annual safety reports. It's a purpose-built AI oversight office, not a repurposed consumer protection bureau.
Governor Hochul signed the RAISE Act on December 19, 2025, stating New York is "once again leading the nation in setting a strong and sensible standard for frontier AI safety."
Source: flickr.com (Official Governor Kathy Hochul account)
Who Gets Caught in the Net
The law targets "large developers" through two entry paths.
The first is compute-based: any developer that has trained a frontier model using more than 10^26 computational operations and spent more than $100 million on compute. Knowledge-distilled models are also covered if the distillation process costs more than $5 million. The second path is revenue-based: companies with annual revenue exceeding $500 million.
That revenue threshold was added during legislative negotiations to align with California's SB-53. The practical effect is that large cloud providers and platform companies deploying third-party frontier models could fall under the law even if they haven't trained their own. Coverage applies to models "available to New York residents" or launched "in whole or in part" in New York.
State agencies and academic institutions doing non-commercial research are exempt - but that exemption disappears the moment an institution transfers IP rights to a commercial entity.
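Taken together, the coverage test reduces to a short decision rule. The sketch below encodes the thresholds described above; the class and field names are illustrative assumptions, not statutory language, and real coverage analysis would of course need counsel:

```python
from dataclasses import dataclass

# Thresholds as described in the statute; values in raw units.
FLOP_THRESHOLD = 1e26              # training compute, operations
COMPUTE_COST_THRESHOLD = 100e6     # dollars spent on training compute
DISTILLATION_COST_THRESHOLD = 5e6  # dollars spent distilling a covered model
REVENUE_THRESHOLD = 500e6          # annual revenue, dollars

@dataclass
class Developer:
    training_flops: float
    compute_cost_usd: float
    distillation_cost_usd: float
    annual_revenue_usd: float
    exempt_research: bool  # state agency or non-commercial academic work

def is_large_developer(d: Developer) -> bool:
    """Illustrative 'large developer' test under the RAISE Act's two entry paths."""
    if d.exempt_research:
        # Note: the exemption lapses if IP rights transfer to a commercial entity.
        return False
    compute_path = (d.training_flops > FLOP_THRESHOLD
                    and d.compute_cost_usd > COMPUTE_COST_THRESHOLD)
    distilled_path = d.distillation_cost_usd > DISTILLATION_COST_THRESHOLD
    revenue_path = d.annual_revenue_usd > REVENUE_THRESHOLD
    return compute_path or distilled_path or revenue_path
```

Note that the revenue path operates independently of compute: a $600M-revenue platform deploying a third party's frontier model to New York residents trips the test without having trained anything.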
What Was Negotiated Away
The version Governor Hochul signed is far gentler than what the legislature originally passed. The original text carried penalties of $10 million for first violations and $30 million for subsequent ones. The signed version carries $1 million and $3 million respectively.
AI safety advocates described the final text as "considerably watered down." A Carnegie Endowment analysis noted that the reduced penalties made industry acceptance more likely: with smaller fines, large AI companies were "unlikely to lobby aggressively to abolish" the transparency requirements. The penalty reduction was the price of getting Hochul to sign, and the industry paid it willingly.
"By enacting the RAISE Act, New York is once again leading the nation in setting a strong and sensible standard for frontier AI safety, holding the biggest developers accountable for their safety and transparency protocols."
- Governor Kathy Hochul, December 19, 2025
State Senator Andrew Gounardes, who sponsored the bill, said the law goes beyond California's framework in "significant ways." The 72-hour reporting window - versus California's 15-day window - is the clearest example of that ambition.
New York vs. California's SB-53
| Requirement | NY RAISE Act | CA SB-53 |
|---|---|---|
| Compute threshold | 10^26 FLOPs + $100M cost | 10^26 FLOPs |
| Revenue threshold | $500M annual revenue | None |
| Incident reporting window | 72 hours | 15 days |
| Critical harm threshold | 100+ deaths or $1B+ damages | 50+ deaths |
| Oversight body | New DIGIT office at DFS | Existing agencies |
| First violation penalty | $1M | $1M |
| Subsequent violation penalty | $3M | $3M |
| Whistleblower protection | Labor law only | Explicit in statute |
The two laws are close enough in structure that companies complying with RAISE will largely satisfy SB-53. The main operational difference is the incident reporting timeline. A 72-hour window assumes developers have real-time monitoring capable of detecting and escalating incidents fast enough to meet the deadline. Whether current safety tooling can support that cadence at the scale of a deployed frontier model is an open question.
The Federal Collision Course
The same week the RAISE Act took effect, the White House released its National AI Legislative Framework calling on Congress to replace all state AI rules with a single federal standard. We covered that push when it dropped. The administration isn't only asking Congress - Trump's December 2025 executive order established a DOJ AI Litigation Task Force specifically to challenge state AI laws deemed inconsistent with federal goals.
Legal experts have mapped out the likely attack vectors. A Dormant Commerce Clause challenge would argue that requiring developers to manage models differently for New York residents effectively forces a global change in model management - placing one state's rules over national AI infrastructure. A First Amendment challenge would frame the public disclosure requirements as unconstitutional compelled speech. And straightforward federal preemption becomes viable the moment Congress passes anything touching AI safety reporting.
The New York State Capitol in Albany, where the RAISE Act was passed before Hochul's December 2025 signature.
Source: commons.wikimedia.org (CC BY-SA 4.0)
Companies are spending to shape whatever comes next
OpenAI and Anthropic spent a combined $125 million lobbying Congress in 2026, partly to shape what any eventual federal standard looks like. A federal standard favorable to industry would preempt New York's law from above. A successful constitutional challenge would preempt it from below. Developers building compliance programs right now are doing so under genuine legal uncertainty.
The 2026 AI Safety Report documented how hard it is to reliably assess frontier models for critical harm risks - the very assessments RAISE requires developers to conduct and publish. Whether the safety protocols the law demands are technically achievable at required confidence levels is a question the statute doesn't answer.
The DOJ AI Litigation Task Force hasn't filed against New York yet. Developers have until January 2027 to build compliance programs that could be voided by federal courts or overwritten by Congress before the deadline. Those that do build are betting on RAISE's durability. With the DIGIT office standing up, the Attorney General holding exclusive enforcement authority, and no federal AI law yet on the books, New York holds the initiative for now.
Sources:
- Governor Hochul signs RAISE Act - Official press release
- New York Laws Raise the Bar on AI Safety - Nelson Mullins
- NY RAISE Act: What frontier model developers need to know - Jones Walker
- AI state law convergence: NY and California - Carnegie Endowment
- Governor signs sweeping AI safety law - Fisher Phillips
- Landmark AI safety bill signed into law - NY Senate
