AISN #57: The RAISE Act

By Center for AI Safety, Corin Katzke, Dan H @ 2025-06-17T17:38 (+12)

This is a linkpost to https://newsletter.safe.ai/p/ai-safety-newsletter-57-the-raise

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: The New York Legislature passes an act regulating frontier AI—but it may not be signed into law for some time.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.

The RAISE Act

New York may soon become the first state to regulate frontier AI systems. On June 12, the state’s legislature passed the Responsible AI Safety and Education (RAISE) Act. If New York Governor Kathy Hochul signs it into law, the RAISE Act will be the most significant state AI legislation in the U.S.

New York’s RAISE Act imposes four guardrails on frontier labs: developers must publish a safety plan, hold back unreasonably risky models, disclose major incidents, and face penalties for non-compliance.

The RAISE Act only regulates the largest developers. Mirroring California’s SB 1047—vetoed by Governor Gavin Newsom in 2024—the Act covers any model costing at least $100 million in compute.

Obligations fall on developers that have trained at least one frontier model and spent a cumulative $100 million on such training—and on anyone who later buys the model’s full intellectual-property rights. Accredited colleges are exempt when conducting academic research, but commercial spin-outs are not. These carve-outs focus the legal burden on the handful of firms capable of causing catastrophic harm.

While New York acts, the U.S. Congress weighs a federal moratorium on state AI regulation. The “One Big Beautiful Bill Act,” the budget reconciliation package the U.S. House of Representatives approved on May 22, contained a 10‑year federal moratorium on “any law or regulation” that “restricts, governs or conditions” the design, deployment, or use of AI systems.

The moratorium originally seemed likely to run afoul of the Senate’s Byrd Rule, which prohibits extraneous policy provisions from being included in budget reconciliation bills. The Senate Commerce Committee, chaired by Senator Ted Cruz, recently revised the moratorium to make compliance a prerequisite for states to receive billions in federal broadband expansion funds, a change that could allow it to survive the Byrd Rule.

However, the proposed moratorium has drawn criticism from some Republican lawmakers—including the House Freedom Caucus—who may be crucial to its survival. A recent poll found the proposal to be unpopular with the party’s base: 50 percent of Republican voters said they opposed the moratorium, compared to 30 percent who supported it. Last week, a bipartisan group of 260 state legislators also wrote a letter to Congress opposing the moratorium.

The RAISE Act isn’t law yet. Although both chambers have passed the bill, they have not yet delivered it to Governor Kathy Hochul—a step lawmakers can take at any point during 2025.

A diagram depicting the bill’s current status. Source.

Once the bill is finally sent, Hochul will have up to 30 days to sign it, veto it, or negotiate “chapter amendments,” the back-and-forth revisions governors often use to tweak language before giving final approval. Until that clock starts, the measure sits in limbo, and its ultimate shape—possibly even its survival—remains an open question.

In Other News

Government

Industry

Civil Society

See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.




Astelle Kay @ 2025-06-19T04:32 (+1)

This is incredibly exciting. The RAISE Act feels like a much-needed shift toward real, structural accountability, and I really hope it sets a precedent for other states to follow.

I especially appreciate how it focuses on frontier developers and doesn’t overburden smaller orgs or academic researchers. That kind of targeting feels unusually thoughtful for policy this early in the curve. The safety plan + incident reporting combo could create some helpful culture shifts, too, even beyond enforcement.

I'm hoping this clears the last few steps with Hochul. Thank you for your insights!