AISN #63: California’s SB-53 Passes the Legislature

By Center for AI Safety, Corin Katzke, Dan H @ 2025-09-24T16:56 (+6)

This is a linkpost to https://newsletter.safe.ai/p/ai-safety-newsletter-63-californias

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: California’s legislature sent SB-53—the ‘Transparency in Frontier Artificial Intelligence Act’—to Governor Newsom’s desk. If he signs it into law, California will become the first US state to regulate catastrophic risk from frontier AI.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

A note from Corin: I’m leaving the AI Safety Newsletter soon to start law school—but if you’d like to hear more from me, I’m planning to continue to write about AI in a new personal newsletter, Conditionals. On a related note, we’re also hiring a writer for the newsletter.


California’s SB-53 Passes the Legislature

SB-53 is the Legislature’s weaker sequel to SB-1047, which Governor Gavin Newsom vetoed last year. After the veto, Newsom convened the Joint California Policy Working Group on AI Frontier Models. The group’s June report recommended transparency, incident reporting, and whistleblower protections as near-term priorities for governing AI systems. SB-53 (the “Transparency in Frontier Artificial Intelligence Act”) is an attempt to codify those recommendations. The California Legislature passed SB-53 on September 17th.

The introduction to SB-53’s text. Source.

Transparency. To track and respond to the risks involved in frontier AI development, governments need frontier developers to disclose the capabilities of their systems and how they assess and mitigate catastrophic risk. The bill defines a “catastrophic risk” as a foreseeable, material risk that a foundation model’s development, storage, use, or deployment will result in death or serious injury to more than 50 people, or more than $1 billion in property damage, arising from a single incident involving a foundation model.
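As a rough sketch (not part of the bill’s text), the definition’s numeric thresholds can be expressed as a simple check. The data structure and field names below are hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """A single incident involving a foundation model (hypothetical fields)."""
    deaths_or_serious_injuries: int  # people killed or seriously injured
    property_damage_usd: float       # property damage in US dollars

def meets_catastrophic_threshold(incident: Incident) -> bool:
    """Encodes SB-53's numeric thresholds for a 'catastrophic risk':
    more than 50 deaths or serious injuries, or more than $1 billion
    in property damage, from a single incident."""
    return (
        incident.deaths_or_serious_injuries > 50
        or incident.property_damage_usd > 1_000_000_000
    )
```

Note that the statutory definition also requires the risk to be foreseeable and material; the sketch captures only the numeric thresholds.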

With these risks in mind, SB-53 requires frontier developers to disclose the capabilities of their systems and to report how they assess and mitigate catastrophic risk.

Incident reporting. Governments need to be alerted to critical safety incidents involving frontier AI systems—such as harms resulting from unauthorized access to model weights or loss of control of an agent—to intervene before they escalate into catastrophic outcomes. SB-53 requires frontier developers to report such incidents to California’s Office of Emergency Services (OES).

The bill’s incident reporting requirements are also designed to accommodate future federal rules. If federal requirements for critical safety incident reporting become equivalent to, or stricter than, those in SB-53, OES can defer to the federal requirements.

Whistleblower protection. California state authorities will need to rely on whistleblowers to report whether frontier AI companies are complying with SB-53’s requirements. Given the industry’s mixed history regarding whistleblowers, the bill protects covered employees from retaliation for reporting safety concerns.

Covered employees can sue frontier developers for violating whistleblower protections, and the Attorney General can enforce the bill’s transparency and incident reporting requirements with civil penalties of up to $1 million per violation.

How we got here, and what happens next. SB-1047 would have required frontier AI developers to implement specific controls to reduce catastrophic risk, such as shutdown controls and prohibitions on releasing unreasonably risky models. Governor Newsom vetoed it under pressure from national Democratic leadership and industry lobbying. Since SB-53 implements only transparency requirements, and follows the recommendations of the Governor’s own working group, it seems more likely to be signed into law. Anthropic has also publicly endorsed the bill.

Governor Newsom has until October 12th to sign SB-53. If he does, SB-53 will be the first significant AI legislation to become law since Senator Ted Cruz pushed (and narrowly failed) to attach a 10-year moratorium on state and local AI enforcement to federal budget legislation. Cruz has since revived the idea in a new proposal, which, if it gains traction, could set up a conflict between California and Washington.


Subscribe to receive future versions.

In Other News

Government

Industry

Civil Society

See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.