Simple summary of AI Safety laws
By Joseph Miller, Matilda, PauseAI UK @ 2026-01-07T01:51
We wrote this briefing for UK politicians, to help them quickly get their heads around the AI safety laws that already exist in the US and EU. We found that it was also clarifying for us and we hope it will be useful for others. These laws are too long and complex to be perfectly distilled without losing some nuances, but this is our best attempt.
The laws
Three jurisdictions have enacted laws addressing extreme dangers from AI:
- EU - AI Act (May 2024)
- California - SB 53 (Sept 2025)
- New York - RAISE Act (Dec 2025)
For advanced general-purpose AI models (language models trained with lots of ‘compute’, like ChatGPT), all three laws have two basic requirements:
- Safety protocols. AI companies are required to develop and follow their own safety and testing framework to mitigate severe risks of AI.[1]
- Reporting requirements. AI companies must report their safety protocols, various details about their models and critical incidents of harm caused by their AIs.
| | EU AI Act | CA SB 53 | NY RAISE |
|---|---|---|---|
| Max penalties | €15M / 3% turnover (GPAI tier) | $1M | $1M–$3M |
| Audits | Internal (details TBC) | Not required | Internal review |
| Critical harm incident threshold | Death, irreversible critical infrastructure disruption, fundamental rights violation, or serious property harm | 50 deaths or $1B in damage; increased risk of the above from the model deceiving the developer; injury caused by theft/escape or loss of control of a model | 100 deaths or $1B in damage; increased risk of the above caused by theft/escape or loss of control of a model |
| Incident reporting deadline | 2–15 days (depending on nature of incident) | 15 days | 72 hours |
SB 53 also includes whistleblower protections for employees of AI companies. The other two acts don't, although the EU already has strong general protections for whistleblowers.
The US laws stop there. The EU AI Act is broader, also governing non-frontier models. It sets:
- Prohibited practices: the use of AI is fully prohibited for social scoring, and partly prohibited for emotion inference and biometrics.
- Requirements for AI in high-risk domains: critical infrastructure, education, etc.
To demonstrate compliance with the EU AI Act more easily, companies can follow the Code of Practice. The Code isn't part of the law itself, but the Act explicitly recognises following it as a way of showing compliance.
Strengths and limitations
- Penalties in the US laws are small. Note that these are administrative/civil penalties for violations of the Acts’ requirements. Damages for critical incidents would come separately via existing liability law (see below).
- Enforcement of these penalties has yet to be tested. While US enforcement rests with the attorneys general, the EU has a new enforcement body, the AI Office.
- Self-certification: Labs write their own protocols and conduct their own evaluations. Independent auditing is absent (US) or undefined (EU).[2]
- Rigid cost or 'compute' thresholds are used to define which models these laws govern, so that only the most advanced AI models are covered.[3] If future advances in AI science allow dangerous AI to be created below these thresholds, the US laws can only be amended by statute, whereas the EU AI Act is more flexible.
- Catastrophic risks must be 'foreseeable' for developers to be held accountable under SB 53 for the misuse of their models. (This does not apply to loss of control.)
- No kill switches are defined in the US laws. There is no mechanism to halt a deployed model that is causing harm.[4]
- No pre-release evaluations by a body able to halt release.
Effect on liability
The laws mostly don't affect liability for harms: beyond the administrative penalties above, judicial remedies remain largely unchanged and no private right of action is created. That said, some of their features give slight, indirect boosts to existing routes to civil and criminal recourse:
- Risk and mitigation assessments and incident reports may provide evidence in civil or criminal cases where negligence or a breach of a duty of care is at issue.
- RAISE closes some legal loopholes that might have let companies escape liability.
Separately from the AI Act, civil recourse for AI-induced harms in the EU is strengthened by the recently updated Product Liability Directive, which recognises software as a ‘product’, but the burden of proving causation puts remedies out of most claimants’ reach.
UK regulatory horizon
- No equivalent legislation. The July 2024 King's Speech announced the government's intent to regulate advanced AI, but no government bill has yet been introduced, and none is expected before late 2026 at the earliest.
- The AI Security Institute (AISI) has tested many models on a voluntary basis[5], but companies can decline[6], results are not disclosed, and companies can reject its findings.
- DSIT has issued five principles that sectoral regulators can apply[7], and has said it may upgrade this voluntary guidance to a statutory duty, but has not yet done so.
- DSIT was tasked with conducting a regulatory gap analysis but progress has been slow.
Recommendations
A basic approach would be to align UK legislation with existing EU/US law:
- Published safety protocols, regular audits, and prompt incident reporting to AISI.[8]
- Deterrent penalties for violations, enforceable by a credible body.
Easy improvements on EU/US law:
- Protect critical infrastructure from dependence on AI, ensuring that kill switches are feasible.
- External assessment of safety protocols by AISI, so that AI companies are not ‘marking their own homework’.
- Require pre-release evaluation of all models by AISI and published ‘safety cases’, with power to delay/block.
- Give AISI statutory footing to (re)set standards via principles-based secondary legislation.[9]
1. Strictly speaking, the EU AI Act does not require a safety and testing framework. However, the Act does require developers to conduct model evaluation and risk assessment, and the Code of Practice suggests a Safety and Security Framework as the operationalisation of this.
2. EU AI Act Arts. 43, 61; SB 53 §11547.6(a); RAISE Act §1101(c).
3. As mentioned earlier, the EU AI Act actually regulates a wide range of models, but compute thresholds are used to designate models as posing a 'systemic risk'. It is these 'systemic risk' models which have the safety protocol and reporting requirements described in the first section.
4. The RAISE Act, as passed by the legislature, enables the attorney general to block a model release via an injunction if it poses an unreasonable risk of critical harm, but this is expected to be removed or reduced in future amendments.
5. UK AI Safety Institute, 'Our First Year', 1 Nov. 2024. https://www.aisi.gov.uk/blog/our-first-year.
6. Following a letter issued by PauseAI UK, TIME confirmed that DeepMind did not give pre-release access to Gemini 2.5 Pro, Aug 2025. https://time.com/7313320/google-deepmind-gemini-ai-safety-pledge/.
7. DSIT, 'A pro-innovation approach to AI regulation', Cm 815, March 2023. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
8. Shaffer, T. S., 'AI incident reporting: Addressing a gap in the UK's regulation of AI', CLTR, June 2024.
9. Ritchie, O., Anderljung, M. and Rachman, T., 'From Turing to Tomorrow: The UK's Approach to AI Regulation', Centre for the Governance of AI, July 2025. https://arxiv.org/abs/2507.03050.