AISN #59: EU Publishes General-Purpose AI Code of Practice

By Center for AI Safety, Corin Katzke, Dan H @ 2025-07-15T18:32 (+8)

This is a linkpost to https://aisafety.substack.com/p/ai-safety-newsletter-59-eu-publishes

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.

EU Publishes General-Purpose AI Code of Practice

In June 2024, the EU adopted the AI Act, which remains the world’s most significant law regulating AI systems. The Act bans some uses of AI like social scoring and predictive policing and limits other “high risk” uses such as generating credit scores or evaluating educational outcomes. It also regulates general-purpose AI (GPAI) systems, imposing transparency requirements, copyright protection policies, and safety and security standards for models that pose systemic risk (defined as those trained using ≥10^25 FLOPs).
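
For a rough sense of what that threshold means in practice, here is a minimal sketch (our own illustration, not part of the Act or the Code) that estimates training compute with the widely used 6 × parameters × tokens approximation and checks it against the 10^25 FLOP line. The model sizes are hypothetical.

```python
# Rough sketch: estimate training compute with the common 6 * N * D rule of thumb
# (about 6 FLOP per parameter per training token) and compare it against the
# AI Act's systemic-risk threshold. The model sizes below are illustrative
# assumptions, not figures from the Act or the Code.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # AI Act threshold for "systemic risk" GPAI

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate training compute as 6 * parameters * tokens."""
    return 6 * parameters * training_tokens

examples = {
    "hypothetical 7B-parameter model, 2T tokens": (7e9, 2e12),      # ~8.4e22 FLOP
    "hypothetical 400B-parameter model, 15T tokens": (400e9, 15e12),  # ~3.6e25 FLOP
}

for name, (params, tokens) in examples.items():
    flop = estimated_training_flop(params, tokens)
    crosses = flop >= SYSTEMIC_RISK_THRESHOLD_FLOP
    print(f"{name}: ~{flop:.1e} FLOP, crosses threshold: {crosses}")
```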

However, these safety and security standards are ambiguous—for example, the Act requires providers of GPAIs to “assess and mitigate possible systemic risks,” but does not specify how to do so. This ambiguity may leave GPAI developers uncertain whether they are complying with the AI Act, and regulators uncertain whether GPAI developers are implementing adequate safety and security practices.

To address this problem, on July 10th, 2025, the EU published the General-Purpose AI Code of Practice. The Code is a voluntary set of guidelines for complying with the AI Act’s GPAI obligations, which take effect on August 2nd, 2025.

The Code of Practice establishes safety and security requirements for GPAI providers. The Code consists of three chapters—Transparency, Copyright, and Safety and Security. The last chapter, Safety and Security, only applies to the handful of companies whose models cross the Act’s systemic-risk threshold.

The Safety and Security chapter requires GPAI providers to create frameworks outlining how they will identify and mitigate risks throughout a model's lifecycle. These frameworks must follow a structured approach to risk assessment: for each major decision (such as a new model release), providers must identify systemic risks, analyze them, and determine whether they are acceptable before proceeding.

Continuous monitoring, incident reporting timelines, and future-proofing. The Code requires continuous monitoring after models are deployed and sets strict incident reporting timelines; for serious incidents, companies must file initial reports within days. It also acknowledges that current safety methods may prove insufficient as AI advances, so companies can implement alternative approaches if they demonstrate equal or superior safety outcomes.

AI providers will likely comply with the Code. While the Code is technically voluntary, compliance with the EU AI Act is not. Providers are incentivized to reduce their legal uncertainty by complying with the Code, since EU regulators will assume that providers who comply with the Code are also Act-compliant. OpenAI and Mistral have already indicated they intend to comply with the Code.

The Code formalizes some existing industry practices advocated for by parts of the AI safety community, such as publishing safety frameworks (also known as responsible scaling policies) and system cards. Since frontier AI companies are very likely to comply with the Code, securing similar legislation in the US may no longer be a priority for the AI safety community.

Meta Superintelligence Labs

Meta spent $14.3 billion for a 49 percent stake in Scale AI, launching “Meta Superintelligence Labs.” The deal folds every AI group at Meta into one division and installs Scale founder Alexandr Wang, now Meta’s chief AI officer, to lead the company’s superintelligence development efforts.

Meta makes nine-figure pay offers to poach top AI talent. Reuters reported that Meta has offered “up to $100 million” to OpenAI staff, a tactic OpenAI CEO Sam Altman criticized. SemiAnalysis estimates Meta is offering typical leadership packages of around $200 million over four years. For example, Bloomberg reports that Apple’s foundation-models chief Ruoming Pang left for Meta after a package “well north of $200 million.” Other early recruits span OpenAI, DeepMind, and Anthropic.

Meta has created a well-resourced competitor in the superintelligence race. In response to Meta’s hiring efforts, OpenAI, Google, and Anthropic have already raised pay bands, and smaller labs might be priced out of frontier work.

Meta is also raising its compute expenditures. It lifted its 2025 capital-expenditure forecast to $72 billion, and SemiAnalysis describes new, temporary “tent” campuses that can house one-gigawatt GPU clusters.
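
For scale, here is a back-of-the-envelope sketch (our own assumptions, not figures from Meta or SemiAnalysis) of how many accelerators a one-gigawatt campus could power, assuming roughly 1.2 to 1.5 kW of facility power per GPU once cooling and networking overhead are included.

```python
# Back-of-the-envelope sketch: how many GPUs might a one-gigawatt campus power?
# The per-GPU facility power figures below (chip plus cooling, networking, and
# other overhead) are illustrative assumptions, not reported numbers.

CAMPUS_POWER_WATTS = 1e9  # one gigawatt

assumed_power_per_gpu_watts = {
    "optimistic (1.2 kW per GPU all-in)": 1200,
    "conservative (1.5 kW per GPU all-in)": 1500,
}

for label, watts in assumed_power_per_gpu_watts.items():
    gpu_count = CAMPUS_POWER_WATTS / watts
    print(f"{label}: roughly {gpu_count:,.0f} GPUs")
# Under these assumptions, a one-gigawatt campus supports on the order of
# 650,000 to 850,000 accelerators.
```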

In Other News

Government

Industry

Civil Society

See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.