EU policymakers reach an agreement on the AI Act

By tlevin @ 2023-12-15T06:03 (+109)

On December 8, EU policymakers announced an agreement on the AI Act. This post aims to briefly explain the context and implications for the governance of global catastrophic risks from advanced AI. My portfolio on Open Philanthropy’s AI Governance and Policy Team includes EU matters (among other jurisdictions), but I am not an expert on EU policy or politics and could be getting some things in this post wrong, so please feel free to correct it or add more context or opinions in the comments!

If you have useful skills, networks, or other resources that you might like to direct toward an impactful implementation of the AI Act, you can indicate your interest in doing so via this short Google form.

Context

The AI Act was first introduced in April 2021, and for the last ~8 months, it has been in the “trilogue” stage. The EU Commission, which is roughly analogous to the executive branch (White House or 10 Downing Street), drafted the bill; then, the European Parliament (sort of like the U.S. House of Representatives, with seats assigned to each country by a population-based formula) and the Council of the EU (sort of like the pre-17th-Amendment U.S. Senate, with each country's government getting one vote in a complicated voting system)[1] each submitted proposed revisions; then, representatives from each body negotiated to land on a final version (analogous to conference committees in the US Congress).

In my understanding, AI policy folks who are worried about catastrophic risk were hoping that the Act would include regulations on all sufficiently capable GPAI (general-purpose AI) systems, with no exemptions for open-source models (at least for the most important regulations from a safety perspective), and ideally additional restrictions on “very capable foundation models” (those above a certain compute threshold), an idea floated by some negotiators in October. As for the substance of the hoped-for regulations, my sense is that the main hope was that the legislation would give the newly formed AI Office substantial leeway to require things like threat assessments/dangerous-capabilities evaluations and cybersecurity measures, with a lot of the details to be figured out later by that Office and by standard-setting bodies like CEN-CENELEC’s JTC-21.
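
To make the compute-threshold idea concrete, here is a minimal illustrative sketch. The 10^25 training-FLOP figure reflects later reporting on the final compromise, and the function and constant names are my own invention, not anything from the Act's text; treat the whole snippet as an assumption-laden toy.

```python
# Hypothetical sketch: tiering a general-purpose AI model by training compute.
# The 1e25 FLOP threshold and all names here are illustrative assumptions,
# not language from the AI Act itself.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def classify_gpai_model(training_compute_flop: float) -> str:
    """Return a rough regulatory tier based on total training compute."""
    if training_compute_flop >= SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "GPAI with systemic risk (additional obligations)"
    return "GPAI (baseline transparency obligations)"

if __name__ == "__main__":
    print(classify_gpai_model(3e25))  # above the assumed threshold
    print(classify_gpai_model(5e23))  # below the assumed threshold
```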

GPAI regulations appeared in danger of being excluded after Mistral, Aleph Alpha, and the national governments of France, Germany, and Italy objected to what they perceived as regulatory overreach and threatened to derail the Act in November. There was also some reporting that the Act would totally exempt open-source models from regulation.

What’s in it?

Sabrina Küspert, an AI policy expert working at the EU Commission, summarized the results on some of these questions in a thread on X.

The Commission’s blog post says: “For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing. These new obligations will be operationalised through codes of practices developed by industry, the scientific community, civil society and other stakeholders together with the Commission.” (I’m guessing this means JTC-21 and similar, but if people with more European context can better read the tea leaves, let me know.)

Parliament’s announcement notes that GPAI systems and models will “have to adhere to transparency requirements” including “technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.” I think these transparency requirements are the main opportunity to develop strong requirements for evaluations.

Enforcement will be up to both national regulators and the new European AI Office, which, as the Commission post notes, will be “the first body globally that enforces binding rules on AI and is therefore expected to become an international reference point.” Companies that fail to comply with these rules face fines up to “35 million euro or 7% of global revenue,” whichever is higher. (Not sure whether this would mean 7% of e.g. Alphabet’s global revenue or DeepMind’s).
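
For concreteness, here is a minimal sketch of the "whichever is higher" fine formula. The revenue figure in the example is a made-up placeholder, and which entity's revenue counts (parent company vs. subsidiary) is exactly the open question above.

```python
# Hedged sketch of the penalty formula: the higher of a fixed floor or a
# percentage of global annual revenue. The example revenue is a placeholder.

FIXED_FLOOR_EUR = 35_000_000
REVENUE_SHARE = 0.07  # 7% of global revenue

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Return the maximum fine: the greater of the floor or 7% of revenue."""
    return max(FIXED_FLOOR_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# With a hypothetical €10 billion in revenue, the 7% term dominates the floor.
print(f"{max_fine_eur(10e9):,.0f}")  # 700,000,000
```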

The Act also does what some people have called the obvious thing of requiring that AI-generated content be labeled as such in a machine-readable format, with fines for noncompliance. (Seems easy to do for video/audio, much harder for text, but at least requiring that AI chatbots notify users that they’re AI systems rather than humans would be a first step.)
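
As a toy illustration of what "machine-readable" labeling could mean, here is a minimal sketch that attaches a provenance tag to generated content as simple JSON metadata. The field names and schema are invented for illustration; real approaches (content-credential standards, watermarking, etc.) differ substantially and the Act does not specify this format.

```python
import json
from datetime import datetime, timezone

def label_ai_generated(content: str, model_name: str) -> dict:
    """Wrap generated content with a simple machine-readable provenance tag.

    The schema here is purely illustrative and not drawn from the Act or
    any existing standard.
    """
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_ai_generated("Hello! I am an AI assistant.", "example-model-v1")
print(json.dumps(record, indent=2))
```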

This post focuses on the parts of the Act most relevant to frontier models and catastrophic risk, but most of the Act is focused on the application layer, including outright bans on certain uses of AI.

The Act will start being enforced at the end of “a transitional period,” which the NYT says will be 12-24 months. In the meantime, the Commission is launching the cleverly titled “AI Pact,” which seeks voluntary commitments to start implementing the Act’s requirements before the legal deadline. EU Commission president Ursula von der Leyen says “around 100 companies have already expressed their interest to join” the Pact.

How big of a deal is this?

A few takeaways for me so far:

Making the AI Act effective for catastrophic risk reduction

The Act appears to stake out a high-level approach to Europe’s AI policy, but it will very likely task the AI Office, standard-setting organizations (SSOs) like JTC-21, and EU member states with fleshing out much of the detail and implementing the policies. Depending on how the standardization and implementation phases unfold over the next few years, the Act could wind up strongly incentivizing AI developers to act more safely, or it could wind up insufficiently detailed, captured by industry, bogged down in legal challenges, or so onerous that AI companies withdraw from the EU market and ignore the law.

To achieve outcomes more like the former, people who would like to reduce global catastrophic risks from future AI systems could consider doing the following:

And once again: if you have useful skills, networks, or other resources that you might like to direct toward an impactful implementation of the AI Act, you can indicate your interest in doing so via this short Google form.

  1. Thanks to the commenter Sherrinford for correcting me on these.


Koen Holtman @ 2023-12-15T21:07 (+8)

Thanks for sharing! Speaking as a European I think this is a pretty good summary of the latest state of events.

I currently expect the full text of the Act as agreed on in the trilogue to be published by the EU some time in January or February.

c.trout @ 2023-12-15T21:46 (+6)

Another worrisome and unclear reported exemption is the one for national security.

Larks @ 2023-12-15T15:48 (+6)

Thanks for sharing!

Küspert also says “no exemptions,” which I interpret to mean “no exemptions to the systemic-risk rules for open-source systems.” Other reporting suggests there are wide exemptions for open-source models, but the requirements kick back in if the models pose systemic risks. However, Yann LeCun is celebrating based on this part of a Washington Post article: "The legislation ultimately included restrictions for foundation models but gave broad exemptions to “open-source models,” which are developed using code that’s freely available for developers to alter for their own products and tools. The move could benefit open-source AI companies in Europe that lobbied against the law, including France’s Mistral and Germany’s Aleph Alpha, as well as Meta, which released the open-source model LLaMA." So it’s currently unclear to me where the Act lands on this question, and I think a close review by someone with legal or deep EU policy expertise might help illuminate it.

It's a shame this is so unclear. To me this is basically the most important part of the act, and intuitively seems like it makes the difference between 'the law is net bad because it gives only the appearance of safety while adding a lot of regulatory overhead' and 'the law is good'.

Denis @ 2023-12-21T12:26 (+2)

Great summary. 

I was pleasantly surprised at how good this turned out to be, despite it having to be re-evaluated when ChatGPT came along and despite the objections of major governments.

The EU Commission is a fantastic organisation. Yes, massive levels of bureaucracy, but the people there tend to be extremely smart and very committed to doing what's best. Just being accepted to work in the Commission requires finishing in the top 1% or less of a very tough evaluation process and then passing a series of in-person evaluations. 

So normally when they produce a proposal, it has been thought through very carefully. 

Of course it's not perfect, and I especially appreciate that the post ends with tangible ideas for how to help make it more impactful. 

Obviously this is an area where we'll need to keep working all the time as AI evolves and as regulations elsewhere evolve. But it is good to see someone taking the lead and actually putting something tangible in place that seems to cover 80/20 of what's needed. Maybe this can be the starting point for even better US legislation?

constructive @ 2023-12-31T09:05 (+1)

At the very least, in my view, the picture has changed in an EU-favoring direction in the last year (despite lots of progress in US AI policy), and this should prompt a re-evaluation of the conventional wisdom (in my understanding) that the US has enough leverage over AI development such that policy careers in DC are more impactful even for Europeans.

Interesting! I don't quite understand what updated you. To me, it looks like, given the EU AI Act is mostly determined at this stage, there is less leverage in the EU, not more. Meanwhile, the approach the US takes to AI regulation remains uncertain, indicating many more opportunities for impact.

tlevin @ 2023-12-31T23:43 (+2)

The text of the Act is mostly determined, but it delegates tons of very important detail to standard-setting organizations and implementation bodies at the member-state level.

constructive @ 2024-01-05T10:29 (+1)

And your update is that this process will be more globally impactful than you initially expected? Would be curious to learn why.

tlevin @ 2024-01-11T05:41 (+4)

The shape of my updates has been something like:

Q2 2023: Woah, looks like the AI Act might have a lot more stuff aimed at the future AI systems I'm most worried about than I thought! Making that go well now seems a lot more important than it did when it looked like it would mostly be focused on pre-foundation model AI. I hope this passes!

Q3 2023: As I learn more about this, it seems like a lot of the value is going to come from the implementation process, since it seems like the same text in the actual Act could wind up either specifically requiring things that could meaningfully reduce the risks or just imposing a lot of costs at a lot of points in the process without actually aiming at the most important parts, based on how the standard-setting orgs and member states operationalize it. But still, for that to happen at all it needs to pass and not have the general-purpose AI stuff removed.

November 2023: Oh no, France and Germany want to take out the stuff I was excited about in Q2. Maybe this will not be very impactful after all.

December 2023: Oh good, actually it seems like they've figured out a way to focus the costs France/Germany were worried about on the very most dangerous AIs and this will wind up being more like what I was hoping for pre-November, and now highly likely to pass!

SummaryBot @ 2023-12-15T14:43 (+1)

Executive summary: The EU has reached an agreement on regulations for AI systems, including requirements for general-purpose AI systems that could reduce risks.

Key points:

  1. The EU's AI Act will regulate general-purpose AI systems and "very capable" models.
  2. It requires threat assessments, model evaluations, transparency, and addressing systemic risks.
  3. There are questions around exemptions for open-source models.
  4. The Act could influence companies due to the size of the EU market.
  5. Effective implementation requires expertise in standards bodies and regulators.
  6. More policy research could inform catastrophic risk reduction.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

tlevin @ 2023-12-15T17:13 (+3)

It uses the language of "models that present systemic risks" rather than "very capable," but otherwise, a decent summary, bot.