EU policymakers reach an agreement on the AI Act
By tlevin @ 2023-12-15T06:03 (+109)
This is a crosspost, probably from LessWrong. Try viewing it there.
Koen Holtman @ 2023-12-15T21:07 (+8)
Thanks for sharing! Speaking as a European, I think this is a pretty good summary of the latest state of events.
I currently expect the full text of the Act as agreed on in the trilogue to be published by the EU some time in January or February.
c.trout @ 2023-12-15T21:46 (+6)
Another worrisome and unclear reported exemption is for national security.
Larks @ 2023-12-15T15:48 (+6)
Thanks for sharing!
Küspert also says “no exemptions,” which I interpret to mean “no exemptions to the systemic-risk rules for open-source systems.” Other reporting suggests there are wide exemptions for open-source models, but the requirements kick back in if the models pose systemic risks. However, Yann LeCun is celebrating based on this part of a Washington Post article: "The legislation ultimately included restrictions for foundation models but gave broad exemptions to “open-source models,” which are developed using code that’s freely available for developers to alter for their own products and tools. The move could benefit open-source AI companies in Europe that lobbied against the law, including France’s Mistral and Germany’s Aleph Alpha, as well as Meta, which released the open-source model LLaMA."

So it’s currently unclear to me where the Act lands on this question, and I think a close review by someone with legal or deep EU policy expertise might help illuminate it.
It's a shame this is so unclear. To me this is basically the most important part of the act, and intuitively it seems like it makes the difference between 'the law is net bad because it gives only the appearance of safety while adding a lot of regulatory overhead' and 'the law is good'.
Akash @ 2023-12-16T02:15 (+4)
Thanks for this overview, Trevor. I expect it'll be helpful. I also agree with your recommendations for people to consider working at standard-setting organizations and other relevant EU offices.
One perspective that I see missing from this post is what I'll call the advocacy/comms/politics perspective. Some examples of this with the EU AI Act:
- Foundation models were going to be included in the EU AI Act, until France and Germany (with lobbying pressure from Mistral and Aleph Alpha) changed their position.
- This initiated a political/comms battle between those who wanted to exclude foundation models (led by France and Germany) and those who wanted to keep them in (led by Spain).
- This political fight rallied lots of notable figures, including folks like Gary Marcus and Max Tegmark, to publicly and privately fight to keep foundation models in the act.
- There were open letters, op-eds, and certainly many private attempts at advocacy.
- There were attempts to influence public opinion, pieces that accused key lobbyists of lying, and a lot of discourse on Twitter.
It's difficult to know the impact of any given public comms campaign, but it seems quite plausible to me that many readers would have more marginal impact by focusing on advocacy/comms than focusing on research/policy development.
More broadly, I worry that many segments of the AI governance/policy community might be neglecting to think seriously about what ambitious comms/advocacy could look like in the space.
I'll note that I might be particularly primed to bring this up now that you work for Open Philanthropy. I think many folks (rightfully) critique Open Phil for being too wary of advocacy, campaigns, lobbying, and other policymaker-focused activities. I'm guessing that Open Phil has played an important role in shaping both the financial and cultural incentives that (in my view) lead to an overinvestment in research and an underinvestment in policy/advocacy/comms.
(I'll acknowledge these critiques are pretty high-level and I don't claim that this comment provides compelling evidence for them. Also, you only recently joined Open Phil, so I'm of course not trying to suggest that you created this culture, though I guess now that you work there you might have some opportunities to change it).
I'll now briefly try to do a Very Hard thing, which is like "put myself in Trevor's shoes and ask what I actually want him to do." One concrete recommendation I have is something like "try to spend at least 5 minutes thinking about ways in which you or others around you might be embedded in a culture that has blind spots to some of the comms/advocacy stuff." Another is "make a list of people you read actively or talked to when writing this post. Then ask if there were any other people/orgs you could've reached out to, particularly those that might focus more on comms+advocacy." (Also, to be clear, you might do both of these things and conclude "yea, actually I think my approach was very solid and I just had Good Reasons for writing the post the way I did.")
I'll stop here since this comment is getting long, but I'd be happy to chat further about this stuff. Thanks again for writing the post and kudos to OP for any of the work they supported/will support that ends up increasing P(good EU AI Act goes through & gets implemented).
tlevin @ 2023-12-20T23:16 (+2)
(Cross-posting from LW)
Thanks for these thoughts! I agree that advocacy and communications are an important part of the story here, and I'm glad you added some detail on that with your comment. I’m also sympathetic to the claim that serious thought about “ambitious comms/advocacy” is especially neglected within the community, though I think it’s far from clear that the effort that went into the policy research that identified these solutions, or the work on the ground in Brussels, should have been shifted at the margin to the kinds of public communications you mention.
I also think Open Phil’s strategy is pretty bullish on supporting comms and advocacy work, but it has taken us a while to acquire the staff capacity to gain context on those opportunities and begin funding them, and perhaps there are specific opportunities that you're more excited about than we are.
For what it’s worth, I didn’t seek significant outside input while writing this post and think that's fine (given the alternative of writing it quickly, posting it here, disclaiming my non-expertise, and getting additional perspectives and context from commenters like yourself). However, I have spoken with about a dozen people working on AI policy in Europe over the last couple months (including one of the people whose public comms efforts are linked in your comment) and would love to chat with more people with experience doing policy/politics/comms work in the EU.
We could definitely use more help thinking about this stuff, and I encourage readers who are interested in contributing to OP’s thinking on advocacy and comms to do any of the following:
- Write up these critiques (we do read the forums!);
- Join our team (our latest hiring round specifically mentioned US policy advocacy as a specialization we'd be excited about, but people with advocacy/politics/comms backgrounds more generally could also be very useful, and while the round is now closed, we may still review general applications); and/or
- Introduce yourself via the form mentioned in this post.
Akash @ 2023-12-21T14:39 (+3)
I appreciate the comment, though I think there's a lack of specificity that makes it hard to figure out where we agree/disagree (or more generally what you believe).
If you want to engage further, here are some things I'd be excited to hear from you:
- What are a few specific comms/advocacy opportunities you're excited about//have funded?
- What are a few specific comms/advocacy opportunities you view as net negative//have actively decided not to fund?
- What are a few examples of hypothetical comms/advocacy opportunities you've been excited about?
- What do you think about, e.g., Max Tegmark/FLI, Andrea Miotti/Control AI, The Future Society, the Center for AI Policy, Holly Elmore, PauseAI, and other specific individuals or groups that are engaging in AI comms or advocacy?
I think if you (and others at OP) are interested in receiving more critiques or overall feedback on your approach, one thing that would be helpful is writing up your current models/reasoning on comms/advocacy topics.
In the absence of this, people simply notice that OP doesn't seem to be funding some of the main existing examples of comms/advocacy efforts, but they don't really know why, and they don't really know what kinds of comms/advocacy efforts you'd be excited about.
Denis @ 2023-12-21T12:26 (+2)
Great summary.
I was pleasantly surprised at how good this turned out to be, despite it having to be re-evaluated when ChatGPT came along and despite the objections of major governments.
The EU Commission is a fantastic organisation. Yes, massive levels of bureaucracy, but the people there tend to be extremely smart and very committed to doing what's best. Just being accepted to work in the Commission requires finishing in the top 1% or less of a very tough evaluation process and then passing a series of in-person evaluations.
So normally when they produce a proposal, it has been thought through very carefully.
Of course it's not perfect, and I especially appreciate that the post ends with tangible ideas for how to help make it more impactful.
Obviously this is an area where we'll need to keep working all the time as AI evolves and as regulations elsewhere evolve. But it's good to see someone taking the lead and actually putting something tangible in place that seems to be 80/20 of what's needed. Maybe this can be the starting point for even better US legislation?
constructive @ 2023-12-31T09:05 (+1)
> At the very least, in my view, the picture has changed in an EU-favoring direction in the last year (despite lots of progress in US AI policy), and this should prompt a re-evaluation of the conventional wisdom (in my understanding) that the US has enough leverage over AI development such that policy careers in DC are more impactful even for Europeans.
Interesting! I don't quite understand what updated you. To me, it looks like, given the EU AI Act is mostly determined at this stage, there is less leverage in the EU, not more. Meanwhile, the approach the US takes to AI regulation remains uncertain, indicating many more opportunities for impact.
tlevin @ 2023-12-31T23:43 (+2)
The text of the Act is mostly determined, but it delegates tons of very important detail to standard-setting organizations and implementation bodies at the member-state level.
constructive @ 2024-01-05T10:29 (+1)
And your update is that this process will be more globally impactful than you initially expected? Would be curious to learn why.
tlevin @ 2024-01-11T05:41 (+4)
The shape of my updates has been something like:
Q2 2023: Woah, looks like the AI Act might have a lot more stuff aimed at the future AI systems I'm most worried about than I thought! Making that go well now seems a lot more important than it did when it looked like it would mostly be focused on pre-foundation model AI. I hope this passes!
Q3 2023: As I learn more about this, it seems like a lot of the value is going to come from the implementation process: depending on how the standard-setting orgs and member states operationalize it, the same text in the actual Act could wind up either specifically requiring things that meaningfully reduce the risks, or just imposing costs at many points in the process without actually aiming at the most important parts. But still, for that to happen at all it needs to pass and not have the general-purpose AI stuff removed.
November 2023: Oh no, France and Germany want to take out the stuff I was excited about in Q2. Maybe this will not be very impactful after all.
December 2023: Oh good, actually it seems like they've figured out a way to focus the costs France/Germany were worried about on the very most dangerous AIs and this will wind up being more like what I was hoping for pre-November, and now highly likely to pass!
SummaryBot @ 2023-12-15T14:43 (+1)
Executive summary: The EU has reached an agreement on regulations for AI systems, including requirements for general-purpose AI systems that could reduce risks.
Key points:
- The EU's AI Act will regulate general-purpose AI systems and "very capable" models.
- It requires threat assessments, model evaluations, transparency, and addressing systemic risks.
- There are questions around exemptions for open-source models.
- The Act could influence companies due to the size of the EU market.
- Effective implementation requires expertise in standards bodies and regulators.
- More policy research could inform catastrophic risk reduction.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
tlevin @ 2023-12-15T17:13 (+3)
It uses the language of "models that present systemic risks" rather than "very capable," but otherwise, a decent summary, bot.