RP’s AI Governance & Strategy team - June 2023 interim overview

By MichaelA🔸 @ 2023-06-22T13:45 (+68)

Hi! I co-lead Rethink Priorities’ AI Governance & Strategy (AIGS) team. At the suggestion of Ben West, I’m providing an update on our team. 

Caveats:

Comments and DMs are welcome, though I can’t guarantee a rapid or detailed reply. 

Summary 

Who we are 

Rethink Priorities’ AI Governance & Strategy team works to reduce catastrophic risks related to development & deployment of AI systems. We do this by producing research that grounds concrete recommendations in strategic considerations, and by strengthening coordination and talent pipelines across the AI governance field.

Our four workstreams 

We recently narrowed down to four focus areas, each of which has a 1-3 person subteam working on it. Below we summarize these workstreams and link to docs that provide further information on each (e.g., about ongoing projects, public outputs, and stakeholders and paths to impact). 

Most of these workstreams essentially only started in Q2 2023. Their strategies may change considerably, and we may drop, modify, or add workstreams in future.

We previously also worked on projects outside of those focus areas, some of which are still wrapping up. See here for elaboration.

Some of our ongoing or completed work 

Note: This isn’t comprehensive, and in particular excludes nonpublic work. If you’re reading this after June 2023, please see the documents linked to in the above section for updated info on our completed or ongoing projects. 

Compute governance

Ongoing:

China

Ongoing:

Outputs:

Lab governance

Ongoing:

Strategy & foresight

Ongoing:

Outputs:

US regulation 

Ongoing:

Other

Media appearances:

How we can help you or you can help us 

Feel free to:

Please let us know (e.g. via emailing one of us at firstname@rethinkpriorities.org) if:

Appendix: Some lessons learned and recent updates

Here I feel especially inclined to remind readers that, due to time constraints, this post was written quickly, omits most of our reasoning, and may not reflect the views of all members of the team. 

A huge amount has happened in the AI and AI governance spaces since October 2022. Additionally, our team has learned a lot since starting out. Below I summarize some of the things that I personally consider lessons learned and recent updates for our team (with little elaboration, justification, or nuance). 

Note that, even if these points are accurate for our team, they may not apply to other people or teams, depending on whether their beliefs, actions, skills, etc. are relevantly similar to how our team was until recently. 

Regarding what we work on and consider important, it seems that, relative to 2022, our team should: 

Regarding how we work, it seems that, relative to 2022, our team should: 

Note that most of those are just small/moderate shifts (e.g., we still want several team members to focus on lab governance), and we may later shift in opposite directions (e.g., we may increase our focus on strategic questions after we gain more expertise or if in future there are fewer policy windows open). 

Acknowledgements

This is a blog post from Rethink Priorities, a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The primary author is Michael Aird, though some parts were contributed by Ashwin Acharya, Oliver Guest, Onni Aarne, Shaun Ee, and Zoe Williams. Thanks to them and all other AIGS team members, Peter Wildeford, Sarina Wong, and other RP staff for helpful conversations and feedback. 

If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.


Guy Raveh @ 2023-06-22T21:45 (+8)

Thanks for the update! At least from what's described in the post, the team's research seems to have a higher impact potential than most of the AI safety field.

In particular I'm excited about: