AI Safety University Organizing: Early Takeaways from Thirteen Groups

By Agustín Covarrubias 🔸, Steven Veld 💡 @ 2024-10-02T14:39 (+45)

TL;DR

Introduction

Within the last few years, the number of AI safety university groups has grown from nearly zero to ~70. Many of these groups have spun out of existing Effective Altruism university groups but have recently become increasingly independent. While there is a solid body of work on effective organizing techniques for EA groups, the field of AI safety as a whole is still largely pre-paradigmatic – and this is especially true of theories of change for university groups. Despite this, we have seen a number of groups rapidly scale up and achieve exciting successes, in large part thanks to intentional advertising, facilitating, and other organizing tactics. After talking with organizers from more than ten of the world's top AI safety university groups, we've collected a list of these tactics, along with some discussion of the effectiveness of each. Once again, given the pre-paradigmatic nature of the field, we'd like people to read this post not as "here is a list of tried-and-true best practices that all organizers should follow", but rather as a tentative collection of recently-acquired wisdom that can help inform people's priors on what has and hasn't worked in the space of AI safety university organizing.

This post will start by briefly outlining the methodology that we used to source perspectives from various university organizers. Then, it will dive into our key takeaways, along with any uncertainties we still have. We hope that readers will engage with this post by a) actually internalizing/visualizing the techniques we discuss (what would happen if I / a friend implemented this? How would I/they go about it?), and b) reading with a critical lens and providing feedback – as mentioned previously, most university groups have not been around very long, so we are still very much in a stage where we would like to update our beliefs/strategies based on community feedback.

Methodology

To source ideas from university organizers on "what works and what doesn't" in their AI safety groups, we went through three stages of interaction: 1) a virtual meeting among organizers to discuss the most helpful format for knowledge-sharing, 2) soliciting retrospective documents from various universities, and 3) a structured in-person discussion among organizers about specific claims. Across the different stages, we had participation from organizers from the following 13 groups:

        In June of this year, we had a meeting with 13 AI safety organizers and prospective organizers from nine universities, where we discussed the high-level theory of change for AI safety groups, along with the best ways for groups to help each other through knowledge-sharing. While we had initially considered having a group of organizers jointly write an AI safety university groups guide, we ultimately decided that the best way to avoid information cascades was to have each university write up a retrospective document of its own, which would later be synthesized.

        Over the next two months, we collected retrospective documents from various universities. We reached out to organizers from 11 universities and ultimately received documents from seven. Organizers were prompted to write ~2 pages outlining their organization’s theory of change, structure, advertisement techniques, interaction with other organizations, activities, and external support. The full list of guidelines is in Appendix A.

        In August of this year, we ran a structured group discussion at OASIS 3.0, a workshop for AI safety group organizers, in which around 15 organizers participated. Based on our main takeaways from the retrospectives, we made a list of 21 claims about group organizing, and organizers voted for or against each claim, separately voting on whether they wanted to discuss it. We then discussed eight of these claims, particularly the more controversial ones, and summarized the main takeaways from each discussion. The "Key Lessons" section of this post is largely modeled after our discussion at OASIS 3.0; the precise number of organizers who agreed and disagreed with each claim is included in Appendix B.

Early takeaways

The takeaways that follow are our best attempt at summarizing important opinions and discussions that surfaced through the activities we ran; they should only be taken as rough early guesses and not prescriptive recommendations.

Programming

Recruitment

Outreach

Collaborations

Community building

Group strategy

Appendix A: Retrospective document guidelines

We asked AI safety organizers from 11 universities to write retrospective documents on their organizations. For the most part, we asked organizers to write up "whatever they thought to be most relevant" and provided guidelines to spark thinking on some organizing topics. The guidelines are italicized below.

The document can be written in bullet-point format -- it should be readable and dive straight into the meat of things. Some things that you can include in the document:

Appendix B: OASIS poll & discussion

The table below includes the full list of claims included in our poll at OASIS 3.0; for each claim, we recorded the number of organizers who agreed, the number who disagreed, and the number who wanted to discuss the claim (independent of their agreement/disagreement). The claims with the highest counts in the "# want discuss" column were those that we went on to discuss in the second half of the meeting.

| Category | Question | # agree | # disagree | # want discuss |
|---|---|---|---|---|
| Programs | ML upskilling bootcamps (e.g., ARENA, MLAB) require too much commitment for students, and are therefore not worth running in the context of uni groups | 3.5 | 5 | 4 |
| | ML upskilling bootcamps should be run in the form of code sprints (a small number of long, intense sessions) rather than shorter weekly meetings | 6 | 2 | 4 |
| | Research projects are generally good after-intro activities for group participants | 7 | 5 | 6 |
| Talks and panels (with high-quality guests) are effective for… | Start-of-semester recruiting for fellowships | 6 | 4 | 5 |
| | Improving the group's perception on campus | all | none | none |
| | Motivating existing members of the group | 6.5 | 1 | 3 |
| Fellowships | The beginning of the pipeline should not be an intro fellowship but rather a shorter introductory experience. This reduces wasted effort on people who won't engage further | 5 | 2 | 7 |
| | You shouldn't tweak AISF too much for your fellowships | 6.5 | 6 | 5 |
| | Readings should be done in-session | 9 | 2 | 0 |
| | Even introductory fellowships should have hands-on content. For example, participants should write code and not just read papers | 2.5 | 7 | 5 |
| Other student clubs | If there's an AI ethics club on campus, you should try collaborating with them | 8 | 0.5 | 6 |
| | If there is an AI club at your school, you should collaborate with this AI club (say, by proposing an AIS paper in their AI reading group) | all | none | none |
| | If you have socials or coworking sessions, it's fine to co-host them with your school's EA club | 3.5 | 9 | 7 |
| Target audience | When selecting participants for fellowships, research projects, etc., you should select for talent over motivation (e.g., you should not accept someone to a technical fellowship who is highly motivated to learn but has no CS or ML experience) | 3 | 9 | all |
| | You should be selective (say, accept < 50%) for your intro fellowships | 9 | 2 | 5 |
| | You should be selective in your membership, offer exclusive member-only events and programs, and kick members out when they have been inactive | all | none | 1 |
| Advertising | Advertising should explicitly mention x-risk/catastrophic concerns | 9 | 0.5 | 3 |
| | You should be as truthful as possible in your outreach, including being ~doomy | 3.5 | 8 | 8 |
| Group culture | Socials should be run pretty frequently (every one or two weeks) | all | none | none |
| Reading groups | People should be able to propose papers to read (beyond just voting on them) | ? | 3 | 1 |
| Faculty | Most groups are not doing enough outreach to faculty members | 8 | 3 | 7 |

sammyboiz @ 2024-10-02T18:04 (+10)

WOW this is incredible. As an EA uni group organizer running a fellowship, this is insanely helpful for me. Forwarding this to my AIS uni group organizer friend!

Luca De Leo @ 2024-10-03T12:01 (+8)

I recently helped start an AI Safety group (together with @Eitan) at the University of Buenos Aires, so this is incredibly useful—thank you!

A few observations from our experience: