Update on Harvard AI Safety Team and MIT AI Alignment

By Xander123 @ 2022-12-02T06:09 (+71)

We help organize the Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA), and are excited about our groups and the progress we’ve made over the last semester. 

In this post, we’ve tried to think through what worked (and what didn’t!) for HAIST and MAIA, along with more detail on what we’ve done and our plans for the future. We hope this is useful for the many other AI safety groups that exist or may soon exist, as well as for others thinking about how best to build community and excitement around working to reduce risks from advanced AI.

Important things that worked:

Important things we got wrong:

If you’re interested in supporting the alignment community in our area, the Cambridge Boston Alignment Initiative is currently hiring.

What we’ve been doing

HAIST and MAIA are concluding a 3-month period in which we grew from a single group of about 15 Harvard and MIT students reading AI alignment papers together once a week into two large student organizations that:

What worked

Communication & Outreach Strategy

Operations

Pedagogy

Once we’ve incorporated a final round of participant feedback, we’ll release our final adaptation of the AGISF curriculum, structured as nine weekly two-hour meetings with various minor curricular substitutions.

Mistakes/Areas for Improvement

Next Steps/Future Plans

At this stage, we’re most focused on addressing mistakes and opportunities for improvement on existing programming (see above). Concretely, some of our near-term top priorities are:

How You Can Get Involved

Zoe Williams @ 2022-12-05T22:48 (+10)

Post summary (feel free to suggest edits!):
Reflections from an organizer of the student organizations Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA).

Top things that worked:

Top things that didn’t work:

In future, they plan to set up an office space for MAIA, share infrastructure and resources with other university alignment groups, and improve programming for already-engaged students (including opportunities over the winter and summer breaks).

They’re looking for mentors for junior researchers/students, researchers to visit during retreats or host Q&As, feedback, and applicants to their January ML bootcamp or to roles in the Cambridge Boston Alignment Initiative.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Xander Davies @ 2022-12-06T07:34 (+1)

Thanks for this! Want to note that this was co-authored by 7 other people (the names weren't transferred when it was crossposted from LW).

Zoe Williams @ 2022-12-06T09:23 (+2)

Good to know, cheers - I'll update the summary posts and emails to include all authors.