Update on Harvard AI Safety Team and MIT AI Alignment

By Xander123 @ 2022-12-02T06:09 (+71)

This is a crosspost from LessWrong; the full post can be viewed there.

Zoe Williams @ 2022-12-05T22:48 (+10)

Post summary (feel free to suggest edits!):
Reflections from an organiser of the student organisations Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA).

Top things that worked:

Top things that didn’t work:

In future, they plan to set up an office space for MAIA, share infrastructure and resources with other university alignment groups, and improve programming for already-engaged students (including opportunities over the winter and summer breaks).

They’re looking for mentors for junior researchers and students, researchers to visit during retreats or host Q&As, feedback, and applicants to their January ML bootcamp or to roles in the Cambridge Boston Alignment Initiative.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Xander Davies @ 2022-12-06T07:34 (+1)

Thanks for this! Want to note that this was co-authored by 7 other people (the names weren't transferred when it was crossposted from LW).

Zoe Williams @ 2022-12-06T09:23 (+2)

Good to know, cheers - will update the summary posts and emails to include all authors.