Update on Harvard AI Safety Team and MIT AI Alignment
By Xander123 @ 2022-12-02T06:09 (+71)
This is a crosspost from LessWrong; the full post can be viewed there.
Zoe Williams @ 2022-12-05T22:48 (+10)
Post summary (feel free to suggest edits!):
Reflections from an organizer of the student organizations Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA).
Top things that worked:
- Outreach focusing on technically interesting parts of alignment and leveraging informal connections with networks and friend groups.
- HAIST office space, which was well-located and useful for programs and coworking.
- Leadership and facilitators having had direct experience with AI safety research.
- High-quality, scalable weekly reading groups.
- Significant time expenditure, including mostly full-time attention from several organizers.
Top things that didn’t work:
- Starting MAIA programming too late in the semester (leading to poor retention).
- Too much focus on intro programming.
In future, they plan to set up an office space for MAIA, share infrastructure and resources with other university alignment groups, and improve programming for already-engaged students (including opportunities over winter and summer break).
They’re looking for mentors for junior researchers / students, researchers to visit during retreats or host Q&As, feedback, and applicants to their January ML bootcamp or to roles in the Cambridge Boston Alignment Initiative.
(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)
Xander Davies @ 2022-12-06T07:34 (+1)
Thanks for this! Want to note that this was co-authored by 7 other people (the names weren't transferred when it was crossposted from LW).
Zoe Williams @ 2022-12-06T09:23 (+2)
Good to know, cheers - will update in the summary posts and emails to include all authors.