MATS Winter 2023-24 Retrospective
By utilistrutil @ 2024-05-11T00:09 (+62)
Stephen McAleese @ 2024-05-12T09:38 (+3)
Thanks for writing this! It's interesting to see how MATS has evolved over time. I like all the quantitative metrics in the post as well.
SummaryBot @ 2024-05-13T12:53 (+1)
Executive summary: The ML Alignment & Theory Scholars program (MATS) successfully ran its fifth iteration in Winter 2023-24, providing mentorship and support to 63 AI safety research scholars, and plans to make several improvements for future programs based on scholar and mentor feedback.
Key points:
- Key changes from the previous program included reducing the scholar stipend, transitioning to Research Management, using the full Lighthaven campus, and replacing Alignment 201 with AI Strategy Discussions.
- Scholars were highly likely to recommend MATS (9.2/10 average rating, +74 NPS) and rated mentorship highly (8.1/10 average). Mentorship was the most valuable MATS element for 38% of scholars.
- Mentors were also likely to recommend MATS (8.2/10 average, +37 NPS). The most common mentoring benefits were helping new researchers, gaining mentorship experience, and advancing AI safety.
- According to mentors, 77% of evaluated scholars could produce a top conference paper, 41% could receive a job offer from an AI safety team, and 16% could found a new AI safety org within the next year.
- After MATS, scholars reported facing fewer obstacles to an AI safety career, with the biggest remaining obstacles being publication record and funding.
- Key planned changes for future programs include introducing a mentor selection advisory board, shifting research focus, supporting more AI governance mentors, expanding applicant pre-screening, and modifying the strategy discussion format.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.