MATS Summer 2023 Retrospective

By Rocket @ 2023-12-02T00:12 (+28)

This is a crosspost from LessWrong.


NickLaing @ 2023-12-02T09:15 (+4)

Thanks, that's a thorough report.

Quick note: "postmortem" kind of sounds like something bad has happened which triggered the report, which doesn't seem to be the case; perhaps "review" or "roundup" or something similar might be a bit more positive a word to use.

Ryan Kidd @ 2023-12-07T20:57 (+3)

Cheers, Nick! We decided to change the title to "retrospective" based on this and some LessWrong comments.

SummaryBot @ 2023-12-04T14:12 (+3)

Executive summary: The ML Alignment & Theory Scholars (MATS) program supported 60 AI safety scholars with mentorship, training, housing, infrastructure, and funding. Scholars improved technical ability, research taste, and knowledge breadth, and reported many positive connections with peers and researchers. Scholars and mentors form part of a talent pipeline for AI safety.

Key points:

  1. 60 scholars studied AI safety for 3 months with 15 mentors. Scholars rated mentors highly (8/10) and are likely to recommend MATS (8.9/10).
  2. Scholars improved technical research skills (self-rated 7.2/10 vs counterfactual summer), knowledge breadth (+1.75/10), research taste (5.9-6.9/10), and made 10 professional connections on average.
  3. Scholars faced fewer career obstacles after MATS, but lack of publications remained an issue. Mentors strongly endorsed 94% of scholars to continue research.
  4. Scholars valued community, seminars, and Scholar Support coaching in addition to mentorship. Scholar Support meetings were valued at $750-$3,700 in grant-equivalent terms.
  5. MATS will improve applicant screening, support technical skills and research management, and reduce seminars for the next cohort.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.