How MATS addresses “mass movement building” concerns

By Ryan Kidd @ 2023-05-04T00:55 (+79)

Recently, many AI safety movement-building programs have been criticized for attempting to grow the field too rapidly and thus:

  1. Producing more aspiring alignment researchers than there are jobs or training pipelines;
  2. Fueling AI hype and progress by attracting talent that ends up furthering capabilities;
  3. Unnecessarily diluting the field’s epistemics by introducing too many naive or overly deferential viewpoints.

At MATS, we think these are real and important concerns, and we support efforts to mitigate them. Here is how we currently address each one.

Claim 1: There are not enough jobs/funding for all alumni to get hired/otherwise contribute to alignment

How we address this:

Claim 2: Our program gets more people working in AI/ML who would not otherwise be doing so, and this is bad as it furthers capabilities research and AI hype

How we address this:

[Figure: MATS Summer 2023 interest form, “How did you hear about us?” (381 responses)]

Claim 3: Scholars might defer to their mentors and fail to critically analyze important assumptions, decreasing the average epistemic integrity of the field

How we address this:


MATS is committed to growing the alignment field in a safe and impactful way, and we would love feedback on all of the above and on our methods more broadly. More posts are incoming!


Joseph Lemien @ 2023-05-04T15:11 (+7)

Just to make this a little more accessible to people who aren't familiar with SERI-MATS: MATS is the Machine Learning Alignment Theory Scholars Program, a training program for young researchers who want to contribute to AI alignment research.

Ryan Kidd @ 2023-05-04T18:32 (+1)

Thanks Joseph! Adding to this, our ideal applicant has:

  • an understanding of the AI alignment research landscape equivalent to having completed the AGI Safety Fundamentals course;
  • previous experience with technical research (e.g., ML, CS, maths, physics, or neuroscience), ideally at a postgraduate level;
  • strong motivation to pursue a career in AI alignment research, particularly to reduce global catastrophic risk.

MATS alumni have gone on to publish safety research (LW posts here), join alignment research teams (including at Anthropic and MIRI), and found alignment research organizations (including a MIRI team, Leap Labs, and Apollo Research). Our alumni spotlight is here.