Applications are now open for Intro to ML Safety Spring 2023

By Joshc @ 2022-11-04T22:45 (+49)

The Center for AI Safety is running another iteration of Intro to ML Safety this Spring for people who want to learn about empirical AI safety research topics.

Apply to be a participant by January 29th, 2023.

Apply to be a facilitator by December 30th.

Website: mlsafety.org/intro-to-ml-safety

About the Course

Introduction to ML Safety is an 8-week course that aims to introduce students with a deep learning background to empirical AI Safety research. The program is designed and taught by Dan Hendrycks, a UC Berkeley ML PhD and director of the Center for AI Safety, and provides an introduction to robustness, alignment, monitoring, systemic safety, and conceptual foundations for existential risk.

Each week, participants will be assigned readings, lecture videos, and required homework assignments. The materials are publicly available at course.mlsafety.org.

There are two tracks: a standard track and an advanced track (see below).

The course will be virtual by default, though in-person sections may be offered at some universities.

How is this program different from AGISF?

Intro to ML Safety is generally more focused on empirical topics rather than conceptual work. Participants are required to watch recorded lectures and complete homework assignments that test their understanding of the technical material. If you’ve already taken AGISF and are interested in empirical research, then you are the target audience for the advanced track.

Intro to ML Safety also emphasizes different ideas and research directions than AGISF does. Examples include:

You can read about more examples in Open Problems in AI X-risk.

Time Commitment

The program will last 8 weeks, beginning on February 20th and ending on April 14th. Participants are expected to commit at least 5 hours per week. This includes ~1 hour of recorded lectures (which will take more than one hour to digest), ~1-2 hours of readings, ~1-2 hours of written assignments, and 1 hour of discussion.

We understand that 5 hours is a large time commitment, so to make our program more inclusive and remove any financial barriers, we will provide a $500 stipend upon completion of the course. (Edit: reduced from $1000)

Eligibility

Anyone is eligible to apply. The prerequisites are:

If you are not sure whether you meet these prerequisites, err on the side of applying. We will review applications on a case-by-case basis.

Facilitating a section

To be a facilitator, you must have a strong background in deep learning and AI safety. Note that if you are not already familiar with the content, you will need to learn it ahead of each week's session.

The time commitment for running one cohort is ~2-4 hours per week, depending on prior familiarity with the material: 1 hour of discussion and 1-3 hours of prep. Discussion times are flexible.

We will pay facilitators a stipend corresponding to roughly $30 per hour (subject to legal constraints).

Apply by December 30th. We are especially interested in finding facilitators for in-person groups.

You can post questions here or reach out to introcourse@mlsafety.org.


EA Opportunity Board @ 2022-12-08T13:28 (+1)

Will add this opportunity to the EA Opportunity Board!