New roles on my team: come build Open Phil's technical AI safety program with me!

By Ajeya @ 2023-10-19T16:46 (+102)

Open Phil announced two weeks ago that we’re hiring for over 20 roles across our teams working on global catastrophic risk reduction — and we’ll answer questions at our AMA starting tomorrow. Ahead of that, I wanted to share some information about the roles I’m hiring for on my team (Technical AI Safety). This team aims to think through what technical research could most help us understand and reduce AI x-risk, and to build thriving fields in high-priority research areas by making grants to great projects and research groups.

First of all — since we initially listed roles on Sep 29, we’ve added three new roles in Technical AI Safety that you might not have seen if you only saw the original announcement! In addition to the (Senior) Program Associate role that was there originally, we added an Executive Assistant role last week — and yesterday we added a (Senior) Research Associate role and a role for a Senior Program Associate specializing in a particular subfield of AI safety research (e.g. interpretability or alignment theory). Check those out if they seem interesting! The Executive Assistant role in particular requires a very different, less technical skill set.

Secondly, before starting to answer AMA questions, I wanted to highlight that our technical AI safety giving is well below where it should be at equilibrium, there is considerable room to grow, and hiring more people is likely to lead quickly to more and better grants. My estimate is that last year, we recommended roughly $25M in grants to technical AI safety,[1] and so far this year I’ve recommended a similar amount. With more capacity for grant evaluation, research, and operations, we think this could pretty readily double or more.

All of our GCR teams (Technical AI Safety led by me, Capacity Building led by Claire Zabel, AI Governance and Policy led by Luke Muehlhauser, and Biosecurity led by Andrew Snyder-Beattie) are heavily capacity constrained right now — especially the teams that do work related to AI, given the recent boom in interest and activity in that area. I think my team currently faces even more severe constraints than other program teams. Compared to other teams, my team:

If you join the technical AI safety team in this round, you could help relieve some severe bottlenecks while building this new iteration of the program area from the ground up. If this sounds exciting to you, I strongly encourage you to apply!
 

  1. ^

     Interestingly, these figures are considerably larger than our annual technical AI safety giving in the several years prior, even though we had fewer full-time-equivalent staff working in the area in 2022 and 2023 than in 2015-2021.

  2. ^

     Initially, our program was led by Daniel Dewey. By around 2019, Catherine Olsson had joined the team, and eventually (I think by 2020-2021) it became a team of three run by Nick Beckstead, who managed Catherine and Daniel, with Asya Bergal also contributing half of her time. In 2021, all three of Daniel, Catherine, and Nick left for other roles. For an interim period, there was no single point person: Holden was personally handling bigger grants (e.g. Redwood Research), and Asya was handling smaller grants (e.g. an RFP that Nick originally started and our PhD fellowship). Holden then moved on to direct work and Asya went full-time on capacity building. I began doing grantmaking in Oct 2022, and quickly ended up full-time handling FTXFF bailout grants. Since late January 2023 or so, I’ve been presiding over a more normal program area.


Joseph Miller @ 2023-10-19T22:40 (+69)

Was there some blocker that caused this to happen now, rather than 6 months / 1 year ago?

Ajeya @ 2023-10-21T18:37 (+19)

I only got into grantmaking less than a year ago (in November 2022), and shortly after I unburied myself from FTXFF-collapse-related grants around January, I started hiring in a private round which led to Max joining (a private round is generally much less of a logistical lift than a big public round). I'm now joining this big public round along with other OP GCR teams because combining hiring rounds makes it easier on the back-end. See Luke's AMA answers here and here for more detail on the "Why are you hiring now rather than previously?" question, and my comment here for more color on my personal working situation over the last ten months or so.

Akash @ 2023-10-20T12:01 (+9)

Excited to see this team expand! A few [optional] questions:

  1. What do you think were some of your best and worst grants in the last 6 months?
  2. What are your views on the value of "prosaic alignment" relative to "non-prosaic alignment?" To what extent do you think the most valuable technical research will look fairly similar to "standard ML research", "pure theory research", or other kinds of research?
  3. What kinds of technical research proposals do you think are most difficult to evaluate, and why?
  4. What are your favorite examples of technical alignment research from the past 6-12 months?
  5. What, if anything, do you think you've learned in the last year? What advice would you have for a Young Ajeya who was about to start in your role?