Joining the Carnegie Endowment for International Peace

By Holden Karnofsky @ 2024-04-29T15:45 (+228)

Effective today, I’ve left Open Philanthropy and joined the Carnegie Endowment for International Peace[1] as a Visiting Scholar. At Carnegie, I will analyze and write about topics relevant to AI risk reduction. In the short term, I will focus on (a) what AI capabilities might increase the risk of a global catastrophe; (b) how we can catch early warning signs of these capabilities; and (c) what protective measures (for example, strong information security) are important for safely handling such capabilities. This is a continuation of the work I’ve been doing over the last ~year.

I want to be explicit about why I’m leaving Open Philanthropy. It’s because my work no longer involves significant grantmaking, and since I’ve historically overseen grantmaking, confusion on that point would be a real problem. Philanthropy comes with particular power dynamics that I’d like to move away from, and I also think Open Philanthropy would benefit from less ambiguity about my role in its funding decisions (especially given the fact that I’m married to the President of a major AI company). I’m proud of my role in helping build Open Philanthropy, I love the team and organization, and I’m confident in the leadership it’s now under; I think it does the best philanthropy in the world, and will continue to do so after I move on. I will continue to serve on its board of directors (at least for the time being).

While I’ll miss the Open Philanthropy team, I am excited about joining Carnegie. 

To a significant extent, the TL;DR of this post is that I am continuing the work I’ve been doing for over a year: helping to design and advocate for a framework that seeks to catch early warning signs of key risks from AI, accompanied by precommitments to have sufficient protections in place by the time those risks materialize (or to pause AI development and deployment until protections are where they need to be).



1. I will be at the California office and won’t be relocating.


Dustin Moskovitz @ 2024-04-29T19:05 (+186)

I'm grateful that Cari and I met Holden when we did (and grateful to Daniela for luring him to San Francisco for that first meeting). The last fourteen years of our giving would have looked very different without his work, and I don't think we'd have had nearly the same level of impact — particularly in areas like farm animal welfare and AI that other advisors likely wouldn't have mentioned.

Adam_Scholl @ 2024-04-30T01:44 (+92)

I also think Open Philanthropy would benefit from less ambiguity about my role in its funding decisions (especially given the fact that I’m married to the President of a major AI company).

This makes sense, but if anything the conflict of interest seems more alarming if you're influencing national policy. For example, I would guess that you are one of the people—maybe literally among the top 10?—who stand to personally lose the most money in the event of an AI pause. Are you worried about this, or taking any actions to mitigate it (e.g., trying to convert equity into cash)?

Holden Karnofsky @ 2024-05-13T20:59 (+51)

My spouse isn't currently planning to divest the full amount of her equity. Some factors here: (a) It's her decision, not mine. (b) The equity has important voting rights, such that divesting or donating it in full could have governance implications. (c) Divestment doesn't seem like it would have a significant marginal effect on my real or perceived conflict of interest: I still couldn't claim impartiality while married to the President of a company, equity or no. With these points in mind, full divestment or donation could happen in the future, but there's no immediate plan for it.

The bottom line is that I have a significant conflict of interest that isn't going away, and I am trying to help reduce AI risk despite that. My new role will not have authority over grants or other significant resources besides my time and my ability to do analysis and make arguments. People encountering that analysis and those arguments will have to decide for themselves how to weigh my conflict of interest while considering what I say on the merits.

For whatever it's worth, I have publicly said that the world would pause AI development if it were all up to me, and I make persistent efforts to ensure people I'm interacting with know this. I also believe the things I advocate for would almost universally have a negative expected effect (if any effect) on the value of the equity I'm exposed to. But I don't expect everyone to agree with this or to be reassured by it.

aysja @ 2024-04-30T01:54 (+25)

For context, Holden is married to Daniela Amodei, president and co-founder of Anthropic. She also used to work at OpenAI and still, I believe, holds equity there. As Holden has stated elsewhere: "I am married to the President of Anthropic and have a financial interest in both Anthropic and OpenAI via my spouse."

Greg_Colbourn @ 2024-05-01T19:33 (+10)

Congrats Holden! Just going to quote you from a recent post:

There’s a serious (>10%) risk that we’ll see transformative AI within a few years.

  • In that case it’s not realistic to have sufficient protective measures for the risks in time.
  • Sufficient protective measures would require huge advances on a number of fronts, including information security that could take years to build up and alignment science breakthroughs that we can’t put a timeline on given the nascent state of the field, so even decades might or might not be enough time to prepare, even given a lot of effort.

If it were all up to me, the world would pause now

Please don't lose sight of this in your new role. Public opinion is on your side here, and PauseAI are gaining momentum. It's possible for this to happen. Please push for it in your new role! (And reduce your conflict of interest if possible!)

SiebeRozendal @ 2024-05-01T12:16 (+8)

Here are Carnegie's publications on AI: https://carnegieendowment.org/programs/technology/ai/

Nathan Young @ 2024-05-01T11:51 (+1)

Thank you for your work. I am really grateful when people work hard and try hard to do good. I hope the new job goes well.