A case for donating to AI risk reduction (including if you work in AI)

By tlevin @ 2024-12-02T19:05 (+118)

I work on Open Philanthropy’s AI Governance and Policy team, but I’m writing this in my personal capacity – several senior employees at Open Phil have argued with me about this!

This is a brief-ish post addressed to people who are interested in making high-impact donations and are already concerned about potential risks from advanced AI. Ideally such a post would include a case that reducing those risks is an especially important (and sufficiently tractable and neglected) cause area, but I’m skipping that part for time and will just point you to this 80,000 Hours problem profile for now.

Edited to add a couple more concrete ideas for where to donate:

  1. First, a meta point: I think people sometimes accept the above considerations “on vibes.” But for people who agree that reducing AI risk is the most pressing cause (as in, the most important, neglected, and tractable one) and who agree with my earlier argument that there are good giving opportunities in AI risk reduction at current margins, especially for people who work in that field, their views imply that their donation is a decision with nontrivial stakes. They might actually be giving up a lot of prima facie impact in exchange for more worldview diversification, signaling, and morale. I know this doesn’t directly address the above considerations, and the trade could still be worth it; I’m basically just saying that those considerations have to turn out to be valid and pretty significant in order to outweigh the consequentialist advantages of AI risk donations.

    Second, I think it’s coherent for individual people to be uncertain that AI risk is the best thing to focus on (on both empirical and normative levels) while still thinking it’s better to specialize, including in one’s donations. That’s because worldview diversification seems to me to make more sense at larger scales, like the EA movement or Open Philanthropy’s budget, and less sense at the scale of individuals and small donors. Consider the limits in either direction: it seems unlikely that an individual should work multiple part-time jobs in different cause areas instead of picking one in which to develop expertise and networks, and it seems like a terrible idea for all of society to dedicate its resources to a single problem. There’s some point in between where the costs of scaling an effort, and the diminishing returns of more resources thrown at the problem, start to outweigh the benefits of specialization. I think individuals are probably on the “focus on one thing” side of that point.
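
    To make that crossover point concrete, here’s a toy sketch (not from the post or from Open Phil; the square-root returns curve, the fixed per-cause overhead, and all numbers are assumptions chosen purely for illustration). Each cause is assumed to have diminishing returns, and engaging with each additional cause costs the donor a fixed overhead of vetting, expertise, and relationship-building:

    ```python
    import math

    def impact(budget: float, n_causes: int, overhead: float = 5.0) -> float:
        """Toy model: total impact from splitting `budget` evenly across
        `n_causes`, where each cause has square-root (diminishing) returns
        and each engaged cause costs a fixed `overhead` (vetting, expertise,
        networks). All functional forms and numbers are illustrative."""
        per_cause = budget / n_causes - overhead
        if per_cause <= 0:
            return 0.0  # overhead eats the whole allocation
        return n_causes * math.sqrt(per_cause)

    for budget in (12, 100, 10_000):
        best = max(range(1, 1001), key=lambda n: impact(budget, n))
        print(f"budget={budget:>6}: optimal number of causes = {best}")
    # budget=    12: optimal number of causes = 1
    # budget=   100: optimal number of causes = 10
    # budget= 10000: optimal number of causes = 1000
    ```

    Under these assumptions the optimum works out to roughly budget / (2 × overhead): a small donor’s optimal portfolio rounds down to a single cause, while a funder a few orders of magnitude larger should diversify widely. The model is obviously crude, but it shows how the same logic can recommend specialization for individuals and diversification for Open Phil-scale budgets.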


Sarah Cheng @ 2024-12-03T03:01 (+6)

I think this post makes some great points, thanks for sharing! :) And I think it's especially helpful to hear from your perspective as someone who does grantmaking at OP.

I really appreciate the addition of concrete examples. In fact, I would love to hear more examples if you have time — since you do this kind of research as your job I'm sure you have valuable insights to share, and I expect that you can shift the donations of readers. I'd also be curious to hear where you personally donate, but no pressure, I totally understand if you'd prefer to keep that private.

“Work in sub-areas that major funders have decided not to fund”

I feel like this is an important point. Do you have any specific AI risk reduction sub-areas in mind?

tlevin @ 2024-12-03T22:29 (+10)

Thanks, glad to hear it's helpful!

  • Re: more examples, I co-sign all of my teammates' AI examples here -- they're basically what I would've said. I'd probably add Tarbell as well.
  • Re: my personal donations, I'm saving for a bigger donation later; I encounter enough examples of very good stuff that Open Phil and other funders can't fund, or can't fund quickly enough, that I think there are good odds that I'll be able to make a really impactful five-figure donation over the next few years. If I were giving this year, I probably would've gone the route of political campaigns/PACs.
  • Re: sub-areas, there are some forms of policy advocacy and moral patienthood research for which small-to-medium-size donors could be very helpful. I don't have specific opportunities in mind that I feel like I can make a convincing public pitch for, but people can reach out if they're interested.

Lukas Trötzmüller🔸 @ 2024-12-06T20:27 (+5)

Adding to the list of funds: Effektiv-spenden.org recently launched their AI safety fund.

Holly Elmore ⏸️ 🔸 @ 2024-12-02T21:50 (+5)

If you are so inclined, you can make a big difference to PauseAI US as an individual donor as well (more here: https://forum.effectivealtruism.org/posts/YWyntpDpZx6HoaXGT/please-vote-for-pauseai-us-in-the-donation-election)

We’re the highest-voted AI risk contender in the donation election, so vote for us while there’s still time!

Kat Woods @ 2024-12-04T17:39 (+3)

Seems like a good place to remind people of the Nonlinear Network, where donors can see a ton of AI safety projects with room for funding, see what experts think of different applications, sort by votes and intervention type, etc.