EAs should probably focus on recruiting the ultra-wealthy

By Nicholas Decker @ 2024-07-12T02:06 (0)

This is a linkpost to https://nicholasdecker.substack.com/p/eas-should-probably-focus-on-recruiting

I am an Effective Altruist. I think it is good to help people in the most efficient way we can. I believe quite firmly in “shut up and multiply” — that is, we cannot lose sight of magnitudes. If a positive and a negative are of wildly different weights, then we should go with whichever is much larger, not simply note that there are positives and negatives.

In the past couple of years, EA was hurt by the collapse of Sam Bankman-Fried’s network of companies. This has led to skepticism about becoming too closely associated with any one figure. We want to be independent, and to be seen as independent.

But I think this is hooey. We need to consistently shut up and multiply! SBF donated about $190 million to charity. The scale of this is perhaps hard to grasp. Suppose the reputational damage stopped some people from donating to effective charities, and that each of them would have donated $5,000 (about what is needed to save one life from malaria). The reputational damage from SBF would have to have stopped 38,000 people from donating for it to be a net negative. Is that plausible?
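
To make the break-even arithmetic explicit, here is a minimal back-of-the-envelope sketch. The $190 million and the $5,000-per-life malaria figure are the post’s own numbers, not independent estimates:

```python
# Break-even BOTEC: how many would-be donors must the SBF reputational
# damage have deterred for it to outweigh his donations?
sbf_donations = 190_000_000   # ~$190M donated to charity (post's figure)
donation_per_person = 5_000   # ~cost to save one life from malaria (post's figure)

break_even_donors = sbf_donations / donation_per_person
print(f"Donors deterred to break even: {break_even_donors:,.0f}")  # 38,000
```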

A billion dollars is a lot of money. It’s a stupefying amount of money, truly. The gain from convincing one billionaire to donate one billion is greater than that from encouraging hundreds of thousands of people to chip in a few thousand each (a comparison sketched below). As such, EA should explicitly focus on recruiting the rich, or the soon-to-be rich. Effective Altruism MIT seems extremely cost-effective! Surely this is something worth investing relatively trivial resources in.
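
As a rough illustration of that comparison — the donor count and gift size below are hypothetical round numbers standing in for “hundreds of thousands” and “a few thousand”, not figures from the post:

```python
# Illustrative magnitude comparison: one billionaire vs. many small donors.
billionaire_gift = 1_000_000_000  # one billionaire donating $1B

small_donors = 300_000            # hypothetical "hundreds of thousands"
gift_per_donor = 3_000            # hypothetical "a few thousand" each
small_donor_total = small_donors * gift_per_donor

print(f"One billionaire:      ${billionaire_gift:>13,}")
print(f"300,000 small donors: ${small_donor_total:>13,}")  # $900,000,000
```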


DC @ 2024-07-12T04:52 (+12)

This post is mostly noise: this is a basic point going back over a decade, and you do nothing to elaborate it or incorporate objections to naive utilitarianism. There is prior literature on the topic. I want you to do better because this is an important topic to me. The SBF example is a poor one that's obfuscatory of the basic point because you don't address the hard question of whether his fraud-funded donations were or weren't worth the moral and reputational damage, which is debatable and a separate interesting topic I haven't seen hard analysis of. You open up a can of ethical worms and don't address it, which reasonably looks bad to low decouplers and is probably the reason for the downvoting. Personally I would endorse downvoting, because you haven't contributed anything novel about increasing the number of probably-good high-net-worth philanthropists, though I didn't downvote myself. I only decided to give this feedback because your bio says you're an econ grad student at GMU, which is notorious for disagreeable economists, so I think you can take it.

Mo Putera @ 2024-07-12T06:23 (+2)

The SBF example is a poor one that's obfuscatory of the basic point because you don't address the hard question of whether his fraud-funded donations were or weren't worth the moral and reputational damage, which is debatable and a separate interesting topic I haven't seen hard analysis of

I've wondered about this as well. Scott Alexander's mistake #57 seems like a relevant starting point: 

57: (12/4/23) In In Continued Defense Of Effective Altruism, I said of EA’s failures (primarily SBF) that “I’m not sure they cancel out the effect of saving one life, let alone 200,000”. A friend convinced me that this was an unfair exaggeration of the point I wanted to make. There are purported exchange rates between money and lives, destroying $5 - $10 billion in value is pretty bad by all of them, and there are knock-on effects on social trust from fraud that suggest its negative effects should be valued even higher. I regret this sentence and no longer stand by it.

One guess as to how Scott's link (US VSL) might underestimate the value destruction: in Nov 2021, GiveWell aimed to direct ~$1bn annually by 2025. The year after, they revised their future funding projections downward, due in part to their main donor, Open Phil, cutting its planned 2022 allocation to GW: a ~40% reduction in Open Phil's asset base since the end of the previous year, from “the recent stock market decline”, changed its portfolio allocation and more than proportionally reduced its GW allocation (partly offset by GW overachieving by ~40% RFMF-wise in finding cost-effective opportunities). My low-confidence guess is that “the recent stock market decline” has quite a bit to do with FTX. It seems likely to me that the NPV of the projected funding reduction to GW is >$1bn over, say, the next decade. At ~$5k per life saved (~1,000x cheaper than the US VSL Scott linked to), that's >200k lives that could have been saved but weren't, most of them children under 5. (This galls me, to be honest, so I'd like to be told my reasoning is wrong or something.)
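
Restating that BOTEC in code — a sketch under my own assumptions above (the >$1bn shortfall and ~$5k-per-life figures are this comment's guesses, nothing more):

```python
# BOTEC: projected GiveWell funding shortfall -> lives not saved.
funding_shortfall_npv = 1_000_000_000  # >$1bn NPV over ~a decade (my guess above)
cost_per_life_saved = 5_000            # ~$5k per life saved (~1,000x below US VSL)

lives_not_saved = funding_shortfall_npv / cost_per_life_saved
print(f"Lives that could have been saved: >{lives_not_saved:,.0f}")  # >200,000
```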

That's one guess; I'm sure there are more I'm missing that are still BOTEC-able, let alone the knock-on effects on social trust from fraud that Scott mentioned.

To the OP: I think it's worth reflecting on the warning that maximization is perilous.