Are GiveWell Top Charities Too Speculative?
By MichaelDickens @ 2015-12-21T04:05 (+24)
Cross-posted to my blog.
The common claim: Unlike more speculative interventions, GiveWell top charities have really strong evidence that they do good.
The problem: Thanks to flow-through effects, GiveWell top charities could be much better than they look or they could be actively harmful, and we have no idea how big their actual impact is or if it’s even net positive.
Flow-Through Effects
Take the Against Malaria Foundation. It has the direct effect of preventing people from getting malaria, but it might have much larger flow-through effects. Here are some effects AMF might have:
- Increasing human population size by preventing deaths
- Decreasing human population size by accelerating the demographic transition
- Increasing people’s economic welfare, which causes them to eat more animals
- Increasing people’s economic welfare, which causes them to reduce wild animal populations
Increasing population might be good simply because there are more people alive with lives worth living. Accelerating the demographic transition (i.e. reducing population) might be good because it might make a country more stable, increasing international cooperation. This could be a very good thing. On the other hand, making a country more stable means there are more major players on the global stage, which could make cooperation harder.[1]
Some of these long-term effects will probably matter more than AMF’s immediate impact. We could say the same thing about GiveWell’s other top charities, although the long-term effects won’t be exactly the same.
Everything Is Uncertain
There’s pretty clear evidence that GiveWell top charities do a lot of direct good–but their flow-through effects are probably even bigger. If a charity like AMF has good direct effects but harmful flow-through effects, it’s probably harmful on balance. That means we can’t say with high confidence that AMF is net positive.
Among effects that are easy to document, yes, AMF is net positive (maybe). Maybe we could just ignore large long-term effects since we can’t really measure them, but I’m uncomfortable with that. If flow-through effects matter so much, is it really fair to assume that they cancel out in expectation?[2] We don’t know whether AMF has very good or very bad long-term effects. I tend to think the arguments are a little stronger for AMF having good effects, but I’m wary of optimism bias, especially for such speculative questions where biases can easily overwhelm logical reasoning; and I think a lot of people are too quick to trust speculative arguments about long-term effects.
So where does this leave us? Well, a lot of people use GiveWell top charities as a “fallback” position: “I’m not convinced by the evidence in favor of any intervention with potentially bigger effects, so I’m going to support AMF.” But if AMF might have negative flow-through effects, that fallback looks a lot weaker. Sure, you can argue that AMF has positive flow-through effects, but that’s a pretty speculative claim, so you’re not standing on any better ground than people who follow the fairly weak evidence that online ads can cost-effectively convince people to eat less meat, or people who support research on AI safety.
I don’t like speculative arguments. I much prefer dealing with questions where we have concrete evidence and understand the answer. In a lot of cases I prefer a well-established intervention over a speculative intervention with supposedly higher expected value. But it doesn’t look like we can escape speculative reasoning. For anything we do, there’s a good chance that unpredictable long-term effects have a bigger impact than any direct effects we can measure. Recently I contemplated the value of starting a happy rat farm as a way of doing good without having flow-through effects; but even a rat farm still requires buying a lot of food, which has a substantial effect on the environment that probably matters more than the rats’ direct happiness.
Nothing is certain. Everything is speculative. I have no idea what to do to make the world better. As always, more research is required.
Edited to clarify: I’m not trying to say that AMF is too speculative, and therefore we should give up and do nothing. I strongly encourage more people to donate to AMF. This is more meant as a response to the common claim that existential risk or factory farming interventions are too speculative, so we should support global poverty instead. In fact, everything is speculative, so trying to follow robust evidence only doesn’t get us that far. We have to make decisions in the face of high uncertainty.
Some discussion here.
Notes
[1] I recently heard Brian Tomasik make this last argument, and I had never heard it before. When factors this important can go unnoticed for so long, it makes me wary of paying too much attention to speculation about the far-future effects of present-day actions.
undefined @ 2015-12-21T18:37 (+7)
I broadly agree with this, but I'd put it a little differently.
If you think what most matters about your actions is their effect on the long-run future (due to Bostrom-style arguments), then GiveWell recommended charities aren't especially "proven", because we have little idea what their long-run effects are. And they weren't even selected for having good long-run effects in the first place.
One response to this is to argue that the best proxy for having a good long-run impact is having a good short-run impact (e.g. via boosting economic growth).
Another response is to argue that we never have good information about long-run effects, so the best we can do is to focus on the things with the best short-run effects.
I also still think it's fair to say GiveWell recommended charities are a "safe bet" in the sense that donating to them is very likely to do much more good than spending the money on your own consumption.
undefined @ 2015-12-21T19:31 (+5)
“One response to this is to argue that the best proxy for having a good long-run impact is having a good short-run impact (e.g. via boosting economic growth).”
At the risk of sounding like a broken record, this is still a speculative claim, so if you make it, you can no longer say you're following robust evidence only.
undefined @ 2015-12-21T23:36 (+2)
Yes I totally agree. I was just saying what the most common responses are, not agreeing with them.
cf http://effective-altruism.com/ea/qx/two_observations_about_skeptical_vs_speculative/
undefined @ 2015-12-21T19:30 (+2)
I’ve heard this “the best proxy for having a good long-run impact is having a good short-run impact” a couple of times now, but I haven’t seen anyone make any argument for it. Could someone provide a link or something? To me it’s not even clear why the impact on the economy of different charities like GiveDirectly and AMF should be proportional to their short-term impact.
undefined @ 2015-12-21T23:42 (+5)
It’s a controversial claim, and I don’t endorse it. One attempt is this: http://blog.givewell.org/2013/05/15/flow-through-effects/ which argues that general economic growth and human empowerment have lots of good long-run side effects, so that boosting these is a good thing to do. The main response to this is that that was true in the past, but if technological progress causes new x-risks, it’s not clear whether it’ll be true in the future.
Another strand of argument is to look at what rules of thumb people who had lots of impact in the past followed, and argue that something like “take really good opportunities to have a lot of short-run impact” seems like a better rule of thumb than “try to figure out what’s going to happen in the long-run future and how you can shape it”. I haven’t seen this argued for in writing though.
Also there have been arguments that the best way to shape the long-run future might be through "broad" interventions rather than "narrow" ones, and broad interventions are often things that involve doing short-term common sense good, like making people better educated. http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/ http://effective-altruism.com/ea/r6/what_is_a_broad_intervention_and_what_is_a_narrow/
CarlShulman @ 2015-12-21T20:48 (+6)
As I said on facebook, I think this mostly goes away (leaving a rather non-speculative case) if one puts even a little weight on special obligations to people in our generation:
AMF clearly saves lives in the short run. If you give that substantial weight rather than evaluating everything solely from a "view from nowhere" long run perspective where future populations are overwhelmingly important, then it is clear AMF is good. It is an effective way to help poor people today and unlikely to be a comparably exceptional way to make the long run worse. If you were building a portfolio to do well on many worldviews or for moral trade it would be a strong addition. You can avoid worry about the sign of its long run effects by remembering relative magnitude.
undefined @ 2016-03-12T05:44 (+7)
I was just thinking about this again and I don't believe it works.
Suppose we want to maximize expected value over multiple value systems. Let's say there's a 10% chance that we should only care about the current generation, and a 90% chance that generational status isn't morally relevant (obviously this is a simplification but I believe the result generalizes). Then the expected utility of AMF is
0.1 * (short-term direct effects only) + 0.9 * (all effects)
Far future effects still dominate.
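To make the arithmetic concrete, here is a minimal numeric sketch; the magnitudes (a direct effect of 1 unit and a far-future effect of ±100 units) are illustrative assumptions, not figures from the comment:

```python
# A minimal sketch of the expected-value calculation above.
# Magnitudes are made up: direct effect = 1 unit, far-future effect = +/-100 units
# (far-future effects assumed much larger than the direct effect).

p_current_only = 0.1  # credence that only the current generation matters
p_all_matter = 0.9    # credence that generational status is morally irrelevant

direct_effect = 1.0   # short-term good done (arbitrary units)

for far_future_effect in (+100.0, -100.0):
    expected_value = (p_current_only * direct_effect
                      + p_all_matter * (direct_effect + far_future_effect))
    print(f"far-future effect {far_future_effect:+.0f} -> expected value {expected_value:+.1f}")

# Prints +91.0 and -89.0: the sign of the far-future term, not the direct
# effect, determines the sign of the overall expectation.
```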
You could say it's wrong to maximize expected utility across multiple value systems, but I don't see how you can make reasonable decisions at all if you're not trying to maximize expected utility. If you're trying to "diversify" across multiple value systems then you're doing something that's explicitly bad according to a linear consequentialist value system, and you'd need some justification for why diversifying across value systems is better than maximizing expected value over value systems.
CarlShulman @ 2016-03-12T10:20 (+3)
The scaling factors there are arbitrary. I can throw in theories that claim things are infinitely important.
This view is closer to 'say that views you care about got resources in proportion to your attachment to/credence in them, then engage in moral trade from that point.'
Vasco Grilo @ 2023-08-12T13:59 (+2)
Hi Carl,
I am not familiar with the moral uncertainty literature, but in my mind it would make sense to define the utility scale of each welfare theory such that the difference in utility between the best and worst possible state is always the same. For example, always assigning 1 to the best possible state, and -1 to the worst possible state. In this case, the weights of each welfare theory would represent their respective strength/plausibility, and therefore not be arbitrary?
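A rough sketch of that normalization, with hypothetical theories and entirely made-up numbers: rescale each theory’s utilities so its worst possible state maps to -1 and its best to +1, then weight by credence.

```python
# Hypothetical sketch of credence-weighted aggregation after rescaling each
# welfare theory's utilities onto a common [-1, +1] range.

def normalize(u, worst, best):
    """Linearly map u from [worst, best] onto [-1, +1]."""
    return 2 * (u - worst) / (best - worst) - 1

# name: (credence, worst possible utility, best possible utility) -- all made up
theories = {
    "current-generation-only": (0.1, -10.0, 10.0),
    "all-generations-equal": (0.9, -1e6, 1e6),
}

# Made-up raw utilities that some action receives under each theory.
raw_utility = {"current-generation-only": 5.0, "all-generations-equal": 2e5}

score = sum(
    credence * normalize(raw_utility[name], worst, best)
    for name, (credence, worst, best) in theories.items()
)
print(score)  # 0.23: credence-weighted value on the shared [-1, +1] scale
```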
undefined @ 2015-12-22T19:43 (+3)
This is a nice idea but I worry it won't work.
Even with healthy moral uncertainty, I think we should attach very little weight to moral theories that give future people’s utility negligible moral weight, because the kinds of reasons that suggest we can attach them less weight don’t go any way towards suggesting that we can ignore them. To do this they’d have to show that future people’s moral weight was (more than!) inversely proportional to their temporal distance from us. But the reasons they give tend to show that we have special obligations to people in our generation, and say nothing about our obligations to people living in the year 3000AD vs people living in the year 30,000AD. [Maybe I’m missing an argument here?!] Thus any plausible moral theory will be such that the calculation is dominated by very long-term effects, and long-term effects will dominate our decision-making process.
undefined @ 2015-12-22T06:23 (+3)
Why would we put more weight on current generations, though? I've never seen a good argument for that. Surely there's no meaningful moral difference between faraway, distant, unknown people alive today and faraway, distant, unknown people alive tomorrow. I can't think of any arguments for charitable distribution which would fall apart in the case of people living in a different generation, or any arguments for agent relative moral value which depend specifically on someone living at the same time as you, or anything of the sort. Even if you believe that moral uncertainty is a meaningful issue, you still need reasons to favor one possibility over countervailing possibilities that cut in opposite directions.
“It is an effective way to help poor people today and unlikely to be a comparably exceptional way to make the long run worse. If you were building a portfolio to do well on many worldviews or for moral trade it would be a strong addition.”
If we assign value to future people then it could very well be an exceptional way to make the long run worse. We don't even have to give future people equal value, we just have to let future people's value have equal potential to aggregate, and you have the same result.
“You can avoid worry about the sign of its long run effects by remembering relative magnitude.”
Morality only provides judgements of one act or person over another. Morality doesn't provide any appeal to a third, independent "value scale", so it doesn't make sense to try to cross-optimize across multiple moral systems. I don't think there is any rhyme or reason to saying that it's okay to have 1 unit of special obligation moral value at the expense of 10 units of time-egalitarian moral value, or 20 units, or anything of the sort.
So you're saying that basically "this action is really good according to moral system A, and only a little bit bad according to moral system B, so in this case moral system A dominates." But these descriptors of something being very good or slightly bad only mean anything in reference to other moral outcomes within that moral system. It's like saying "this car is faster than that car is loud".
undefined @ 2015-12-23T21:39 (+1)
Carl's point, though not fully clarified above, is that you can just pick a different intervention that does well on moral system B and is only a little bit bad according to A, pair it off with AMF, and now you have a portfolio that is great according to both systems. For this not to work AMF would have to be particularly bad according to B (bad enough that we can't find something to cancel it out), rather than just a little bit bad. Which a priori is rather unlikely.
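A toy numeric illustration of this pairing idea; the payoffs are made up, and “intervention Y” is a hypothetical complement, not anything named in the thread:

```python
# Pair intervention X (great on moral system A, slightly bad on B) with
# intervention Y (great on B, slightly bad on A); the combined portfolio
# comes out positive on both. All numbers are illustrative only.

# (value under system A, value under system B) -- arbitrary units
x = (10.0, -1.0)   # e.g. great for the current generation, slightly bad long-run
y = (-1.0, 10.0)   # e.g. great long-run, slightly bad for the current generation

portfolio = (x[0] + y[0], x[1] + y[1])
print(portfolio)   # (9.0, 9.0): positive under both value systems
```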
undefined @ 2015-12-21T15:46 (+4)
An old write-up of a component of this argument: http://robertwiblin.com/2012/04/14/flow-on-effects-can-be-key-to-charity/
undefined @ 2015-12-21T15:46 (+4)
I alluded to this concern here:
"
I believe in the overwhelming importance of shaping the long term future. In my view most causal chains that could actually matter are likely to be very long by normal standards. But they might at least have many paths to impact, or be robust (i.e. have few weak steps).
People who say they are working on broad, robust or short chains usually ignore the major uncertainties about whether the farther out regions of the chain they are a part of are positive, neutral or negative in value. I think this is dangerous and makes these plans less reliable than they superficially appear to be.
If any single step in a chain produces an output of zero, or negative expected value (e.g. your plan has many paths to increasing our forecasting ability, but it turns out that doing so is harmful), then the whole rest of that chain isn’t desirable."
http://effective-altruism.com/ea/r6/what_is_a_broad_intervention_and_what_is_a_narrow/
undefined @ 2015-12-21T09:56 (+4)
"There’s pretty clear evidence that GiveWell top charities do a lot of direct good–but their flow-through effects are probably even bigger."
You don't make an argument for why this would be true, do you?
I haven’t put much thought into this, so I might easily be wrong (and am happy to be convinced otherwise), but it doesn’t look that way to me.
Let’s look at it from the perspective of one child not dying from malaria due to AMF. One child being alive has an extremely positive impact on the child and its family. It seems very implausible to me that this one child will on average contribute to making the world a worse place so much that it comes even close to outweighing the benefit of the child continuing to live. I’d expect the life of the child to be far more positive than any negative outcomes.
(Same holds for positive flow through effects.)
I’d suspect that “so many thousand children get to live” just doesn’t sound that great due to scope insensitivity, and this is why, in a comparison, the sheer magnitude of the good it has caused doesn’t come across that well.
undefined @ 2015-12-21T15:04 (+3)
I didn't argue that AMF's flow-through effects exceed its direct effects because (a) it's widely (although not universally) accepted and (b) it's hard to argue for. But this is probably worth addressing, so I'll try and give a brief explanation of why I expect this to be true. Thanks for bringing it up. Disclaimer: these arguments are probably not the best since I haven't thought about this much.
Small changes to global civilization have large and potentially long-lasting effects. If, for example, preventing someone from getting malaria slightly speeds up scientific progress, that could improve people's lives for potentially millions of years into the future; or if we colonize other planets, it could affect trillions or quadrillions of people per generation.
If you believe non-human animals have substantial moral value (which I think you should), then it's pretty clear that anything you do to affect humans has an even larger effect on non-human animals. Preventing someone from dying means they will go on to eat a lot of factory-farmed animals (although more so in emerging economies like China than poorer countries like Ghana), and the animals they eat will likely experience more suffering than they themselves would in their entire lives. Plus any effect a human has on the environment will change wild animal populations; it's pretty unclear what sorts of effects are positive or negative here, but they're definitely large.
Now, even if you don't believe AMF has large flow-through effects, how robust is the evidence for this belief? My basic argument still applies here: the claim that AMF has small flow-through effects is a pretty speculative claim, so we still can't say with high confidence how big AMF's impact is or whether it's even net positive.
undefined @ 2015-12-21T15:44 (+2)
Denise, if you value all time periods equally, then the flow-through effects are 99%+ of the total impact.
The flow-through effects then only have to be very slightly negative to outweigh the immediate benefit.
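As a minimal numeric sketch of this point (the 99/1 split and the 5% tilt are illustrative assumptions, not figures from the thread):

```python
# If flow-through effects make up 99% of total impact, a small negative tilt
# in them swamps the direct benefit. Magnitudes below are illustrative only.

direct_benefit = 1.0        # immediate good done (arbitrary units)
flow_through_scale = 99.0   # magnitude of long-run effects (99% of the total)

for tilt in (+0.05, -0.05):  # flow-through effects 5% net positive vs 5% net negative
    total_impact = direct_benefit + tilt * flow_through_scale
    print(f"flow-through tilt {tilt:+.0%} -> total impact {total_impact:+.2f}")

# Prints +5.95 and -3.95: a slight negative tilt in the flow-through effects
# is enough to make the overall impact negative.
```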
undefined @ 2015-12-21T17:15 (+4)
Would you similarly doubt that, on expectation, someone murdering someone else had bad consequences overall? Someone slapping you very hard in the face?
This kind of reasoning seems to bring about a universal scepticism about whether we're doing Good. Even if you think you can pin down the long term effects, you have no idea about the very long term effects (and everything else is negligible compared to very long term effects).
undefined @ 2015-12-21T17:32 (+3)
For what it's worth, I definitely don't think we should throw our hands up and say that everything is too uncertain, so we should do nothing. Instead we have to accept that we're going to have high levels of uncertainty, and make decisions based on that. I'm not sure it's reasonable to say that GiveWell top charities are a "safe bet", which means they don't have a clear advantage over far future interventions. You could argue that we should favor GW top charities because they have better feedback loops--I discuss this here.
undefined @ 2015-12-21T17:18 (+1)
I think the effects of murdering someone are more robustly bad than the effects of reducing poverty are good (the latter are probably positive, but less obviously so).
undefined @ 2015-12-21T17:21 (+2)
Why? What are the very long term effects of a murder?
undefined @ 2015-12-21T17:40 (+2)
A previous post on this topic:
On Progress and Prosperity http://effective-altruism.com/ea/9f/on_progress_and_prosperity/
undefined @ 2015-12-21T07:40 (+2)
For a bit more on [1], see also: https://www.reddit.com/r/IRstudies/comments/3jk0ks/is_the_economic_development_of_the_global_south/
undefined @ 2015-12-21T16:59 (+1)
It seems that the best approach to this sort of uncertainty is probabilistic thinking outlined by Max Harms here.
Rather than looking for certainty of evidence, we should look for sufficiency of evidence to act. Thus, we should not ask the question "will this do the most good" before acting, but rather "do I have sufficient evidence that this action will likely lead to the most good"? Otherwise, we risk falling into "analysis paralysis" and information bias, the thinking error of asking for too much information before acting.
undefined @ 2015-12-21T17:15 (+2)
Why is it better to look for sufficient evidence rather than maximizing expected value (keeping in mind that we can't take expected value estimates literally)? Or are you just saying the same thing in a different way?
undefined @ 2015-12-21T18:50 (+1)
Because the question of sufficient evidence enables us to avoid information bias/analysis paralysis. There are high opportunity costs to not acting, and that is a very dangerous trap to fall into. The longer we deliberate, the more time slips by while we are gathering evidence. This causes us to fall into status quo bias.
undefined @ 2015-12-22T06:32 (+1)
I don't see how information bias would go away if we were only worried about sufficient evidence, and analysis paralysis doesn't seem to be a problem with our current community. People like me and Michael might be really unsure about these things, but it doesn't really inhibit our lives (afaik). I at least don't spend too much time thinking about these things, but what time I do spend seems to lead towards robustly better coherence and understanding of the issues.
undefined @ 2015-12-21T16:36 (+1)
One extra flow-through effect you should mention is AMF and GiveDirectly's effect on global consumption equality. GD's is positive in all periods. AMF is initially negative (a family has to split their income over more children temporarily), and then eventually positive through development and fertility effects.
What's good and bad here? I have no idea.