How can you compare helping two different people in different ways?

By Robert_Wiblin @ 2014-12-11T17:08 (+12)

When people ask what aspiring effective altruists work on, I often start by saying that we do research into how you can help others the most. For example, GiveWell has found that distributing some 600 bed nets, at a cost of $3,000, can prevent one infant dying of malaria. For the same price, they have also found you could deliver 6,000 deworming treatments that work for around a year.

A common question at this point is 'how can you compare the value of helping these different people in these different ways?' Even if the numbers are accurate, how could anyone determine which of these two possible donations helps others the most?

I can't offer a philosophically rigorous answer here, but I can tell you how I personally approach this puzzle. I ask myself the question:

  "If I were going to become one of the people affected, but didn't know which one, which option would I prefer?" [1]

Let's work through this example. First, let's scale the numbers down to a manageable size: for $5, I could offer 10 children deworming treatments, or alternatively offer 1 child a bed net, which has a 1 in 600 chance of saving their life. To make this decision, I should compare three options:

  1. I don't donate, and so none of the 11 children receive any help
  2. Ten of the children receive deworming treatment, but the eleventh goes without a bed net
  3. One child receives a bed net, but the other ten go without deworming
If I didn't know which of these 11 children I was about to become, which choice would be more appealing?

Obviously 2 and 3 are better than 1 (no help), but deciding between 2 and 3 is not so simple. I am confident that a malaria net is more helpful than a deworming tablet, but is it ten times as useful?
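One way to see what's at stake in that comparison is to write out the expected-value arithmetic explicitly. The sketch below uses the GiveWell unit costs quoted above, but the wellbeing numbers are pure placeholders I've invented for illustration; settling on defensible values is exactly the hard part of the exercise.

```python
# Veil-of-ignorance comparison for the 11 children, using the GiveWell
# unit costs quoted above. The wellbeing values are invented placeholders.

NET_COST = 3000 / 600       # $5 per bed net
DEWORM_COST = 3000 / 6000   # $0.50 per year of deworming
P_LIFE_SAVED = 1 / 600      # chance a net averts an infant death

# Placeholder benefits in arbitrary "wellbeing units" (assumptions, not
# GiveWell estimates): how much is a saved life worth relative to a
# year free of worms?
VALUE_LIFE_SAVED = 1000.0
VALUE_DEWORM_YEAR = 1.0

CHILDREN = 11  # 1 potential net recipient + 10 potential deworming recipients

# Option 2: ten children dewormed, the eleventh gets no net.
ev_option_2 = 10 * VALUE_DEWORM_YEAR / CHILDREN

# Option 3: one child gets a net, the other ten go without deworming.
ev_option_3 = P_LIFE_SAVED * VALUE_LIFE_SAVED / CHILDREN

print(f"Option 2 (deworming): {ev_option_2:.3f} units per child")
print(f"Option 3 (bed net):   {ev_option_3:.3f} units per child")
```

On these made-up numbers deworming comes out ahead; a higher relative value on averting a death flips the answer. The point of the veil-of-ignorance question is to force you to pick those relative values honestly.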

This question has the virtue of:
  • Being 'fair', because in theory everyone's interests are given 'equal consideration'
  • Putting the focus on how much the recipients value the help, rather than how you feel about it as a donor
  • Motivating you to actually try to figure out the answer, by putting you in the shoes of the people you are trying to help.
You'll notice that this approach looks a lot like the veil of ignorance, a popular method among moral philosophers for determining whether a process or outcome is 'just'. It should also be very appealing to any consequentialist who cares about 'wellbeing', and thinks everyone's interests ought to be weighed equally. [2] It also looks very much like the ancient instruction to "love your neighbor as yourself".

In my experience, this thought experiment pushes you towards asking good concrete questions like:
  • How much would deworming improve my quality of life immediately, and then in the long term?
  • How harmful is it for an infant to die? How painful is it to suffer from a case of malaria?
  • What risk of death might I be willing to tolerate to get the long-term health and income gains offered by deworming?
  • And so on.
I find the main weakness of applying this approach is that thousands of people might be affected in some way by a decision. For instance, we should not only consider the harm to young children who die of preventable diseases, but also the grief and hardship experienced by their families as a result. But that's just the start: health treatments delivered today will change the rate of economic development in a country and therefore the quality of life of all future generations. A big part of the case for deworming is that it improves nutrition, and thereby raises education levels and incomes for people when they are adults - benefits that are then passed on to their children and their children's children.

This doesn't make the question the wrong one to ask; it just means that tracking and weighing the impact on the many people who might be affected by an action is beyond what most of us can do in a casual way. However, I find you can still make useful progress by thinking through and adding up the impacts on paper, or in a spreadsheet. [3] When you apply this approach, it is usually possible to narrow down your choices to just a few options, though in my experience you may then not have enough information to confidently decide among that remaining handful.
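The same tallying can be done in a few lines of code instead of a spreadsheet. In the sketch below, each affected party gets a row with a hypothetical wellbeing change, and each option's total is simply the sum. Every party and number listed here is a placeholder for illustration, not an estimate.

```python
# Minimal "impacts in a spreadsheet" sketch: list everyone affected,
# attach a placeholder wellbeing change, and sum per option.
from dataclasses import dataclass

@dataclass
class Impact:
    who: str
    wellbeing_change: float  # arbitrary units; positive = better off

def total_impact(impacts):
    """Sum the wellbeing changes across everyone affected."""
    return sum(i.wellbeing_change for i in impacts)

# All values below are invented placeholders, not estimates.
option_deworming = [
    Impact("ten dewormed children (health now)", 5.0),
    Impact("same children as adults (higher incomes)", 8.0),
    Impact("their future families (inherited gains)", 3.0),
]

option_bed_net = [
    Impact("child protected from malaria", 4.0),
    Impact("family spared grief (1/600 chance of death)", 2.0),
]

print(total_impact(option_deworming))  # 16.0
print(total_impact(option_bed_net))    # 6.0
```

The table of rows is the useful artifact here: it forces you to name each affected group explicitly, including the hard-to-see ones like future generations, before any numbers are compared.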

--

[1] A very similar, probably equivalent, question is: Which would I prefer if, after making the decision, I then had to sequentially experience the remaining lives of everyone affected by both options?

[2] One weakness is that this question is ambiguous about how to deal with interventions that change who exists (for instance, policies that raise or lower birth rates). If you assume that you must become someone - non-existence is not an option - you would end up adopting the 'average view', which actually has no supporters in moral philosophy. If you simply ignored anyone whose existence was conditional on your decision, you would be adopting the 'person affecting view', which itself has serious problems. If you do include those people in the population of people you could become, and add 'non-existence' as the alternative for the choices which cause those people not to exist, you would be adopting the 'total view'.

[3] Alternatively, if you were convinced that these long-term prosperity effects were the most important impact, and were similarly valuable across countries, you could try to estimate the increase in the rate of economic growth per $100 invested in different projects, and just seek to maximise that.


[anonymous] @ 2014-12-13T19:02 (+1)

This is a neat approach, Rob, and some form of it seems likely to be one of the best ways of thinking about this. I think the emphasis on putting yourself in the shoes of those you're trying to help rather than acting for yourself is particularly valuable. I think there is one extra difficulty that you haven't mentioned, though, which has to do with people having preferences different from yours.

Even if I'm able to work out that, given a random chance of being one of the participants I would prefer 2 to 3, it doesn't necessarily follow that 2 is preferable to 3 in an objective sense. It is interesting to imagine what the participants themselves would choose behind your veil (if they were fully informed about the tradeoffs etc.).

In many cases, one finds that people tend to think that their own condition is less bad than people who don't have the condition do. (That is, if you ask sighted people how bad it would be to be blind, they say it would be much worse than blind people do when asked.) This suggests that, behind a veil of ignorance where self-interest is not at play, those at risk of malaria but not worms might regard treating worms as most important, while those at risk of worms but not malaria might prioritise treating malaria. It then seems hard to know whom to prioritise.

There's also the eternal problem with imagining what one would choose - people often choose poorly. I assume you're building in some assumption that the choice is made under the best possible conditions. It may be, though, that your values depend on your decision-making conditions.

Of course, you still have to choose and, like you say, it's clear that 2 and 3 are both preferable to 1. I think this tool will get you answers most of the time, and can focus your mind on important questions, but there's an intrinsic uncertainty (or maybe indeterminacy) about the ordering.