Stefan Schubert: Why aren't people donating more effectively?

By EA Global @ 2018-06-08T07:15 (+11)

This is a linkpost to https://www.youtube.com/watch?v=QyvzbW0XKmY&list=PLwp9xeoX5p8P3cDQwlyN7qsFhC9Ms4L5W&index=13


In this talk from EA Global 2018: San Francisco, Dr. Stefan Schubert discusses the psychology of charitable giving. He covers general models for thinking about why people might choose to give ineffectively, recent research on the topic, and what we may be able to do about it.

A transcript of Stefan's talk is below, which we have lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.

The Talk

I'm going to talk about the psychology of effective altruists, and specifically about why people don't donate effectively. I'll be presenting joint work that I'm pursuing together with my colleague Lucius Caviola, who is a PhD student at Oxford University, under the leadership of Nadira Faber, who is the head of the Social Behaviour and Ethics Lab, where we work.

The puzzle of ineffective giving

The puzzle of ineffective giving is that people donate large sums to charity, which is evidence that they actually want to help others. And yet, a large proportion of donations are very ineffective, at least if effective altruists are right. Donating very ineffectively seems to defeat the very purpose of giving. The donations aren't actually helping others, at least not nearly as much as they could. So, we have a puzzle, which gives rise to my first question: "Why don't people donate effectively?" And after that, my second question: "How can we make people donate more effectively?"

I'll first focus on the 'why' question. There are two main explanations. The first is that people don't donate effectively because they actually don't really want to. Instead of effectiveness, they have some other goal with their giving, such as trying to obtain some positive emotion or improve their reputation. We'll call this the preference-based explanation.

The second main explanation is that people want to donate effectively, but they don't know how. They suffer from false beliefs or cognitive biases or something like that. We'll call this the belief-based explanation. So on one hand we have preference-based explanations and on the other hand we have belief-based explanations.

In this talk, I will provide some evidence for both of these explanations, and in particular I'll present some research of our own which pertains primarily to the belief-based explanations. But I'll start by focusing on the preference-based explanations, and delve a little more deeply into them.

Preference-based explanations

There are two kinds of preference-based explanations, on our conceptualization. The first has to do with "warm glow". It says that when people are donating, they're optimizing for warm glow rather than effectiveness.

Warm glow is a positive emotion that people obtain from showing empathy or generosity. In particular, people obtain warm glow from giving spontaneously, and a bit capriciously, which is obviously not very conducive to effectiveness. As a result, warm glow giving often tends not to be very effective.

Another preference-based explanation has to do with reputation, or signaling. People signal certain traits in order to improve their reputation. On this model, when people donate, third parties observe and judge them. So people want to give in a way that will earn them a favorable judgment. If third parties judge our character traits, and we, for instance, display spontaneous empathy, then people tend to make the judgment "this is a good person" and reward that, whereas they may not reward effective donations to the same extent.

I should also clarify that the warm glow giving explanation and the reputational signaling explanation aren't necessarily mutually exclusive. When people are engaging in signaling behavior, they may very well at the same time obtain the warm glow.

Let's briefly cover some evidence of warm glow giving.

There is one paper that was recently published in Psychological Science - a prestigious psychology journal - showing that people donate to pet projects, or projects that they feel strongly about, even when told that other charities or projects are more effective. In that study, participants were told that arthritis research is more effective than cancer research, and still the majority chose to donate to cancer research.

As evidence for signaling or reputation-based giving, it has been shown by a number of effective altruism-aligned researchers that deliberate donors are seen as having a worse moral character than donors who give spontaneously and empathetically. In that case, no wonder people donate spontaneously rather than deliberately!

If the problem is that people prefer to give in ineffective ways, what can we do to improve their giving and make it more effective? Well, one thing that we can do is to try to change the norms and culture of giving. Specifically, we may try to improve third parties' judgments so that they reward effective giving rather than ineffective giving.

Here, there are two strategies. One is to try to improve everyone's judgments - to get everyone on board with effective giving and get everyone to reward other people for donating effectively. However, that may not be tractable, because many people seem not to have a disposition in favor of effective giving. So another approach might be to selectively pick those people who already have views which are conducive to effective giving, bring them together, and form an effectiveness-minded culture where everyone is rewarding everyone else for donating effectively. And to some extent that seems to be exactly what the effective altruist community is doing: rewarding each other for donating effectively, rather than spontaneously and capriciously.

Belief-based explanations

Belief-based explanations also come in different forms. One has to do with cognitive biases - there are many biases which affect the effectiveness of giving. One very well-known bias is scope neglect: people often tend to be insensitive to the size of the opportunity to help. In a well-known study, one group of people was asked how much they would donate to help 200 birds, and another group was asked how much they would donate to help 2,000 birds. The two groups chose to donate similar amounts, even though in the second case they could help ten times as many birds. This seems to be a fairly pervasive phenomenon, which obviously reduces the effectiveness of donations.

Another kind of bias has to do with people focusing on overhead or administration costs, rather than effectiveness, when they evaluate different charities. This is something that Lucius has published a paper on, together with his colleagues, which I will present in some detail in a minute. But first I will talk a little about another kind of belief-based explanation, which has to do not with general cognitive biases, but rather with false beliefs and ignorance about specific facts relating to charity effectiveness.

There's much less research on this, so this is inevitably going to be somewhat more speculative, but we think we have some reason to believe that people do have false beliefs and ignorance about giving opportunities. One potential false belief is that charities just don't differ much in terms of effectiveness. This is something we've actually started studying, so I will present a relevant study in a minute. Other potential false beliefs are that differences in effectiveness are mostly driven by differences in overhead ratios, or that charity effectiveness can't be measured.

So, if it's the case that people suffer from these kinds of false beliefs, and it is also the case that this is something that drives ineffective giving, then it seems like we might be able to perform a straightforward intervention. We can just inform people that their misconceptions are misconceptions in order to dispel them, and so, perhaps, improve donation behavior. I will talk more about that later in this talk. But now let's move on to our own research.

Research and results

There's a paper from 2014 by Lucius and his colleagues on overhead aversion. The issue here is that when donors are evaluating charities, they might use different criteria. One is overhead ratio, meaning the percentage of money that goes into administration. Another is cost-effectiveness, meaning the positive outcome per dollar donated - for instance, the number of lives saved. And, of course, effective altruists think that what one should look at is cost-effectiveness, but in fact people often look at overhead ratio instead. So, why is that? Well, one reason might be that in separate evaluation, as psychologists call it - when you're presented with just an individual option, in this case an individual charity - it can be very hard to judge that charity's relative effectiveness if you don't already know much about charity effectiveness.

For example, how effective is a charity that saves, say, a hundred lives for a million dollars? There is no intuitive way to tell. Whereas if you're told that a charity has a 50 percent overhead ratio, you can immediately grasp that this is a high level of overhead. So for that reason, people often focus on overhead in separate evaluation - or at least, this was Lucius and his colleagues' hypothesis. They ran a study with two different charities, one of which had both a higher level of overhead and a higher level of effectiveness than the other. They then compared them in two modes, as it were. One was separate evaluation, where one group of participants was given the opportunity to donate - or not - to charity A only, and another group was given the same opportunity for charity B only. The other was joint evaluation, where a third group was presented with the opportunity to donate to either charity A or charity B.

The hypothesis predicted that in separate evaluation, people would, on aggregate, go for the low-overhead charity, because it would be easier to evaluate. In joint evaluation, by contrast, the hypothesis predicted that because the group could compare both effectiveness and overhead, they would realize that effectiveness is what really matters, and therefore go for the more effective charity. This hypothesis was supported by the data.

So, based on this study, it seems that people do focus on effectiveness when charities are made appropriately comparable - they don't do so when they're presented with individual charities in isolation, but they do when charities can be compared directly. The policy implication might be that we should increase the comparability of charities: present multiple charities at the same time, and make it easy to compare their relative cost-effectiveness.
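To make the contrast concrete, here is an illustrative calculation with hypothetical numbers (these figures are not from the paper):

$$\text{Charity A: } \frac{\$100{,}000 \text{ donated}}{5 \text{ lives saved}} = \$20{,}000 \text{ per life, with } 20\% \text{ overhead}$$

$$\text{Charity B: } \frac{\$100{,}000 \text{ donated}}{1 \text{ life saved}} = \$100{,}000 \text{ per life, with } 5\% \text{ overhead}$$

In separate evaluation, "20% overhead" is easy to judge as high, while "5 lives per $100,000" is hard to place without a benchmark. In joint evaluation, the comparison makes it clear that charity A does five times as much good per dollar despite its higher overhead.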

Let's move on to another study that we're currently running, which has to do with misconceptions about charity effectiveness - specifically, misconceptions about the variance in effectiveness across different charities. We've run two studies so far. In the first study, we elicited people's guesses about the difference in effectiveness between the most effective charity and an average charity. This is something that Spencer Greenberg already studied in a pilot study, and the result there was that the median ratio was 1.6. This is much lower than what EAs think, so we wanted to replicate this finding and publish it in an academic journal. In the second study, we wanted to see whether correcting this misconception increases people's tendency to prioritize effectiveness.

In study one, then, participants were asked about the relative effectiveness of the most effective versus the average charity, and Spencer's results were replicated: the median estimate was 1.5, very close to what Spencer found. So the most effective charity was said, by the median respondent, to be 1.5 times more effective than the average charity. The next question was whether correcting this misconception would have an effect on people's donation behavior.

The way we studied this was as follows: participants were asked how they would donate $100, which they could split between the most effective charity and an average charity. From previous studies, we know that people often do split when given the chance to do so (although arguably, you should just donate everything to the most effective charity). But we thought that if people were told that the most effective charity is, say, 100 times more effective than an average charity, then this splitting tendency would be reduced, and they would donate almost everything to the most effective charity.

We had five different conditions, and in four of them, participants were informed about the relative effectiveness of the most effective charity compared to an average charity; this ranged from 1.05 times to 100 times as effective. The fifth condition was a control group, which was not given any information about relative effectiveness. In this study, our hypothesis was not supported. Telling participants that the most effective charity was 100 times more effective than an average charity didn't have much of an effect: they still donated 30 dollars out of their hundred to the average charity. This was quite surprising to us - even though the average charity is only one percent as effective as the most effective charity, people still want to donate 30 percent of the money to it.
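To see how much impact that split gives up, here is a quick illustrative calculation, assuming (hypothetically) that the most effective charity produces 100 units of good per dollar and the average charity produces 1 unit per dollar:

$$\text{All \$100 to the most effective charity: } 100 \times 100 = 10{,}000 \text{ units of good}$$

$$\text{A \$70/\$30 split: } 70 \times 100 + 30 \times 1 = 7{,}030 \text{ units of good}$$

On these assumptions, the split forgoes roughly 30 percent of the achievable impact, even though the $30 sent to the average charity accomplishes almost nothing by comparison.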

Overall, there were quite small differences across the five different conditions, so our manipulation didn't have that much of an effect. So, what's the upshot?

Well, people do underestimate the variance in charities' effectiveness, so our hypothesis that they suffer from this misconception was supported. This is consistent with Spencer's earlier findings, too. But informing people about relative effectiveness doesn't make a big difference to their giving behavior. So, what might be the explanation? Well, one possibility is that getting rid of one misconception might not be sufficient, because of additional obstacles. For instance, the preference to split donations might be very strong, or there might be some sort of scope insensitivity at work here.

Conclusions

To sum up: I had two questions. One was: "why don't people donate effectively?" There were two responses to that. One was that they don't want to - that was the preference-based explanation - and the second was that they want to but don't know how - that was the belief-based explanation. So far, we haven't amassed enough evidence to say which of these is the more important, so more research is needed. On the second question, "how can we make people donate more effectively?", the response will depend on the answer to the first question. If the preference-based explanation is right, then we saw that better norms might be one way to go, whereas if the belief-based explanation is right, then better framings which don't trigger cognitive biases might be the way to go.

For instance, we saw that increasing the comparability of different charities might help people to avoid cognitive biases. Another approach might be simply to inform people in order to dispel misconceptions, and then hope this will improve their donation behavior. But in the study that we ran, we saw that this didn't have that much of an effect, and we speculated that this might be because there are additional obstacles to effective giving, such as other misconceptions or the strong preference to split.

In light of our findings - that there seem to be many misconceptions, and that it may be very difficult to fix them all - we might wonder whether there is some kind of hack by which we can circumvent all of these different misconceptions. One such hack might be to encourage deference to experts: rather than having people figure out for themselves all the factors relevant to assessing the relative effectiveness of different charities, we just have them defer to experts. This is a model with a great precedent in science. I am ignorant about scientific facts in many fields, but I can still act on scientific knowledge appropriately because I defer to scientists.

This model seems to work very well in science, so perhaps it is the way to go with respect to charitable giving, too.

If you want to run studies in these fields or have study ideas, please contact us. We're also looking to diversify our donor base, so if you want to help out with that, please contact us as well. Thank you very much.