Cognitive and emotional barriers to EA's growth

By Geoffrey Miller @ 2018-03-09T18:59 (+14)

This morning I gave a colloquium to my Psychology Department here at University of New Mexico. Most of the 30+ audience members had never heard of EA, although a few had a vague idea about it.

I analyzed 10 cognitive and emotional barriers that people face in accepting EA approaches to moral activism, from confirmation bias and speciesism to scope-insensitivity and Theory of Mind failures in understanding likely AGI systems.

I also made a pitch for more psychology grad students and faculty to get involved in EA, to share our expertise on human nature, statistics, research design, public outreach, program evaluation, mental health and welfare issues, etc.

The PowerPoint is here if anyone's interested: https://geoffrey-miller-y5jr.squarespace.com/s/EA-talk-march09-public-shorter-tcdh.pptx

I've proposed giving a similar but shorter talk at the Human Behavior and Evolution Society (HBES) conference this June in Amsterdam, which is the main evolutionary psychology research meeting, so I'd appreciate any feedback on this version.


undefined @ 2018-03-10T16:12 (+5)

Thanks. This will be useful for a future presentation, although I am going to modify challenges 3-6. Using the word "utilitarian" seems limiting. EA has utilitarian/consequentialist underpinnings, but not a full-blown subscription to only that moral system (i.e., it is not exclusive). But I'm sure you knew that already. (See MacAskill's comment on 'Effective Altruism' as utilitarian equivocation.)

Off the top of my head, I'm thinking something more along the lines of maximizing impact, and the empathy-altruism hypothesis as it relates to meaning well (benevolence) versus actually doing good (beneficence). (Additionally, I'm going to add an outline. =)

Also, about the slide saying Effective Altruism as a movement was founded in 2011: I'm guessing that date comes from 80,000 Hours, because GWWC has been around since 2009, and the main idea has been around since at least 1972.

undefined @ 2018-03-15T23:55 (+2)

When people ask when EA "started" I'm never sure what to say. But I imagine Geoffrey is referring to when we chose the name with "2011" (see http://effective-altruism.com/ea/5w/the_history_of_the_term_effective_altruism/), plus a quick nod to the longer history in Singer's work with "+ Peter Singer".

undefined @ 2018-03-12T18:13 (+1)

Good points. I don't think "(benevolence)"/"(beneficence)" adds anything, either. Beneficence is effectively EA lingo, and you're not going to draw people in by teaching them lingo. Do that a little further into onboarding.

undefined @ 2018-03-11T22:45 (+4)

On slide 10 (EA challenge 1), I think you meant “that” rather than “than”.

Good luck! Also, I'm new to this forum and would appreciate it if I could get some likes so that I could make a post! Thanks.

undefined @ 2018-03-09T19:04 (+3)

Great content. I just pored through it looking for feedback to give, but the content is really great. My only note: if this is going to be a presentation in June, I think it could be a lot more engaging with less written text on the slides.

undefined @ 2018-03-09T21:49 (+2)

Yes, I always put too much text on slides the first few times I present on a new topic, and then gradually strip it away as I remember better what my points are. Thanks!

undefined @ 2018-03-12T01:35 (+2)

Great stuff! A few quibbles:

undefined @ 2018-03-12T06:06 (+1)

You write, "Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree)."

One thing I am sure of is that effective altruism endorses helping the greater number, all other things being equal (by which, for simplicity's sake, I here mean only that the quality of the pain involved is equal). So, for example, if $10 can be used to save either persons A and B each from some pain or person C from a qualitatively identical pain, EA would say that it is morally better to save the two over the one.

Now, this in itself does not mean that effective altruism believes that it makes sense to

  1. sum together certain people’s pain and to compare said sum to the sum of other people’s pain in such a way as to be able to say that one sum of pain is in some sense greater/equal to/lesser than the other, and

  2. say that the morally best action is the one that results in the least sum of pain and the greatest sum of pleasure (which is more-or-less utilitarianism)

(Note that 2 assumes the intelligibility of 1; see below. A rough formal sketch of both claims follows.)
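To make the two claims concrete, here is a minimal formal sketch (the notation is mine, not anything from the talk or the comments above). Write u_i(a) for the pleasure (positive) or pain (negative) that action a brings to person i. Then:

  1. says that the total U(a) = \sum_i u_i(a) is well defined, and that U(a) and U(b) are comparable for any two available actions a and b;

  2. says that the right action is a* = \arg\max_a U(a), i.e., roughly, the action whose outcome contains the least sum of pain and the greatest sum of pleasure.

Claim 2 is unintelligible unless claim 1 holds.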

The reason the entailment fails is that there are also non-aggregative ways to justify why it is better to save the greater number, at least when all other things are equal. For a survey of such ways, see "Saving Lives, Moral Theory, and the Claims of Individuals" (Otsuka, 2006). However, I'm not aware that effective altruism justifies saving the greater number, all else equal, via these non-aggregative ways. Likely, it is purposely silent on this issue. Ben Todd (in private correspondence) informed me that "effective altruism starts from the position that it's better to help the greater number, all else equal. Justifying that premise in the first place is in the realm of moral philosophy."

If that's indeed the case, we might say that all effective altruism says is that the morally better course of action is the one that helps more people, everything else being equal (e.g. when the suffering to each person involved in the choice situation is qualitatively the same), and (presumably) also sometimes even when everything isn't equal (e.g. when the suffering to each person in the bigger group might be somewhat less painful than the suffering to each person in the smaller group).

Insofar as effective altruism isn't in the business of justification, perhaps moral theories shouldn't be mentioned at all in a presentation about effective altruism. But inevitably, people considering joining the movement are going to ask why it is better to save the greater number, all else equal (e.g. A and B instead of C), or even sometimes when all else isn't equal (e.g. one million people each from a relatively minor pain instead of one other person from a relatively greater pain). And I think effective altruists ask themselves that question too.

The OP might have asked it as well, and thought utilitarianism offers the natural justification: it is better to save A and B instead of C (and the million instead of the one) because doing so results in the least sum of pain. So utilitarianism clearly offers a justification (though one might question whether it is an adequate justification). On the other hand, it is not clear to me at all how other moral theories propose to justify saving the greater number in these two kinds of choice situations. So it is not surprising that the OP has associated utilitarianism with effective altruism. I am sympathetic.

A bit more on utilitarianism: roughly speaking, according to utilitarianism (or the principle of utility), among all the actions we can undertake at any given moment, the right action (i.e. the action we ought to take) is the one that results in the least sum of pain and the greatest sum of pleasure.

To figure out which action is right among a range of possible actions, we are, for each possible action, to add up all its resulting pleasures and pains. We then compare the resulting states of affairs to see which contains the least sum of pain and the greatest sum of pleasure. For example, suppose you can save either one million people each from a relatively minor pain, or one other person from a relatively greater pain, but not both. Then you are to add up all the minor pains that would result from saving the single person, add up all the major pains (in this case, just one) that would result from saving the million people, and compare the two states of affairs to see which contains the lesser sum of pain.
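As a worked illustration (the disutility numbers are mine, invented purely for this example): suppose each minor pain counts as 1 unit of disutility and the one major pain counts as 1,000 units. Saving the one person leaves the million minor pains standing, while saving the million leaves only the major pain, so the comparison is

  10^6 \times 1 = 1,000,000 units of pain  versus  1 \times 1,000 = 1,000 units of pain.

Since 1,000,000 > 1,000, utilitarianism prescribes saving the million, and it would keep prescribing that however intense the one person's pain, so long as its disutility falls below the sum of the minor pains.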

From this we can clearly see that utilitarianism assumes that it makes sense to aggregate distinct people's pains and to compare these sums in such a way as to be able to say, for example, that the sum of pain involved in a million people's minor pains is greater (in some sense) than one other person’s major pain. Of course, many philosophers have seriously questioned the intelligibility of that.

undefined @ 2018-03-15T02:43 (+1)

Great work and I really enjoyed reading this presentation.

On slide 27, where did you get the estimates for "Human-caused X-risks are thousands of times more likely per year than natural X-risks"?

I agree with this generally, but was wondering if you have a source for the "thousands of times more" figure.

undefined @ 2018-03-12T11:00 (+1)

Do you offer any recommendations for communicating utilitarian ideas based on Everett's research or someone else's?

For example, in Everett's 2016 paper the following is said:

"When communicating that a consequentialist judgment was made with difficulty, negativity toward agents who made these judgments was reduced. And when a harmful action either did not blatantly violate implicit social contracts, or actually served to honor them, there was no preference for a deontologist over a consequentialist."

David_Moss @ 2018-03-12T21:13 (+1)

I imagine more or less anything which expresses conflictedness about taking the 'utilitarian' decision, and/or expresses feeling the pull of the contrary deontological norm, would fit the bill for what Everett is saying here. That said, I'm not convinced that Everett (2016) is really getting at reactions to "consequentialism" (see here: 1, 2).

I think that this paper by Uhlmann et al. does show that people judge negatively those who make utilitarian decisions, though, even when they judge that the utilitarian act was the right one to take. Expressing conflictedness about the utilitarian decision may therefore be a double-edged sword. It may well offset negative character evaluations of the person taking the utilitarian decision, but plausibly it may also reduce whatever credence people attached to the utilitarian act being the right one to take.

My collaborators and I did some work relevant to this, on the negative evaluation of people who make their donation decisions in a deliberative rather than an explicitly empathic way. The most relevant of our experiments looked at the evaluation of people who both deliberated about the cost-effectiveness of the donation and expressed empathy towards the recipient of the donation simultaneously. The empathy-plus-deliberation condition was close to the empathy condition in moral evaluation (see figure 2: https://osf.io/d9t4n/) and closer to the deliberation condition in evaluations of reasonableness.

undefined @ 2018-03-10T16:23 (+1)

This is well done! Acknowledging and talking about what makes hyper-rationalism repulsive to many people (mostly very unfairly!) is constructive and interesting.

Maybe out of scope, but in the introduction section describing EA, I'd probably also include a slide or two on some of the more reasonable criticisms of typical EA beliefs and behaviors, and separate those from the list of 10 barriers of bias and irrational intuition.

Doing that would better set aside the question of the merits of the EA approach, and make it easier to focus on these other blockers to wider adoption. It would also make the presentation come off as more even-handed, rather than "here are the bad reasons people don't support what I support". That might get you more buy-in from the more skeptical members of the audience, along with prompting some questions about how to improve EA from people who do find the answers intuitive.