The marketing gap and a plea for moral inclusivity

By MichaelPlant @ 2017-07-08T11:34 (+40)

In this post, I make three points. First, I note there seems to be a gap between what EA markets itself as being about (effective poverty reduction) and what many EAs really believe is important (poverty isn't the top priority), and that this marketing gap is potentially problematic. Second, I propose a two-part solution. One part is that EA outreach-y orgs should be upfront about what they think the most important problems are. The other is that EA outreach-y orgs, and, in fact, the EA movement as a whole, should embrace 'moral inclusivity': we should state what the most important problems are for a range of moral outlooks but not endorse a particular moral outlook. I anticipate some will think we should adopt 'moral exclusivity' instead, and just endorse or advocate the one view. My third point is a plea for moral inclusivity. I suggest even those who strongly consider one moral position to be true should still be in favour of EA being morally inclusive, as a morally inclusive movement is likely to generate better outcomes by the standards of everyone's individual moral theory. Hence moral inclusivity is the dominant option.

Part 1

One thing that's been bothering me for a while is the gap between how EA tends to market itself and what lots of EAs really believe. I think the existence of this gap (or even the perception of it) is probably harmful and also probably avoidable. I don't think I've seen this discussed elsewhere, so I thought I'd bring it up here.

To explain, EA often markets itself as being about helping those in poverty (e.g. see GWWC's website) and exhorts the general public to give their money to effective charities in that area. When people learn a bit more about EA, they discover that only some EAs believe poverty is the most important problem. They realise many EAs think we should really be focusing on the far future, and AI safety in particular, or on helping animals, or on finding ways to improve the lives of presently existing humans that aren't to do with alleviating poverty, and that's where those EAs put their money and time.

There seem to be two possible explanations for the gap between EA marketing and EA reality. The first is historical. Many EAs were inspired by Singer's Famine, Affluence and Morality, which centres on saving a drowning child and preventing those in poverty from dying of hunger. Poverty was the original focus. Now, on further reflection, many EAs have decided the far future is the most important area but, given its anti-poverty genesis, the marketing/rhetoric is still about poverty.

The second is that EAs believe, rightly or wrongly, that talking about poverty is a more effective marketing strategy than talking about comparatively weird stuff like AI and animal suffering. People understand poverty, and it's an easier place to start before moving on to the other things.

I think the gap is problematic. If EA wants to be effective over the long run, one thing that's important is that people see it as a movement of smart people with high integrity. I think it's damaging to EA if there's the perception, even if this perception is false, that effective altruists are the kind of people who say you should do one thing (give money to anti-poverty charities) but themselves believe in and do something else (e.g. that AI safety is the most important).

I think this is bad for the outside perception of EA: we don't want to give critics of the movement any more ammo than necessary. I think it potentially disrupts within-community cohesion too. Suppose person X joins EA because they were sold on the anti-poverty line by outreach officer Y. X then becomes heavily involved in the community and subsequently discovers Y really believes something different from what X was originally sold on. In this case, the new EA X would be likely to distrust outreach officer Y, and maybe others in the community too.

Part 2

It seems clear to me this gap should go. But what should we do instead? I suggest a solution in two parts.

First, EA marketing should tally with the sort of things EAs believe are important. If we really think animals, AI, etc. are what matters, we should lead with those, rather than suggesting EA is about poverty and then mentioning other cause areas.

This doesn't quite settle the matter. Should the marketing represent what current EAs believe is important? This is problematically circular: it's not clear how to identify who counts as an 'EA' except by what they believe. In light of that, maybe the marketing should just represent what the heads or members of EA organisations believe is important. This is also problematic: what if EA orgs' beliefs substantially differ from the rest of the EA community (however that's construed)?

Here, we seem to face a choice between what I'll call 'moral inclusivism', stating what the most important problems are for a range of moral outlooks but not endorsing a particular moral outlook, and 'moral exclusivism', picking a single moral view and endorsing that.

With this choice in mind, I suggest inclusivism. I'll explain how I think this works in this section and defend it in the final one.

Roughly, I think the EA pitch should be "EA is about doing more good, whatever your views". If that seems too concessive, it could be welfarist – "we care about making things better or worse for humans and animals" – but neutral on what makes things better or worse – "we don't all think happiness is the only thing that matters" – and neutral on population ethics – "EAs disagree about how much the future matters. Some focus on helping current people, others are worried about the survival of humanity, but we work together wherever we can. Personally, I think cause X is the most important because I believe theory Y...".

I don't think all EA organisations need to be inclusive. What the Future of Humanity Institute works on is clearly stated in its name, and it would be weird if it started claiming the future of humanity was unimportant. I don't think individual EAs need to pretend to endorse multiple views either. But I think the central, outreach-y ones should adopt inclusivism.

The advantage of this sort of approach is it allows EA to be entirely straightforward about what effective altruists stand for and avoids even the perception of saying one thing and doing another. Caesar’s wife should be above suspicion, and all that.

An immediate objection is that this sort of approach - front-loading all the 'weirdness' of EA views when we do outreach - would be off-putting. I think this worry, in so far as it actually exists, is overblown and also avoidable. Here's how I think the EA pitch goes:

-Talk about the drowning child story and/or the comparative wealth of those in the developed world.

-Talk about ineffective and effective charities.

-Say that many people became EAs because they were persuaded of the idea that we should help others when it comes at only a trivial cost to ourselves.

-Point out people understand this in different ways because of their philosophical beliefs about what matters: some focus on helping humans alive today, others on animals, others on trying to make sure humanity doesn't accidentally wipe itself out, etc.

-For those worried about how to 'sell' AI in particular, I recently heard Peter Singer give a talk in which he said something like (I can't remember exactly): "some people are very worried about the risks from artificial intelligence. As Nick Bostrom, a philosopher at the University of Oxford, pointed out to me, it's probably not a very good idea, from an evolutionary point of view, to build something smarter than ourselves." At which point the audience chuckled. I thought it was a nice, very disarming way to make the point.

In conclusion, I think the apparent gap between rhetoric and reality is problematic and also avoidable. Organisations like GWWC should make it clearer that EAs support causes other than global poverty.

Part 3

One might think EA organisations, faced with the inclusivist-exclusivist dilemma, should opt for the latter. You might think most EAs, at least within certain organisations, do agree on a single moral theory, so endorsing moral inclusivity would be dishonest. Instead, you could conclude we should be moral exclusivists, fly the flag for our favourite moral theory, lead with it and not try to accommodate everyone.

From my outsider's perspective, I think this is the sort of direction 80,000 Hours has started to move in more recently. They are now much more open and straightforward about saying the far future in general, and AI safety in particular, is what really matters. Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don't worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do or pure time discounting).

An obvious worry about being a moral exclusivist and picking one moral theory is that you might be wrong; if you're endorsing the wrong view, that's really going to set back your ability to do good. But given you have to make some choices, let's put this worry aside. I'm now going to make a plea for making/keeping EA morally inclusive whatever your preferred moral views are. I offer three reasons.

1.

Inclusivity reduces group think. If EA is known as a movement where people believe view X, people who don’t like view X will exit the movement (typically without saying anything). This deprives those who remain of really useful criticism that would help identify intellectual blind spots and force the remainers to keep improving their thinking. This also creates a false sense of confidence in the remainers because all their peers now agree with them.

Another part of this is that, if you want people to seek the truth, you shouldn't give them incentives to be yes-humans. There are lots of people who like EA, want to work in EA orgs and want to be liked by other (influential) EAs. If people think they will be rewarded (e.g. with jobs) for adopting the 'right' views and signalling them to others, they will probably slide towards what they think people want to hear, rather than what they think is correct. Responding to incentives is a natural human thing to do, and I very much doubt EAs are immune to it. Similar to what I said in part 1, even a perception that there are 'right' answers can be damaging to truth seeking. Like a good university seminar leader, EA should create an environment where people feel inspired to seek the truth, rather than just agree with the received wisdom, as honest truth seeking and disagreement seem most likely to reveal the truth.

2.

Inclusivity increases movement size. If we only appeal to a section of the 'moral market' then there won't be so many people in the EA world. Even if people have different views, they can still work together, engage in moral trade, personally support each other, share ideas, etc.

I think organisations working on particular, object-level problems need to be value-aligned to aid co-ordination (if I want to stop global warming and you don't care, you shouldn't join my global warming org), but this doesn't seem relevant at the level of a community. Where people meet at EA hubs, EA conferences, etc. they're not working together anyway. Hence this isn't an objection to EA outreach-y orgs being morally inclusive.

3.

Inclusivity minimises in-fighting. If people perceive there's only one accepted and acceptable view, then they will spend their time fighting the battle of hearts and minds to ensure that their view wins, rather than working on solving real-world problems themselves. Or they'll split, stop talking to each other and fail to co-ordinate. Witness, for instance, the endless schisms within churches about doctrinal matters, like gay marriage, and the seemingly limited interest they have in helping other people. If people instead believe there's a broad range of views within a community, that this is okay, and that there's no point fighting for ideological supremacy, they can instead engage in dialogue, get along and help each other. More generally, I think I'd rather be in a community where people thought different things and this was accepted, rather than one where there were no disagreements and none allowed.

On the basis of these three reasons, I don't think even those who believe they've found the moral truth should want EA as a whole to be morally exclusive. Moral inclusivity seems to increase the ability of effective altruists to collectively seek the truth and work together, which looks like it leads to more good being done from the perspective of each moral theory.

What follows from parts 1 and 2 is that, for instance, GWWC should close the marketing gap and be more upfront about what EAs really believe. People should not feel surprised about what EAs value when they get more involved in the movement.

What follows from part 3 is that, for instance, 80,000 Hours should be much more morally inclusive than they presently are. Instead of "these are the most important things", it should be "these are the most important things if you believe A, but not everyone believes A. If you believe B, you should think these are the important things [new list pops up]. As an organisation, we don't take a stand on A or B, but here are some arguments you might find relevant to help you decide".

Here end my pleas for moral inclusivity.

There may be arguments for keeping the marketing gap and adopting moral exclusivism that I've not considered, and I'd welcome discussion.

Edit (10/07/2017): Ben Todd points out in the comment below that 1) 80k have stated their preferred view since 2014 in order to be transparent and that 2) they provide a decision tool for those who disagree with 80k's preferred view. I'm pleased to learn the former and admit my mistake. On the latter, Ben and I seem to disagree whether adding the decision tool makes 80k morally inclusive or not (I don't think it does).


undefined @ 2017-07-09T00:41 (+21)

It's worth noting that the 2017 EA Survey (data collected but not yet published), the 2015 EA Survey, and the 2014 EA Survey all have global poverty as the most popular cause (by plurality) among EAs in these samples, and by a good-sized margin. So it may not be the case that the EA movement is misrepresenting itself by focusing on global poverty (even if movement leaders may think differently from the movement as a whole).

Still, it is indeed the case that causes other than global poverty are, taken together, more popular than global poverty, which would potentially argue for a more diverse presentation of what EA is about, as you suggest.

undefined @ 2017-07-09T10:29 (+5)

I was aware of this and I think this is interesting. This was the sort of thing I had in mind when considering that EA orgs/leaders may have a different concept of what matters from non-institutional EAs.

I think this is also nicely in tension with Ben Todd's comments above. There's something strange about 80k leaning on the far future as what matters while most EAs want to help living people.

undefined @ 2017-07-08T21:00 (+17)

Hi Michael,

I agree the issue of people presenting EA as about global poverty when they actually support other causes is a big problem.

80k stopped doing this in 2014 (not a couple of months ago like you mention), with this post: https://80000hours.org/2014/01/which-cause-is-most-effective-300/ The page you link to listed other causes at least as early as 2015: https://web.archive.org/web/20150911083217/https://80000hours.org/articles/cause-selection/

My understanding is that the GWWC website is in the process of being updated, and the recommendations on where to give are now via the EA Funds, which include 4 cause areas.

These issues take a long time to fix though. First, it takes a long time to rewrite all your materials. Second, it takes people at least several years to catch up with your views. So, we're going to be stuck with this problem for a while.

In terms of how 80,000 Hours handles it:

Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don't worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do or pure time discounting).

This is a huge topic, but I disagree. Here are some quick reasons.

First, you should value the far future even if you only put some credence on theories like total utilitarianism.

e.g. Someone who had 50% credence in the person affecting view and 50% credence in total utilitarianism, should still place significant value on the far future.

This is a better approximation of our approach - we're not confident in total utilitarianism, but we put some weight on it due to moral uncertainty.

Second, even if you don't put any value on the far future, it wouldn't completely change our list.

First, the causes are assessed on scale, neglectedness and solvability. Only scale is affected by these value judgements.

Second, scale is (to simplify) assessed on three factors: GDP, QALYs and % xrisk reduction, as here: https://80000hours.org/articles/problem-framework/#how-to-assess-it

Even if you ignore the xrisk reduction column (which I think would be unreasonable due to moral uncertainty), you often find the rankings don't change that much.

E.g. Pandemic risk gets a scale score of 15 because it might pose an xrisk, but if you ignored that, I think the expected annual death toll from pandemics could easily be 1 million per year right now, so it would still get a score of 12. If you think engineered pandemics are likely, you could argue for a higher figure. So, this would move pandemics from being a little more promising than regular global health, to about the same, but it wouldn't dramatically shift the rankings.

I think AI could be similar. It seems like there's a 10%+ chance that AI is developed within the lifetimes of the present generation. Conditional on that, if there's a 10% chance of a disaster, then the expected death toll is 75 million, or 1-2 million per year, which would also give it a score of 12 rather than 15. But it would remain one of the top ranked causes.
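As a rough back-of-the-envelope check of those figures (the ~7.5 billion world population and the roughly 50-year horizon are assumptions used to reproduce the per-year number, not stated explicitly above):

```python
# Back-of-the-envelope check of the AI expected-death-toll figures above.
# Assumed inputs: ~7.5bn world population, ~50-year horizon (not stated explicitly above).
p_ai_this_generation = 0.10    # 10%+ chance AI is developed within present lifetimes
p_disaster_given_ai = 0.10     # 10% chance of disaster, conditional on AI being developed
world_population = 7.5e9
horizon_years = 50

expected_deaths = p_ai_this_generation * p_disaster_given_ai * world_population
print(expected_deaths)                   # 75,000,000 (75 million)
print(expected_deaths / horizon_years)   # 1,500,000 per year (roughly 1-2 million)
```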

I think the choice of promoting EA and global priorities research are even more robust to different value judgements.

We actively point out that the list depends on value judgements, and we provide this quiz to highlight some of the main ones: https://80000hours.org/problem-quiz/

undefined @ 2017-07-10T13:19 (+7)

Ben's right that we're in the process of updating the GWWC website to better reflect our cause-neutrality.

undefined @ 2017-07-12T09:43 (+2)

Hm, I'm a little sad about this. I always thought that it was nice to have GWWC presenting a more "conservative" face of EA, which is a lot easier for people to get on board with.

But I guess this is less true with the changes to the pledge - GWWC is more about the pledge than about global poverty.

That does make me think that there might be space for an EA org that explicitly focussed on global poverty. Perhaps GiveWell already fills this role adequately.

undefined @ 2017-07-12T11:13 (+9)

You might think The Life You Can Save plays this role.

I've generally been surprised over the years by the extent to which the more general 'helping others as much as we can, using evidence and reason' has been easy for people to get on board with. I had initially expected that to be less appealing, due to its abstractness/potentially leading to weird conclusions. But I'm not actually convinced that's the case anymore. And if it's not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a more narrow example.

undefined @ 2017-07-12T16:10 (+1)

I hadn't thought of TLYCS as an/the anti-poverty org. I guess I didn't think about it as they're not so present in my part of the EA blogosphere. Maybe it's less of a problem if there are at least charities/orgs to represent different world views (although this would require quite a lot of duplication of work, so it's less than ideal).

undefined @ 2017-07-10T13:30 (+1)

And what are your/GWWC's thoughts on moral inclusivity?

undefined @ 2017-07-10T18:24 (+2)

For as long as it's the case that most of our members [edited to clarify: GWWC members, not members of the EA community in general] are primarily concerned with global health and development, content on our blog and social media is likely to reflect that to some degree.

But we also aim to be straightforward about our cause-neutrality as a project. For example, our top recommendation for donors is the EA Funds, which are designed to get people thinking about how they want to allocate between different causes rather than defaulting to one.

undefined @ 2017-07-10T18:32 (+1)

Thanks for the update. That's helpful.

However, it does seem a bit hard to reconcile GWWC's and 80k's positions on this topic. GWWC (i.e. you) seem to be saying "most EAs care about poverty, so that's what we'll emphasise" whereas 80k (i.e. Ben Todd above) seems to be saying "most EAs do (/should?) care about X-risk, so that's what we'll emphasise".

These conclusions seem to be in substantial tension, which may itself confuse new and old EAs.

undefined @ 2017-07-08T21:43 (+4)

Hello again Ben and thanks for the reply.

Thanks for the correction on 80k. I'm pleased to hear 80k stopped doing this ages ago: I saw the new, totalist-y update and assumed it represented more of a switch in 80k's position than it actually did. I'll add a note.

I agree moral uncertainty is potentially important, but there are two issues.

  1. I'm not sure EMV is the best approach to moral uncertainty. I've been doing some stuff on meta-moral uncertainty and think I've found some new problems I hope to write up at some point.

  2. I'm also not sure, even if you adopt an EMV approach, the result is that totalism becomes your effective axiology as Hilary and Toby suggest in their paper (http://users.ox.ac.uk/~mert2255/papers/mu-about-pe.pdf). I'm also working on a paper on this.

Those are basically holding responses which aren't that helpful for the present discussion. Moving on then.

I disagree with your analysis that person-affecting views are committed to being very concerned about X-risks. Even supposing you're taking a person-affecting view, there's still a choice to be made about your view of the badness of death. If you're an Epicurean about death (it's bad for no one to die), you wouldn't be concerned about something suddenly killing everyone (you'd still be concerned about the suffering as everyone died, though). I find both person-affecting views and Epicureanism pretty plausible: Epicureanism is basically just taking the person-affecting view about creating lives and applying it to ending lives, so if you like one, you should like both. On my (heretical and obviously deeply implausible) axiology, X-risk doesn't turn out to be important.

FWIW, I'm (emotionally) glad people are working on X-risk because I'm not sure what to do about moral uncertainty either, but I don't think I'm making a mistake in not valuing it. Hence I focus on trying to find the best ways to 'improve lives' - increasing the happiness of currently living people whilst they are alive.

You're right that if you combine person-affecting-ness and a deprivationist view of death (i.e. badness of death = years of happiness lost) you should still be concerned about X-risk to some extent. I won't get into the implications of deprivationism here.

What I would say, regarding transparency, is that if you think everyone should be concerned about the far future because you endorse EMV as the right answer to moral uncertainty, you should probably state that somewhere too, because that belief is doing most of the prioritisation work. It's not totally uncontentious, hence doesn't meet the 'moral inclusivity' test.

undefined @ 2017-07-09T03:33 (+12)

Hi Michael,

I agree that if you accept both Epicureanism and the person-affecting view, then you don't care about an xrisk that suddenly kills everyone, perhaps like AI.

However, you might still care a lot about pandemics or nuclear war due to their potential to inflict huge suffering on the present generation, and you'd still care about promoting EA and global priorities research. So even then, I think the main effect on our rankings would be to demote AI. And even then, AI might still rank due to the potential for non-xrisk AI disasters.

Moreover, this combination of views seems pretty rare, at least among our readers. I can't think of anyone else who explicitly endorses it.

I think it's far more common for people to put at least some value on future generations and/or to think it's bad if people die. In our informal polls of people who attend our workshops, over 90% value future generations. So, I think it's reasonable to take this as our starting point (like we say we do in the guide: https://80000hours.org/career-guide/how-much-difference-can-one-person-make/#what-does-it-mean-to-make-a-difference).

And this is all before taking account of moral uncertainty, which is an additional reason to put some value on future generations that most people haven't already considered.

In terms of transparency, we describe our shift to focusing on future generations here: https://80000hours.org/career-guide/world-problems/#how-to-preserve-future-generations-8211-find-the-more-neglected-risks If someone doesn't follow that shift, then it's pretty obvious that they shouldn't (necessarily) follow the recommendations in that section.

I agree it would be better if we could make all of this even more explicit, and we plan to, but I don't think these questions are on the minds of many of our readers, and we rarely get asked about them in workshops and so on. In general, there's a huge amount we could write about, and we try to address people's most pressing questions first.

undefined @ 2017-07-09T12:00 (+2)

Hello Ben,

Main comments:

There are two things going on here.

On transparency, if you want to be really transparent about what you value and why, I don't think you can assume people agree with you on topics they've never considered, that you don't mention, and that do basically all the work of cause prioritisation. The number of people worldwide who understand moral uncertainty well enough to explain it could fill one seminar room. If moral uncertainty is your "this is why everyone should agree with us" fallback, then that should presumably feature somewhere. Readers should know that's why you put forward your cause areas so they're not surprised later on to realise that's the reason.

On exclusivity, your response seems to amount to "most people want to focus on the far future and, what's more, even if they don't, they should because of moral uncertainty, so we're just going to say it's what really matters". It's not true that most EAs want to focus on the far future - see the EA Survey comment below. Given that it's not true, saying people should focus on it is, in fact, quite exclusive.

The third part of my original post argued we should want EA to be morally inclusive even if we endorse a particular moral theory. Do you disagree with that? Unless you disagree, it doesn't matter whether people are or should be totalists: it's worse from a totalist perspective for 80k to only endorse totalist-y causes.

Less important comments:

FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and 'ordinary human unhappiness' (that is, the sub-maximal happiness many people have even if they are entirely healthy and economically secure). Say a nuclear war kills everyone; then that's just a few moments of suffering. Say it kills most people, but leaves 10m left who eke out a miserable existence in a post-apocalyptic world; then you're just concerned with 10m people, which is 50 times fewer than the 500m who have either anxiety or depression worldwide.

I know some people who implicitly or explicitly endorse this, but I wouldn't expect you to, and that's one of my worries: if you come out in favour of theory X, you disproportionately attract those who agree with you, and that's bad for truth seeking. By analogy, I don't imagine many people at a Jeremy Corbyn rally vote Tory. I'm not sure Jeremy should take that as further evidence that a) the Tories are wrong or b) no one votes for them.

I'm curious where you get your 90% figure from. Is this from asking people if they would:

"Prevent one person from suffering next year. Prevent 100 people from suffering (the same amount) 100 years from now."?

I assume it is, because that's how you put it in the advanced workshop at EAGxOX last year. If it is, it's a pretty misleading question to ask, for a bunch of reasons that would take too long to type out fully. Briefly, one problem is that I think we should help the 100 people in 100 years if those people already exist today (both necessitarians and presentists get this result). So I 'agree' with your intuition pump but don't buy your conclusions, which suggests the pump is faulty. Another problem is the Hawthorne effect. Another is that population ethics is a mess and you've cherry-picked a scenario that suits your conclusion. If I asked a room of undergraduate philosophers "would you rather relieve 100 living people of suffering or create 200 happy people" I doubt many would pick the latter.

undefined @ 2017-07-10T05:51 (+6)

I feel like I'm being interpreted uncharitably, so this is making me feel a bit defensive.

Let's zoom out a bit. The key point is that we're already morally inclusive in the way you suggest we should be, as I've shown.

You say:

for instance, 80,000 Hours should be much more morally inclusive than they presently are. Instead of "these are the most important things", it should be "these are the most important things if you believe A, but not everyone believes A. If you believe B, you should think these are the important things [new list pops up].

In the current materials, we describe the main judgement calls behind the selection in this article: https://80000hours.org/career-guide/world-problems/ and within the individual profiles.

Then on the page with the ranking, we say:

Comparing global problems involves difficult judgement calls, so different people come to different conclusions. We made a tool that asks you some key questions, then re-ranks the lists based on your answers.

And we provide this quiz: https://80000hours.org/problem-quiz/ It produces alternative rankings given some key value judgements, i.e. it does exactly what you say we should do.

Moreover, we've been doing this since 2014, as you can see in the final section of this article: https://80000hours.org/2014/01/which-cause-is-most-effective-300/

In general, 80k has a range of options, from most exclusive to least:

1) State our personal views about which causes are best.

2) Also state the main judgement calls required to accept these views, so people can see whether to update or not.

3) Give alternative lists of causes for nearby moral views.

4) Give alternative lists of causes for all major moral views.

We currently do (1)-(3). I think (4) would be a lot of extra work, so not worth it, and it seems like you agree.

It seemed like your objection is more that within (3), we should put more emphasis on the person-affecting view. So, the other part of my response was to argue that I don't think the rankings depend as much on that as it first seems. Moral uncertainty was only one reason - the bigger factor is that the scale scores don't actually change that much if you stop valuing xrisk.

Your response was that you're also Epicurean, but then that's such an unusual combination of views that it falls within (4) rather than (3).

But, finally, let's accept epicureanism too. You claim:

FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and 'ordinary human unhappiness'

For mental health, you give the figure of 500m. Suppose those lives have a disability weighting of 0.3, then that's 150m QALYs per year, so would get 12 on our scale.

What about pandemics? The Spanish Flu infected 500m people, so let's call that 250m QALYs of suffering (ignoring the QALYs lost by people who died, since we're being Epicurean, and the suffering inflicted on non-infected people). If there's a 50% chance that happens within 50 years, then that's 2.5m expected QALYs lost per year, so it comes out at 9 on our scale. So, it's a factor of 300 less, but not insignificant. (And this is ignoring engineered pandemics.)
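As a quick sketch of that arithmetic (the 50% probability and 50-year window are the figures given above; the half-a-QALY-per-infection weighting is just one reading of "500m infected, call that 250m QALYs"):

```python
# Back-of-the-envelope check of the Spanish-Flu-scale pandemic figures above.
infections = 500e6              # Spanish Flu infected ~500m people
qaly_loss_per_infection = 0.5   # so ~250m QALYs of suffering in total (assumed weighting)
p_within_window = 0.5           # 50% chance of a comparable pandemic...
window_years = 50               # ...within 50 years

total_qalys = infections * qaly_loss_per_infection            # 250 million QALYs
expected_qalys_per_year = total_qalys * p_within_window / window_years
print(expected_qalys_per_year)  # 2,500,000 expected QALYs lost per year
```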

But, the bigger issue is that the cause ranking also depends on neglectedness and solvability.

We think pandemics only get $1-$10bn of spending per year, giving them a score of ~4 for neglectedness.

I'm not sure how much gets spent on mental health, but I'd guess it's much larger. Just for starters, it seems like annual sales of antidepressants are well over $10bn, and that seems like a fairly small fraction of the overall effort that goes into it. The 500m people who have a mental health problem are probably already trying pretty hard to do something about it, whereas pandemics are a global coordination problem.

All the above is highly, highly approximate - it's just meant to illustrate that, on your views, it's not out of the question that the neglectedness of pandemics could make up for their lower scale, so pandemics might still be an urgent cause.

I think you could make a similar case for nuclear war (a nuclear war could easily leave 20% of people alive in a dystopia) and perhaps even AI. In general, our ranking is driven more by neglectedness than scale.

undefined @ 2017-07-10T09:23 (+1)

Hey.

So, I don't mean to be attacking you on these things. I'm responding to what you said in the comments above and maybe more to a general impression, and perhaps not keeping in mind how 80k do things on their website; you write a bunch of (cool) stuff, I've probably forgotten the details, and I don't think it would be useful to go back and engage in a 'you wrote this here' exercise to check.

A few quick things as this has already been a long exchange.

Given I accept I'm basically a moral hipster, I'd understand if you put my views in the (4) rather than (3) category.

If it's of any interest, I'm happy to suggest how you might update your problem quiz to capture my views and other views in the area.

I wouldn't think the same way about Spanish flu vs mental health. I'm assuming happiness is duration x intensity (#Bentham). What I think you're discounting is the duration of mental illnesses - they are 'full-time' in that they take up your conscious space for lots of the day, and they often last a long time. I don't know what the distribution of duration is, but if you have chronic depression (anhedonia), that will make you less happy constantly. In contrast, the experience of having flu might be bad (although it's not clear it's worse, moment per moment, than, say, depression), but it doesn't last for very long - a couple of weeks? So we need to account for the fact that a case of Spanish flu has 1/26th the duration of anhedonia, before we even factor in intensity. More generally, I think we suffer from something like scope insensitivity when we do affective forecasting: we tend to consider the intensity of events rather than their duration. Studies of the 'peak-end' effect show this is exactly how we remember things: our brains only really remember the intensity of events.
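To illustrate the duration point with rough numbers (assuming roughly two weeks of flu against a full year of chronic anhedonia, which seems to be where the 1/26th figure comes from):

```python
# Rough illustration of the duration comparison above.
flu_duration_weeks = 2    # a bout of flu lasts roughly two weeks
year_weeks = 52           # chronic anhedonia is 'full-time' across the year

print(flu_duration_weeks / year_weeks)   # ~0.038, i.e. about 1/26th of the duration
```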

One conclusion I reach (on my axiology) is that the things which cause daily misery/happiness are the biggest in terms of scale. This is why I don't think x-risks are the most important thing. I think a totalist should accept this sort of reasoning and bump up the scale of things like mental health, pain and ordinary human unhappiness, even though x-risk will be much bigger in scale on totalism. I accept I haven't offered anything to do with solvability or neglectedness yet.

undefined @ 2017-07-09T02:07 (+14)

Effective Altruism is quite difficult to explain if you want to capture all of its complexity. I think that it is a completely valid choice for an introductory talk to focus on one aspect of Effective Altruism, as otherwise many people will have trouble following.

I would suggest letting people know that you are only covering one aspect of Effective Altruism, i.e. "Effective Altruism is about doing the most good that you can with the resources available to you. This talk will cover how Effective Altruism has been applied to charity, but it is worth noting that Effective Altruism has also been applied to other issues like animal welfare or ensuring the long-term survival of humanity".

This reduces the confusion when they hear about these issues later and reduces the chance that they will feel misled. At the same time, it avoids throwing too many new ideas at a person at once, which may reduce their comprehension, and it explains how EA applies to an issue which they may already care about.

undefined @ 2017-07-09T03:45 (+18)

I think this is a good point, but these days we often do this, and people still get the impression that it's all about global poverty. People remember the specific examples far more than your disclaimers. Doing Good Better is a good example.

undefined @ 2017-07-19T13:47 (+3)

I broadly agree with you on the importance of inclusivity, but I’m not convinced by your way of cashing it out or the implications you draw from it.

Inclusivity/exclusivity strikes me as importantly being a spectrum, rather than a binary choice. I doubt when you said EA should be about ‘making things better or worse for humans and animals but being neutral on what makes things better or worse’, you meant the extreme end of the inclusivity scale. One thing I assume we wouldn’t want EA to include, for example, is the view that human wellbeing is increased by coming only into contact with people of the same race as yourself.

More plausibly, the reasons you outline in favour of inclusivity point towards a view such as 'EA is about making things better or worse for sentient beings but being neutral between reasonable theories of what makes things better or worse'. Of course, that brings up the question of what it takes to count as a reasonable theory. One thing it could mean is that some substantial number of people hold / have held it. Presumably we would want to circumscribe which people are included here: not all moral theories which have at any time in the past been held by a large group of people are reasonable. At the other end of the spectrum, you could include only views currently held by many people who have made it their life's work to determine the correct moral theory. My guess is that in fact we should take into account which views are and aren't held by both the general public and by philosophers.

I think given this more plausible cashing out of inclusivity, we might want to be both more and less inclusive than you suggest. Here are a few specific ways it might cash out:

undefined @ 2017-07-20T22:07 (+4)

Thanks Michelle.

I agree there's a difficulty in finding a theoretical justification for how inclusive you are. I think this overcooks the problem somewhat as an easier practical principle would be "be so inclusive no one feels their initially preferred theory isn't represented". You could swap "no one" for "few people" with "few" to be further defined. There doesn't seem much point saying "this is what a white supremacist would think" as there aren't that many floating around EA, for whatever reason.

On your suggestions for being inclusive, I'm not sure the first two are so necessary, simply because it's not clear what types of EA actions prioritarians and deontologists will disagree about in practice. For which charities will utilitarians and prioritarians diverge, for instance?

On the third, I think we already do that, don't we? We already have lots of human-focused causes people can pick if they aren't concerned about non-human animals.

On the last, the only view I can think of which puts no value on the future would be one with a very high pure time discount. I'm inclined towards person-affecting views and I think climate change (and X-risk) would be bad and are worth worrying about: they could impact the lives of those alive today. As I said to B. Todd earlier, I just don't think they swamp the analysis.

undefined @ 2017-07-14T10:48 (+2)

Interesting read!

Just a thought: does anyone have any thoughts on religion and EA? I don't mean it in a "saving souls is cost effective" way, more in the moral philosophy way.

My personal take is that unless someone is really hardcore/radical/orthodox, most of what EA says would be positive/ethical for most religious persons. That is certainly my experience talking to religious folks; no one has ever gotten mad at me unless I get too consequentialist. Religious people might even be more open to the Giving What We Can pledge, and EA-style altruism in some ways, because of the common practice of tithing. Though they might decide that "my faith is the most cost-effective", that only sometimes happens; they usually seem to donate on top of it.

PS: Michael, was it my question on the Facebook mental health EA post that prompted you to write this? Just curious.

undefined @ 2017-07-09T15:37 (+1)

Thank you for the interesting post; you provide some strong arguments for moral inclusivity.

I'm less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more 'weird' cause areas, as you suggest. However, even when they are mentioned, people may still get the impression that EA is mostly about poverty. The other causes would have to be explained in the same depth as poverty (looking at specific charities in these cause areas as well as cost-effectiveness estimates where they exist, for instance) for the impression to fade, it seems to me.

While I do agree that it's likely that a marketing gap is perceived by a good number of newcomers (based solely on my intuition), do we have any solid evidence that such a marketing gap is perceived by newcomers in particular?

Or is it mainly perceived by more 'experienced' EAs (many of whom may prioritise causes other than global poverty) who feel as if sufficient weight isn't being given to other causes, or who feel guilty for giving a misleading impression relative to their own impressions (which are formed from being around others who think like them)? If the latter, then the marketing gap may be less problematic, and will be less likely to blow up in our faces.