What are the key claims of EA?

By RobertHarling @ 2022-04-25T12:48 (+63)

[This is a rough write-up based mainly on my experiences in EA and previous reading (I didn't do specific reading/research for this post). I think it's possible there are important points I'm missing or explaining poorly. I'm posting it anyway in the spirit of trying to overcome perfectionism, and because I mentioned it to a couple of people who were interested in it.]

I think that EA as a worldview contains many different claims and views, and we may not always realise that all these distinct claims are combined in our usual picture of “an EA”; instead we might think EA is just “maximise positive impact”. I initially brainstormed a list of claims I think could be important parts of the EA worldview and then tried to categorise them into themes. What I present below is the arrangement that feels most intuitive to me, although I list several complexities/issues with it below. I tried to use an overall typology of claims about morality, claims about empirical facts about the world, and claims about how to reason. Again, this is based on some quick intuitions and is not a well-defined typology.

I think this is an interesting exercise for a couple of reasons:

(I’ve bolded the specific claims, and the other bullet points are my thoughts on these)

I’d be interested to hear if there are important claims I’ve missed, if some of the claims below could be separated out, or if there’s a clearer path through the different claims. A lot of my thinking on this was informed by Will MacAskill’s paper and Ben Todd’s podcast.

Moral Claims

Claims about what is good and what we ought to do.

Empirical Claims

Claims about Reasoning

 

I’ve been quite vague in my descriptions above and am likely missing a lot of nuance. For me personally, many of these claims are downstream of the idea that we are morally obligated to try to improve the world as much as possible, together with an impartial and welfarist definition of good.


david_reinstein @ 2022-04-26T00:30 (+9)

When considering the relevant lives, this includes all humans, animals and future people. We generally do not discount the lives of future people intrinsically at all. This longtermist claim is common but not absolute in EA, and I’m brushing over multiple population ethics questions here (e.g. several EAs might hold person-affecting views).

I don't think this is a longtermist claim, nor does it preclude person-affecting views.

You can still value future people equally with present people, and not discount them at all insofar as they are sure to exist. If they are less likely to exist, you could discount them by their probability of non-existence, i.e. weight them by their probability of existing, in an expected-value computation. Admittedly, the maths does get challenging for the holder of a person-affecting view, insofar as they cannot just consider their impact on the sum of this value: they care about improving welfare holding the number of people constant, but not about the component of their choice's effect that comes from changing the expected number of people in existence.

I actually think adherents of total population ethics would apply that probability discounting too; the difference is that they would also value their impact on the number of people likely to exist.
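To make the contrast concrete, here is a minimal sketch with made-up numbers (a hypothetical future person and two stylised interventions, not anything from the literature):

```python
# Hypothetical toy numbers to contrast the two views; all figures are invented.
p = 0.6        # probability that a future person comes to exist
w = 10.0       # that person's welfare if they do exist
delta_w = 2.0  # welfare improvement we could give them, conditional on existing
delta_p = 0.3  # increase in their existence probability from another intervention

# Both views weight uncertain people by their probability of existing, so
# improving the future person's welfare is worth p * delta_w on either view.
value_welfare_improvement = p * delta_w          # 1.2 on both views

# The views come apart on interventions that change how many people exist:
# the total view counts the extra expected welfare from making existence
# more likely, while a (simple) person-affecting view assigns it no value.
value_more_people_total_view = delta_p * w       # 3.0
value_more_people_person_affecting = 0.0         # creating people adds nothing

print(value_welfare_improvement)
print(value_more_people_total_view, value_more_people_person_affecting)
```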

ludwigbald @ 2022-04-25T13:54 (+7)

I've been thinking of distilling some of the criticism of EA that I hear into similar, clearly attackable foundational claims.

One thing I would add is the very individualistic view of impact. We act as individuals to maximize (expected) individual impact. This shows up in things like founding an org, choosing your career, or deciding where your money goes. Collective action would instead mean empowering community-controlled institutions that make decisions through a democratic process of consensus-building. Instead, our coordination mechanisms rely on trusting a few decision-makers who direct large amounts of funding. This is a consequence of the EA movement having been really small in the past.

Also, it seems we are obsessed with the measurable. That goes as far as defining "good" in a way that does not directly include complex relationships. Strict QALY maximizers would be okay with eugenics. I don't even know how to approach a topic like ecosystem conservation from an EA perspective.

I think in general we should be aware that our foundational assumptions are only a simplified model of what we actually want. They can serve us fine for directly comparing interventions, but when they lead to surprising conclusions, we should take a step back and examine if we just found a weak spot of the model.

Stefan_Schubert @ 2022-04-25T14:21 (+28)

One thing I would add is the very individualistic view of impact. We act as individuals to maximize (expected) individual impact. 

Personally I wouldn't agree with that. Effective altruists have been at pains to emphasise that we "do good together" - that was even the theme of a past EA Global, if I remember correctly.

80,000 Hours published a long article on this theme back in 2018: Doing good together: how to coordinate effectively, and avoid single-player thinking. There was also a 2016 piece, The value of coordination, on similar themes.

Also, it seems we are obsessed with the measurable.

I take a different view on that, too. For instance, Katja Grace wrote a post back in 2014 arguing that we shouldn't refrain from interventions that are high-impact but hard to measure. That article was included in the first version of the EA Handbook (2015).

In fact, many of the causes currently popular with effective altruists, like AI safety and biosecurity, seem hard to measure.

ludwigbald @ 2022-04-26T11:48 (+5)

Thanks for the very useful links, Stefan!
I think the usefulness of coordination is widely agreed upon, but we're still not working together as well as we could. The 80,000 Hours article you linked even states:

Instead, especially in effective altruism, people engage in “single-player” thinking. They work out what would be the best course of action if others weren’t responding to what they do.

I'll go and spend some time with these topics.

Jack_S @ 2022-04-25T22:12 (+2)

I expect most EAs would be self-critical enough to see both of these as frequently occurring flaws in the movement, but I'd dispute the claim that they're foundational. On the first criticism: some people track personal impact, and 80k talks a lot about your individual career impact, but people working at EA orgs are surely thinking of their collective impact as an org rather than anything individual. In the same way, 'core EAs' have the privilege of identifying with the movement enough that they can internalise the impact of the EA community as a whole.

As for measurability, I agree that it is a bias in the movement, albeit probably a necessary one. The ecosystem example is an interesting one: I'd argue that it's not that difficult to approach ecosystem conservation from an EA perspective. We generally understand how ecosystems work and how they provide measurable, valuable services to humans. A cost-effectiveness calculation would start from the human value of ecosystem services (which environmental economists usually estimate) and, if you want to give inherent value to species diversity, add the number of species within a given area, the number of individuals of those species, the rarity/external value of species, etc. Then add weights according to various criteria to produce something like an 'ecosystem value per square metre', and you'd get a value that you could compare across ecosystems (see the sketch below). Calculate the price of conserving various ecosystems around the world, and voila, you have a cost-effectiveness analysis that feels at home on an EA platform. The reason this process doesn't feel 100% EA is not that it's difficult to measure, but that it can include value judgements that aren't related to the welfare of conscious beings.
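To illustrate, here is a minimal sketch of such a calculation; all the numbers, weights, and site examples are invented placeholders, not real estimates:

```python
# Hypothetical cost-effectiveness sketch for ecosystem conservation.
# Every figure and weight below is an invented placeholder.

def ecosystem_value_per_m2(service_value, n_species, n_individuals,
                           rarity_weight, diversity_weight=0.01,
                           abundance_weight=0.0001):
    """Combine ecosystem-service value with (optional) inherent value
    placed on species diversity, abundance, and rarity."""
    diversity_value = diversity_weight * n_species * rarity_weight
    abundance_value = abundance_weight * n_individuals
    return service_value + diversity_value + abundance_value

# Two hypothetical sites, values in $-equivalents per square metre.
wetland = ecosystem_value_per_m2(service_value=3.0, n_species=120,
                                 n_individuals=50_000, rarity_weight=2.0)
grassland = ecosystem_value_per_m2(service_value=1.5, n_species=80,
                                   n_individuals=90_000, rarity_weight=1.0)

# Divide value by the cost of conserving each square metre to get a
# comparable cost-effectiveness figure (value per dollar spent).
for name, value, cost in [("wetland", wetland, 0.8),
                          ("grassland", grassland, 0.3)]:
    print(f"{name}: {value / cost:.2f} value per $ of conservation")
```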

david_reinstein @ 2022-04-26T00:40 (+3)

I think you get a lot right, but some of these claims, especially the empirical ones, seem to apply only to certain (perhaps longtermist) segments.

I'd agree on/focus on

  1. Altruism: willingness to substantially give (money, time) from one's own resources, and the goodness of this (but not necessarily an 'obligation')

  2. Utilitarianism/consequentialism

     (Corollary): The importance of maximization and prioritization in making choices about doing good.

  3. A wide moral circle

  4. Truth-seeking and reasoning transparency

I think these four things are fairly universal and core among EAs -- longtermist and non -- and they bring us together. I also suspect that what we learn about how to promote these things will transfer across the various cause areas and branches of EA.

david_reinstein @ 2022-04-26T00:50 (+2)

I sort of disagree with us 'Agreeing on a set of facts'.

It seems somewhat at odds with the truth-seeking part. I would say "it is bad for our epistemic norms" ... but I'm not sure I'm using that terminology correctly.

Aside from that, I think some of the empirics you mentioned probably have a bit less consensus in EA than you suggest... such as

We live in an “unusual” time in history

My impression was that even among longtermists the 'hinge of history' thing is greatly contested

Most humans in the world have net positive lives

Maybe they do now, but I don't think we can have great confidence about the future. Also, the 'most' does a lot of work here. It seems plausible to me that at least 1 billion people in this world have net negative lives.

Sentience is not limited to humans/biological beings

Most EAs (and most humans?) surely believe at least some animals are sentient. But non-biological sentience - I'm not sure how widespread that belief is. At least I don't think there is any consensus that we 'know of non-bios who are currently sentient', nor that 'there is a way to know which direction the valence of the non-bios goes'.

e.g. digital minds could be sentient is an important consideration and relevant in a lot of longtermist EA prioritisation.

I'm not sure that's been fully taken on board. In what ways? Are we prioritizing 'create the maximum number of super-happy algorithms'? (Maybe I'm missing something though; this is a legit question.)

Eddie Liu @ 2022-04-25T23:51 (+2)

I was just thinking about this the other day. In terms of pitching effective altruism, I think it's best to keep things simple instead of overwhelming people with different concepts. I think we can boil your moral claims down to essentially 3 core beliefs of EA:

  1. Doing good is good. (Defining good)
  2. It is more good to do more good. (Maximization)
  3. Therefore, we ought to do more good. (Moral obligation)

If you buy these three beliefs, great! You can probably consider yourself an effective altruist, or at least aligned with effective altruism. Everything else is downstream of these three beliefs and up for debate (and EAs excel at debating!).

Michael @ 2022-04-25T22:25 (+2)

It would probably be worthwhile to cross-reference your post with sources such as:

https://www.centreforeffectivealtruism.org/ceas-guiding-principles

https://resources.eagroups.org/running-a-group/communicating-about-ea/what-to-say-pitch-guide

These sources seem to encapsulate the key claims of EA nicely, so points raised there could serve as additional inputs for your analysis, and maybe clarify some things (I haven't thought about it much, just dropping the links).

anont5 @ 2022-04-26T06:37 (+1)

Possibly relevant: "Effective Justice" paper.

Miranda_Zhang @ 2022-04-25T20:54 (+1)

Reminds me of this spreadsheet made by Adam S, which I generally really like.

I agree that it would be nice to have a more detailed, up-to-date typology of EA's core and common principles, as the latter seems like the more controversial class (e.g., should longtermism be considered the natural consequence of impartial welfarism?).