Some Modes of Thinking about EA

By EdoArad @ 2019-11-09T17:54 (+54)

[I've been thinking a lot lately about EA as an R&D project. While collecting my thoughts, I had some notes on other ways of thinking of EA which might be valuable. This is not meant to be precise or complete.]

Intro

There is an ongoing debate about how we should define EA and what it entails. In this post, I present several modes of thinking about what EA is, each of which might be useful in some context. My goal in writing this is to present and combine several old and new ideas, to hopefully spark new ways of thinking. This should also help clarify that there are many ways of looking at EA, although what follows is not at all meant to be a rigorous taxonomy.

I find particularly interesting the distinction between EA as an individual project and EA as a community project, two framings that seem to me to be frequently conflated. I think there is much more room for clarification and deliberation here.

Modes of thinking about EA

EA as a Question

EA should be thought of as a question: “How can I do the most good with the resources available to me?”

Useful -

EA as an Ideology

Effective altruism is an ideology, meaning that it has a particular (if somewhat vaguely defined) set of core principles and beliefs, along with associated ways of viewing the world and interpreting evidence.

Useful -

EA as a Movement

This mode of thinking focuses on how the EA community revolves around a shared set of ideas, norms, and identities.

Useful -

EA as a Community

Who are the people involved? How are they connected? What do they need?

Useful -

EA is an Opportunity

We are in a unique position to do a lot of good.

Useful -

EA as a Moral Obligation

If there is a way to do a lot of good, we ought to do it. If we can do more good, we ought to do that instead. This can be very dependent on the cost to oneself.

Useful -

EA as a Worldview

This post specifically mentions that a crucial assumption of EA is that we can discover ways to do more good. It is also a basic assumption that some ways of doing good are much better than others.

Useful -

EA is a Commitment to Epistemology

In this post, Stefan argues that EA is not about factual beliefs, but instead about epistemology and morality. In EA, the process of discovering facts involves the use of evidence and reason.

Useful -

EA is an Individual Mission

People in EA should seek to do as much good as their limited resources allow, while analyzing their own worldview and moral stance and acting accordingly.

Useful -

EA is a Partnership

People with somewhat different moral perspectives and worldviews agree to work together.

Useful -

EA is smarter than me

A lot of decisions can be delegated to the EA set of ideas and its leadership. I do not need to figure out exactly why, say, longtermism is correct, because a lot of work has already been done to convince a lot of people. This allows me to work on what I believe to be the most important thing without fully understanding why.

Useful -

EA is a Set of Memes

There is a vast and growing set of ideas and insights arising from EA.

Useful -

EA is a set of Organisations and Community Leaders

EA is somewhat centralized and is influenced by a set of key individuals and organisations.

Useful -

EA is an Inspiring Community and Social Network

EA is awesome!

Useful -


Grue_Slinky @ 2019-11-10T19:14 (+11)

Nice! I like these kinds of synthesis posts, especially when they try to be comprehensive. One could also add:

EA as a "gap-filling gel" within the context of existing society and its altruistic tendencies. (I think I heard this general idea, though not the name, in MacAskill's EAG London closing remarks, but the video isn't up yet, so I'm not sure and don't want to put words in his mouth.) The idea is that there's already lots of work in:

And if none of these existed, "doing the most good in the world" would be an even more massive undertaking than it might already seem, e.g. we'd likely "start" with inventing the field of medicine from scratch.

But a large amount of altruistic effort does exist; it's just not optimally directed when viewed globally, because it's mostly shaped by people who think only about their local region of it. Consequently, altruism as a whole has several blind spots:

And the role of EA is to fill those gaps within the altruistic portfolio.


As an antithesis to that mode of thinking, we could also view:

EA as a foundational rethinking of our altruistic priorities, to the extent that we view those priorities as misdirected. Examples:

G Gordon Worley III @ 2019-11-11T20:52 (+4)

One I was very glad not to see in this list was "EA as Utilitarianism". Although utilitarian ethics are popular among EAs, I think we leave out many people who would "do good better" but from a different meta-ethical perspective. One of the greatest challenges I've seen in my own conversations about EA is with those who reject its ideas because they associate them with Singer-style moral arguments and living a life of subsistence until not one person is in poverty. This sadly seems to turn them off of ways they might think about better allocating resources, for example, because they think their only options are either to do what they feel good about or to be a Singer-esque maximizer. Obviously this is not the case; there's a lot of room for gradation and different perspectives. But it does create a situation where people see themselves in an adversarial relationship to EA, and so they reject all of its ideas rather than just the subset they actually disagree with, because they got the impression that one part of EA was the whole thing.

edoarad @ 2019-11-11T21:49 (+1)

Even though I agree that presenting EA as Utilitarianism is alienating and misleading, I think it is a useful mode of thinking about EA in some contexts. Many practices in EA are rooted in Utilitarianism, and many of the people in EA (about half of the survey respondents, if I recall correctly) consider themselves utilitarian. So, while utilitarianism is not the same as EA, I think that outsiders' confusion is sometimes justified.