Some Modes of Thinking about EA
By EdoArad @ 2019-11-09T17:54 (+54)
[I've been thinking a lot lately about EA as an R&D project. While collecting my thoughts, I made some notes on other ways of thinking about EA which might be valuable. This is not meant to be precise or complete.]
Intro
There is a constant debate about how we should define EA and what it entails. In this post, I present several modes of thinking about what EA is, each of which might be useful in some contexts. My goal in writing this is to present and combine several old and new ideas, and hopefully spark new ways of thinking. This should also help clarify that there are many ways of looking at EA, although the following is not intended to be a rigorous taxonomy.
I find the distinction between EA as an individual project and EA as a community project particularly interesting; the two seem to me to be conflated frequently. I think there is much more room for clarification and deliberation on this.
Modes of thinking about EA
EA as a Question
EA should be thought of as a question: “How can I do the most good with the resources available to me?”
Useful -
- for being Cause Impartial.
- for maintaining flexibility and seeking new information.
- as a way of communicating openness.
EA as an Ideology
Effective altruism is an ideology, meaning that it has a particular (if somewhat vaguely defined) set of core principles and beliefs, and associated ways of viewing the world and interpreting evidence.
Useful -
- to critically consider which viewpoints, questions, answers and frameworks are actually privileged in EA discussion.
- when thinking in terms of principles and identity.
EA as a Movement
This mode of thinking focuses on how the EA community revolves around a set of ideas, norms, and identities.
Useful -
- when considering the dynamics of people and ideas in EA over time.
- Movement Collapse Scenarios.
- How valuable is movement growth?
- Failure Risks for the EA community, from CEA's (not necessarily current) thinking.
- when trying to grow the number of people involved and their level of involvement.
- This Problem Profile by 80K on promoting EA takes the view of EA as a movement.
- when strategising about the EA community at large.
- Relevant: EA as an intentional movement. Diego argues that EA is different from many other movements because it is "intentional" about itself, and may change in response to new circumstances and world-views.
EA as a Community
Who are the people involved? How are they connected? What do they need?
Useful -
- when feeling like an outsider.
- for networking and collaborating.
- when strategising how to connect EAs.
- when strategising basically anything that involves human capital.
EA is an Opportunity
We are in a unique position to do a lot of good.
Useful -
- for enjoying the process of doing good better.
EA as a Moral Obligation
If there is a way to do a lot of good, we ought to do it. If we can do more good, we ought to do that instead. This can depend heavily on the cost to oneself.
Useful -
- for considering how much one should sacrifice.
- when pondering the exact normative stance. What is good?
EA as a Worldview
It is specifically mentioned in this post that a crucial assumption of EA is that we can discover ways to do more good. It is also a basic assumption that some ways of doing good are much better than others.
Useful -
- for articulating the community's underlying assumptions and engaging with criticism.
- to systematically analyze what is still not known and what we need to research further.
EA is a commitment to Epistemology
In this post Stefan argues that EA is not about factual beliefs, but instead about epistemology and morality. In EA, the process of discovering facts involves the use of evidence and reason.
Useful -
- when making personal or professional decisions and wanting to make sure that we are doing them right.
- for setting a standard for the community's processes.
EA is an Individual Mission
People in EA should seek to do as much good as their limited resources allow, while analyzing their own worldview and moral stance and acting accordingly.
Useful -
- for considering career/life options.
- when bargaining in a moral trade.
- when analyzing one's own marginal value (see [this response] from Hilary Greaves to the "collectivist critique").
EA is a Partnership
People with somewhat different moral perspectives and world-views agree to work together.
Useful -
- when thinking about how (and why) to contribute to each other's goals.
- when help is needed from people we trust.
EA is smarter than me
Many decisions can be delegated to the EA body of ideas and its leadership. I do not need to figure out exactly why, say, longtermism is correct, because a lot of work has already been done to convince a lot of people. This allows me to work on what I believe to be the most important thing without fully understanding why.
Useful -
- to efficiently accept a world-view based on some simple and plausible assumptions.
- when thinking about how we present our claims to the general community.
- when we are wary of being cultish.
EA is a set of memes
There is a vast and growing set of ideas and insights arising from EA.
Useful -
- when considering how to properly do outreach and advocacy.
- when thinking about the effects of new knowledge.
EA is a set of Organisations and Community Leaders
EA is somewhat centralised and is influenced by a set of key individuals and organisations.
Useful -
- when trying to affect the community and looking for points of influence.
- when considering other dynamics in the community.
- when seeking help or collaboration with a specific project.
EA is an inspiring community and social network
Useful -
- when considering whether to attend EAG or not 😊
Grue_Slinky @ 2019-11-10T19:14 (+11)
Nice! I like these kinds of synthesis posts, especially when they try to be comprehensive. One could also add:
EA as a "gap-filling gel" within the context of existing society and its altruistic tendencies (I think I heard this general idea (not the name) in MacAskill's EAG London closing remarks, but the video isn't up yet, so I'm not sure and don't want to put words in his mouth). The idea is that there's already lots of work in:
- Making people healthier
- Reducing poverty
- Animal welfare
- National/international security and diplomacy (incl. nukes, bioweapons)
And if none of these existed, "doing the most good in the world" would be an even more massive undertaking than it might already seem, e.g. we'd likely "start" with inventing the field of medicine from scratch.
But a large amount of altruistic effort does exist; it's just not optimally directed when viewed globally, because it's mostly shaped by people who only think about their local region of it. Consequently, altruism as a whole has several blind spots:
- Making people healthier and/or reducing poverty in the developing world through certain interventions (e.g. bednets, direct cash transfers) that turn out to work really well
- Animal welfare for factory-farmed and/or wild animals
- Global security from technologies whose long-term risks are neglected (e.g. AI)
And the role of EA is to fill those gaps within the altruistic portfolio.
As an antithesis to that mode of thinking, we could also view:
EA as a foundational rethinking of our altruistic priorities, to the extent that we view those priorities as misdirected. Examples:
- Some interventions which were pursued with altruistic goals in mind turn out to be useless or even net-negative when scrutinized (e.g. Scared Straight)
- Many broader trends which seem "obviously good", such as economic growth or technological progress, look neutral, uncertain, or even net-negative in light of certain longtermist thinking
G Gordon Worley III @ 2019-11-11T20:52 (+4)
One I was very glad not to see in this list was "EA as Utilitarianism". Although utilitarian ethics are popular among EAs, I think we leave out many people who would "do good better" but from a different meta-ethical perspective. One of the greatest challenges I've seen in my own conversations about EA is with people who reject its ideas because they associate them with Singer-style moral arguments and living a life of subsistence until not one person is in poverty. This sadly seems to turn them off ways they might think about better allocating resources, for example, because they think their only options are either to do what they feel good about or to be a Singer-esque maximizer. Obviously this is not the case; there's a lot of room for gradation and different perspectives. But it does create a situation where people see themselves in an adversarial relationship to EA, and so reject all of its ideas rather than just the subset they actually disagree with, because they got the idea that one part of EA was the whole thing.
edoarad @ 2019-11-11T21:49 (+1)
Even though I agree that presenting EA as Utilitarianism is alienating and misleading, I think that it is a useful mode of thinking about EA in some contexts. Many practices in EA are rooted in Utilitarianism, and many of the people in EA (about half of the respondents to the survey, if I recall correctly) consider themselves utilitarian. So, while Effective Utilitarianism is not the same as EA, I think that outsiders' confusion is sometimes justified.