Some core assumptions of effective altruism, according to me
By peterhartree @ 2022-07-29T09:05 (+90)
Zvi recently posted a (critical) list of core assumptions of effective altruism.
The list is interesting, but I think much of it is somewhere between “a bit off” and “clearly inaccurate”.
In this post I redraft the list—keeping the order and breakdown that Zvi used, but applying suggested edits to each point.
Compared to Zvi's list, mine is somewhat aspirational, but I also think it's a more accurate description of the current reality of effective altruism (as a body of ideas, and as a community).
Important: these are just my takes! I'm not speaking on behalf of current or past employers, key figures in the movement, or anything like that.
This list is not intended to be comprehensive.
I'd love to read your thoughts—including your own suggested edits and additions—in the comments. If you like, make a copy of my Google Doc!
I spent 3-4 hours writing this post. In the future I might share a list I write from scratch, but for now I found it much easier to just go make edits to the list Zvi made.
Some core assumptions of effective altruism, according to me
1. Two-thirds utilitarianism. Utilitarianism is a useful and underrated way to think about what matters in some circumstances. Other theories of value and normative frameworks should be given serious consideration and weight, partly due to moral uncertainty. Taking utilitarianism seriously does not imply that people should go around thinking in utilitarian terms most of the time. The mindsets suggested by moral perfectionism, deontology, virtue ethics and common-sense ethics are often more helpful in daily life. [1]
2. Importance of suffering. All else equal, suffering is bad, and happiness/pleasure is good. Morally, it may be more important to reduce suffering than to increase happiness. Empirically, it may be easier to reduce suffering than to increase happiness (though this is not obvious). This assumption does not oblige us to only care about pleasure and suffering, and certainly not to “focus on the floor of the human condition, rather than the ceiling”. [2]
3. Model-based interventions. Making explicit models, as opposed to telling compelling stories, is important. In some areas (e.g. global health), we can learn a lot by investing in careful empirical measurement and testing. In others (e.g. anthropogenic existential risk) we are obliged to rely on speculative (but still useful) models, often informed by the projection of historical trends, evolutionary theory, and/or first-principles thinking. We should not be afraid to bet heavily on these models. [3]
4. Diverse funding models. If you want >$1m funding, you probably need to apply to one of a few large funders. But if you want small project or seed funding, there are many funding sources available to you, including >50 individuals who can say “yes” with very little constraint from other parties. [4]
5. Scope sensitivity. Preventing 100 people from going blind is 100x better than preventing one person from going blind. We should have run COVID vaccine challenge trials in January 2020. Shut up and multiply (see the first sketch after this list).
6. Duty of privilege. If you are fortunate enough to have freedom, security, good health, and so on, you should dedicate some part of your resources (e.g. time and money) to trying to help others as much as possible. You should decide how much, but we encourage at least 10%. [5]
7. Effectiveness. Do what works. Seek feedback. Keep learning. Cultivate intellectual virtues, such as updating quickly when you’re wrong and threading the needle between overconfidence and underconfidence.
8. Impartial altruism. One of the best ways to do good yourself is to take up an impartial perspective when you're thinking about how to spend your altruistic resources. This yields surprisingly large opportunities at the moment, because relatively few people do this. This may be especially true if you endorse a zero discount rate for welfare, which probably implies that the interests of far-future generations are gravely neglected (see the second sketch after this list).
9. We can see altruism as opportunity or obligation. Some people are motivated by the joy of helping others; others see it as a moral obligation.
10. Coordination. Working together is sometimes more effective than cultivating competition, especially when your values are shared. But incentives, feedback loops and public choice theory suggest that many things that can be run as for-profits probably should be.
11. Impartiality. From the perspective of the universe, your welfare is no more important than that of other similar moral patients, no matter where they exist in space and time. This perspective is compatible with the idea that, in practice, you should value yourself, your family, and those around you more than others (see (1) above and also Appendix 3).
12. Self-recommending. Belief in the movement and methods themselves. [6]
13. Evangelism. Belief that it is good to grow the movement, in terms of human capital, social capital, financial capital, and general influence. Views differ on how “big” effective altruism should eventually become, which audiences we should focus on, and what growth rate is desirable. Several central features of the current movement probably don’t scale gracefully to all audiences.
14. Reputation. The reputation of EA is a crucial factor for its overall success. Careful communication, community health [7], and cooperativeness with other groups are important on these (and other) grounds. There are strong instrumental arguments in favour of “common sense” virtues like integrity.
15. Mixed feelings about mainstream institutions and expertise; belief that you may be able to do better. As a first cut: trust experts and institutions with good reputations. But beware: many “experts” have terrible track records of prediction, and many institutions are extremely dysfunctional, or at least harbour islands of dysfunction. Governments often drop important balls, non-fiction books are rarely fact-checked, lots of research doesn’t replicate, incentives in academia are often awful, some “experts” in medical ethics would fail an Ethics 101 class, and so on. You may be able to find huge opportunities in areas that seem to be “covered” by existing groups.
16. Existential risk. There could be an immense amount of value in the future, but there could also be very little, or even immense amounts of disvalue. Most people who’ve looked into this think that the probability of existential catastrophe before 2100 is disturbingly high (>1%), largely due to new risks from emerging technologies such as artificial intelligence and biotechnology. Few people are trying to understand and reduce these risks, which makes this one of the most promising areas to focus on.
17. Value of sacrifice. Sometimes personal sacrifice can help set an inspiring example, or communicate moral seriousness. All else equal, sacrifice for its own sake is not valuable, or at least not particularly valuable: what matters most is the future consequences of your actions.
18. Encouragement. We should praise and reward people who act upon (or criticise and improve upon) these assumptions. It is usually unhelpful to blame or condemn those who act differently—our patterns of blame should, to a large extent, reflect those of common sense morality.
19. ~~Veganism. If you are not vegan many EAs treat you as non-serious (or even evil).~~ [8]
20. Grace. In practice, people can’t live up to this list fully and that’s acceptable.
21. Non-totalising. People who are unfamiliar with the effective altruism community sometimes perceive it as a “totalising” community (or set of ideas) [9], which asks people to commit most or all of their lives to the movement. This is not the case. [10] Different people make different levels of commitment, and individual commitment fluctuates as people's situations change. People take breaks. People often prioritise their own wellbeing, their family commitments, and so on—independently of their commitment to effective altruism. That said, many people do make big changes to their lives—e.g. change their career plans, or move to a new city—either because they are inspired to do so, or because, on reflection, they feel a sense of duty to try to make things as good as they can, with whatever resources they've decided to dedicate to altruistic ends. [11]
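To make item 5 concrete, here is a minimal sketch of the expected-value arithmetic behind "shut up and multiply". The symbols (v, p, n) and the numbers are mine, chosen purely for illustration; they aren't taken from any particular cost-effectiveness analysis.

```latex
% Let v be the value of preventing one case of blindness.
% Scope sensitivity: value scales linearly with the number of cases.
V(n) = n \cdot v

% Under uncertainty, multiply through by the probability of success:
\mathbb{E}[V] = p \cdot n \cdot v

% Example: a 10% chance of preventing 1,000 cases beats the certainty
% of preventing 50 cases, since
0.1 \cdot 1000 \cdot v = 100v > 50v = 1.0 \cdot 50 \cdot v
```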
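Similarly for item 8: a sketch of what a zero discount rate means, using the standard exponential discounting formula. Again, the symbols (w, r, t) and the 3% example rate are my own illustrative assumptions, not figures from the post.

```latex
% Standard discounting: welfare w occurring t years from now has
% present value
PV(w, t) = \frac{w}{(1 + r)^t}

% With a pure rate of time preference r = 0, the same welfare counts
% equally whenever it occurs:
PV(w, t) = w \quad \text{for all } t

% By contrast, with r = 0.03, welfare 200 years out is discounted by
% a factor of (1.03)^{200} \approx 369, i.e. it counts for under 0.3%
% of equivalent present welfare. Hence the claim that a zero discount
% rate gives the far future enormous moral weight.
```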
Appendix 1. Zvi's list of assumptions, for comparison
Copy-pasted from here.
1. Utilitarianism. Alternatives are considered at best to be mistakes.
2. Importance of Suffering. Suffering is The Bad. Happiness/pleasure is The Good.
3. Quantification. Emphasis on that which can be seen and measured.
4. Bureaucracy. Distribution of funds via organizational grants and applications.
5. Scope Sensitivity. Shut up and multiply, two are twice as good as one.
6. Intentionality. You should to plan your life around the impact it will have.
7. Effectiveness. Do what works. The goal is to cut the enemy.
8. Altruism. The best way to do good yourself is to act selflessly to do good.
9. Obligation. We owe the future quite a lot, arguably everything.
10. Coordination. Working together is more effective than cultivating competition.
11. Selflessness. You shouldn’t value yourself, locals or family more than others.
12. Self-Recommending. Belief in the movement and methods themselves.
13. Evangelicalism. Belief that it is good to convert others and add resources to EA.
14. Reputation. EA should optimize largely for EA’s reputation.
15. Modesty. Non-neglected topics can be safely ignored, often consensus trusted.
16. Existential Risk. Wiping out all value in the universe is really, really bad.
17. Sacrifice. Important to set a good example, and to not waste resources.
18. Judgement. Not living up to this list is morally bad. Also sort of like murder.
19. Veganism. If you are not vegan many EAs treat you as non-serious (or even evil).
20. Grace. In practice people can’t live up to this list fully and that’s acceptable.
21. Totalization. Things outside the framework are considered to have no value.
Appendix 2. Twitter version of this post
https://twitter.com/peterhartree/status/1552950728137871361
Appendix 3. Theory of value does not determine normative ethics
Added 2022-08-16, because a couple of people asked me about this.
Theory of value does not determine what actions or ways of thinking are best for individuals. One has to combine a theory of value with a normative theory, plus a bunch of empirical facts.
The question "what is ultimately valuable?" is quite different from questions like "how should humans behave?" and "what patterns of praise and blame would have good consequences?"
(Philosophers sometimes distinguish "axiology (theory of value)" from "normative theory", "practical ethics" or "decision procedure".)
People vary on how quickly they move from impartial axiology to normative ethics, and how revisionary they want to be of traditional ethics, including partiality.
Both Peter Singer and Tyler Cowen start with an impartial theory of value (Tyler is complicated, but he at least uses this framework sometimes). But Tyler tends to take the constraints of human nature more seriously, and thinks that, for humans, the normative software that leads to the best consequences is not a million miles away from what we have now.
Utilitarians usually make moves like Tyler's, at least in particular cases. Singer does this to some degree.
One could, of course, believe that impartial theory of value makes no sense, and instead embrace a partial theory of value, scoped (for example) to human values. Bernard Williams famously defends this perspective.
1. See also: Greenberg interviews Beckstead; Karnofsky on future-proof ethics; Bostrom on metaethics. ↩︎
2. Citing Zohar Atkins, who sometimes vibes against straw-utilitarianism. See also: Runciman on Bentham, SEP on John Stuart Mill on higher and lower pleasures. ↩︎
3. See also: Cowen: Be Suspicious of Simple Stories. ↩︎
4. This number went up a lot in 2022 due to the Future Fund regranting program. ↩︎
5. See e.g. Giving What We Can, Ben Todd: How to balance impact and doing what you love. ↩︎
6. I guess most movements involve this? Perhaps Zvi is suggesting excessive belief in some particular methods, but I think the commitment to effectiveness ("whatever works", item 7 above) is more fundamental. ↩︎
7. CEA has a community health team. ↩︎
8. I think Zvi is wildly off on this one. This doesn't match my experience of the UK / London / Oxford community at all. I've not spent much time in the various US and Bay Area communities, so I can't personally speak to that, but I asked a couple of people who are more familiar and they also didn't recognise Zvi's description. ↩︎
9. It's worth noting that some people retain this perception even as they become quite involved in things. I see this mainly as a communication problem that EA groups should work to fix, rather than a fundamental issue. ↩︎
10. Various factors drive this misperception. I won't try to quickly summarise them right now. One big one: poorly crafted memes and discussion around the ideal of "maximisation". This stuff is hard, but my hunch is that "do the most good" was a mistake, all things considered. MacAskill (2019) has a good, careful definition: "the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources". But this qualification hasn't yet been made salient enough in the intro materials, key talking points, and so on. ↩︎
11. See also: Luke Muelhauser: Effective altruism as I see it. ↩︎
Guy Raveh @ 2022-07-29T15:50 (+4)
Upvoted for explicitly laying these out, though I don't necessarily endorse all of them myself (example: for #15 I agree with the "there are many problems" vibe, but still think we should aspire to act through institutions).
Zach Stein-Perlman @ 2022-07-29T19:00 (+3)
Lists like this are necessarily imprecise unless we're specific about what it's a list of. Possibilities include necessary conditions for being an EA, or stuff the EA community/movement broadly accepts, or the axiomatic assumptions underlying those beliefs, or common perceptions of the community/movement.
peterhartree @ 2022-07-29T20:08 (+1)
For what it's worth, I was going for:
stuff the EA community/movement broadly accepts
Where I'm mostly thinking of people who have been fairly deeply involved for several years.
But I was also idealising a bit, according to my own tastes and takes. Hence the "according to me".
Guy Raveh @ 2022-07-30T22:17 (+2)
I would say this as "fundamental beliefs and principles, any of which if you disagree with, you'll be widely frowned upon".
LukasRos @ 2022-08-09T10:50 (+2)
I think this is a great list and it also aligns with how I want the EA community to be, but also how I perceive the majority of it to already be (with the few exceptions that get outsized attention).
It's probably just because of the original format, but after renaming #8 from "altruism" to "impartial altruism", this generates an unfortunate overlap with #11 (impartiality) that might be avoidable (though I'm unsure how).
Regarding #19 (veganism), I feel it's weird that you just crossed out this point from the original list instead of sharing your own thoughts, so I'd recommend changing it and arguing why you think veganism should or shouldn't be an EA value.
peterhartree @ 2022-08-16T06:34 (+1)
Added Appendix 3. Theory of value does not determine normative ethics because a couple of people asked me about this.
JimHenry @ 2022-08-05T16:10 (+1)
I realize that it was Zvi that used the term first and you are simply copying him, but "Evangelicalism" should really read "Evangelism" or better yet "Proselytism."
Evangelicalism is a specific movement in Christianity with specific ideas only one of which is evangelism, the preaching of the gospel to convert others.
peterhartree @ 2022-08-06T05:23 (+1)
Oh, thanks! Edited.
Robert_Wiblin @ 2022-07-29T14:35 (+1)
This is certainly closer to what folks involved in EA actually think (and reflects that there are many topics where people have a wide range of views and aesthetics).