Effective Altruism: An Unmanifesto

By ozymandias @ 2025-07-23T21:12 (+62)

The most common way to identify the effective altruism movement is by pointing. Effective altruists are:

This raises many questions, like what the heck these three groups have to do with each other and what they get out of posting on the same forums and going to the same conferences.

Perhaps we could seek out definitions that would help us. The Center for Effective Altruism says effective altruism is “a framework and research field that encourages people to combine compassion and care with evidence and reason to find the most effective ways to help others.” The essay Effective Altruism is a question (not an ideology)[1] says effective altruism is the attempt to answer the question “How can I do the most good, with the resources available to me?”

These definitions are unsatisfying, because everyone who is trying to be a good person supports those things. No one is against care and compassion. No one wants to do less good than they could for no reason. No one is like “I specifically hate evidence and reason and I’m going to make my charitable decisions using astrological charts.” Tautologically, people don’t use resources they don’t have.

Often, EAs respond to critique of effective altruism with what Michael Nielsen calls EA judo:

One of the most common lines of "attack" on EA is to disagree with common EA notions of what it means to do the most good. "Are you an EA?" "Oh, those are the people who think you need to give money for malaria bed nets [or AI safety, or de-worming, etc., etc.], but that's wrong because […].”…

These statements may or may not be true. Regardless, none of them is a fundamental critique of EA. Rather, they're examples of EA thinking: you're actually participating in the EA project when you make such comments. EAs argue vociferously all the time about what it means to do the most good. What unites them is that they agree they should "use evidence and reason to figure out how to do the most good"; if you disagree with prevailing EA notions of most good, and have evidence to contribute, then you're providing grist for the mill driving improvement in EA understanding of what is good…

Most external critics who think they're critiquing EA are critiquing a mirage. In this sense, EA has a huge surface area which can only be improved by critique, not weakened… A pleasant, informative example is EA Rob Wiblin interviewing Russ Roberts, who presents himself as disagreeing with EA. But through (most of) the interview, Roberts tacitly accepts the basic ideas of EA, while disagreeing with particular instantiations. And Wiblin practices EA judo, over and over, turning it into a very typical EA-type debate over how to do the most good. It's very interesting and both participants are very thoughtful, but it's not really a debate about the merits of EA.

I am an occasional practitioner of EA judo myself. But EA judo doesn’t take opposition to effective altruism seriously on its own terms. From an EA judoka’s perspective, everyone is a temporarily embarrassed effective altruist who simply needs to be brought into the fold.

My belief is that effective altruists do unusual things because they believe unusual things. Other people don’t act like effective altruists because they disagree with effective altruists. These disagreements are about fundamental worldview matters, so they can be hard to talk about or even identify. But they are real.

Most definitions of effective altruism are unsatisfying, because their goal is primarily persuasive: their purpose is to get you to become an effective altruist, or to change the direction of the effective altruist movement. The former kind of definition tries to make effective altruism seem obvious and uncontroversial, and to smuggle in the broader worldview through the back door. The latter kind of definition makes claims that the author would like to be true of effective altruism, but that often aren’t.

This post series defines effective altruism in a way that’s anthropological. What do effective altruists believe that other people tend not to believe? Why do they believe that? What do the vegans and the kidney donors, the AI safety researchers and the randomized controlled trial lovers, have in common?

For ease of writing, I say “effective altruism says this” or “effective altruists believe that.” In reality, effective altruism is a diverse movement, and many effective altruists believe different things. And while I’m trying my best to describe the beliefs that are distinctive to the movement, no effective altruist (including me) believes everything I put in these posts. However, I still believe these posts can be useful.

I end up reducing effective altruism to four beliefs:

  1. Don’t care about some strangers more than other strangers because of arbitrary group membership.
  2. Think with numbers.
  3. We don’t know what we’re doing—but we can figure it out together.
  4. You can do hard things.

To elaborate on these points, I wrote a series of ten posts, each expanding on one of nine fundamental beliefs that most effective altruists share.

  1. Welfarist, maximizing consequentialism: you should take actions that cause people to be as well-off as possible.
  2. Moral circle expansionism: the well-being of all beings capable of well-being matters more-or-less equally; in particular, you should disregard special relationships and moral desert.
  3. Quantitative mindset: reasoning with numbers is a useful tool that sheds light on many problems.
  4. Taking ideas seriously: if logic and evidence suggest that a particular conclusion is true, but it seems absurd, work under the assumption that it’s true.
  5. Rationalist epistemics:
    1. Bayesianism and cognitive biases.
    2. Social epistemology and the free marketplace of ideas.
  6. Ambition: you should strive to achieve big things and have a large effect on the world.
  7. Importance, tractability, neglectedness: The correct way to pick causes is based on how important they are, how easy it is to make changes, and how much effort is already being directed into them.
  8. The effective altruist narrative of history: the past was terrible; the future will be weird.
  9. The effective altruist approach to politics: small-l liberal, economically informed, positive-sum, technocratic, incrementalist.

I will be serializing them on the Effective Altruism Forum over the next few weeks, or you can click on the posts now if you would prefer spoilers. 

  1. ^ Which is a good essay!


NunoSempere @ 2025-07-24T07:06 (+14)

There is an additional set of beliefs which EAs share, which is something like: EA institutions are a good means to pursue those goals, their leaders worth deferring to, their forum worth using, the brand worth expanding, the community worth recruiting people into. I think this is important, otherwise it risks stolen valor, e.g., counting Bill Gates as "an EA", and it risks eliding the ways in which the community is very dysfunctional.

I personally had a small crisis when I came to believe that the Center for Effective Altruism wasn't cost-effective. I wrote some of my dissatisfactions up here, but since then I've drifted further apart.

Which is to say that I'd put more emphasis on the "together" in "We don’t know what we’re doing—but we can figure it out together" when pointing at EAs.

DavidNash @ 2025-07-24T13:42 (+13)

I don't get that sense. Compared to most other movements, I feel that people involved in EA are less likely to think those things (blind deferral/spread the brand/trust institutions).

Vasco Grilo🔸 @ 2025-07-31T11:05 (+4)

Thanks for doing this! I have only read this post, but it seems like the series could be a valuable reference.

Sarah Tegeler 🔹 @ 2025-08-08T14:43 (+3)

Thank you for the post!

I also find that how I explain EA changes a lot depending on context and who I’m talking to—and sometimes, I realise afterwards that I haven’t quite got across what makes EA distinct.

The four beliefs you identify remind me very much of the general EA principles by CEA:

  1. Prioritization
  2. Impartial altruism
  3. Open truthseeking
  4. Collaborative spirit


I would probably sort them like this: 

  1. Don’t care about some strangers more than other strangers because of arbitrary group membership. --> aligns well with impartial altruism
  2. Think with numbers. --> aligns with prioritisation and open truth-seeking with an emphasis on using quantitative methods
  3. We don’t know what we’re doing—but we can figure it out together. --> aligns with open truthseeking and collaborative spirit
  4. You can do hard things. --> my question if someone mentioned this would be: how can we do hard things? I think the answer is a mix of prioritisation and collaborative spirit.

I like how your approach captures the culture of EA in a descriptive way, while CEA's EA principles feel more like a set of aspirations or guiding rules. I might use your wordings for explaining the principles in a more tangible way in the future!

michel @ 2025-08-21T05:20 (+2)

Helpful post! Especially liked this comment with respect to how EA is defined and what a more helpful definition can include:

Most definitions of effective altruism are unsatisfying, because their goal is primarily persuasive: their purpose is to get you to become an effective altruist [...]

This post series defines effective altruism in a way that’s anthropological. What do effective altruists believe that other people tend not to believe? Why do they believe that? What do the vegans and the kidney donors, the AI safety researchers and the randomized controlled trial lovers, have in common?

SummaryBot @ 2025-07-24T18:26 (+1)

Executive summary: In this exploratory series introduction, the author argues that conventional definitions of effective altruism (EA) are overly vague or persuasive, and instead proposes an anthropological account that identifies four core beliefs and nine worldview traits that unify the otherwise diverse actions and subgroups within EA.

Key points:

  1. Mainstream EA definitions are unsatisfying because they are either tautological ("use evidence and reason to help others") or aimed at persuasion rather than accurate description.
  2. The concept of "EA judo" explains how EA often absorbs critiques by framing them as internal disagreements over how to do the most good, but this can mask genuine philosophical or worldview-level disagreements.
  3. The author contends that EA reflects genuinely unusual beliefs, which explain its distinctive actions and cannot be reduced to general moral aspirations shared by everyone.
  4. The post proposes four central beliefs of EA: impartial concern for strangers, quantitative reasoning, collaborative epistemic humility, and the conviction that ambitious good is achievable.
  5. A set of nine worldview components—including maximizing consequentialism, moral circle expansion, a quantitative mindset, rationalist epistemics, and technocratic politics—further define EA's internal coherence and distinguish it from other philosophies.
  6. The series aims to provide a descriptive (not prescriptive) account of EA as a cultural phenomenon, recognizing internal diversity while identifying patterns that clarify what makes EA unique.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.