Welfarist, Maximizing Consequentialism (Effective Altruism Definitions Sequence)
By ozymandias @ 2025-07-25T15:36 (+25)
Effective altruism is a form of maximizing, welfarist consequentialism.[1]
“Consequentialism,” as I’m using it, means that the primary criterion for whether an action is good is its effect on the world. Mainstream Judaism is an example of a non-consequentialist ethical system. You’re supposed to perform mitzvot such as keeping kosher, even if (often, especially if) keeping kosher does no good for the world at all. The rationale for keeping kosher doesn’t involve balancing remembrance of your covenant with God against how tasty bacon cheeseburgers are. You’re supposed to keep kosher because the rules say so.
Of course, Judaism has many consequentialist elements, such as tikkun olam. I tend to think of consequentialism as a spectrum. Almost no one is completely consequentialist or completely non-consequentialist. But some people tend, on average, to care more about the consequences of their actions, and some people tend, on average, to care more about other things.
In my experience, most effective altruists are a little bit non-consequentialist: we believe that (say) it is wrong to brutally murder innocent people even if we were sure it would have positive consequences. But those qualms don’t come up much in effective altruist thinking: the effective altruist charity evaluator GiveWell has never had to consider Assassins Without Borders for Top Charity status. So effective altruism is basically consequentialist.
“Welfarism” means that, if something is good, then it has to be good for someone (that is, it increases their well-being). Many people assume that welfarists only care about happiness or pleasure. In reality, well-being can include anything that is good or bad for a specific person: freedom, virtue, beauty, the ability to fully develop their gifts. If you’re one of those pseudo-Nietzschean Greek-statue-avatar people, well-being is the ability to crush your enemies, see them driven before you, and hear the lamentations of their women.
As with consequentialism, everyone is at least a little bit welfarist. The Stanford Encyclopedia of Philosophy says “A theory which said that [well-being] just does not matter would be given no credence at all,” which is the strongest statement I’ve ever read it make about morality. But many people care about non-welfarist goods. For example, you might value the existence of the Mona Lisa or the Notre Dame Cathedral, separately from whether anyone benefits from their existence. Or you might care about the preservation of intact natural ecosystems, separately from their economic benefit to humans or the happiness of the animals who live there. Or you might think that it’s better for everyone to be equally okay than for some people to be flourishing and some people to be miserable.
“Maximization” means that we want things to be as good as possible. Again, no one is like “I specifically want things to be worse than they could be.” If a genie appeared to you and said “Do you want me to eliminate hookworm? I promise it has no bad side effects; the only thing that will happen is that children are no longer infected with parasitic worms,” everyone would say yes.
But many people have an intuition that morality should only be so demanding.[2] You’re supposed to be kind to those around you and work hard at your job and tip generously and not murder anyone; once that’s done, you get to spend your time knitting and writing mediocre fanfic. You’re supposed to give to charities that have a positive effect on the world, but if your emotions are stirred more by children with cancer than by children with malaria, there’s no reason to donate to the latter.
A general trend I hope you’ve caught on to is that effective altruism is ordinary morality, simplified. Everyone is a little bit maximizing, everyone is a little bit welfarist, everyone is a little bit consequentialist. Effective altruism involves going all-in on these intuitions everyone shares.
Moral foundations theory is the theory that human moral reasoning relies on multiple moral intuitions. While different researchers give different lists, the “standard five” are:
- Care: wanting people to be happy and not to suffer.
  - Care commands that you feed the hungry, clothe the naked, and avoid kicking adorable puppies for no reason.
- Fairness: treating people justly, according to what they deserve.
  - Fairness commands that you pay your debts, pay your workers a reasonable wage, and punish wrongdoers.
- Loyalty: helping members of your ingroup; patriotism, filial piety.
  - Loyalty commands that you visit your mother, stick with your friends instead of dropping them for cooler new friends, and die for your country in battle.
- Authority: obedience to those in power over you; respect for tradition.
  - Authority demands that you follow the law, do what your boss tells you to do, and follow the ancient customs of your culture.
- Purity: avoiding things which are disgusting or contaminating, and seeking out things which are sanctified and pure.
  - Purity demands that you honor the flag and your religion’s holy symbols, follow your religion’s dietary rules, and not have weird sex.
Most social psychology is bullshit, and I don’t think these are the exact five inborn moral intuitions or anything. But I think moral foundations theory’s basic insight is true: people draw their moral intuitions from multiple sources. As long as one of these impulses is “Care,” I think my argument in the next few paragraphs goes through.
A lot of people equate effective altruism with utilitarianism. I don’t think that’s right. Many prominent effective altruists—such as Will MacAskill and Toby Ord—don’t identify as utilitarians. But I do think that effective altruism involves weighting “Care” far above the other four moral foundations. Effective altruists are generally concerned about Fairness and Loyalty,[3] at least as tools for making sure people are better off. But they often see Purity, Authority, and even Loyalty not only as distractions but as biases, mistaken impulses that push people towards wrongdoing—and effective altruists can certainly make that case by mustering any number of disgust-laden anti-Semitic rants, “I was just following orders” excuses for atrocities, and well-paying sinecures given to undeserving nephews.
My point here isn’t to defend the effective altruist impulse to care, however. I’m not going to reason you out of your moral system. But I think something like moral foundations theory is why effective altruist morality often seems like normal morality but simpler. The Care foundation tends to be welfarist (obviously) but also consequentialist and maximizing. If you care about someone, you want your actions to actually leave them better off. And there is no point at which you go “eh, good enough” about suffering, while most people feel okay going “eh, good enough” about the amount of weird sex they’re having. Most people intermingle the Care foundation with other non-welfarist, non-consequentialist, non-maximizing foundations. Effective altruists don’t.
Because effective altruists care about things everyone cares about, the findings of effective altruism can be useful, even for people with more complex moral systems. GiveWell tells you how to most efficiently turn money into a lower mortality rate and higher consumption for human beings. If you have a strong Loyalty foundation, unlike most effective altruists, you’re unlikely to give 10% of your income to GiveWell-recommended charities when you could spend it on your kids. But if you have an extra $100, your Care foundation might nudge you to donate it to the GiveWell All Grants Fund. In this way, the existence of effective altruists can help advance the goals even of people who aren’t effective altruists. This kind of cooperation is something I hope to encourage in this series.
[1] I’m sorry about the philosophical jargon, but I promise these words are simpler than they look.
[2] It’s possible to have maximizing moral ideologies that aren’t very demanding, like certain forms of ethical egoism (the belief that everyone should act in their enlightened self-interest).
[3] Some people might doubt that effective altruists care about loyalty, but I think they do—effective altruists generally think it’s good to visit your own mother in the nursing home, and bad to instead try to redistribute the visits to the loneliest old person there.