Effective Altruism Deconfusion, Part 2: Causes, Philosophy, and Social Constraints

By Davidmanheim @ 2023-02-05T10:13 (+44)

This is part 2 of my attempt to disentangle and clarify some parts of the overall set of claims that comprise effective altruism - in this case, the set of philosophical positions and cause areas. My aim is not to evaluate them, but to enable clearer discussion of the claims, and of the disagreements, for both opponents and proponents. In my previous post, I made a number of claims about Effective Altruism as a philosophical position. I claimed that there is a nearly universally accepted normative claim that doing good things is important, and a slightly less universal but widely agreed upon claim that one should do those things effectively.

My central claim in this post is that the notion of “impartial,” in determining “how to maximize the good with a given unit of resources, in impartial welfarist terms,” hides almost all of the philosophical complexity and debate that occurs. In the previous post, I said that Effective Altruism as a philosophy was widely shared. Now I’m saying that the specific goals are very much not shared. Unsurprisingly, this mostly appears in discussions of cause area prioritization[1]. But the set of causes that could be prioritized is, I claim, far larger than the set effective altruists typically assume - and it embeds lots of assumptions and claims that aren’t being questioned clearly.

Causes and Philosophy

To start, I’d like to explore the compatibility, or lack of compatibility, of Effective Altruism with other philosophical positions. There are many different philosophical areas and positions, and most of them aren’t actually limited to philosophers. Without going into the different areas of philosophy in detail, I’ll say that I think all of axiology, which includes both aesthetics and ethics, as well as large parts of metaphysics, is actually pretty central to the questions Effective Altruism addresses. These debates are central to any discussion of how to pursue cause-neutrality, but are often - in fact, nearly always - ignored by the community.

Aesthetics and EA, or Aesthetics versus EA?

For example, aesthetics, the study of beauty and joy, could be central to the question of maximizing welfare. According to some views, joy and beauty tell us what welfare is. Many point out that someone can derive great pleasure from something directly physically painful or unpleasant - working out, or sacrificing themselves for a cause, whether it be their children or their religious beliefs. Similarly, many people personally place a high value on art and music. Given a choice between, say, losing their hearing and never again getting to listen to music, or giving up a decade of life, many would choose music. Preference utilitarianism (or the equivalent preference beneficentrism) would say that people benefit from getting what they want.

Similarly, many people place great value on aesthetics, and think that music and the arts are an important part of benefiting others. On the other hand, a thought experiment sometimes used in arguments about consequentialism is to imagine a museum on fire, and to weigh saving, say, the Mona Lisa against saving a patron who was there to look at it. Typically, the point being made is that the Mona Lisa has a monetary value that far exceeds the cost of saving a life, and so a certain type of person - an economist, perhaps - might choose to save the painting. (An EA might then sell it, and use the proceeds to save many lives.) But a different viewpoint is that there is a reason the Mona Lisa is valued so highly: aesthetics matters to people so much that, when considering public budgeting tradeoffs between fighting homelessness and funding museums, they think the correct moral choice is to spend some money funding museums.

None of this is to say that impartial welfare maximization should prioritize funding museums in New York or Paris over saving lives, but it points to why the issue is less obvious, and more contingent on philosophical questions outside of what Effective Altruism normally discusses, than most Effective Altruists assume.

Effective Altruism versus Egalitarianism

According to egalitarian views, it would be morally beneficial to destroy the wealth of the very rich, because doing so decreases inequality. This is a Pareto disimprovement, but many people endorse the idea despite that. This is Parfit’s leveling down objection to egalitarianism, but it is a bullet that some would gladly bite[2]. If this wealth destruction is thought to improve welfare, many leftist criticisms of effective altruism make far more sense.
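To make the leveling down structure concrete, here is a toy sketch - my own illustration, with entirely invented numbers - showing that a standard inequality measure (the Gini coefficient) falls when most of the richest person’s wealth is destroyed, even though no one gains and one person is strictly worse off:

```python
# Toy illustration of the leveling down objection, using invented wealth figures.
# Destroying the richest person's wealth lowers measured inequality (the Gini
# coefficient) even though no one is better off and one person is strictly worse off.

def gini(wealth):
    """Gini coefficient: mean absolute difference between pairs, divided by twice the mean."""
    n = len(wealth)
    total_diff = sum(abs(a - b) for a in wealth for b in wealth)
    return total_diff / (2 * n * sum(wealth))

before = [1, 1, 10]  # hypothetical wealth levels
after = [1, 1, 2]    # most of the richest person's wealth has been destroyed

print(gini(before))  # 0.5
print(gini(after))   # ~0.17 - inequality falls, but this is a Pareto disimprovement
```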

Similarly, if self-determination and control are valuable independently of outcomes, solving problems can reduce welfare. Many people have expressed very clear preferences to suffer from preventable diseases rather than be required to get vaccinated or wear masks. This is often seen as self-defeating or irrational, but only because of a strong presumption that (easily measurable, physical, and medical) welfare is more important than the ability to control your own decisions.

But a key objection to both of the above is that an honest appraisal of actual benefits is missing. Starving children don’t actually value aesthetics or self-determination, and people dying of malaria or cancer don’t want ethical debate; they want to not be sick. If we actually want to be impartial, we need to consider the benefits to those receiving them, not the objections of first-world defenders of aesthetics or egalitarianism. And while I think this objection and defense of Effective Altruism is unfairly dismissive, it points to a key issue: what impartiality means and how it matters.

Impartiality

Without the criterion of impartiality, beneficence can be either impartial or partial. Utilitarianism is totalizing, optimizing, impartial beneficentrism - and most criticisms of Utilitarianism are about the fact that it is totalizing, not that it is impartial. Effective altruism, however, is non-totalizing. So for philosophical Effective Altruism, we need to qualify beneficence as impartial. Effective Altruists in the movement don’t tend to fund museums, but an aesthetics-focused philosophical effective altruism might. At the same time, even if aesthetics is important, it does not justify funding the local museum over museums in developing countries. Again, a critical part of effective altruism is impartiality.

But impartiality is tricky, because it’s insufficiently well scoped. Within the movement, there is effectively universal agreement that the impartial beneficence of Effective Altruism includes impartiality on the basis of geography, social class, and nationality[3].  However, we do need to consider different varieties of impartiality - primarily temporal, speciesist, and probabilistic.

Longtermism and Temporal Impartiality

Temporal impartiality is not always addressed in classic utilitarian discussions. Longtermism, however, as defined by MacAskill in his book, is equivalent to temporally impartial beneficentrism[4]. In the book, it is carefully not totalizing, so it retains effective altruism’s claim that it is only one of several priorities.

This is another example where philosophical debates that predate Effective Altruism turn out to be rather critical to our choices in implementing effective altruist ideas. One key question for impartiality here concerns person-affecting views. The central question relates to population ethics, and to who benefits.

I think the question of who benefits needs a slight digression. A well-known objection to a straw-man version of utilitarianism is the question of whether you can murder one healthy person for their organs, to save five others awaiting transplants. This is obviously a bad idea even on purely utilitarian grounds - for example, because of the broader impacts of accepting such a philosophy, or because it breaks deontological guardrails that are critically important when making decisions with imperfect information. However, there is another fundamental objection, which is that it is not Pareto-improving. That is, someone specific loses something.

A pithy summary of the debate about person-affecting views is whether we care about making lives happy, or making happy lives. One might ask: should a couple still in college immediately have a child, knowing that they will be financially strained and unable to provide for the child well, or wait, and later have more children whom they can support more easily and give happier lives? In this case, none of the children exist yet, and we are comparing two different sets of non-existing people. Even though we’re causing one person not to exist, we are causing more, and likely happier, people to exist instead. This seems morally fairly easy, and it is a strong argument framing the discussion about the priority of the long-term future. But it doesn’t ask the more difficult question.

To transform the question about the murder-happy transplant surgeon into a question of person-affecting versus non-person-affecting population ethics, we can ask about an expectant first-time mother planning to have a family of five children, who tragically discovers, at the very earliest stage of pregnancy, that the blastocyst[5] will develop into a child who will have a debilitating physical ailment[6]. If she decides to carry the child to term, she is vanishingly unlikely to have more children, since she will need to care for the baby she is carrying. However, if she chooses to have an abortion, she will have her planned large family. She must choose between making one specific and existing (proto-)life happy, that of the unborn child, and making five future happy lives. And for those who embrace person-affecting views, this is a far more difficult question than when all of the future people are contingent[7].

A parallel criticism of longtermism is that it ‘steals’ resources[8] from real people who live now in order to improve the future. This is a related and valid philosophical debate, and it depends on whether we think we can trade off benefits to present lives for happier future lives. It is also a question of impartiality between these sets of lives. If we accept temporal impartiality, we cannot give unfair advantage to those already alive over the unborn.

As a distinct philosophical objection, I will also note that “Strong Longtermism,” as MacAskill defines it in his earlier paper, both rejects discount rates and accepts full impartiality, which is more consistent with his definition of Effective Altruism. Unfortunately, especially when paired with expected value decision making, this makes it totalizing, since the long-term future then receives nearly complete priority over any other issue.

Animal Welfare is About How Far to Take Speciesism 

Speciesist impartiality, at least in a naive form, is less well accepted than the other types. To raise a straw-man objection, perhaps we must be impartial between animals and humans, treating a chicken’s happiness as equivalent to a human’s. But even among Effective Altruists who prioritize reducing animal suffering, my understanding is that very few actually advocate impartiality on this basis[9]. In this case, impartiality is often replaced with non-ignorability, or perhaps well-being-adjusted care; animals matter to some non-negligible extent, an extent plausibly dependent on their relative capacity for well-being, but in any case potentially enough that, in large numbers, improving their lives could be the single largest moral priority.

This can be justified in a variety of ways, and precisely because of the flexibility in deciding how much to weigh the welfare of animals, it is less obviously totalizing than longtermism.

Probabilistic Impartiality

Lastly, Effective Altruism is closely associated with what I think is best understood as probabilistic impartiality - the use of expected value, being impartial between a moderate chance of a positive outcome and the certainty of a less positive outcome[10]. This is not generally a matter of debate within the movement, and in fact it was a central part of the early conception of effective altruism[11]. For example, it is a key justification for prioritizing large risks, especially in non-longtermist effective altruism. The extent to which this should be embraced in the most extreme cases, however, is less clear, with MacAskill professing unease at the possibility of fanaticism.

It is also not obviously necessary for the other claims, and in fact, some effective altruists prefer to donate to strongly evidence-backed interventions in part because they do not embrace certain types of expected value decision making[12]. Interestingly, this may partly explain the relative lack of emphasis on systemic change within Effective Altruism. That is, among those who do not fully embrace expected value, systemic change is far less certain to have an impact, and so is neglected. On the other hand, among those who do embrace expected value across hard-to-measure interventions, the impact of systemic change is dwarfed by that of existential risk reduction, and is neglected on that basis. 
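To illustrate the disagreement described in footnote 12, here is a minimal sketch - again my own, using a toy risk-aversion rule rather than any particular decision theory - contrasting risk-neutral expected value with a risk-averse evaluation of the same two options:

```python
# Contrast risk-neutral expected value with a toy risk-averse evaluation,
# using the hypothetical choice from footnote 12: save one life for certain,
# or take a 50% chance of saving three lives.

def expected_value(outcomes):
    """Probability-weighted sum of lives saved: the risk-neutral evaluation."""
    return sum(p * lives for p, lives in outcomes)

def risk_averse_value(outcomes, uncertainty_penalty=0.5):
    """A toy risk-averse evaluation that discounts any uncertain outcome.
    The penalty is purely illustrative, not a real decision theory."""
    return sum(p * lives * (uncertainty_penalty if p < 1.0 else 1.0)
               for p, lives in outcomes)

certain_save = [(1.0, 1)]           # save one life with certainty
risky_save = [(0.5, 3), (0.5, 0)]   # 50% chance of saving three lives

print(expected_value(certain_save), expected_value(risky_save))        # 1.0 vs 1.5
print(risk_averse_value(certain_save), risk_averse_value(risky_save))  # 1.0 vs 0.75
```

Under expected value the gamble wins; under even mild risk aversion it loses, which is enough to change which intervention looks “most effective.”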

Cause Prioritization is Communal, and Therefore Broken

Unfortunately, these philosophical debates are likely unresolvable. At the same time, prioritization is value-laden, and community consensus evolves without waiting for these debates to be resolved.

One less-strongly embraced Effective Altruist cause area is mental health. If we define wellbeing primarily in terms of subjective feelings of happiness, we need to compare WELLBYs across cause areas. Ignoring the still-unresolvable comparisons with animals, we might consider the current trend in rich countries toward sadder people, higher levels of depression, and high rates of suicide. If the richest people are the saddest, perhaps investing in slightly more fun video games for people in developed countries is higher leverage than pushing for slightly faster convergence in already-skyrocketing life expectancy in the developing world. If this sounds sacrilegious to Effective Altruists, or morally horrific to its critics, rest assured that I personally agree - but I want to note that this is a value judgment, not a fact about the world.
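To make concrete what comparing WELLBYs across cause areas involves, here is a toy sketch with entirely invented numbers; the point is only that the arithmetic is trivial once the contested value judgments - what counts as wellbeing, and how much - have been made:

```python
# A toy WELLBY-per-dollar comparison. All numbers are invented for illustration;
# the contested part is choosing the inputs, not doing the division.

interventions = {
    # name: (total WELLBYs gained, total cost in dollars) - hypothetical
    "mental health program, rich country": (5_000, 250_000),
    "more fun video games, rich country": (1_000, 100_000),
    "health intervention, developing world": (20_000, 400_000),
}

for name, (wellbys, cost) in interventions.items():
    print(f"{name}: {wellbys / cost:.4f} WELLBYs per dollar")
```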

Going perhaps even farther, it would be perfectly compatible with effective altruism as a philosophy to embrace aesthetics as a moral priority, and to investigate the impact of music programs in sub-Saharan Africa. We might want to compare different programs, or provide charitable entrepreneurial funding to start more effective new ones, to find the best value per dollar. Socially, however, this project would violate shared asserted values, which prioritize health and economic prosperity over music.

Adjacent and Overlapping Philosophical Beliefs

Even within cause areas, however, the community strongly influences the choices that are made. The fact that many members of the community embrace views that are conceptually separate from Effective Altruism strongly shapes their norms and behaviors, and with them those choices.

Some Effective Altruists are libertarian, strongly distrusting governments, while others view state capacity as critical. This has led to debates about the viability of many interventions that promote systemic change, debates which are not easily resolvable with any plausible data or trial. These philosophical questions are also critical in discussing the place of policy change in addressing extreme and existential risks. However, these are debates that are orthogonal to the philosophical claims of Effective Altruism, and yet critical for actual cause selection. And these debates can be communally divisive.

For example, Cremer and Kemp criticize the “techno-utopian approach” to existential risk within Effective Altruism - though they are careful to differentiate between the philosophy of techno-utopianism and the cause area of existential risk reduction. Nothing about reduction of existential risks requires embracing transhumanism, but communally, there is overlap between the two.

Implicit in their critique, however, is the promotion of another set of views unrelated to Effective Altruism, which are embraced by many academics, including the many academics who are leaders within Effective Altruism. For example, these values include value pluralism and academic norms about the importance of building conceptual frameworks. And even within that narrow framing, choosing to invest in more work on philosophy instead of econometric studies, or in analytic philosophy and moral uncertainty instead of aesthetics or sociology, is a communal choice to prioritize certain viewpoints.

Almost any set of beliefs has important implications for the tasks of Effective Altruism, and the community engagement around the writing and publication of the paper shows how influential and divisive these factors are - despite being unrelated to any of the central claims of effective altruism.

While discussing these key debates, I would feel remiss not to also mention economism, the belief that economic forces are primary in understanding the world. This view has been embraced by many within effective altruism, but the belief is distinct, and is not necessary even for the epistemic task of choosing the most effective charities. The related issue of reification of metrics is more fundamental, since the epistemic task does rely on having some basis for comparison. Because of this, some still see “metricization” as a critical and fundamental failure of Effective Altruism, and would prefer that it embrace a more holistic and less numerical approach. In fact, however, any procedure for comparing impact can be used, and even many of the most strident advocates of econometrics and RCTs within effective altruism agree that holistic evaluation has some place in understanding impact.

Social Consensus and Community as a Limit to Effective Altruism

As the above dissection implies, there are a number of coherent sets of priorities or specific interventions that are compatible with effective altruism as a philosophy - but only a few can be explored by a single community. Which of the possible cause areas are embraced depends on differing moral views, as well as (to a lesser extent) different views about the tractability and impact of specific interventions. So I want to point out that most of the reason Effective Altruism is so narrow, despite the nearly universal philosophical basis I discussed in the previous post, is that we have embraced the idea of Effective Altruism as a single global community.

On the other hand, community is beneficial. One key benefit of community, mentioned above, is the creation of behavioral norms and Schelling points for altruistic actions. A second is the collaborative and interactive pursuit of the epistemic tasks of Effective Altruism. The original impetus for creating GiveWell, for example, was to provide a service that would allow people to outsource the evaluation of the effectiveness of their giving. At the same time, the interaction between the epistemic task and a variety of viewpoints creates tension, since the different claimed priorities often conflict.

I do not plan to discuss the many benefits and drawbacks of various aspects of the community in this post, but I will note that, among the core adherents, Effective Altruism can be seen to function as a combination of a religion, both sociologically and teleologically, and an extended family or clan. This means that the community influences and greatly narrows the actual selection of causes, and keeps the potentially nearly universal ideas of effective altruism inside of a very small bubble.

I hope that the bubble doesn’t filter out all of the fundamental objections to the choices Effective Altruists make, especially when those choices are made on the basis of unrelated and unquestioned viewpoints. I also hope that the bubble doesn’t keep the ideas of being effectively and impartially beneficent stuck inside. But the current community does not seem to share these hopes, or at least, isn’t acting on them.

  1. ^

    Unfortunately, I think this largely undermines a different key claim of Effective Altruism, that of cause neutrality. But I will need to return to that issue.

  2. ^

    This seems counterintuitive and even indefensible to some Effective Altruists, but it can be defended on the basis that many things humans care about, such as status or power, are in fact zero-sum, so apparent Pareto improvements are actually costly to people who naively seem not to have lost anything. Regardless, however, egalitarianism is a view that many outside of Effective Altruism hold.

  3. ^

    I unfortunately need to clarify that claims of racial bias, sexism, or partiality on the basis of characteristics like career and IQ are obviously excluded in discussions of the targets for effective altruism. But - despite clear messages from EA organizations and leaders rejecting discrimination - this does not automatically imply that there are not widely noted issues with insufficient diversity and inclusion within the movement itself. I do not think that they are notably worse than similar problems in other communities, but I think that fact neither minimizes the extent of the problems nor excuses the shortcomings.

  4. ^

    Technically, this is modal rather than temporal impartiality - but over even moderately long time frames, the set of people who will exist will certainly change depending on almost any significant action taken today.

  5. ^

    To ensure that the question does not involve murder of anything like a sentient being, the blastocyst at this stage does not yet have a brain, or even a pumping heart.

  6. ^

    We can assume the disability is not genetic and is vanishingly unlikely to occur in her future children, or that it is due to a recessive allele and the future children will be conceived with a partner who does not carry that allele.

  7. ^

    I do think that there’s a strange blind spot in much of EA, which thinks that shrimp lives matter but is horrified by restricting abortion - and I don’t think this is actually about non-person-affecting views; I think it is cultural importation from adjacent communities, which I discuss below (in relation to less controversial topics than abortion).

  8. ^

    Of course, the fundamental and debatable premise is that the resources being used morally belong to the recipients. However, assuming beneficentrism, whatever resources are being dedicated to improving welfare are intended for that project, and the allocation question is a reasonable one.

  9. ^

    It is possible to defend an approach that weights experience or well-being equally, which is equivalent to weighting by the complexity of the animal. For example, see this textbook, which says “individuals of different species may differ drastically in their capacity to suffer or flourish: a typical human or dolphin may have vastly more well-being at stake than a typical mouse or chicken. The point is just that an equal amount of suffering matters equally no matter who it is that experiences it.” But this (reasonable) reframing still shifts from impartiality to a different approach. If fully embraced, moreover, I would claim it leads to clear violations of impartiality - for example, saying that satisfying some humans’ preferences is less important because their capacity is lower due to age, mental handicaps, or other contingent factors.

  10. ^

    Conceptually, this is impartiality between possible people, or possible futures. I think this is far more closely related to impartiality than people generally recognize, especially when using a multiple-possible-worlds lens or concept of probability.

  11. ^

    It could be claimed that the use of expected value decision making is a sociological and contingent part of EA, due to the involvement of economists and people who use Bayesian decision theory. I am unsure that this makes a difference, but will assert that it is closer to a philosophical claim than to a socially contingent one.

  12. ^

    In some cases, the rejection of pure expected value is justified based on arguments about certainty and confidence, and in other cases, people are simply not risk-neutral in their decision making. That is, in the latter case, they can prefer saving a single life with certainty to a 50% chance of saving 3 lives.


quinn @ 2023-02-06T17:44 (+4)

Really good post, you're really getting at what counts. 

Sometimes I get a vibe from community building efforts that people feel like 100% conversion from "wants to do good" to EA is the target or goal. When I often want to say, "no, a failed conversion might be great, it might mean someone is thinking clearly about niche philosophical conundrum xyz and happens to fall to the opposite side of us". I think this is good and should be celebrated, but it means certain estimates of what the ceiling of EA participation is (like people who think we'll be as big as "environmentalism" or "antiabortion" someday) are straightforwardly wrong. 

(Note "conversion" has religious overtones but it's also a marketing term, I'm using it in the marketing sense) 

Davidmanheim @ 2023-02-06T19:16 (+4)

Yes, I strongly agree - and don't think failed conversions are actually opposed to us. Usually, we want people to be effective at doing good even if they disagree about what the most good is. I'd prefer someone spending money to improve music education effectively to someone doing it ineffectively, even if I don't think it's the highest priority.

Elka Weber @ 2023-06-26T19:12 (+3)

Hello! Just an aside from an EA newbie: there's a missing word in the following sentence.

To transform the question about the murder-happy transplant surgeon to a question of person-affecting or non-person-affecting population ethics, we can ask about an expectant first-time mother planning to have a family of five children, who tragically discovers, at the very earliest stage of pregnancy, that the blastocyst[5] which could become a child will have a debilitating physical ailment[6].

 Insert "who" between "child" and "will." Thanks! I'll keep reading & learning quietly now. 

Davidmanheim @ 2023-06-27T07:11 (+2)

Thanks, fixed!