Why Not EA? [paper draft]

By Richard Y Chappell🔸 @ 2023-05-09T19:03 (+47)

This is a linkpost to https://www.dropbox.com/s/mpr78cffc68gkb0/Chappell-WhyNotEA.pdf?dl=0

Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.

The abstract:

Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.

I cover:

Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":

Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.

On earning to give:

Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. This comparative claim, too, is both true and widely neglected. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions are potentially worth considering, for altruistic reasons, than people commonly recognize.

On billionaire philanthropy:

EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.

I still have time to make revisions -- and space to expand the paper if needed -- so if anyone has time to read the whole draft and offer any feedback (either in comments below, or privately via DM/email/whatever), that would be most welcome!


Jason @ 2023-05-11T18:35 (+18)

On page 27, you clarify that many concepts in the article are not core to EA, but are "specific ideas contingently associated with EA, such as earning to give and life-affirming longtermism" that could be rejected "while still embracing the core of effective altruism." I think it would be helpful to distinguish core commitments from non-core issues early on in the article.

I would also consider toning down some of the strong rhetorical claims, like "every decent person." You'd need much more space to cover every potential objection to EA's philosophical underpinnings before you could substantiate this claim with that level of confidence. Moreover, the reader knows they are reading a journal volume on philosophical issues in EA, which implies that the journal editors at least think there are plausible philosophical criticisms. Likely the reader also knows that other contributors have identified what they think are substantial philosophical problems, and that some EA principles do not align with the assumptions a reader new to EA likely holds.

All that is to say that I think a tone of "core concepts are obviously right, and no decent person would argue otherwise" would lead most neutral readers to conclude (1) that you're setting up strawmen, or (2) that you're defining the "core ideas" broadly enough to almost be truisms, leaving a lot of the heavy lifting to be done by unclearly-defined "details of implementation."

Paul Currion @ 2023-05-10T21:41 (+17)

I think this paper is weak from the outset in similar ways to the entire philosophical project of EA overall. You start with the definition of EA as "the project of trying to find the best ways of helping others, and putting them into practice". In that definition "the best" means "the most effective", which is one of the ways in which EA arguments rhetorically load the dice. If I don't agree that the most effective way to help people (under EA definitions) is always and necessarily the best way to help people, then the whole paper is weakened. Essentially, one ends up preaching to the choir - which is fine if that's what one wants to do, of course.

I take issue with a number of the arguments in the paper, but I have no desire to respond to the entire thing. However I will focus on the part of the Moral Prioritisation section that quotes Mark Goldring of Oxfam - not because I'm a fan of him or Oxfam, which I am not, but because your misinterpretation of his position is quite illustrative. You claim that "Goldring seems to be implying that so long as we help some children in each country, it does not matter how many children we end up abandoning", but this is not the argument or an implication of the argument.

First, Goldring is referring to Oxfam's country portfolio rather than a specific group of children, and he obviously believed that applying EA principles to Oxfam's portfolio would require the organisation simply to cease working in South Sudan, because the cost of getting children into education is higher in South Sudan than in e.g. Bangladesh. It seems to me that his belief was correct, and that it is morally unjustifiable to abandon the people of South Sudan because somebody sitting in a comfortable office somewhere has done some calculations and decided that those people are not worth it.

You may object to my characterisation of EA in this way, but as far as I can tell that is the fundamental argument. Oxfam claims to, tries to and perhaps even does operate on the basis of need, and the need of children in South Sudan is at least equal to the need of children in Bangladesh. In fact it might be greater, since as Goldring points out, the barriers to school attendance are high in South Sudan compared to Bangladesh. This also highlights (to me, at least) that these situations are sufficiently complex that the type of utilitarian calculus applied by EA is largely self-defeating in many real-world attempts to help people.

Anticipating the downvotes, hoping for discussion.

Richard Y Chappell @ 2023-05-10T22:44 (+8)

I'm very puzzled by this comment. Your characterization of Goldring's argument is precisely the argument I'm responding to, so I'm confused that you present this as though you think I am interpreting Goldring as saying something different.  I argue that an objectionable implication of Goldring's position (and yours) is that we should abandon a larger group of children because they are in a country (Bangladesh) for which we have already helped some other children. You haven't responded to my argument at all.

Paul Currion @ 2023-05-11T12:03 (+8)

Thank you for replying, although I admit to being equally puzzled by your puzzlement.

What Goldring is paraphrased as saying is that "For a certain cost, the charity might enable only a few children to go to school in a country such as South Sudan, where the barriers to school attendance are high, he says; but that does not mean it should work only in countries where the cost of schooling is cheaper, such as Bangladesh, because that would abandon the South Sudanese children."

Goldring is not "implying that so long as we help some children in each country, it does not matter how many children we end up abandoning". I simply don't see where you get that from. It's just not the argument that he's making. His argument is that the needs of children in South Sudan and Bangladesh are equally important, that the foundation for Oxfam's work is needs rather than costs, and that the accident of birth that placed a child in South Sudan and not Bangladesh is thus not a justification to abandon the former.

What Goldring does imply is that applying "EA principles" would require Oxfam to abandon all the children of South Sudan - and probably for every aid organisation to abandon the entire country, since South Sudan is a difficult and costly working environment. In this case "quantity has a quality all of its own" - the argument that justifies abandoning 100 children in one country in favour of 1000 children in another looks markedly different when it's used to justify withdrawing all forms of assistance from an entire country.

This highlights the conflict between EA's approach - which takes "effectiveness" (specifically cost-effectiveness) as an intrinsic rather than instrumental value - and the framework used by others, who have other intrinsic values. That conflict is the reason why we may be talking past each other - I recognise that you probably won't agree with this argument, and may continue to be puzzled. I would suggest to you that this is the fundamental weakness of the paper - that you are not taking these criticisms of EA in good faith, and in some cases are addressing straw man versions of them.

David Mathers @ 2023-05-11T16:03 (+10)

How far are you willing to push this? Presumably, you wouldn't educate 1 child in South Sudan and 10 in Bangladesh, rather than 0 in South Sudan and 10,000 in Bangladesh, just so that you can say South Sudan hasn't been abandoned? So exactly how many more children have to go without education before you say "that's too many more" and switch to one country? What could justify a particular cut-off?

Paul Currion @ 2023-05-11T19:16 (+3)

I'm not a utilitarian, so I reject the premise of this question when presented in the abstract as it is here. Effectiveness for me is an instrumental value, so I would need to have a clearer picture of the operating environments in both countries and the funding environment at the global level before I would be able to answer it.

zchuang @ 2023-05-12T01:44 (+5)

Just because you're not a utilitarian doesn't mean you can reject the premise of the question. Deontologists have the same problem with trade-offs! The premise of the question is one even the Oxfam report accepts. I also don't think you know what an instrumental value is. I think you keep throwing the term out without understanding how it frames the instrumental empirical question in a way that dissolves other values.

Paul Currion @ 2023-05-12T04:15 (+4)

Can you give me an argument for why I can't reject the premise of the question, rather than just telling me I can't? I've explained why I reject it in these comments. Goldring "accepts" the premise only in the sense that he's attending an event which is based entirely on that premise, and has had that premise forced onto him through the rhetorical trick which I described in my reply to Chappell.

I think you're partly right about my confusion about instrumental values. Now that I reconsider, the humanitarian principles are a strange mix of instrumental and intrinsic values; regardless, effectiveness remains solely an instrumental value. Perhaps you could explain what you mean by "other values dissolve"?

David Mathers @ 2023-05-12T10:16 (+2)

What is "the premise" that you reject?

Paul Currion @ 2023-05-12T11:07 (+11)

The premise that a naive utilitarian calculus that abstracts and dehumanises individuals by presenting them as numbers in an equation unmoored from reality is a useful or ethical way to frame the question of how "best" to help people. As I've said in another comment, the trolley problem was meant as a stimulus to discussion, not as a guide for making policy decisions around public transport systems.

EDIT: I realise that this description may come across as harsh on a forum populated almost entirely by utilitarians, but I felt that it was important to be clear about the exact nature of my objection. My position is that I agree that utilitarianism should be a tool in our ethical toolkit, but I disagree that it is the tool that we should reach for exclusively, or even first of all.

David Mathers @ 2023-05-12T11:14 (+3)

How can we discuss whether or not it makes sense to help more people over less without discussing cases where more/less people are helped? 

Paul Currion @ 2023-05-12T11:32 (+3)

I suppose that part of my point is that we may not be discussing whether or not it makes sense to help more people over less. We may be discussing how we can help people who are most in need, who may cost more or less to help than other people.

I've claimed that naive utilitarian calculus is simply not that useful in guiding actual policy decisions. Those decisions - which happen every day in aid organisations - need to include a much wider range of factors than just numbers.

If we keep it in the realm of thought experiments, it's a simple question and an obvious answer. But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?

David Mathers @ 2023-05-12T11:47 (+8)

'But do you really believe that the philosophical thought experiment maps smoothly and clearly to the real world problem?'

No, of course not. But in assessing the real-world problem, you seemed to be relying on some sort of claim that it is sometimes better to help fewer people if it means a fairer distribution of help. So I was raising a problem for that view: if you think it is sometimes better to distribute money to more countries even though it helps fewer people, then either that is always better in any possible circumstance, realistic or otherwise, or it's sometimes better and sometimes not, depending on circumstance. The thought experiment then comes in to show that there are possible, albeit not very realistic, circumstances where it clearly isn't better. So that shows one of the two options available to someone with your view is wrong.

Then I challenged the other option, that it is sometimes better and sometimes not, but the thought experiment wasn't doing any work there. Instead, I just asked what you think determines when it is better to distribute the money more evenly between countries versus when it is better to just help the most people, and implied that this is a hard question to answer. As it happens, I don't actually think this view is definitely wrong, and you have hinted at a good answer, namely that we should sometimes help fewer people in order to prioritize the absolutely worst off. But I think it is a genuine problem for views like this that it's always going to look a bit hazy what determines exactly how much you should prioritize the worst off, and the view does seem to imply there must be an answer to that.

Paul Currion @ 2023-05-12T12:21 (+7)

I think we need to get away from “countries” as a frame - the thought experiment is the same whether it’s between countries, within a country, or even within a community. So my claim is not that “it is sometimes better to distribute money to more countries even though it helps less people”.

If we take the Bangladeshi school thought experiment - that with available funding, you can educate either 1000 boys or 800 girls, because girls face more barriers to access education - my claim is obviously not that “it is sometimes better to distribute money to more genders even though it helps less people”. You could definitely describe it that way - just as Chappell describes Goldring’s statement - but that is clearly not the basis of the decision itself, which is more concerned with relative needs in an equity framework.

You are right to describe my basis for making decisions as context-specific. It is therefore fair to say that I believe that in some circumstances it is morally justified to help fewer people if those people are in greater need. The view that this is *always* better is clearly wrong, but I don’t make that assessment on the basis of the thought experiment, but on the basis that moral decisions are almost always context-specific and often fuzzy around the edges.

So while I agree that it is always going to look a bit hazy what determines your priorities, I don’t see it as a problem, but simply as the background against which decisions need to be made. Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?

David Mathers @ 2023-05-12T12:43 (+7)

'Would you agree that one of the appeals of utilitarianism is that it claims to resolve at least some of that haziness?'

Yes, indeed, I think I agree with everything in this last post. In general non-utilitarian views tend to capture more of what we actually care about at the cost of making more distinctions that look arbitrary or hard to justify on reflection. It's a hard question how to trade off between these things. Though be careful not to make the mistake of thinking utilitarianism implies that the facts about what empirical effects an action will have are simple: it says nothing about that at all. 

 Or at least, I think that, technically speaking, it is true that "it is sometimes better to distribute money to more genders even though it helps less people" is something you believe, but that's a highly misleading way of describing your view: i.e. likely to make a reasonable person who takes it at face value believe other things about you and your view that are false. 

I think the countries thing probably got this conversation off on the wrong foot, because EAs have very strong opposition to the idea that national boundaries ever have moral significance. But it was probably the fault of Richard's original article that the conversation started there, since the charitable reading of Goldring was that he was making a point about prioritizing the worst off and using an example with countries to illustrate that, not saying that it's inherently more fair to distribute resources across more countries. 

As a further point: EAs who are philosophers are likely aware, when they are being careful and reflective, that some people reasonably think that it is better to help a person the worse off they are, since the philosopher Derek Parfit, who is one of the intellectual founders of EA, invented a particularly famous variant of that view: https://oxfordre.com/politics/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-232

My guess (though it is only a guess) is that if you ask Will MacAskill he'll tell you that at least in an artificial case where you can either help a million people who are very badly off, or a million and one people who are much better off by the same amount, you ought to help the worse off people. It's hard to see how he could deny that, given that he recommends giving some weight to all reasonable moral views in your decision-making, prioritizing the worse off is reasonable, and in this sort of case, helping the worse off people is much better if we ought to prioritize the worse off, while helping the million and one is only a very small amount better on the view where you ought just to help the most people. 

Note, by the way, that you can actually hold the 'always bring about the biggest benefit when distributing resources, without worrying about prioritizing the worst off' view and still reject utilitarianism overall. For example, it's consistent with "help more people rather than less when the benefit per person is the same size" that you value things other than happiness/suffering or preference satisfaction, that you believe it is sometimes wrong to violate rights in order to bring about the best outcome, etc.

Paul Currion @ 2023-05-12T12:59 (+3)

Likewise I think I agree with everything in this post. I appreciate that you took the time to engage with this discussion, and for finding grounds for agreement at least around the hazy edges.

sphor @ 2023-05-12T18:04 (+3)

Thanks to you and @Dr. David Mathers for this useful discussion. 

zchuang @ 2023-05-12T11:43 (+1)

Wait, I just want to make an object-level objection for third-party readers: most policy-making in liberal democracies is guided by cost-benefit analysis and the assignment of a value of statistical life (VSL).

Paul Currion @ 2023-05-12T11:51 (+4)

To clarify your objection: such policy-making is guided by, but not solely determined by, such approaches.

Richard Y Chappell @ 2023-05-11T17:43 (+5)

What do you mean by "not... good faith"? I take that to imply a lack of intellectual integrity, which seems a pretty serious (and insulting) charge. I don't take Goldring to be arguing in bad faith -- I just think his position is objectively irrational and poorly supported. If you think my arguments are bad, you're similarly welcome to explain why you believe that, but I really don't think anyone should be accusing me of failing to engage in good faith.

On to the substance: you (and Goldring) are especially concerned not to "withdraw all... assistance from an entire country." You would prefer to help fewer children, some in South Sudan and some in Bangladesh, rather than help a larger number of children in Bangladesh. When you help fewer people, you are thereby "abandoning", i.e. not helping, a larger number of people.  Does it matter how many more we could help in Bangladesh? It doesn't seem to matter to you or Goldring. But that is just to say that it does not matter how many (more) children we end up abandoning, on your view, so long as we help some in each country.  That's the implication of your view, right?  Can you explain why you think this isn't an accurate characterization?

ETA: I realize now there's a possible reading of the "it doesn't matter" claim on which it could be taken to impute a lack of concern even for Pareto improvements, i.e. saving just one person in each country being no better than 10 people in each country. I certainly don't mean to attribute that view to Goldring, so will be sure to reword that sentence more carefully!

zchuang @ 2023-05-11T15:40 (+5)

I don't think you're understanding what EAs truly object to though. If the problem is the moral arbitrariness and moral luck of South Sudan vs. Bangladesh then you end up having to prioritise. EA works on the margins so the argument conditionally breaks at the point quantity has a quality all of its own. 

If borders and the birth lottery are truly arbitrary, I don't understand why it would be so bad to "abandon" a country if the needs of kids in each country are equal. In the same way, typical humanitarians are fine with donations being moved from the first world to the developing world.

To invert your example: the argument that justifies funding every single country because countries are distinct categories also justifies abandoning 1000 children in one country for 100 children in another. If anything, your example trades on the fact that South Sudan and Bangladesh both feel worthy, so it feels intuitive. But the categories of countries themselves are wonderfully arbitrary; South Sudan did not exist until 2011!

Moreover, I wish you defended another intrinsic value that could be isolated away from cost-effectiveness. Is it a deserts claim that the most difficult places to administer aid are also the most "needy" and therefore deserve it more, even if it costs more?

Paul Currion @ 2023-05-11T20:45 (+1)

I'm not sure what the last sentence of your first paragraph means - can you explain it for me?

For most of the rest of your comment, I'd refer you to my other answer at https://forum.effectivealtruism.org/posts/ShCENF54ZN6bxaysL/why-not-ea-paper-draft?commentId=o4q6AFoKt7kDpN5cD. I don't know if that answers your points, but it should clarify a little.

The intrinsic values that I would point to in this context are the humanitarian principles of humanity, neutrality, impartiality and independence. (However I should note that these are the subject of continual debate, and neutrality in particular has come under serious pressure during the Ukraine war.) 

zchuang @ 2023-05-13T02:44 (+2)

Also, to be clear, "humanity, neutrality, impartiality and independence" aren't values as most philosophers understand them. Neutrality and impartiality are not ones you seem to defend above, which is why people find you confused.

Paul Currion @ 2023-05-13T04:41 (+1)

Yes, you're absolutely right. Academic philosophy has largely failed to engage with contemporary humanitarianism, which is puzzling given that the field of humanitarianism provides plenty of examples of actual moral dilemmas. That failure is also what leads to the situation we have now, where an academic paper that wants to engage with that topic lacks the language to describe it accurately.

This might be because the ethics of humanitarian action is (broadly) a species of virtue ethics, in which those humanitarian principles are the values that need to be cultivated by individuals and organisations in order to make the sort of utilitarian, deontological or other ethical decisions that we are using as thought experiments here, guided by the sort of "practical wisdom" that is often not factored into those thought experiments.

zchuang @ 2023-05-13T05:21 (+3)

I think the problem is actually reversed. Most humanitarian organisations do not have firm foundational beliefs and are about using poverty porn and feelings of the donor to guide judgements. The language you use of the value of "humanity" is a non-sequitur and doesn't provide information -- even those with high status in humanitarian aid circles like Rory Stewart express a lot of regret over this fuzziness. Put sharply, I don't think contemporary humanitarianism has language to describe itself accurately and "humanity, neutrality, impartiality and independence" are not values but rather buzzwords for charity reports and pamphlets. 

From what I've inferred, you hold some sort of Bernard Williams-style moral particularism rather than virtue ethics, in that you think there are morally salient facts everywhere on the ground in these cases, and that the configuration of the morally relevant features of an action depends on its particular context. But the problem in this discourse is that you won't name the thing you're defending, because I don't think you know exactly what your moral system is, beyond being against thought experiments and the vibes of academic philosophy.

Paul Currion @ 2023-05-13T06:06 (+1)

This is definitely an uncharitable reading of humanitarian action. The humanitarian principles are rarely to be found in "charity reports and pamphlets" (by which I assume you mean public-facing documents) and if they are found there, they are not the focus of those documents at all. The exception would be for the ICRC, for the obvious reason that the principles largely originated in their work and they act as stewards to some extent.

Your characterisation of humanitarian organisations as "using poverty porn and feelings of the donor to guide judgements" and so on - well, you're welcome to your opinion, but that clearly glosses over the hugely complex nature of decision-making in humanitarian action. Humanitarian organisations clearly have foundational beliefs, even if they're not sufficiently unambiguous for you. The world is unfortunately an ambiguous place.

(I should explain at this point that I am not a full-throated and unapologetic supporter of the humanitarian sector. I am in fact a sharp critic of the way in which it works, and I appreciate sharp criticism of it in general. But that criticism needs to be well-informed rather than armchair criticism, which I suppose is why I'm in this thread!)

I do in fact practice virtue ethics, and while there is some affinity between humanitarian decision-making and moral particularism, there are clearly moral principles in the former which the latter might deny - the principle of impartiality means that one is required to provide assistance to (for example) genocidaires from Rwanda when they find themselves in a refugee camp in Tanzania, regardless of what criminal actions they might have carried out in their own country.

I'm not sure what you mean when you say that I won't name the thing I'm defending because I don't know what my moral system is. My personal moral framework is one of virtue ethics, taking its cue from classical virtue ethics but aware that the virtues of the classical age are not necessarily best for flourishing in the modern age; and my professional moral framework is - as you might have guessed - based on the humanitarian principles.

You might not believe that either of these frameworks is defensible, but that's different from saying that I don't know what they are. Could you explain exactly what you meant, and why you believe it?

Moya @ 2023-05-11T07:07 (+5)

When seeing the title of this post I really wanted to like it, and I appreciate the effort that went into it all so far.

Unfortunately, I have to agree with Paul - both the post and the paper draft itself read as pretty weak to me. In many instances, it seems that you argue against strawpeople rather than engaging with criticism of EA in good faith, and even worse, the arguments you use to counter the criticism boil down to what EA is advocating for "obviously" being correct. (You wrote in the post that the arguments are very much shortened because there is just so much ground to cover, but I believe that if an argument cannot be made in a convincing way, we should either focus more time on making it properly, or drop the discussion entirely, rather than just vaguely pointing towards something and hoping for the best.)

Also, you seem to defend not all of EA, but whatever part of it is most easily defensible in a particular paragraph - such as arguing that EA does not require people to always follow its moral implications, only sometimes - which some EAers might agree with, but certainly not all.

David Mathers @ 2023-05-11T09:03 (+18)

Can you mention some places where you think he has strawmanned people and what you think the correct interpretation of them is? 

pseudonym @ 2023-05-12T23:40 (+4)

This is more of a misread than a strawman, but on page 8 the paper says:

Sometimes the institutional critique is stated in ways that illegitimately presuppose that “complicity” with suboptimal institutions entails net harm. For example, Adams, Crary, and Gruen (2023, xxv) write:

> EA’s principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.” (emphasis added)

This reasoning is straightforwardly invalid. It’s entirely possible—indeed, plausible—that you may do the most good by supporting some structures that cause suffering. For one thing, even the best possible structures—like democracy—will likely cause some suffering; it suffices that the alternatives are even worse. For another, even a suboptimal structure might be too costly, or too risky, to replace. But again, if there’s evidence that current EA priorities are actually doing more harm than good, then that’s precisely the sort of thing that EA principles are concerned with. So it makes literally no sense to express this as an external critique (i.e. of the ideas, rather than their implementation).

I don't think saying that Adams, Crary, and Gruen "illegitimately presuppose that “complicity” with suboptimal institutions entails net harm" is correct. The paper misunderstands what they were saying. Here's the full sentence (emphasis added):

Taken together, the book's chapters show that in numerous interrelated areas of social justice work - including animal protection, antiracism, public health advocacy, poverty alleviation, community organizing, the running of animal sanctuaries, education, feminist and LGBTQ politics, and international advocacy - EA's principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.”

I interpret it as saying:

The way the EA movement/community/professional network employs EA principles in practice supports and enables fundamental causes of suffering, which undermines EA's ability to do the most good.

In other words, it is an empirical claim that the way EA is carried out in practice has some counterproductive results. It is not a normative claim about whether complicity with suboptimal institutions is ever okay. 

Richard Y Chappell @ 2023-05-13T15:01 (+4)

But they never even try to argue that EA support for "the very social structures that cause suffering" does more harm than good. As indicated by the "thereby", they seem to take the mere fact of complicity to suffice for "undermining its efforts to 'do the most good'."

I agree that they're talking about the way that EA principles are "actualized". They're empirically actualized in ways that involve complicity with suboptimal institutions. And the way these authors argue, they take this fact to suffice for critique.  I'm pointing out that this fact doesn't suffice. They need to further show that the complicity does more harm than good.

Moya @ 2023-05-23T08:31 (+3)

Here is my criticism in more detail:

Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. ... Every decent person should share the basic goals or values underlying effective altruism.

It starts here in the abstract - writing this way immediately sounds condescending to me, making disagreement with EA sound like an entirely unreasonable affair. So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.

Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. ... If it does not, then by their own lights they have no basis for thinking it a better option.

On systemic change: The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness: we know that there must be higher hills - higher maxima - out there, but we do not know how to get there, and any particular systemic change might well make things worse. But if EA principles told us to only ever sit at this local maximum and never even attempt to go anywhere else, those would not be principles I would be happy following.
So yes, people who support systemic change often do not have the mathematical basis to argue that it will necessarily be a good deal - but that does not mean there is no basis for thinking that attempting it is a good option.
Or, more clearly: by not mentioning uncertainty in this paragraph, I believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.

Rare exceptions aside, most careers are presumably permissible. ... This claim is both true and widely neglected. ... Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career.

On earning to give: Again, the arguments are very simplified here. Whether a career is permissible is not a binary choice, true or false. It is a gradient, and it fluctuates and evolves over time, depending on how what you are asked to do on the job changes, and on how the ambient morality of yourself and society shifts. So the question is not "among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?" but "what tradeoff should I be willing to make between a career being more morally iffy and the positive impact I can have by donating from a larger income baseline?" Additionally, if you still donate just e.g. 10% of your income but your income is higher, there is also a larger amount of money you do not donate - money you might counterfactually use to buy things you do not actually need, things that must be produced and shipped and so on, in the worst case making the world a worse place for everyone. So even "more money = more good" is not a simple truth that just holds.
And despite all these simplifications, the sentence "This claim is ... true" just really, really gets to me - such binary language again completely sweeps any criticism, any debate, any nuance under the rug.

EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth ... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.

On billionaire philanthropy: Yes, billionaires are capable of doing immense good, and again, I have not seen anyone actually arguing against that. The most common arguments I am aware of against billionaire philanthropists are (1) that billionaires simply shouldn't exist in the first place: yes, they have the capacity to do immense good, but also the capacity to do immense harm, and no single person should be allowed the capacity to do so much harm to living beings on a whim; and (2) that billionaires are capable of paying people to advise them on how best to make it look like they are doing good when actually they are not (such as creating huge charitable foundations and equipping them with lots of money, only for these foundations to re-invest that money into projects run by companies the billionaires hold shares in, etc.)

So that is what I mean by "arguing against strawpeople" - claims are so far simplified and/or misrepresented that they do not accurately represent the actual positions of EAers, or of people who criticise them.

Richard Y Chappell @ 2023-05-23T13:52 (+2)

So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.

That's a non-sequitur. There's no inconsistency between holding a certain conclusion -- that "every decent person should share the basic goals or values underlying effective altruism" -- and "honestly engaging with criticisms". I do both. (Specifically, I engage with criticisms of EA principles; I'm very explicit that the paper is not concerned with criticisms of "EA" as an entity.)

I've since reworded the abstract since the "every decent person" phrasing seems to rub people the wrong way. But it is my honest view. EA principles = beneficentrism, and rejecting beneficentrism is morally indecent. That's a view I hold, and I'm happy to defend it. You're trying to assert that my conclusion is illegitimate or "dishonest", prior to even considering my supporting reasons, and that's frankly absurd.

The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness, and we know that there must be higher hills - higher maxima - out there, but we do not know how to get there; any particular systemic change might as well make things worse.

Yes, and my "whole point" is to respond to this by observing that one's total evidence either supports the gamble of moving in a different direction, or it does not. You don't seem to have understood my argument, which is fine (I'm guessing you don't have much philosophy background), but it really should make you more cautious in your accusations.

Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.

It's all about uncertainty -- that's what "in expectation" refers to. I'm certainly not attributing certainty to the proponent of systemic change -- that would indeed be a strawperson, but it's an egregious misreading to think that I'm making any such misattribution. (Especially since the immediately preceding paragraphs were discussing uncertainty, explicitly and at length!)

the sentence "This claim is ... true" just really, really gets to me

Again, I think this is just a result of your not being familiar with the norms of philosophy. Philosophers talk about true claims all the time, and it doesn't mean that they're failing to engage honestly with those who disagree with them.

So the question is not "among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?"

Now this is a straw man! The view I defend there is rather that "we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings." Reasons always need to be balanced against countervailing reasons. The point of the appeal to permissibility is just to allow that some careers may be ruled out as a matter of deontic constraints. But obviously more moderate harms also need to be considered, and balanced against the benefits, and I never suggest otherwise.

The most common arguments I am aware of against billionaire philanthropists are...

Those aren't arguments against how EA principles apply to billionaires, so aren't relevant to my paper.

So that is what I mean by "arguing against strawpeople"

You didn't accurately identify any misrepresentations or fallacies in my paper. It's just a mix of (i) antecedently disliking the strength of my conclusion, (ii) not understanding philosophy, and (iii) your being more interested in a different topic than what my paper addresses.

Sanjay @ 2023-05-11T09:05 (+2)

I was confused by the first paragraph of Paul's comment.

  • Is it saying that EA assumes that "the best" way to help people = "the most effective" way to help people?
  • If so, could you please define what you meant by "best" and "effective"?

I get the impression Paul has some distinction in mind, but I don't understand what it is. (Paragraph copied below)

I think this paper is weak from the outset in similar ways to the entire philosophical project of EA overall. You start with the definition of EA as "the project of trying to find the best ways of helping others, and putting them into practice". In that definition "the best" means "the most effective", which is one of the ways in which EA arguments rhetorically load the dice. If I don't agree that the most effective way to help people (under EA definitions) is always and necessarily the best way to help people, then the whole paper is weakened. Essentially, one ends up preaching to the choir - which is fine if that's what one wants to do, of course.

Paul Currion @ 2023-05-11T10:34 (+3)

Yes, I am claiming that when Effective Altruism is defined as "trying to find the best ways" what it really means is "trying to find the most effective ways". As far as I can tell the reasons for using "the best" are to avoid a circular definition ("Effective Altruism is trying to find the most effective ways to perform altruism") and as a rhetorical device to deflect criticism ("Surely you can't object to trying to find the best ways of helping others?!").

Despite protests to the contrary EA is a form of utilitarianism, and when the word effective is used it has generally been in the sense of "cost effective". If you are not an effective altruist (which I am not), then cost effectiveness - while important - is an instrumental value rather than an intrinsic value. Depending on your ethical framework, therefore, what you define as "the best way" to help people will differ from the effective altruist.

Paul Currion @ 2023-05-10T21:54 (+2)

p.s. I'm aware that Oxfam's programs are also currently decided by "somebody sitting in a comfortable office somewhere [who] has done some calculations", and I object to this as well while recognising that it may be inevitable given how the world works. My argument is that EA is no better than this current situation in principle, and may be worse than this *in practice* given that it could lead to the complete abandonment of entire countries.

Nathan Young @ 2023-05-10T12:30 (+12)

Any chance we can have a google doc version to read/comment on?

Oscar Delaney @ 2023-05-12T23:36 (+1)

I would also find this useful. The formatting makes me think it is made in LaTeX, though, which I think would make that hard.

Jamie Elsey @ 2023-05-10T09:08 (+8)

"Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires."

I don't think the only alternative to wanting billionaires to actively try to do good is the obviously foolish position that they should be trying to do less good. There might be many reasons not to want to promote the idea of billionaires 'doing more good'. E.g., you might believe they have an inordinate amount of power, and that in actively trying to do good they will ultimately do harm - either through misalignment, or through mistakes in EA's ideas of what would do good even if the person remains aligned (a particular problem at certain magnitudes of money/influence, which is not such an issue when people have less power and the potential damage is less). You may also simply not want to draw such powerful people's attention to the orders of magnitude more influence they could have.

I think in your statement you are arguing that the possible effect on billionaires is not an argument against EA principles per se, and on that I'd agree; but in my view that reasonable side of the argument loses force when paired with what seems like a silly statement - that people would be arguing something no one would actually argue.

Richard Y Chappell @ 2023-05-10T13:22 (+4)

I think the full section addresses this (but let me know if you disagree), via the following:

Alternatively, if one believes that there are compelling arguments that billionaire philanthropy necessarily does more harm than good, then they might instead conclude that the best thing billionaires can do is voluntarily pay more taxes (i.e., donate to the US Treasury). That would be a surprising result, and I doubt that many actually believe it, but it is at least conceptually possible. But even that is no objection to EA principles, but just a possible implication of them (when combined with unusual empirical assumptions).

The general point (as stressed throughout the paper) being that we need to take total evidence into account. If there's evidence that "actively trying to do good they will ultimately do harm" then rationally doing good actually entails something different from what you're imagining when you describe them as "actively trying". EA principles would imply that we draw billionaires' attention to these risks, and encourage them to help in whatever ways are actually better in expectation.

Jamie Elsey @ 2023-05-10T13:53 (+1)

Sure, I don't think what you're saying is technically incorrect; it's just that, rhetorically, I would read you as being less sincere and therefore less convincing in engaging with critics if there seems to be some implication that comes across a bit like 'unless people believe something stupid, their critiques don't make sense' - but this may also be a reaction to seeing only the excerpted quote and not the whole text.

David Mathers @ 2023-05-12T11:03 (+7)

Actually, on reading the passage where you quote Goldring again, I think you have been uncharitable to him. The passage says 'Goldring says it would be wrong to apply the EA philosophy to all of Oxfam’s programmes because it could mean excluding people who most need the charity’s help.'

That could be read as expressing not the idea that more people in total get abandoned on EA views, which is indeed confused, but rather the (fairly philosophically mainstream!) prioritarian idea that, all things being equal, it is better to help people the worse off they currently are. That is, the claim is "don't abandon the worst off to help more in all cases, because the worst off have priority", not some confused claim that you help more people by distributing help more evenly across countries.

Richard Y Chappell @ 2023-05-12T14:44 (+2)

That doesn't fit well with his concern for "abandonment". It would imply instead that Prioritarian-Oxfam should pour all of their resources into South Sudan (abandoning Bangladeshi kids entirely). But yeah, probably worth mentioning this explicitly! It's part of a more general lesson I'd like the paper to bring out: one can of course optimize for things other than prima facie utilitarian impact, but even so, the results are going to look very different from the (thoroughly unoptimized) old-fashioned approaches to philanthropy.

David Mathers @ 2023-05-11T17:39 (+6)

I think the claim that your view doesn't license replaceability because it prioritizes currently existing people is a bit misleading. Unless the priority is infinite, there is presumably some level of well-being at which you swap (i.e. kill) all current people for a population with higher well-being at the same size. 'Oh, but not if they're just a little higher' doesn't seem that comforting. Of course, as you say in a footnote, you can appeal to side constraints here, but if you think side constraints can be overridden when the stakes are high enough (e.g., that it's right to kill one innocent to save a billion), then again, that just pushes up how well the people replacing us have to be doing before replacement becomes mandatory, rather than getting rid of replaceability altogether.

Richard Y Chappell @ 2023-05-12T14:50 (+2)

Thanks, yeah that's definitely worth addressing. I was implicitly thinking that strict replaceability was the philosophically interesting/objectionable claim. The mere possibility of high-stakes swamping seems a bit more generic, and less distinctive to longtermism. E.g. neartermists may be equally committed to killing (or failing to save) one innocent in order to save a sufficiently large number of other, already-existing people. In general, not wanting to be sacrificed isn't a good reason to deny that others have value at all.  But yeah, worth mentioning this in the paper itself.

David Mathers @ 2023-05-12T15:36 (+4)

My sense is that many people will think killing for replacement is distinctively objectionable, however many people are being added and however good their lives are, even though they accept that in extreme cases it's okay to kill one to save very many who already exist. To capture that intuition, you need more than just that you should prioritize current people's lives a lot; the priority has to be infinite.

lilly @ 2023-05-11T14:37 (+6)

Thanks for writing this! My sense from talking to non-EAs about longtermism is that most buy into asymmetric views of population ethics. I'm not sure what you say here will be very reassuring to them:

"Longtermism is a big tent, and includes room for “asymmetric” views of population ethics on which additional miserable lives are bad, but additional happy lives are not good but merely neutral. Such views still imply that we should be concerned about the risk of dystopian futures containing immense suffering (or “S-risks”). If there is a non-trivial chance of such S-risks eventuating, reducing these risks should plausibly be a key moral priority: astronomical suffering is not something to be viewed lightly, on any account."

If you only care about S-risks, not X-risks, and still want to get longtermism, you need to think that the level of suffering in the future could be much greater than the level of suffering at present, such that our diminished ability to prevent future suffering is offset by the scale of that suffering. In other words, if you think that there is already astronomical suffering in the world, due to, e.g., the tens of billions of factory-farmed animals living lives full of suffering, then you have to think that there is a "non-trivial" chance of a far more dystopian future in order to be a longtermist. It's pretty understandable to me why these people would think that we should work on fixing the dystopia we're already in rather than working to prevent a theoretically worse dystopia. I would probably tweak the language in the above paragraph to acknowledge that.

Separately, I didn't read the whole paper, so maybe you say this somewhere, but it might be worth mentioning that you don't need longtermism to think that many of the "longtermist" things EAs are working on (e.g., preventing pandemics; reducing AI risk) are worth working on.

Thanks again for writing this!

David Mathers @ 2023-05-11T16:15 (+4)

'that the level of suffering in the future could be much greater than the level of suffering at present'

When you say "level" here, did you mean "amount"? If you think that people will suffer the same amount per person, or even less per person in the future, but also that there will be far more future people than current people, and you can improve things for a large fraction of the future people, you can still get the result that you will reduce suffering more by working on long-termist stuff than by working on present stuff. 

lilly @ 2023-05-11T22:25 (+2)

Yes, I meant amount.

Richard Y Chappell @ 2023-05-11T17:11 (+2)

Thanks, I appreciate the helpful suggestions!

yefreitor @ 2023-05-10T08:29 (+5)

In this paper, I’ve argued that there are no good intellectual critiques of effective altruist principles. We should all agree that the latter are straightforwardly correct. But it’s always possible that true claims might be used to ill effect in the world. Many objections to effective altruism, such as the charge that it provides “moral cover” to the wealthy, may best be understood in these political terms.

I don’t think philosophers have any special expertise in adjudicating such empirical disagreements, so will not attempt to do so here. I’ll just note two general reasons for being wary of such politicized objections to moral claims.

First, I think we should have a strong default presumption in favour of truth and transparency. While it’s always conceivable that esotericism or “noble lies” could be justified, we should generally be very skeptical that lying about morality would actually be for the best. In this particular case, it seems especially implausible that discouraging people from trying to do good effectively is a good idea. I can’t rule it out—it’s a logical possibility—but it sure would be surprising. So there’s a high bar for allowing political judgments to override intellectual ones.


This is pretty uncharitable. Someone somewhere has probably sincerely argued that helping people is bad on the grounds that doing so helps people, but "political" critics of EA are critics of EA, the particular subculture/professional network/cluster of organizations that exists right now, not of "EA principles". This is somewhat obscured by the fact that the loudest and best-networked ones come from "low decoupling" intellectual cultures, and often don't take talk of principles qua principles seriously enough to bother indicating that they're talking about something else - but it's not obscure to them, and they're not going to give you any partial credit here.

Richard Y Chappell @ 2023-05-10T13:11 (+6)

Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA's "utilitarian" foundations, so I'm not sure what's uncharitable about this? If they said something like, "EA has great principles, but we think the current orgs aren't doing a great job of implementing their own principles", that would be very different from what they actually say! (It would also mean I didn't need to address them in this paper, since I'm purely concerned with evaluating EA principles, not orgs etc.)

But I guess it wouldn't hurt to flag the broader point that one could think current EA orgs are messing up in various ways while agreeing with the broader principles and wishing well for future iterations of EA (that better achieve their stated goals). Are there any other specific changes to my paper that you'd recommend here?

yefreitor @ 2023-05-10T19:48 (+11)

Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA's "utilitarian" foundations

Yes, they're hostile to utilitarianism and to some extent agent-neutrality in general, but the account of "EA principles" you give earlier in the paper is much broader.

Effective altruism is sometimes confused with utilitarianism. It shares with utilitarianism the innocuous claim that, all else equal, it’s better to do more good than less. But EA does not entail utilitarianism’s more controversial claims. It does not entail hegemonic impartial maximizing: the EA project may just be one among many in a well-rounded life ...

I’ve elsewhere described the underlying philosophy of effective altruism as “beneficentrism”—or “utilitarianism minus the controversial bits”—that is, “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” Utilitarians take beneficentrism to be exhaustive of fundamental morality, but others may freely combine it with other virtues, prerogatives, or pro tanto duties.

Critics like Crary and Srinivasan (and this particular virtue-ethicsy line of critique should not be conflated with "political" critique in general) are not interested in discussing "EA principles" in this sense. When they say something like "I object to EA principles" they're objecting to what they judge to be the actual principles animating EA discourse, not the ones the community "officially" endorses. 

They might be wrong about what those principles are - personally my impression is that EA is very strongly committed to consequentialism but in practice not all that utilitarian - but it's an at least partially empirical question, not something that can be resolved in the abstract. 

Brendan Mooney @ 2023-05-10T18:28 (+4)

Haven't read the draft, just this comment thread, but it seems to me the quoted section is somewhat unclear and that clearing it up might reduce the commenter's concerns.

You write here about interpreting some objections so that they become "empirical disagreements". But I don't see you saying exactly what the disagreement is. The claim explicitly stated is that "true claims might be used to ill effect in the world" -- but that's obviously not something you (or EAs generally) disagree with.

Then you suggest that people on the anti-EA side of the disagreement are "discouraging people from trying to do good effectively," which may be a true description of their behavior, but may also be interpreted to include seemingly evil things that they wouldn't actually do (like opposing whatever political reforms they actually support, on the basis that they would help people too well). That's presumably a misinterpretation of what you've written, but that interpretation is facilitated by the fact that the disagreement at hand hasn't been explicitly articulated.

Saul Munn @ 2023-05-10T05:25 (+3)

Hey Richard!

Big fan of Good Thoughts :)

I'd love to edit/help! Is there a rough date that you'd want edits by?

~ Saul Munn

Richard Y Chappell @ 2023-05-10T13:15 (+3)

Hi Saul, any time this month would be great.  Thanks!