New Article on "The Repugnant Conclusion of Effective Animal Altruism"
By Vera Flocke @ 2026-03-04T20:53 (+12)
Hi All,
I have a new article out in the Journal of Animal Ethics on "The Repugnant Conclusion of Effective Animal Altruism". Abstract below! I am curious how this will land with the audience here, and I welcome your thoughts.
Abstract
Effective animal advocates want to help animals as effectively as possible. I explore a popular way of spelling out this idea, according to which, when choosing between two actions to help animals, we should pick the one that maximizes the net aggregate welfare of animals. I argue that, if this is right, then—counterintuitively—we ought to build more confined animal feeding operations. This argument is an application of Parfit's mere addition paradox. My aim in laying out how this applies to animal ethics is to aid animal advocates wishing to examine the philosophical foundations of their advocacy.
Link to article: https://scholarlypublishingcollective.org/uip/jane/article/16/1/84/407918/The-Repugnant-Conclusion-of-Effective-Animal?guestAccessKey=9caf7280-2b9a-4b76-acec-89d30f1060fb
Jacob_Peacock @ 2026-03-05T18:58 (+18)
Hi Vera, kind of you to thank me in the acknowledgements and I appreciate your thinking on this problem. I'll flag for readers that I'm on the board of Animal Charity Evaluators, which is discussed in the article, but I'm not speaking for the organization.
As you know, I'm no professional philosopher, but I thought I'd share a few thoughts:
- Your central argument feels structurally very close to the logic of the larder—the classic position that we do animals a favor by bringing them into existence for consumption, provided their lives have net positive welfare. Do you distinguish between your position and this logic of the larder?
- The discussion of the Cumulative Pain Framework is valuable, but whether a cage-free hen's life clears the threshold of "net positive welfare" is generally a normative judgment, not an empirical finding. Even hedonic welfare theories still have to make a normative call on how to balance pleasures and pains. Similarly, calling the threshold "a very low bar" is itself a normative stance. I'm sympathetic to Tännsjö and others who argue the conclusion isn't actually repugnant once you think carefully about what "barely worth living" means: if a life is by definition net positive, it's worth having.
- I'm not familiar with Frick (2022), but my sense is it still proposes an axiology? To the extent that it does, it still needs to face Arrhenius's impossibility results, so it's not clear to me this actually provides an escape from the RC, unless it gives up some other desideratum.
- Negative utilitarianism is dismissed because the surest way to minimize suffering would be to eliminate all sentient life—you call this "not an option I can take seriously," which is fair. But Frick's Procreation Asymmetry holds that there is no moral reason to create a life just because it would be net positive. Taken strictly, wouldn't this imply that a world with no sentient beings is not axiologically worse than a world full of flourishing ones, roughly the conclusion you aimed to avoid? (I know Frick responds to this elsewhere, but I'm curious whether that coincides with your view.)
- I fear there's some conflation between the philosophical sense of welfarism ("what is good for someone, or what makes a life worth living, is the only thing that has intrinsic value") and the sense in animal advocacy of "favoring tactics and strategies that lead to on-farm improvements in the welfare of animals." It seems possible to accept many of your arguments against philosophical welfarism while still endorsing animal-advocacy welfarism. In the same vein, I think the recommendation to turn to reducetarianism/abolitionism similarly relies on empirical facts that aren't covered: what are these (cost-)effective reducetarian/abolitionist interventions?
Thanks for the opportunity to spend some time thinking about these issues.
Vera Flocke @ 2026-03-06T20:26 (+4)
Thanks so much for your response, Jacob. I really enjoy being in this conversation with you.
I'll take your questions point by point!
- Regarding the logic of the larder: I do not endorse the claim that we do animals a favor by bringing them into existence for consumption, provided their lives have net positive welfare. My argument is instead meant as a reductio of the maximizing assumptions behind EAA. If those assumptions are accepted, they appear to push one toward something very close to the logic of the larder. So the point is not to defend that view, but to expose a troubling implication of the underlying framework.
- On the Cumulative Pain Framework: I agree that whether a cage-free hen’s life is, all things considered, net positive is not a purely empirical finding. There is an irreducibly normative question here about how pleasures and pains should be weighed. I also agree that calling the threshold “a very low bar” is itself a normative claim. At the same time, I do not think “normative” means arbitrary or merely a matter of opinion. If effective altruists want to know how to do the most good, they should want these judgments to be constrained by empirical evidence as much as possible, even if the evidence does not by itself settle all evaluative questions.
- On Frick: Utilitarians start with a definition of the good (=welfare), and then derive from this starting point an account of what you ought to do (do whatever maximizes welfare). Frick does not start with a definition of the good. Instead, he proposes that in a context c1, outcome o1 is better than outcome o2 if and only if you have overall most reason to bring about o1 rather than o2. So, he starts with a primitive notion of reasons and derives from that an account of outcome betterness (or the good). He discusses how he avoids the Mere Addition Paradox (and the repugnant conclusion) in this article: https://academic.oup.com/book/38952/chapter-abstract/338159303?redirectedFrom=fulltext
My paper only gestures in that direction; it does not try to show that Frick’s view avoids every impossibility result or satisfies every desideratum in population ethics.
- Relatedly, I do not mean in the paper to endorse the procreation asymmetry, or the claim that adding a happy life is simply neutral. In fact, I explicitly say that the neutrality view still ultimately faces the repugnant conclusion. The view I treat most sympathetically is Frick’s broader nonconsequentialist, reasons-first framework, which tries to avoid the repugnant conclusion not by saying flourishing lives add no value, but by rejecting the consequentialist assumption that value is always something to be promoted through maximization.
- Finally, on “welfarism”: I agree that there is an important distinction between welfarism in the philosophical sense and “welfarism” in animal advocacy as a practical orientation toward on-farm welfare improvements. It is entirely possible to reject the former while still endorsing the latter as a tactic in pursuit of different goals. More generally, the same tactic can serve different ultimate aims. So I agree that my argument against maximizing net aggregate welfare does not by itself settle the strategic question of which interventions are most effective in practice. That is why I framed my conclusion relatively modestly: not that abolitionist or reducetarian approaches are thereby shown to be superior, but that I hope the argument encourages greater interest in nonconsequentialist foundations and in strategies that are not guided by maximizing welfare alone. In practice, I have on several occasions actively supported welfare-oriented work, including work by organizations such as ACE, despite my doubts about the underlying philosophical framework.
Thanks again for the careful engagement, Jacob!
Jacob_Peacock @ 2026-03-10T17:32 (+2)
Thanks, Vera, appreciate your responses here! I'll have to learn more about Frick's work at some point.
I think my key uncertainty remains: what sorts of lives are acceptable to create? My intuition is that the sorts of lives cage-free layer hens live are still far from worth creating. For example, to my mind, lives of sufficient quality probably include meaningful availability of individual moderate-to-high-quality health care—so that an individual would not die from an infected minor wound or a condition requiring surgical intervention. I think this bar makes it quite unlikely that lives on CAFOs would essentially ever be worth creating. But perhaps that's too high a bar, especially if chickens don't experience, say, anxiety about uncertain health care availability as a human might, even if that health care is never needed.
Perhaps somewhat beyond the scope of your paper, although it does seem like a crux of the argument, do you have a sense for the sorts of lives you think are acceptable to create?
Dustin Crummett @ 2026-03-12T22:40 (+3)
Without taking a stance on cage-free layer hens, I wonder if your standard isn't too demanding. I guess that no humans had access to medical care good enough that they wouldn't die due to infection of a minor wound until like 80 years ago or something. Were there no lives worth creating before then?
Vera Flocke @ 2026-03-13T13:59 (+3)
I take it this is a question for Jacob, right? I'll just chime in with one thought: I think the comparison to wildlife suffering is relevant here too. Most wild animals live short lives and die of starvation, predation, disease, or exposure. If the bar for net-zero welfare is set too high, it appears one would be pressed either to intervene drastically and turn ecosystems upside down to avoid this suffering, or to eliminate all wildlife.
Jacob_Peacock @ 2026-03-13T15:04 (+2)
I agree this would be an implication of such a bar and that it seems demanding, to say the least! I'll reiterate I have a great deal of uncertainty on this and related topics. That said, I do think the answer is potentially yes, or that those lives were possibly mostly instrumental in getting to a world where some lives were worth creating.
I think it's also notably "convenient" that the bar was crossed so recently; perhaps the bar is even higher and we have largely not yet reached it. Of course, this seems like a very counter-intuitive conclusion, although I think most conclusions on the topic will be.
Vera Flocke @ 2026-03-10T18:47 (+3)
That's a great question, Jacob, and I don't think I have a full answer!
I had funny conversations with my partner about this. He is (or was) an anti-natalist, and thought it is always wrong to "inflict existence" on someone. But I thought that I would always choose life over no-life, and would want to be born into virtually any context (provided the alternative is not being born at all). This is just to illustrate that people have widely different intuitions on this question.
I think the challenge for effective altruists is to find an answer to this question that is as much constrained by empirical evidence and good argument as possible.
Kevin Xia 🔸 @ 2026-03-12T11:27 (+10)
Thank you for sharing your paper, Vera! I have been trying to discuss and understand a lot of adjacent themes around foundational philosophical assumptions, ad absurdum arguments, and moral intuitions vs. strict theory. Since you're asking how this will land with the audience here, I'd like to offer my personal account based on engaging with many EA(A)s. This is very much my subjective impression, not endorsed by any one person I discussed this with in particular. Also, I say a lot of "they" here - to be transparent, I endorse most, but not all, of these stances myself.
- My impression is that EAs are largely aware of the counterintuitive implications their theory of choice faces. This has broadly been my experience with consequentialist-leaning people: they rarely claim, or even want to claim, that their ethical theory is perfect or aligned with all intuitions. They just believe the counterintuitive implications of their theory are less counterintuitive than those of other theories, and/or find other theories either inconsistent or arbitrary.
- Underlying this is a general desire to reason through ethics, and underlying that is a general skepticism toward moral intuition as a definitive argument. My impression is that EAs are very analytical in their approach to philosophy, and as a result, they often don't consider their own moral intuitions particularly trustworthy — to varying degrees; some might want to abandon them altogether, others simply don't weigh them heavily.
I think these two points are why many EAs would read the paper and think something like: "Yes, I know. This isn't surprising. And also, show me a theory that doesn't run into such issues."
- However, most EAs don't fundamentally start from a purely philosophical stance on what is true in ethics and try to apply it to all their actions. They "first" want to do good and almost instrumentally try to figure out what that means. I think most EAs are "quasi-consequentialist": when pressed, or when wanting to defend their views in a theoretical discussion, they consider consequentialism the strongest perspective to take — perhaps because they find it least counterintuitive, or closest to explaining how they conceptualize ethics.
- When put into practice, this stance becomes largely action-guiding. It acts as a first proxy for identifying what to do or which choice is better. But unlike in theoretical discussions, a reductio ad absurdum isn't just a "bullet to bite" where one can rest calmly on knowing that others have "bigger bullets to bite" — it's a practical blocker that is rarely broken through. I think that's why everyone is "leaning utilitarian" or "leaning consequentialist": the theory acts as a guide and pushes the limits of one's otherwise-unquestioned moral intuitions to some degree, but not far beyond what one would consider generally reasonable. This is also exemplified by how often I hear things like "I know that X is probably right, but I just don't feel comfortable doing that." Strict theory comes a close second, but almost no one completely abandons their moral intuitions in favour of it.
- I think a neat resolution to all of this is the concept of moral uncertainty. I wouldn't confidently claim that the people I describe above are likely to explicitly endorse this framing, but I think it explains much of the friction between theory and intuition. Under moral uncertainty, one doesn't simply act on one's best-guess ethical theory; one hedges across plausible theories, weighted by credence. That naturally prevents the kind of single-theory extremes that generate repugnant/counterintuitive conclusions in practice, even while allowing consequentialism to carry significant weight. This sort of uncertainty, I think, is very much in line with typical EAs' general way of thinking.
I think that's why most EAs won't feel particularly "addressed" by this or similar arguments — and why they'll likely end up with something like: "Yeah, I know this could be an issue, but I wouldn't do this anyway." I also think this explains why many will first try to argue based on empirical assumptions (e.g., can CAFOs even be net positive).
Hope this makes sense, curious to hear whether this has mirrored your experience discussing this piece :)
Vera Flocke @ 2026-03-13T13:43 (+3)
Hi Kevin,
Thanks so much, this quasi-sociological perspective is quite helpful.
One thing that puzzles me is the role of intuition in this context. A few people have responded to the repugnant conclusion by saying that animals in CAFOs, even in cage-free poultry systems, have negative welfare. But that's not borne out by the empirical research on the topic. In my view, it's largely an unverified assumption, or intuition. That seems to run against the general project of "using reason and evidence to do the most good".
Similar tensions seemed apparent to me in what you write about stances of some effective altruists. You say that many EAs want to rely on reason rather than intuition, and don't consider their own moral intuitions trustworthy. But then you also say that they "consider consequentialism the strongest perspective to take — perhaps because they find it least counterintuitive." So, the acceptance of consequentialism itself is based on intuition.
The use of intuitions appears to be quite selective and arbitrary when it serves prior commitments or helps to insulate parts of the worldview against objections.
Vera
Kevin Xia 🔸 @ 2026-03-13T15:52 (+8)
On the first point - that seems right; I think in a discussion like this, there can be a lot of confusion and conflation about what is meant by net negative welfare, lives worth living or barely worth living, to what degree one can and should trust empirical assessments of animal welfare, etc. My best guess is that people are typically "somewhat" risk-averse and "somewhat" negative-leaning consequentialists, so the bar for empirical evidence to show that chickens live net positive lives is intuitively set higher, both in how solid the evidence must be and in how positive the lives must be. That being said, I do think one can distrust intuitions as information for moral judgments while leaning on them for empirical questions - that doesn't seem inherently at odds to me. (I do think that the latter still clashes with "using evidence and reason," of course, but it can be accounted for by risk aversion and negative-leaning positions - which would change what "... to do the most good" means.) But at this point, I am just speculating about what people are thinking in making these arguments.
On the second point: my impression is that EAs rarely completely abandon moral intuition. They don't consider it particularly trustworthy, but they don't think it's useless either. It serves some function (e.g., finding the internally consistent theory that is least counterintuitive, or that satisfies the most, or the deepest-lying, moral intuitions), but then they'd basically have the theory take it from there (in theoretical discussion; once again, this tends to be different when the theory is actually acted upon). I agree that it is plausibly arbitrary (where to draw the line at which to abandon moral intuitions; others might disagree with calling that arbitrary), but I don't think it usually serves prior commitments (in my experience, EAs are the social-impact group that is most open to just changing commitments). That being said, I do think that some form of this (drawing a seemingly arbitrary line beyond which moral intuition is not trusted) is true for effectively everyone who doesn't completely lean into moral intuitionism. My best-guess explanation here is basically what I expressed in the last three points of my initial comment: most (perhaps all) EAs I am thinking of in this context are "doing-good" first, and underlying that is a strong moral compass/intuition that can be in real practical tension with trying to abandon moral intuition as information. So they try to find the right balance, with the balance on average favoring a well-reasoned theory over intuition, but not completely.
Clara Torres Latorre 🔸 @ 2026-03-04T21:15 (+8)
I think your reductio is standing on the big if that animals in CAFOs have a net positive existence, and the abstract / post skips that.
Vera Flocke @ 2026-03-04T22:02 (+1)
Thanks for your feedback, Clara! Not quite. My argument rests on the claim that it is possible for animals in CAFOs to have net positive welfare (not that they actually do). This creates an empirical and practical question for people wanting to maximize net aggregate welfare: figure out which exact living conditions ensure net positive welfare, and, based on that, which exact living conditions allow us to maximize net aggregate welfare. I have a long section in the paper where I discuss empirical research in that area, in particular the excellent work by Cynthia Schuck-Paim and her team at the Welfare Footprint Institute. As far as I can tell, with respect to layer hens, there is little evidence that lives in cage-free CAFOs would have net negative welfare.
Clara Torres Latorre 🔸 @ 2026-03-04T22:12 (+3)
Right. Then I think this should be in the abstract. Because right now the abstract says:
we should pick the one that maximizes the net aggregate welfare of animals. I argue that, if this is right, then—counterintuitively—we ought to build more confined animal feeding operations
and the "if this is right" only refers to the assumption of aggregation, not to the assumption of positive welfare in CAFOs
and the conclusion also doesn't say we maybe ought (if there are cases where CAFOs > 0)
Vera Flocke @ 2026-03-04T22:24 (+1)
Hi Clara,
I appreciate that there are different writing styles and that there certainly are other good ways to write the abstract and the conclusion of this article.
However, to clarify, the claim that net positive welfare is possible in CAFOs is not an assumption, but a premise in my argument that I provide evidence for. It's not a norm in academic journals to discuss every premise of an argument in both the abstract and the conclusion of an article. That would defeat the purpose of these sections, which are meant to provide brief overviews with tight word limits.
I agree with you that, if the conclusion was conditional on an unargued-for assumption, this should be highlighted prominently.
Clara Torres Latorre 🔸 @ 2026-03-04T22:43 (+3)
Hi Vera,
I agree on the meta point that you make here in principle. I think it's fine to not state every premise in the abstract and the conclusion, if it's something that it's argued for.
I also agree that "net positive welfare is possible in CAFOs" is not an assumption, but a premise that is argued for (and I find the arguments sound).
However, I still think the abstract as it stands now is saying something different, namely, that [maximizing aggregate welfare] => [we should build more CAFOs].
Afaik, this would be the logical conclusion from aggregationism if we assume that [animals in CAFOs have net positive lives], not only if [it is possible that animals in CAFOs have net positive lives].
Vera Flocke @ 2026-03-04T23:01 (+1)
Hi Clara,
The logical shape of my full argument is this:
If [we ought to maximize net aggregate welfare] then [we should build more CAFOs of the kind in which animals have above 0 welfare].
I also hold that:
If [we should build more CAFOs of the kind in which animals have above 0 welfare], then [we should build more CAFOs], since we cannot do the former without doing the latter.
Provided that if-then is transitive, it follows that:
If [we ought to maximize net aggregate welfare], then [we should build more CAFOs].
For these reasons, I continue to believe that the logic of the abstract is sound.
As I said, I can see that stylistic preferences could draw one towards wanting to make the difference between CAFOs in which animals have net positive welfare and CAFOs in which they don't explicit in the abstract.
Clara Torres Latorre 🔸 @ 2026-03-04T23:13 (+1)
Thank you for spelling out your reasoning in such a transparent way. I think our disagreement is not a matter of stylistic preferences.
I believe the following is incorrect:
If [we should build more CAFOs of the kind in which animals have above 0 welfare], then [we should build more CAFOs].
Let me rephrase your argument as
If [CAFOs > 0 is should] then [CAFOs is should].
I believe for this to hold you would need to know that [CAFOs < 0] is impossible, not just that [CAFOs > 0] is possible.
Vera Flocke @ 2026-03-04T23:25 (+1)
Nice! I like that we are clear on the disagreement now.
Let me substantiate my point then with a couple of examples.
- If you ought to plant an apple tree, it follows that you ought to plant a tree.
- If you ought to donate to GiveWell, it follows that you ought to donate to charity.
And so on.
Whenever you ought to do a specific action of kind A, it follows that you ought to do an action of kind A. (This follows by existential generalization, if you want to go down into the symbolic logic of the argument.)
Furthermore, it can be true that you ought to do an action of kind A, even though, for some specific action t of kind A, it is not true that you ought to do t.
For example:
- It can be true that you ought to teach your children manners, even if it is not true that you ought to physically punish your children until they learn manners.
- It can be true that you ought to bring your mother a gift for her birthday, even if it is not true that you ought to give her a Ferrari for her birthday.
And so on.
That's why I don't agree with your last point: "I believe for this to hold you would need to know that [CAFOs < 0] is impossible, not just that [CAFOs > 0] is possible."
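(For readers who want the symbolic version: the existential-generalization step can be sketched in Lean, with `IsKindA` and `Ought` as placeholder predicates of my own choosing.)

```lean
-- Sketch: if you ought to do a specific action `a` of kind A,
-- then there exists an action of kind A that you ought to do.
example {Action : Type} (IsKindA Ought : Action → Prop)
    (a : Action) (ha : IsKindA a) (h : Ought a) :
    ∃ x, IsKindA x ∧ Ought x :=
  ⟨a, ha, h⟩
```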
Clara Torres Latorre 🔸 @ 2026-03-04T23:38 (+1)
Okay. Thank you for your patience. I understand your point, and agree with the formal argument.
However, I still disagree. I don't know how to explain why without using some maths.
Let A be a subset of B, both sets of actions. Let G be the set of actions that we ought to do.
Existential generalization is something like:
if there exists an x in A ∩ G, then there exists an x in B ∩ G.
But this is not how I would expect readers to understand "we ought to build more confined animal feeding operations" in your abstract. This reads like a general recommendation, or even an unqualified/universal statement, not like an existential.
And let me add: even if the formal argument is airtight in your examples, it doesn't sound as obvious (in my intuition, it sounds obviously wrong) in your original case. This suggests that the same words mean different things in the different contexts, at least in how I'm reading it.
Vera Flocke @ 2026-03-05T00:06 (+1)
Thanks, Clara.
What I'm understanding from what you're saying is this: some people might read my abstract and think that I argue that, if we ought to maximize net aggregate welfare, then we ought to build CAFOs of any kind, including ones in which animals have net negative welfare. Then they read my article and find out that, actually, my argument shows that if we ought to maximize net aggregate welfare, then we ought to build CAFOs of certain kinds, in which animals have net positive welfare. And they might feel disappointed or misled by that.
To which my response is: Fair! Explaining the difference between CAFOs in which animals have net negative welfare and CAFOs in which they have net positive welfare in the abstract could potentially have forestalled certain misunderstandings.
Clara Torres Latorre 🔸 @ 2026-03-05T00:09 (+3)
Yes. I'm one of those possible people. I'm happy to have reached mutual understanding.
Jacob_Peacock @ 2026-03-05T19:04 (+1)
If only because I read this whole comment chain, I'll add: I agree, Vera, this sentence is logically correct, but I agree with Clara that it seems like a significant risk of misinterpretation, especially since we should expect far more people will read the abstract than the article itself.
Vera Flocke @ 2026-03-05T19:34 (+1)
I'm still curious, apart from how I worded the abstract, what's your take on the substance of the argument? If you're willing to share!
Jacob_Peacock @ 2026-03-05T19:38 (+1)
Yup, I posted a longer top-level comment.
Vera Flocke @ 2026-03-05T20:13 (+2)
Thank you! I'll think it through and get back to you in 1-2 days.
Vera Flocke @ 2026-03-05T19:15 (+1)
Thanks, Jacob! From my perspective, the difference between CAFOs with negative welfare and those with slightly positive welfare is not very significant. The zero point is often illustrated by comparison with suicidality: people with welfare of exactly 0 are indifferent with respect to suicide. CAFOs with chickens whose welfare level is only a hair better than that do not appear like a good thing to me. As I say in the paper, a world with a small number of blissful chickens seems clearly better to me than a world with a large number of miserable chickens, even if their welfare is above 0. Since, from my perspective, the difference between CAFOs with positive welfare and those with negative welfare is not very significant, this did not seem like a strong risk of misinterpretation to me. I can see how this would seem different, however, if for you the zero-welfare inflection point is crucially important.
Jacob_Peacock @ 2026-03-05T19:37 (+1)
Ah, that reasoning makes sense! From my perspective, the difference is (by definition) small, but (again, by definition) very meaningful since it differentiates lives worth creating and those that are not.
Leo @ 2026-03-09T20:18 (+1)
Effective altruists subscribe to a version of utilitarianism according to which actions are to be judged by their consequences.
While many EAs subscribe to utilitarianism, many others don't. Andreas Mogensen is just one example. The movement doesn't officially endorse utilitarianism either, as you can see in the objections here.
Vera Flocke @ 2026-03-10T01:59 (+1)
Hi Leo,
My understanding of the relationship between effective altruism and utilitarianism is informed by this article by Will MacAskill on "What Is Effective Altruism": https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/68ac5735dad92b460f97786d/1756124981213/What+Is+Effective+Altruism+International+Encyclopedia+of+Ethics.pdf
He draws out welfarism and aggregationism as core commitments of the view. It's clear that there are many versions of utilitarianism, as well as many versions of welfarism (stronger and weaker) and many ways of aggregating welfare.
The main difference between utilitarianism and EA that MacAskill draws out is that utilitarians think you're doing something wrong when you're not maximizing utility, while EAs (on this presentation) don't think that. MacAskill describes EA as a "project", not a normative theory.
By "effective altruist" you can mean someone who is a member of the EA community. That's who you seem to have in mind. Members of the EA community are of course a heterogeneous group; they hold many different views. Or you can mean someone who believes in the core commitments outlined above and is motivated by them. That's who I had in mind.
I agree with you that the phrasing "subscribing to a version of utilitarianism" slides over some nuances, since, as I said, effective altruism is not a normative theory. My text goes on to explain what I mean though.
Vera
Leo @ 2026-03-10T11:33 (+3)
Thanks for your answer, Vera. I think there's a significant issue with how the article comes across to readers. While it's clear that your text addresses a specific EA—the utilitarian type—the article reads as though it's describing the entire EA community rather than a very particular subspecies of it. Additionally, I find the framing of "philosophical foundations of EA" problematic. These (or other) philosophies may have been an inspiration or influence for some people in the movement, but the project of EA is not based on these "foundations" in the way your article suggests. If the article clearly signaled upfront that it's analyzing one particular philosophical approach within EA rather than EA as a whole, that would avoid much of this confusion (but it would be less engaging, I guess).
Vera Flocke @ 2026-03-10T14:13 (+1)
Hi Leo,
Your concern was actually on my mind when writing the paper, and I made sure to address it head-on repeatedly. Let me just point to a few of the passages in the article that clearly refer to variations in the EA and effective animal advocacy movements:
- "Effective animal advocates want to help animals as effectively as possible. I explore a popular way of spelling out this idea, according to which..." (second sentence in the abstract)
- "Many effective animal advocates do not subscribe to a specific philosophical view but rather want to 'help animals as effectively as possible.' But if we try to add more specificity to this idea, the most popular interpretation is that when choosing between two actions to help animals, we should pick the one that maximizes the net aggregate welfare of animals." (first paragraph of the article)
- "I call this more specific thesis effective animal altruism (EAA)." (Coining of the term "Effective Animal Altruism", in contrast to the more broadly used "Effective Animal Advocacy", to pick out a specific subgroup) (first paragraph).
- "If the goal is to 'benefit animals as much as possible,' we need a different way of explaining what that means." (last line of the article)
My ascriptions of philosophical foundations to the movement were based on published articles by Will MacAskill, widely described as one of the founders or "originators" of the EA movement, with titles such as "What is Effective Altruism?" or "The Definition of Effective Altruism", along with the book "Doing Good Better". Of course, as with any social movement, it is to be expected that there is a lot of variation in what individual members of the movement think and say.
Vera
Leo @ 2026-03-10T16:31 (+1)
I guess phrases like the one cited above, or "I hope that by drawing attention to a central philosophical problem of the approach, it will encourage animal advocates to look for alternatives, both philosophical and strategic." made me think that your view was that EAs were already committed (in some way or other) to some utilitarian type of philosophical foundation. I'll reread your article more carefully to see if I got it all wrong or if there's some ambiguity at play here.