Open Phil Should Allocate Most Neartermist Funding to Animal Welfare

By Ariel Simnegar 🔸 @ 2023-11-19T17:00 (+521)

Key Takeaways

Summary

Thanks to Michael St. Jules for his comments.

The Evidence Endorses Prioritizing Animal Welfare in Neartermism

GiveWell estimates that its top charity (Against Malaria Foundation) can prevent the loss of one year of life for every $100 or so.

We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent.

If you value chicken life-years equally to human life-years, this implies that corporate campaigns do about 10,000x as much good per dollar as top charities. … If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x).

Holden Karnofsky, "Worldview Diversification" (2016)

"Worldview Diversification" (2016) describes OP's approach to cause prioritization. At the time, OP's research found that if the interests of animals are "at least 1-10% as important" as those of humans, then "animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options".[2] After the better part of a decade, the latest and most rigorous research funded by OP has endorsed a stronger claim: Any significant moral weight for animals implies that OP should prioritize animal welfare in neartermism. This sentence is operationalized in the paragraphs that follow.

In 2021, OP granted $315,500 to RP for moral weight research, which "may help us compare future opportunities within farm animal welfare, prioritize across causes, and update our assumptions informing our worldview diversification work" [emphasis mine].[3] RP assembled an interdisciplinary team of experts in philosophy, comparative psychology, animal welfare science, entomology, and veterinary research to review the literature's latest evidence.[4] RP's moral weights and analysis of cage-free campaigns suggest that the average cost-effectiveness of cage-free campaigns is on the order of 1000x that of GiveWell's top charities.[5] Even if the campaigns' marginal cost-effectiveness is 10x worse than the average, that would be 100x.

In 2019, the mean EA leader endorsed allocating a majority of neartermist resources over the next 5 years to animal welfare.[6] Given the strength of the evidence that animal welfare dominates in neartermism by orders of magnitude, this allocation seems sensible for OP. In actuality, OP has allocated an average of 17% of its neartermist funding to animal welfare each year, with 83% going to other neartermist causes.[7] Since OP funded RP's moral weight research specifically in order to "prioritize across causes, and update our assumptions informing our worldview diversification work", one might have expected OP to update their allocations in response to RP's evidence. However, OP's plans for 2023 give no indication that this will happen.

The EA movement currently spends more on global health than on animal welfare and AI risk combined. It clearly isn't even following near-termist ideas to their logical conclusion, let alone long-termist ones.

Scott Alexander

If you didn't want animals to dominate, maybe you shouldn't have been a utilitarian! … When people want to put the blame on these welfare range estimates, I think that's just not taking seriously your own moral commitments.

Bob Fischer, EAG Bay Area 2023

Objections

Animal Welfare Does Not Dominate in Neartermism

OP may reject that animal welfare dominates in neartermism. If so, I'm unaware of any public clarification of OP's beliefs on the topic. In the following sections, I attempt to deduce what views OP may hold in order for animal welfare to not dominate in neartermism, and show that such views would be highly peculiar and dubious. If OP doesn't think animal welfare dominates, I ask them to publicly clarify their views, so that they can be constructively engaged with.

RP's Project Assumptions are Incorrect

If OP rejects RP's conclusions, they must reject some combination of RP's project assumptions: utilitarianism, valence symmetry, hedonism, and unitarianism. I don't think OP rejects utilitarianism or valence symmetry, so the following will focus upon OP's possible objections to:

  1. Hedonism: The view that welfare derives only from happiness and suffering.
  2. Unitarianism: The view that the moral importance of welfare doesn't depend upon species membership.

Crucially, rejecting hedonism is not enough to avoid animal welfare dominating in neartermism. As Bob Fischer points out, "Even if hedonic goods and bads (i.e., pleasures and pains) aren't all of welfare, they’re a lot of it. So, probably, the choice of a theory of welfare will only have a modest (less than 10x [i.e. at least 10 % weight for hedonism]) impact on the differences we estimate between humans' and nonhumans' welfare ranges".[8] One would need to endorse an overwhelmingly non-hedonic theory, and/or an overwhelmingly hierarchical theory, such that the combined views discount three orders of magnitude of animal welfare impact. For example, OP could hold an overwhelmingly non-hedonic view where almost none (0.1%) of the human welfare range comes from pleasure and pain.

OP could also hold an overwhelmingly hierarchical view where just for being a human, one unit of a human's welfare is considered vastly (1000x) more important than the same amount of welfare in another animal. OP could also hold a combination of less-overwhelming versions of the two, such as 1% of human welfare coming from pleasure/pain and one unit of human welfare being 10x as important as one unit of animal welfare, so long as the combined views discount three orders of magnitude of animal welfare impact.
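
To make the "three orders of magnitude" condition concrete, here is a small illustrative sketch of how a non-hedonic discount and a hierarchical discount multiply together. The framing and parameter values are mine, not OP's, and it assumes (per the implicit premise discussed in the next section) that human interventions can realize the full human welfare range:

```python
# Illustrative only: how much a combination of (a) a small hedonic share of
# human welfare and (b) a species-based discount would shrink a nominal
# 1000x animal-welfare advantage. The parameter values are hypothetical.

def residual_advantage(nominal_advantage, hedonic_share, species_discount):
    """Advantage left after discounting animal impact by both factors."""
    return nominal_advantage * hedonic_share / species_discount

print(residual_advantage(1000, hedonic_share=0.001, species_discount=1))    # ~1x: overwhelming non-hedonism alone
print(residual_advantage(1000, hedonic_share=1.0,   species_discount=1000)) # ~1x: overwhelming hierarchicalism alone
print(residual_advantage(1000, hedonic_share=0.01,  species_discount=10))   # ~1x: the combined, less-overwhelming version
```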

The following two sections will critique overwhelming non-hedonism and overwhelming hierarchicalism respectively. If the overwhelming views were significantly less overwhelming, my critique would be substantially the same. Therefore, I request that the reader consider the following critiques to also address whichever combination of less-overwhelming views OP may hold.

Endorsing Overwhelming Non-Hedonism

We [OP] think that most plausible arguments for hedonism end up being arguments for the dominance of farm animal welfare. … If we updated toward more weight on hedonism, we think the correct implication would be even more work on FAW, rather than work on human mental health.

Alexander Berger

Alexander has stated that "Hedonism doesn't seem very compelling to me".[9] Overwhelming non-hedonism, combined with the implicit premise that humans are vastly more capable of realizing non-hedonic goods than animals, may explain OP's neartermist cause prioritization: Enabling humans to realize non-hedonic goods may be better than reducing extreme suffering for orders of magnitude more animals.

The implicit premise seems non-obvious. It's plausible that both humans and other animals would have "not being tortured" pretty high in their preferences/objective list.

Even if the implicit premise is assumed, there's substantial empirical evidence that overwhelmingly non-hedonic theories are dubious:

Evidently, many people who experience severe suffering find it to outweigh many of the non-hedonic goods in life. If one endorses an overwhelmingly non-hedonic view, they’d have to argue persuasively that these people’s revealed preferences are deeply misguided.

Furthermore, if one accepts RP’s findings given hedonism but rejects prioritizing animals due to an overwhelmingly non-hedonic theory, they must endorse deeply unintuitive conclusions. To endorse human interventions over animal interventions, the human welfare range under the overwhelmingly non-hedonic view would have to be ~1000x the human welfare range under hedonism. Imagine a world with hundreds of people in extreme hedonic pain (e.g. drowning in lava) but one person with extreme non-hedonic good (e.g. love, knowledge, friendship). The overwhelming non-hedonist would consider this world net good.
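
To see why, in rough numbers (an illustrative sketch; the head count and scale are arbitrary):

```python
# Illustrative numbers only. Scale the hedonic welfare range to [-1, +1] and
# suppose, per the overwhelmingly non-hedonic view, that the full human
# welfare range is ~1000x wider, so maximal non-hedonic good is ~ +1000.

people_in_extreme_hedonic_pain = 500   # "hundreds of people" drowning in lava
hedonic_floor = -1                     # worst hedonic state per person
non_hedonic_peak = 1000                # one person realizing extreme non-hedonic good

world_total = people_in_extreme_hedonic_pain * hedonic_floor + non_hedonic_peak
print(world_total)  # +500: the view counts this world as net positive
```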

An overwhelmingly non-hedonic view would also be out of step with much of the EA community. A poll of EAs found that most respondents would give up years of extreme good, whether from hedonic or non-hedonic sources, to avoid a day of extreme hedonic pain (drowning in lava). Nearly a third responded that "No amount of happiness could compensate".

I experienced "disabling"-level pain for a couple of hours, by choice and with the freedom to stop whenever I want. This was a horrible experience that made everything else seem to not matter at all.

A single laying hen experiences hundreds of hours of this level of pain during their lifespan, which lasts perhaps a year and a half - and there are as many laying hens alive at any one time as there are humans. How would I feel if every single human were experiencing hundreds of hours of disabling pain? 

A single broiler chicken experiences fifty hours of this level of pain during their lifespan, which lasts 4-6 weeks. There are 69 billion broilers slaughtered each year. That is so many hours of pain that if you divided those hours among humanity, each human would experience about 400 hours (2.5 weeks) of disabling pain every year. Can you imagine if instead of getting, say, your regular fortnight vacation from work or study, you experienced disabling-level pain for a whole 2.5 weeks? And if every human on the planet - me, you, my friends and family and colleagues and the people living in every single country - had that same experience every year? How hard would I work in order to avert suffering that urgent?

Every single one of those chickens are experiencing pain as awful and all-consuming as I did for tens or hundreds of hours, without choice or the freedom to stop. They are also experiencing often minutes of 'excruciating'-level pain, which is an intensity that I literally cannot imagine. Billions upon billions of animals. The numbers would be even more immense if you consider farmed fish, or farmed shrimp, or farmed insects, or wild animals.

If there were a political regime or law responsible for this level of pain - which indeed there is - how hard would I work to overturn it? Surely that would tower well above my other priorities (equality, democracy, freedom, self-expression, and so on), which seem trivial and even borderline ridiculous in comparison.

Ren Springlea

Endorsing Overwhelming Hierarchicalism

I don't know whether or not OP endorses overwhelming hierarchicalism. However, after overwhelming non-hedonism, I think overwhelming hierarchicalism is the next most likely crux for OP's rejection of animal welfare dominating in neartermism.

Many properties of the human condition have been proposed as justifications for valuing one unit of human welfare vastly (1000x) more than one unit of another animal's welfare. For every property I know of that's been proposed, a case can be constructed where a person lacks that property, but we still have the intuition that we shouldn't care much less about them than we do about other people:

I personally feel much more empathy for humans than for chickens, and a benefit of believing in overwhelming hierarchicalism would be that I could prioritize helping humans over chickens. It might also make eating meat permissible, which would make life much easier. However, the losses would be real. I'd feel like I'm compromising on my epistemics by adding an arbitrary line to my moral system which lets me ignore a possible atrocity of immense scale. I'd be doing this for the sake of the warm fuzzies I'd feel from helping humans, and convenience in eating meat. That's untenable to a mind built the way mine is.

It's Strongly Intuitive that Helping Humans > Helping Chickens

I agree! But many also find it strongly intuitive that saving a child drowning in front of them is better than donating $10k to AMF, and that atrocities happening right now are more important than whatever may occur billions of years from now. In both of these cases, strong arguments to the contrary have persuaded many EAs to revise their intuitions.

If the latest and most rigorous research points to cage-free campaigns being 1000x as good as AMF, should a strong intuition to the contrary discount that by three orders of magnitude?

Skepticism of Formal Philosophy

Though this section has invoked formal philosophy for the purpose of rigor, formal philosophy isn't actually required to make the high-level argument for animal welfare dominating in neartermism:

  1. If you hurt a chicken, that probably hurts the chicken on the order of ⅓ as much as if you hurt a human similarly.
  2. Extreme suffering matters enough that reducing it can sometimes be prioritized over cultivating friendship, love, or other goods. 
  3. Reducing an animal's suffering isn't overwhelmingly less important than reducing a human's suffering.
  4. Therefore, if one's $5000 can either (a) prevent serious suffering for 50,000 hens for 1 year[14] or (b) enable a single person to realize a lifetime of love and friendship, (a) seems orders of magnitude more cost-effective.
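
To put rough numbers on step 4, here is a sketch that uses only the figures above plus one hypothetical parameter (the share of a hen's welfare range at stake, which I have made up for illustration):

```python
# A rough sketch of step 4, using the post's own figures plus one hypothetical
# parameter (serious_suffering_fraction), invented purely for illustration.

donation = 5_000                      # dollars
hen_years_per_dollar = 10             # 50,000 hens for 1 year per $5,000 (footnote 14)
chicken_moral_weight = 1 / 3          # step 1: ~1/3 as much as a similar harm to a human
serious_suffering_fraction = 0.5      # hypothetical: share of the welfare range at stake

# Option (a): human-equivalent years of serious suffering averted.
option_a = donation * hen_years_per_dollar * chicken_moral_weight * serious_suffering_fraction

# Option (b): one person realizing a lifetime (~70 years) of love and friendship.
option_b = 70

print(option_a)             # ~8333 human-equivalent suffering-years averted
print(option_a / option_b)  # ~119: option (a) comes out roughly two orders of magnitude ahead
```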

By analogy, one might be skeptical of many longtermists' use of formal philosophy to justify rejecting temporal discounting, rejecting person-affecting views, and accepting the repugnant conclusion. However, the high-level case for longtermism doesn't require formal philosophy: "I think the human race going extinct would be extra bad, even compared to many billions of deaths".

Even if Animal Welfare Dominates, it Still Shouldn't Receive a Majority of Neartermist Funding

Even if OP accepts that animal welfare dominates in neartermism, they may have other reasons for not allocating it a majority of neartermist funding.

Worldview Diversification Opposes Majority Allocations to Controversial Cause Areas

OP might state that on principle, worldview diversification shouldn’t allow a majority allocation to a controversial cause area. However, in 2017, 2019, and 2021, OP allocated a majority of longtermist funding to AI x-risk reduction.[15] While OP and I myself think AI x-risk is a major concern, thoughtful people within and outside the EA community disagree. Those who don’t think AI x-risk is a concern may consider nuclear war, pandemics, and/or climate change to be the most pressing x-risks.[16] Those who think AI x-risk is a concern often regard it as ~10x more pressing than other x-risks. In 2017, 2019, and 2021, OP judged that the 10x importance of AI x-risk reduction, under the controversial view that AI x-risk is a concern, was high enough to warrant a majority of longtermist funding.

Similarly, thoughtful people within and outside the EA community disagree on whether animals merit moral consideration. If animals do, then the most impactful animal welfare interventions are likely ~1000x as cost-effective as the most impactful alternatives. Just as controversy regarding whether AI x-risk is a concern should not preclude OP allocating AI x-risk a majority of longtermist funding, controversy regarding whether animals merit moral concern should not preclude allocating animal welfare a majority of neartermist funding.

OP is Already a Massive Animal Welfare Funder

OP is the world’s largest funder in many extremely important and neglected cause areas. However, this should not preclude OP updating its prioritization between those cause areas if given sufficient evidence. For example, if a shocking technological breakthrough shortened TAI forecasts to 2025, even though OP is already the world’s largest funder of AI x-risk reduction, OP would be justified in increasing its allocation to that cause area.

Animal Welfare has Faster Diminishing Marginal Returns than Global Health

I agree that if OP prematurely allocated a majority of neartermist funding to animal welfare, then the marginal cost-effectiveness of OP's animal welfare grants would drop substantially. Instead, I suggest that OP scale up animal welfare funding over several years to approach a majority of OP's neartermist grantmaking.

To absorb such funding, many ambitious animal welfare megaprojects have been proposed. Even if these megaprojects would be an order of magnitude less cost-effective than corporate chicken campaigns, I've argued above that they'd likely be far more cost-effective than the best neartermist alternatives.

Even so, it seems that OP's Farm Animal Welfare program may currently be able to allocate millions more without an order of magnitude decrease in cost-effectiveness:

Although tens of millions of dollars feels like a lot of money, when you compare it to the scope of the problem it quickly feels like not that much money at all, so we are having to make tradeoffs. Every dollar we give to one project is a dollar we can’t give to another project, and so unfortunately we do have to decline to fund projects that probably could do a lot of good for animals in the world.

Amanda Hungerford, Program Officer for Farm Animal Welfare at OP (8:12-8:34).

Increasing Animal Welfare Funding would Reduce OP’s Influence on Philanthropists

Over time, we aspire to become the go-to experts on impact-focused giving; to become powerful advocates for this broad idea; and to have an influence on the way many philanthropists make choices. Broadly speaking, we think our odds of doing this would fall greatly if we were all-in on animal-focused causes. We would essentially be tying the success of our broad vision for impact-focused philanthropy to a concentrated bet on animal causes (and their idiosyncrasies) in particular. And we’d be giving up many of the practical benefits we listed previously for a more diversified approach. Briefly recapped, these are: (a) being able to provide tangibly useful information to a large set of donors; (b) developing staff capacity to work in many causes in case our best-guess worldview changes over time; (c) using lessons learned in some causes to improve our work in others; (d) presenting an accurate public-facing picture of our values; and (e) increasing the degree to which, over the long run, our expected impact matches our actual impact (which could be beneficial for our own, and others’, ability to evaluate how we’re doing).

Holden Karnofsky

Though this is unfortunate, it makes sense, and Holden should be trusted here. That said, there’s a world of difference between being “all-in on animal-focused causes” and allocating a majority of OP’s neartermist funding to animal welfare, while continuing to fund many other important neartermist cause areas. It doesn’t seem to me that the latter proposal runs nearly as much risk of alienating philanthropists. Some evidence of this is that OP is the world’s largest funder of AI x-risk reduction, another niche cause area which few philanthropists are concerned with. In spite of this, OP seems to have maintained its giving capacity. Given the overwhelming case for prioritizing animal welfare in neartermism, OP may be able to communicate its change in cause prioritization in a way which maintains the donor relationships which have done so much good for others.

Request for Reasoning Transparency from OP

Though I've endeavored to critique whichever views OP may plausibly hold that preclude prioritizing animal welfare in neartermism, I'm still deeply unsure about what OP's views actually are. Here are several reasons why OP should clarify their views:

It's also possible that OP lacks a formal theory for why animal welfare doesn't dominate in neartermism. As Alexander Berger has said, "I’ve always recognized that my maximand is under-theorized". If so, it would seem even more important for OP to clarify their view. If there's a chance that $1 million to corporate campaigns is actually worth $1 billion to GiveWell-recommended charities, understanding one's answers to the relevant philosophical questions seems very important.

Here are some specific questions I request that OP answer:

  • How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?
  • Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff?
  • How would OP's views have to change for OP to prioritize animal welfare in neartermism?

Conclusion

When I started as an EA, I found other EAs' obsession with animal welfare rather strange. How could these people advocate for helping chickens over children in extreme poverty? I changed my mind for a few reasons.

The foremost reason was my realization that my love for another being shouldn't be conditional on any property of the other being. My life is pretty different from the life of an African child in extreme poverty. We likely have different cultural values, and I'd likely disagree with many of the decisions they'll make over their lives. But those differences aren't important—each and every one of them is a special person whose feelings matter just the same.

The second reason was understanding the seriousness of the suffering at stake. When I think about the horrors animals experience in factory farms, it makes me feel horrible. 

When a quarter million birds are stuffed into a single shed, unable even to flap their wings, when more than a million pigs inhabit a single farm, never once stepping into the light of day, when every year tens of millions of creatures go to their death without knowing the least measure of human kindness, it is time to question old assumptions, to ask what we are doing and what spirit drives us on.

Matthew Scully, "Dominion"

Thirdly, I've been asked whether the prospect of helping millions of beings cheapens the value of helping a single being. If I can save hundreds of African children over the course of my life, does each individual child matter proportionally less? Absolutely not. If helping a single being is worth so much, how much more is helping billions of beings worth? I can't make a difference for billions of beings, but you can.

We aspire to radical empathy: working hard to extend empathy to everyone it should be extended to, even when it’s unusual or seems strange to do so. As such, one theme of our work is trying to help populations that many people don’t feel are worth helping at all.

Holden Karnofsky

  1. ^
  2. ^

     Karnofsky, Holden (2016). "Worldview Diversification". https://www.openphilanthropy.org/research/worldview-diversification/

  3. ^

     Open Philanthropy. "Rethink Priorities — Moral Patienthood and Moral Weight Research". https://www.openphilanthropy.org/grants/rethink-priorities-moral-patienthood-and-moral-weight-research/

  4. ^

     "Our team was composed of three philosophers, two comparative psychologists (one with expertise in birds; another with expertise in cephalopods), two fish welfare researchers, two entomologists, an animal welfare scientist, and a veterinarian." Fischer, Bob (2022). "The Welfare Range Table". https://forum.effectivealtruism.org/s/y5n47MfgrKvTLE3pw/p/tnSg6o7crcHFLc395

  5. ^

     Grilo, Vasco (2023). "Prioritising animal welfare over global health and development?". https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and

  6. ^

     Gertler, Aaron (2019). "EA Leaders Forum: Survey on EA priorities (data and analysis)". https://forum.effectivealtruism.org/posts/TpoeJ9A2G5Sipxfit/ea-leaders-forum-survey-on-ea-priorities-data-and-analysis

    For the question "What (rough) percentage of resources should the EA community devote to the following areas over the next five years", the mean EA leader answered 10.7% for global health and 9.3% + 3.5% = 12.8% for farm and wild animal welfare respectively. No other neartermist cause areas were listed.

  7. ^
  8. ^

     Fischer, Bob (2023). "Theories of Welfare and Welfare Range Estimates". https://forum.effectivealtruism.org/posts/WfeWN2X4k8w8nTeaS/theories-of-welfare-and-welfare-range-estimates

  9. ^

     Rob Wiblin, Kieran Harris (2021). "Alexander Berger on improving global health and wellbeing in clear and direct ways".

  10. ^

     Rencz et al (2020). "Parallel Valuation of the EQ-5D-3L and EQ-5D-5L by Time Trade-Off in Hungary". https://www.sciencedirect.com/science/article/pii/S1098301520321173

  11. ^

     Doth et al (2010). "The burden of neuropathic pain: A systematic review and meta-analysis of health utilities". https://www.sciencedirect.com/science/article/abs/pii/S0304395910001260

  12. ^

     Lee et al (2019). "Increased suicidality in patients with cluster headache". https://pubmed.ncbi.nlm.nih.gov/31018651/

  13. ^

     Goossens et al (1999). "Patient utilities in chronic musculoskeletal pain: how useful is the standard gamble method?". https://www.sciencedirect.com/science/article/abs/pii/S0304395998002322 

  14. ^

      Simcikas, Saulius (2019). "Corporate campaigns affect 9 to 120 years of chicken life per dollar spent". https://forum.effectivealtruism.org/posts/L5EZjjXKdNgcm253H/corporate-campaigns-affect-9-to-120-years-of-chicken-life

  15. ^
  16. ^

     Toby Ord's x-risk table from The Precipice has AI 3x greater than pandemics, 100x greater than nuclear war, and 100x greater than climate change.

  17. ^

      Gertler, Aaron (2019). "EA Leaders Forum: Survey on EA priorities (data and analysis)". https://forum.effectivealtruism.org/posts/TpoeJ9A2G5Sipxfit/ea-leaders-forum-survey-on-ea-priorities-data-and-analysis


Emily Oehlsen @ 2023-11-20T01:32 (+189)

(Hi, I'm Emily, I lead GHW grantmaking at Open Phil.)

Thank you for writing this critique, and giving us the chance to read your draft and respond ahead of time. This type of feedback is very valuable for us, and I’m really glad you wrote it.

We agree that we haven’t shared much information about our thinking on this question. I’ll try to give some more context below, though I also want to be upfront that we have a lot more work to do in this area.

For the rest of this comment, I’ll use “FAW” to refer to farm animal welfare and “GHW” to refer to all the other (human-centered) work in our Global Health and Wellbeing portfolio. 

To date, we haven’t focused on making direct comparisons between GHW and FAW. Instead, we’ve focused on trying to equalize marginal returns within each area and do something more like worldview diversification to determine allocations across GHW, FAW, and Open Philanthropy’s other grantmaking. In other words, each of GHW and FAW has its own rough “bar” that an opportunity must clear to be funded. While our frameworks allow for direct comparisons, we have not stress-tested consistency for that use case. We’re also unsure conceptually whether we should be trying to equalize marginal returns between FAW and GHW or whether we should continue with our current approach. We’re planning to think more about this question next year. 

One reason why we are moving more slowly is that our current estimate of the gap between marginal animal and human funding opportunities is very different from the one in your post – within one order of magnitude, not three. And given the high uncertainty around our estimates here, we think one order of magnitude is well within the “margin of error”.

Comparing animal- and human-centered interventions involves many hard-to-estimate parameters. We think the most important ones are:

  1. Moral weights
  2. Welfare range (i.e. should we treat welfare as symmetrical around a neutral point, or negative experiences as being worse than positive experiences are good?)
  3. The difference between the number of humans and chickens, respectively, affected by a marginal intervention in each area 

There is not a lot of existing research on these three points. While we are excited to support work from places like Rethink Priorities, this is a very nascent field and we think there is still a lot to learn.

To ground your 1,000x claim, our understanding is that it implies a tradeoff of 0.85 chicken years moving from pre-reform to post-reform farming conditions vs one year of human life.

A few more details on our estimate and where we differ:

Thanks again for the critique; we wish more people would do this kind of thing!

Best,
Emily

Ariel Simnegar @ 2023-11-20T01:54 (+119)

Hi Emily,

Thanks so much for your engagement and consideration. I appreciate your openness about the need for more work in tackling these difficult questions.

our current estimates of the gap between marginal animal and human funding opportunities is very different from the one in your post – within one order of magnitude, not three.

Holden has stated that "It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness." As OP continues researching moral weights, OP's marginal cost-effectiveness estimates for FAW and GHW may eventually differ by several orders of magnitude. If this happens, would OP substantially update their allocations between FAW and GHW?

Our current moral weights, based in part on Luke Muehlhauser’s past work, are lower.

Along with OP's neartermist cause prioritization, your comment seems to imply that OP's moral weights are 1-2 orders of magnitude lower than Rethink's. If that's true, that is a massive difference which (depending upon the details) could have big implications for how EA should allocate resources between FAW charities (e.g. chickens vs shrimp) as well as between FAW and GHW.

Does OP plan to reveal their moral weights and/or their methodology for deriving them? It seems that opening up the conversation would be quite beneficial to OP's objective of furthering moral weight research until uncertainty is reduced enough to act upon.

I'd like to reiterate how much I appreciate your openness to feedback and your reply's clarification of OP's disagreements with my post. That said, this reply doesn't seem to directly answer this post's headline questions:

  • How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?
  • Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff?
  • How would OP's views have to change for OP to prioritize animal welfare in neartermism?

Though you have no obligation to directly answer these questions, I really wish you would. A transparent discussion could update OP, Rethink, and many others on this deeply important topic.

Thanks again for taking the time to engage, and for everything you and OP have done to help others :)

Emily Oehlsen @ 2023-11-22T20:33 (+9)

Hi Ariel,

As OP continues researching moral weights, OP's marginal cost-effectiveness estimates for FAW and GHW may eventually differ by several orders of magnitude. If this happens, would OP substantially update their allocations between FAW and GHW?

We’re unsure conceptually whether we should be trying to equalize marginal returns between FAW and GHW or whether we should continue with our current approach of worldview diversification. If we end up feeling confident that we should be equalizing marginal returns and there are large differences (we’re uncertain about both pieces right now), I expect that we’d adjust our allocation strategy. But this wouldn’t happen immediately; we think it’s important to give program staff notice well in advance of any pending allocation changes.

Your comment seems to imply that OP's moral weights are 1-2 orders of magnitude lower than Rethink's.

I’m wary of sharing precise numbers now, because we’re highly uncertain about all three of the parameters I listed and I don’t want people to over-update on our views. But the 2 orders of magnitude are coming from a combination of the three parameters I listed and not just moral weights. We may share more information on our views and methodology later, but I can’t commit to a particular date or any specifics on what we’ll publish.

I unfortunately won’t have time to engage with further responses for now, but whenever we publish research relevant to these topics, we’ll be sure to cross-post it on the Forum! 

We think these discussions are valuable, and I hope we’ll be able to contribute more of our own takes down the line. But we’re working on a lot of other research we hope to publish, and I can’t say with certainty when we’ll share more on this topic.

Thank you again for the critique! 

RedStateBlueState @ 2023-11-20T04:09 (+74)

If OpenPhil’s allocation is really so dependent on moral weight numbers, you should be spending significant money on research in this area, right? Are you doing this? Do you plan on doing more of this given the large divergence from Rethink’s numbers?

Emily Oehlsen @ 2023-11-22T20:32 (+9)

Several of the grants we’ve made to Rethink Priorities funded research related to moral weights; we’ve also conducted our own research on the topic. We may fund additional moral weights work next year, but we aren’t certain. In general, it's very hard to guarantee we'll fund a particular topic in a future year, since our funding always depends on which opportunities we find and how they compare to each other — and there's a lot we don't know about future opportunities.

I unfortunately won’t have time to engage with further responses for now, but whenever we publish research relevant to these topics, we’ll be sure to cross-post it on the Forum!

Will Aldred @ 2023-11-24T00:33 (+38)

Here, you say, “Several of the grants we’ve made to Rethink Priorities funded research related to moral weights.” Yet in your initial response, you said, “We don’t use Rethink’s moral weights.” I respect your tapping out of this discussion, but at the same time I’d like to express my puzzlement as to why Open Phil would fund work on moral weights to inform grantmaking allocation, and then not take that work into account.

CarlShulman @ 2023-11-26T18:45 (+29)

One can value research and find it informative or worth doing without being convinced of every view of a given researcher or team.  Open Philanthropy also sponsored a contest to surface novel considerations that could affect its views on AI timelines and risk. The winners mostly present conclusions or considerations on which AI would be a lower priority, but that doesn't imply that the judges or the institution changed their views very much in that direction.

At large scale, information can be valuable enough to buy even if it only modestly adjusts proportional allocations of effort; the minimum bar for funding a research project with hundreds of thousands or millions of dollars presumably isn't that one pivots billions of dollars on the results with near-certainty.

Will Aldred @ 2023-11-27T23:56 (+29)

Thank you for engaging. I don’t disagree with what you’ve written; I think you have interpreted me as implying something stronger than what I intended, and so I’ll now attempt to add some colour.

That Emily and other relevant people at OP have not fully adopted Rethink’s moral weights does not puzzle me. As you say, to expect that is to apply an unreasonably high funding bar. I am, however, puzzled that Emily and co. appear to have not updated at all towards Rethink’s numbers. At least, that’s the way I read:

  • We don’t use Rethink’s moral weights.
    • Our current moral weights, based in part on Luke Muehlhauser’s past work, are lower. We may update them in the future; if we do, we’ll consider work from many sources, including the arguments made in this post.

If OP has not updated at all towards Rethink’s numbers, then I see three possible explanations, all of which I find unlikely, hence my puzzlement. First possibility: the relevant people at OP have not yet given the Rethink report a thorough read, and have therefore not updated. Second: the relevant OP people have read the Rethink report, and have updated their internal models, but have not yet gotten around to updating OP’s actual grantmaking allocation. Third: OP believes the Rethink work is low quality or otherwise critically corrupted by one or more errors. I’d be very surprised if one or two are true, given how moral weight is arguably the most important consideration in neartermist grantmaking allocation. I’d also be surprised if three is true, given how well Rethink’s moral weight sequence has been received on this forum (see, e.g., comments here and here).[1] OP people may disagree with Rethink’s approach at the independent impression level, but surely, given Rethink’s moral weights work is the most extensive work done on this topic by anyone(?), the Rethink results should be given substantial weight—or at least non-trivial weight—in their all-things-considered views?

(If OP people believe there are errors in the Rethink work that render the results ~useless, then, considering the topic’s importance, I think some sort of OP write-up would be well worth the time. Both at the object level, so that future moral weight researchers can avoid making similar mistakes, and to allow the community to hold OP’s reasoning to a high standard, and also at the meta level, so that potential donors can update appropriately re. Rethink’s general quality of work.)

Additionally—and this is less important—I’m puzzled at the meta level at the way we’ve arrived here. As noted in the top-level post, Open Phil has been less than wholly open about its grantmaking, and it’s taken a pretty not-on-the-default-path sequence of events—Ariel, someone who’s not affiliated with OP and who doesn’t work on animal welfare for their day job, writing this big post; Emily from OP replying to the post and to a couple of the comments; me, a Forum-goer who doesn’t work on animal welfare, spotting an inconsistency in Emily’s replies—to surface the fact that OP does not give Rethink’s moral weights any weight.

  1. ^

    Edited to add: Carl has left a detailed reply below, and it seems that three is, in fact, what has happened.

Vasco Grilo @ 2023-11-28T07:08 (+15)

Fair points, Carl. Thanks for elaborating, Will!

  • We don’t use Rethink’s moral weights.
    • Our current moral weights, based in part on Luke Muehlhauser’s past work, are lower. We may update them in the future; if we do, we’ll consider work from many sources, including the arguments made in this post.

Interestingly and confusingly, fitting distributions to Luke's 2018 guesses for the 80% prediction intervals of the moral weight of various species, one gets mean moral weights close to or larger than 1:

It is also worth noting that Luke seemed very much willing to update on further research in 2022. Commenting on the above, Luke said (emphasis mine):

Since this exercise is based on numbers I personally made up, I would like to remind everyone that those numbers are extremely made up and come with many caveats given in the original sources. It would not be that hard to produce numbers more reasonable than mine, at least re: moral weights. (I spent more time on the "probability of consciousness" numbers, though that was years ago and my numbers would probably be different now.)

Welfare ranges are a crucial input to determining moral weights, so I assume Luke would also have agreed that it would not have been that hard to produce more reasonable welfare ranges than his and Open Phil's in 2022. So, given how little time Open Phil seemingly devoted to assessing welfare ranges in comparison to Rethink, I would have expected Open Phil to give major weight to Rethink's values.

CarlShulman @ 2023-12-06T14:43 (+164)

I can't speak for Open Philanthropy, but I can explain why I personally was unmoved by the Rethink report (and think its estimates hugely overstate the case for focusing on tiny animals, although I think the corrected version of that case still has a lot to be said for it).
 
Luke says in the post you linked that the numbers in the graphic are not usable as expected moral weights, since ratios of expectations are not the same as expectations of ratios.
 

However, I say "naively" because this doesn't actually work, due to two-envelope effects...whenever you're tempted to multiply such numbers by something, remember two-envelope effects!)

[Edited for clarity] I was not satisfied with Rethink's attempt to address that central issue, that you get wildly different results from assuming the moral value of a fruit fly is fixed and reporting possible ratios to elephant welfare as opposed to doing it the other way around. 

It is not unthinkably improbable that an elephant brain where reinforcement from a positive or negative stimulus adjusts millions of times as many neural computations could be seen as vastly more morally important than a fruit fly, just as one might think that a fruit fly is much more important than a thermostat (which some suggest is conscious and possesses preferences). Since on some major functional aspects of mind there are differences of millions of times, that suggests a mean expected value orders of magnitude higher for the elephant if you put a bit of weight on the possibility that moral weight scales with the extent of, e.g. the computations that are adjusted by positive and negative stimuli. A 1% weight on that plausible hypothesis means the expected value of the elephant is immense vs the fruit fly. So there will be something that might get lumped in with 'overwhelming hierarchicalism' in the language of the top-level post. Rethink's various discussions of this issue in my view missed the mark.

Go the other way and fix the value of the elephant at 1, and the possibility that value scales with those computations is treated as a case where the fly is worth ~0. Then a 1% or even 99% credence in value scaling with computation has little effect, and the fruit fly-elephant ratio is forced to be quite high, so tiny mind dominance is almost automatic. The same argument can then be used to make a like case for total dominance of thermostat-like programs, or individual neurons, over insects. And then again for individual electrons.

As I see it, Rethink basically went with the 'ratios to fixed human value', so from my perspective their bottom-line conclusions were predetermined and uninformative. But the alternatives they ignore lead me to think that the expected value of welfare for big minds is a lot larger than for small minds (and I think that can continue, e.g. giant AI minds with vastly more reinforcement-affected computations and thoughts could possess much more expected welfare than humans, as many humans might have more welfare than one human).
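
To illustrate the normalization sensitivity with toy numbers (the credence and scaling ratio below are placeholders, not anyone's actual estimates):

```python
# Toy numbers only: 1% credence that moral value scales with the number of
# reinforcement-adjusted computations (elephant ~1,000,000x a fruit fly on
# that hypothesis), 99% credence that they count equally.
p_scaling = 0.01
scale_ratio = 1_000_000

# Normalization A: fix the fruit fly at 1 under both hypotheses.
expected_elephant = (1 - p_scaling) * 1 + p_scaling * scale_ratio
print(expected_elephant)        # ~10001: the elephant dominates in expectation

# Normalization B: fix the elephant at 1 under both hypotheses.
expected_fly = (1 - p_scaling) * 1 + p_scaling * (1 / scale_ratio)
print(1 / expected_fly)         # ~1.01: the expected elephant:fly ratio collapses to ~1

# Same hypotheses, same credences, wildly different expected ratios: the
# two-envelopes problem for moral weights.
```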

I agree with Brian Tomasik's comment from your link:

the moral-uncertainty version of the [two envelopes] problem is fatal unless you make further assumptions about how to resolve it, such as by fixing some arbitrary intertheoretic-comparison weights (which seems to be what you're suggesting) or using the parliamentary model.

By the same token, arguments about the number of possible connections/counterfactual richness in a mind could suggest superlinear growth in moral importance with computational scale. Similar issues would arise for theories involving moral agency or capacity for cooperation/game theory (on which humans might stand out by orders of magnitude relative to elephants; marginal cases being socially derivative), but those were ruled out of bounds for the report. Likewise it chose not to address intertheoretic comparisons and how those could very sharply affect the conclusions. Those are the kinds of issues with the potential to drive massive weight differences.

I think some readers benefitted a lot from reading the report because they did not know that, e.g. insects are capable of reward learning and similar psychological capacities.  And I would guess that will change some people's prioritization between different animals, and of animal vs human focused work. I think that is valuable. But that information was not new to me, and indeed I had argued for many years that insects met a lot of the functional standards one could use to identify the presence of well-being, and that even after taking two-envelopes issues and nervous system scale into account expected welfare at stake for small wild animals looked much larger than for FAW. 

I happen to be a fan of animal welfare work relative to GHW's other grants at the margin because animal welfare work is so highly neglected (e.g. Open Philanthropy is a huge share of world funding on the most effective FAW work but quite small compared to global aid) relative to the case for views on which it's great. But for me Rethink's work didn't address the most important questions, and largely baked in its conclusions methodologically.

Bob Fischer @ 2023-12-12T19:07 (+36)

Thanks for your discussion of the Moral Weight Project's methodology, Carl. (And to everyone else for the useful back-and-forth!) We have some thoughts about this important issue and we're keen to write more about it. Perhaps 2024 will provide the opportunity!

For now, we'll just make one brief point, which is that it’s important to separate two questions. The first concerns the relevance of the two envelopes problem to the Moral Weight Project. The second concerns alternative ways of generating moral weights. We considered the two envelopes problem at some length when we were working on the Moral Weight Project and concluded that our approach was still worth developing. We’d be glad to revisit this and appreciate the challenge to the methodology.

However, even if it turns out that the methodology has issues, it’s an open question how best to proceed. We grant the possibility that, as you suggest, more neurons = more compute = the possibility of more intense pleasures and pains. But it's also possible that more neurons = more intelligence = less biological need for intense pleasures and pains, as other cognitive abilities can provide the relevant fitness benefits, effectively muting the intensities of those states. Or perhaps there's some very low threshold of cognitive complexity for sentience after which point all variation in behavior is due to non-hedonic capacities. Or perhaps cardinal interpersonal utility comparisons are impossible. And so on. In short, while it's true that there are hypotheses on which elephants have massively more intense pains than fruit flies, there are also hypotheses on which the opposite is true and on which equality is (more or less) true. Once we account for all these hypotheses, it may still work out that elephants and fruit flies differ by a few orders of magnitude in expectation, but perhaps not by five or six. Presumably, we should all want some approach, whatever it is, that avoids being mugged by whatever low-probability hypothesis posits the largest difference between humans and other animals.

That said, you've raised some significant concerns about methods that aggregate over different relative scales of value. So, we’ll be sure to think more about the degree to which this is a problem for the work we’ve done—and, if it is, how much it would change the bottom line. 

CarlShulman @ 2023-12-13T00:33 (+16)

Thank you for the comment Bob.

I agree that I am also disagreeing on the object level, as Michael made clear with his comments (I do not think I am talking about a tiny chance, although I do not think the RP discussions characterized my views as I would), and on some other methodological issues besides two-envelopes (related to the object-level ones). E.g. I would not want to treat a highly networked AI mind (with billions of bodies and computation directing them in a unified way, on the scale of humanity) as a millionth or a billionth of the welfare of the same set of robots and computations with less integration (and overlap of shared features, or top-level control), ceteris paribus.

Indeed, I would be wary of treating the integrated mind as though welfare stakes for it were half or a tenth as great, seeing that as a potential source of moral catastrophe, like ignoring the welfare of minds not based on proteins. E.g. having tasks involving suffering  and frustration done by large integrated minds, and pleasant ones done by tiny minds, while increasing the amount of mental activity in the former. It sounds like the combination of object-level and methodological takes attached to these reports would favor ignoring almost completely the integrated mind.

Incidentally, in a world where small animals are being treated extremely badly and are numerous, I can see a temptation to err in their favor, since even overestimates of their importance could be shifting things in the right marginal policy direction. But thinking about the potential moral catastrophes on the other side helps sharpen the motivation to get it right.

In practice, I don't prioritize moral weights issues in my work, because I think the most important decisions hinging on them will be made in an era with AI-aided mature sciences of mind, philosophy and epistemology. And as I have written, regardless of your views about small minds and large minds, it won't be the case that e.g. humans are utility monsters of impartial hedonism (rather than something bigger, smaller, or otherwise different), and grounds for focusing on helping humans won't be terminal impartial hedonistic in nature. But from my viewpoint, baking in the view that integration (and unified top-level control or mental overlap of some parts of computation) close to eliminates mentality or welfare (vs less integrated collections of computations) seems bad in a non-Pascalian fashion.

MichaelStJules @ 2023-12-13T09:28 (+9)

(Speaking for myself only.)

FWIW, I think something like conscious subsystems (in huge numbers in one neural network) is more plausible by design in future AI. It just seems unlikely in animals because all of the apparent subjective value seems to happen at roughly the highest level where everything is integrated in an animal brain.

Felt desire seems to (largely) be motivational salience, a top-down/voluntary attention control function driven by high-level interpretations of stimuli (e.g. objects, social situations), so relatively late in processing. Similarly, hedonic states depend on high-level interpretations, too.

Or, according to Attention Schema Theory, attention models evolved for the voluntary control of attention. It's not clear what the value would be for an attention model at lower levels of organization before integration.

And evolution will select against realizing functions unnecessarily if they have additional costs, so we should provide a positive argument for the necessary functions being realized earlier or multiple times in parallel that overcomes or doesn't incur such additional costs.

So, it's not that integration necessarily reduces value; it's that, in animals, all the morally valuable stuff happens after most of the integration, and apparently only once or in small number.

In artificial systems, the morally valuable stuff could instead be implemented separately by design at multiple levels.

EDIT:

I think there's still a crux about whether realizing the same function the same number of times but "to a greater degree" makes it more morally valuable. I think there are some ways of "to a greater degree" that don't matter, and some that could. If it's only sort of (vaguely) true that a system is realizing a certain function, or it realizes some but not all of the functions possibly necessary for some type of welfare in humans, then we might discount it for only meeting lower precisifications of the vague standards. But adding more neurons just doing the same things:

  1. doesn't make it more true that it realizes the function or the type of welfare (e.g. adding more neurons to my brain wouldn't make it more true that I can suffer),
  2. doesn't clearly increase welfare ranges, and
  3. doesn't have any other clear reason for why it should make a moral difference (I think you disagree with this, based on your examples).

But maybe we don't actually need good specific reasons to assign non-tiny probabilities to neuron count scaling for 2 or 3, and then we get domination of neuron count scaling in expectation, depending on what we're normalizing by, like you suggest.

RedStateBlueState @ 2023-12-07T02:07 (+30)

This consideration is something I had never thought of before and blew my mind. Thank you for sharing.

Hopefully I can summarize it (assuming I interpreted it correctly) in a different way that might help people who were as befuddled as I was. 

The point is that, when you put probabilistic weight on two different theories of sentience being true, you have to assign units to sentience in these different theories in order to compare them.

Say you have two theories of sentience that are similarly probable, one dependent on intelligence and one dependent on brain size. Call these units IQ-qualia and size-qualia. If you assign fruit flies a moral weight of 1, you are implicitly declaring a conversion rate of (to make up some random numbers) 1000 IQ-qualia = 1 size-qualia. If, however, you assign elephants a moral weight of 1, you implicitly declare a conversion rate of (again, made-up) 1 IQ-qualia = 1000 size-qualia, because elephant brains are much larger but not much smarter than fruit flies. These two different conversion rates are going to give you very different numbers for the moral weight of humans (or as Shulman was saying, of each other).

Rethink Priorities assigned humans a moral weight of 1, and thus assumed a certain conversion rate between different theories that made for a very small-animal-dominated world by sentience. 
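
To make the conversion-rate point concrete, here is a toy sketch in the same spirit; the specific quantities are invented purely for illustration:

```python
# Invented "qualia" quantities under two equally probable theories of sentience.
iq_qualia   = {"fly": 1, "elephant": 3,    "human": 10}    # tracks intelligence
size_qualia = {"fly": 1, "elephant": 1000, "human": 500}   # tracks brain size

def expected_weight(animal, anchor):
    # Fixing `anchor` at a moral weight of 1 under both theories implicitly sets
    # the IQ-qualia/size-qualia exchange rate; average the two theories 50/50.
    return (0.5 * iq_qualia[animal] / iq_qualia[anchor]
            + 0.5 * size_qualia[animal] / size_qualia[anchor])

print(expected_weight("human", anchor="fly"))       # 255.0
print(expected_weight("human", anchor="elephant"))  # ~1.92
```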

MichaelStJules @ 2023-12-08T11:13 (+18)

It is not unthinkably improbable that an elephant brain where reinforcement from a positive or negative stimulus adjusts millions of times as many neural computations could be seen as vastly more morally important than a fruit fly, just as one might think that a fruit fly is much more important than a thermostat (which some suggest is conscious and possesses preferences). Since on some major functional aspects of mind there are differences of millions of times, that suggests a mean expected value orders of magnitude higher for the elephant if you put a bit of weight on the possibility that moral weight scales with the extent of, e.g. the computations that are adjusted by positive and negative stimuli.

 

This specific kind of account, if meant to depend inherently on differences in reinforcement, is very improbable to me (<0.1%), and conditional on such accounts, the inherent importance of reinforcement would also very probably scale very slowly, with faster scaling increasingly improbable. It could work out that the expected scaling isn't slow, but that would be because of very low probability possibilities.

The value of subjective wellbeing, whether hedonistic, felt desires, reflective evaluation/preferences, choice-based or some kind of combination, seems very probably logically independent from how much reinforcement happens EDIT: and empirically dissociable. My main argument is that reinforcement happens unconsciously and has no necessary or ~immediate conscious effects. We could imagine temporarily or permanently preventing reinforcement without any effect on mental states or subjective wellbeing in the moment. Or, we can imagine connecting a brain to an artificial neural network to add more neurons to reinforce, again to no effect.

And even within the same human under normal conditions, holding their reports of value or intensity fixed, the amount of reinforcement that actually happens will probably depend systematically on the nature of the experience, e.g. physical pain vs anxiety vs grief vs joy. If reinforcement has a large effect on expected moral weights, you could and I'd guess would end up with an alienating view, where everyone is systematically wrong about the relative value of their own experiences. You'd effectively need to reweight all of their reports by type of experience.

So, even with intertheoretic comparisons between accounts with and without reinforcement, of which I'd be quite skeptical specifically in this case but also generally, this kind of hypothesis shouldn't make much difference (or it does make a substantial difference, but it seems objectionably fanatical and alienating). If rejecting such intertheoretic comparisons, as I'm more generally inclined to do and as Open Phil seems to be doing, it should make very little difference.

 

There are more plausible functions you could use, though, like attention. But, again, I think the cases for intertheoretic comparisons between accounts of how moral value scales with neurons for attention or probably any other function are generally very weak, so you should only take expected values over descriptive uncertainty conditional on each moral scaling hypothesis, not across moral scaling hypotheses (unless you normalize by something else, like variance across options). Without intertheoretic comparisons, approaches to moral uncertainty in the literature aren't so sensitive to small probability differences or fanatical about moral views. So, it tends to be more important to focus on large probability shifts than improbable extreme cases.

MichaelStJules @ 2023-12-08T09:34 (+13)

(I'm not at Rethink Priorities anymore, and I'm not speaking on their behalf.)

Rethink's work, as I read it, did not address that central issue, that you get wildly different results from assuming the moral value of a fruit fly is fixed and reporting possible ratios to elephant welfare as opposed to doing it the other way around. 

(...)

Rethink's discussion of this almost completely sidestepped the issue in my view.

RP did in fact respond to some versions of these arguments, in the piece Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?, of which I am a co-author.

CarlShulman @ 2023-12-08T16:58 (+4)

Thanks, I was referring to this as well, but should have had a second link for it as the Rethink page on neuron counts didn't link to the other post. I think that page is a better link than the RP page I linked, so I'll add it in my comment.

MichaelStJules @ 2023-12-09T00:20 (+5)

(Again, not speaking on behalf of Rethink Priorities, and I don't work there anymore.)

(Btw, the quote formatting in your original comment got messed up with your edit.)

I think the claims I quoted are still basically false, though?

Rethink's work, as I read it, did not address that central issue, that you get wildly different results from assuming the moral value of a fruit fly is fixed and reporting possible ratios to elephant welfare as opposed to doing it the other way around. 

Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently? explicitly considered a conscious subsystems version of this thought experiment, focusing on the more human-favouring side when you normalize by small systems like insect brains, which is the non-obvious side often neglected.

There's a case that conscious subsystems could dominate expected welfare ranges even without intertheoretic comparisons (but also possibly with), so I think we were focusing on one of the strongest and most important arguments for humans potentially mattering more, assuming hedonism and expectational total utilitarianism. Maximizing expected choiceworthiness with intertheoretic comparisons is controversial and only one of multiple competing approaches to moral uncertainty. I'm personally very skeptical of it because of the arbitrariness of intertheoretic comparisons and its fanaticism (including chasing infinities, and lexically higher and higher infinities). Open Phil also already avoids making intertheoretic comparisons, but was more sympathetic to normalizing by humans if it were going to.

CarlShulman @ 2023-12-09T02:42 (+6)

I don't want to convey that there was no discussion, thus my linking the discussion and saying I found it inadequate and largely missing the point from my perspective. I made an edit for clarity, but would accept suggestions for another.

 

MichaelStJules @ 2023-12-09T02:52 (+1)

Your edit looks good to me. Thanks!

Vasco Grilo @ 2023-12-06T16:40 (+12)

Thanks for elaborating, Carl!

Luke says in the post you linked that the numbers in the graphic are not usable as expected moral weights, since ratios of expectations are not the same as expectations of ratios.

Let me try to restate your point, and suggest why one may disagree. If one puts weight w on the welfare range (WR) of humans relative to that of chickens being N, and 1 - w on it being n, the expected welfare range of:

  • Humans relative to that of chickens is E("WR of humans"/"WR of chickens") = w*N + (1 - w)*n.
  • Chickens relative to that of humans is E("WR of chickens"/"WR of humans") = w/N + (1 - w)/n.

You are arguing that N can plausibly be much larger than n. For the sake of illustration, we can say N = 389 (ratio between the 86 billion neurons of a human and the 221 million of a chicken), n = 3.01 (reciprocal of RP's median welfare range of chickens relative to humans of 0.332), and w = 1/12 (since the neuron count model was one of the 12 RP considered, and all of them were weighted equally). Having the welfare range of:

  • Chickens as the reference, E("WR of humans"/"WR of chickens") = 35.2. So 1/E("WR of humans"/"WR of chickens") = 0.0284.
  • Humans as the reference (as RP did), E("WR of chickens"/"WR of humans") = 0.305.

So, as you said, determining welfare ranges relative to humans results in animals being weighted more heavily. However, I think the difference is much smaller than suggested above. Since N and n are quite different, I guess we should combine them using a weighted geometric mean, not the weighted arithmetic mean as I did above. If so, both approaches output exactly the same result:

  • E("WR of humans"/"WR of chickens") = N^w*n^(1 - w) = 4.49. So 1/E("WR of humans"/"WR of chickens") = (N^w*n^(1 - w))^-1 = 0.223.
  • E("WR of chickens"/"WR of humans") = (1/N)^w*(1/n)^(1 - w) = 0.223.

The reciprocal of the expected value is not the expected value of the reciprocal, so using the mean leads to different results. However, I think we should be using the geometric mean, and the reciprocal of the geometric mean is the geometric mean of the reciprocal. So the 2 approaches (using humans or chickens as the reference) will output the same ratios regardless of N, n and w as long as we aggregate N and n with the geometric mean. If N and n are similar, it no longer makes sense to use the geometric mean, but then both approaches will output similar results anyway, so RP's approach looks fine to me as a 1st pass. Does this make any sense?
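A quick Python sketch of the point above (N, n and w are the same illustrative values as in my example, not RP's actual model outputs):

N = 389      # humans-to-chickens ratio under the neuron count model
n = 3.01     # humans-to-chickens ratio implied by RP's median welfare range (1/0.332)
w = 1 / 12   # weight on the neuron count model

# Weighted arithmetic mean: the choice of reference species changes the answer.
humans_over_chickens = w * N + (1 - w) * n              # ~35.2
chickens_over_humans = w / N + (1 - w) / n              # ~0.305
print(1 / humans_over_chickens, chickens_over_humans)   # ~0.028 vs ~0.305: they disagree

# Weighted geometric mean: the choice of reference species does not matter.
humans_over_chickens_gm = N**w * n**(1 - w)             # ~4.5
chickens_over_humans_gm = (1 / N)**w * (1 / n)**(1 - w)
print(1 / humans_over_chickens_gm, chickens_over_humans_gm)  # both ~0.22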

Of course, it would still be good to do further research (which OP could fund) to adjudicate how much weight should be given to each model RP considered.

I had argued for many years that insects met a lot of the functional standards one could use to identify the presence of well-being, and that even after taking two-envelopes issues and nervous system scale into account, expected welfare at stake for small wild animals looked much larger than for FAW.

True!

I happen to be a fan of animal welfare work relative to GHW's other grants at the margin because animal welfare work is so highly neglected

Thanks for sharing your views!

CarlShulman @ 2023-12-06T17:44 (+16)

I'm not planning on continuing a long thread here, I mostly wanted to help address the questions about my previous comment, so I'll be moving on after this. But I will say two things regarding the above. First, this effect (computational scale) is smaller for chickens but progressively enormous for e.g. shrimp or lobster or flies.  Second, this is a huge move and one really needs to wrestle with intertheoretic comparisons to justify it:

I guess we should combine them using a weighted geometric mean, not the weighted mean as I did above. 

Suppose we compared the mass of the human population of Earth with the mass of an individual human. We could compare them on 12 metrics, like per capita mass, per capita square root mass, per capita foot mass... and aggregate mass. If we use the equal-weighted geometric mean, we will conclude the individual has a mass within an order of magnitude of the total Earth population, instead of billions of times less.
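To put rough numbers on this analogy, here is a minimal sketch (the population size and the twelve-metric setup are assumptions for illustration, not figures from either analysis):

# 11 per-capita-style metrics give a population-to-individual ratio of ~1;
# the single aggregate-mass metric gives a ratio of ~8 billion.
population = 8e9
ratios = [1.0] * 11 + [population]

weight = 1 / len(ratios)
geometric_mean = 1.0
for r in ratios:
    geometric_mean *= r ** weight

print(geometric_mean)  # ~6.7: "within an order of magnitude", despite a true ratio of ~8 billion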

Vasco Grilo @ 2023-12-06T18:54 (+7)

I'm not planning on continuing a long thread here, I mostly wanted to help address the questions about my previous comment, so I'll be moving on after this.

Fair, as this is outside of the scope of the original post. I noticed you did not comment on RP's neuron counts post. I think it would be valuable if you commented there about the concerns you expressed here, or did you already express them elsewhere in another post of RP's moral weight project sequence?

First, this effect (computational scale) is smaller for chickens but progressively enormous for e.g. shrimp or lobster or flies.

I agree that is the case if one combines the 2 wildly different estimates for the welfare range (e.g. one based on the number of neurons, and another corresponding to RP's median welfare ranges) with a weighted mean. However, as I commented above, using the geometric mean would cancel the effect.

Suppose we compared the mass of the human population of Earth with the mass of an individual human. We could compare them on 12 metrics, like per capita mass, per capita square root mass, per capita foot mass... and aggregate mass. If we use the equal-weighted geometric mean, we will conclude the individual has a mass within an order of magnitude of the total Earth population, instead of billions of times less.

Is this a good analogy? Maybe not:

  • Broadly speaking, giving the same weight to multiple estimates only makes sense if there is wide uncertainty with respect to which one is more reliable. In the example above, it would make sense to give negligible weight to all metrics except for the aggregate mass. In contrast, there is arguably wide uncertainty with respect to what are the best models to measure welfare ranges, and therefore distributing weights evenly is more appropriate.
  • One particular model on which we can put lots of weight on is that mass is straightforwardly additive (at least at the macro scale). So we can say the mass of all humans equals the number of humans times the mass per human, and then just estimate this for a typical human. In contrast, it is arguably unclear whether one can obtain the welfare range of an animal by e.g. just adding up the welfare range of its individual neurons.
MichaelDickens @ 2024-03-18T23:34 (+4)

It seems to me that the naive way to handle the two envelopes problem (and I've never heard of a way better than the naive way) is to diversify your donations across two possible solutions to the two envelopes problem:

  • donate half your (neartermist) money on the assumption that you should use ratios to fixed human value
  • donate half your money on the assumption that you should fix the opposite way (eg fruit flies have fixed value)

Which would suggest donating half to animal welfare and probably half to global poverty. (If you let moral weights be linear with neuron count, I think that would still favor animal welfare, but you could get global poverty outweighing animal welfare if moral weight grows super-linearly with neuron count.)

Plausibly there are other neartermist worldviews you might include that don't relate to the two envelopes problem, e.g. a "only give to the most robust interventions" worldview might favor GiveDirectly. So I could see an allocation of less than 50% to animal welfare.

MichaelStJules @ 2024-03-19T15:59 (+6)

There is no one opposite way; there are many other ways than to fix human value. You could fix the value in fruit flies, shrimps, chickens, elephants, C. elegans, some plant, some bacterium, rocks, your laptop, GPT-4 or an alien, etc.

I think a more principled approach would be to consider precise theories of how welfare scales, not necessarily fixing the value in any one moral patient, and then use some other approach to moral uncertainty for uncertainty between the theories. However, there is another argument for fixing human value across many such theories: we directly value our own experiences, and theorize about consciousness in relation to our own experiences, so we can fix the value in our own experiences and evaluate relative to them.

weeatquince @ 2023-11-26T01:00 (+13)

Hi Emily, Sorry this is a bit off topic but super useful for my end of year donations.

I noticed that you said that OpenPhil has supported "Rethink Priorities ... research related to moral weights". But in his post here Peter says that the moral weights work "have historically not had institutional support".

Do you have a rough very quick sense of how much Rethink Priorities moral weights work was funded by OpenPhil?

Thank you so much 

Marcus_A_Davis @ 2023-11-27T14:35 (+34)

We mean to say that the ideas for these projects and the vast majority of the funding were ours, including the moral weight work. To be clear, these projects were the result of our own initiative. They wouldn't have gone ahead when they did without us insisting on their value.

For example, after our initial work on invertebrate sentience and moral weight in 2018-2020, in 2021 OP funded $315K to support this work. In 2023 they also funded $15K for the open access book rights to a forthcoming book based on the topic. In that period of 2021-2023, for public-facing work we spent another ~$603K on moral weight work with that money coming from individuals and RP's unrestricted funding.

Similarly, the CURVE sequence of WIT this year was our idea and we are on track to spend ~$900K against ~$210K funded by Open Phil on WIT. Of that $210K the first $152K was on projects related to Open Phil’s internal prioritization and not the public work of the CURVE sequence. The other $58K went towards the development of the CCM. So overall less than 10% of our costs for public WIT work this year was covered by OP (and no other institutional donors were covering it either).

weeatquince @ 2023-11-28T00:06 (+5)

Hi Marcus, thanks, very helpful to get some numbers and clarification on this. And well done to you and Rethink for driving forward such important research.

(I meant to post a similar question asking for clarification on the rethink post too but my perfectionism ran away with me and I never quite found the wording and then ran out of drafting time, but great to see your reply here)

James Özden @ 2023-11-23T22:06 (+70)

One reason why we are moving more slowly is that our current estimates of the gap between marginal animal and human funding opportunities is very different from the one in your post – within one order of magnitude, not three. And given the high uncertainty around our estimates here, we think one order of magnitude is well within the “margin of error” .

I assume that even though your answers are within one order of magnitude, the animal-focused work is the one that looks more cost-effective. Is that right?

Assuming so, your answer doesn't make sense to me because OP funds roughly 6x more human-focused GHW relative to farm animal welfare (FAW). Even if you have wide uncertainty bounds, if FAW is looking more cost-effective than human work, surely this ratio should be closer to 1:1 rather than 1:6? It seems bizarre (and possibly an example of omission bias) to fund the estimated less cost-effective thing 6x more and justify it by saying you're quite uncertain. 

Long story short, should we not just allocate our funding to the best of our current knowledge (even by your calculations, more towards FAW) and then update accordingly if things change?

Vasco Grilo @ 2023-11-26T08:23 (+6)

Nice point, James!

I personally agree with your reasoning, but it assumes the marginal cost-effectiveness of the human-focussed and animal-focussed interventions should be the same. Open Phil is not sold on this:

We’re also unsure conceptually whether we should be trying to equalize marginal returns between FAW and GHW or whether we should continue with our current approach.

I do not know what the "current approach" specifically involves, but it has led to Open Phil starting 6 new areas with a focus on human welfare in the last few years[1]. So it naively seems to me like Open Phil could have done more to increase the amount of funding going to animal welfare if there was a desire to do so. These areas will not be turned off easily. If Open Phil was in the process of deliberating how much funding animal-focussed interventions should receive relative to human-focussed ones, I would have expected a bigger investment in growing the internal cause-prioritisation team, or greater funding of similar research elsewhere[2].

  1. ^
  2. ^

    Open Phil has made grants to support Rethink's moral weight project, but this type of work has apparently not been fully supported by Open Phil:

    What we think of as our most innovative work (e.g., invertebrate sentience, moral weights and welfare ranges, the cross-cause model, the CURVE sequence) or some of our most important work (e.g., 50% of the EA Survey [4], our EA FTX surveys and Public/Elite FTX surveys, and each of our three AI surveys) have historically not had institutional support and relied on individual donors to make happen.

Vasco Grilo @ 2023-11-22T15:55 (+20)

Thanks for the feedback, Emily!

  • Vasco’s analysis implies a much wider welfare range than the one we use.
    • We’re not confident in our current assumptions, but this is a complicated question, and there is more work we need to do ourselves to get to an answer we believe in enough to act on. We also need to think through consistency with our human-focused interventions. 
  • We don’t use Rethink’s moral weights.
    • Our current moral weights, based in part on Luke Muehlhauser’s past work, are lower. We may update them in the future; if we do, we’ll consider work from many sources, including the arguments made in this post.

I am a little confused by the above. You say my analysis implies a much wider welfare range than the one you use, but in my analysis I just used point estimates. I relied on Rethink Priorities' median welfare range for chickens of 0.332, although Rethink's 5th and 95th percentiles are 0.002 and 0.869 (i.e. the 95th percentile is 434 times the 5th percentile).

Are you saying Rethink's interval for the welfare range of chickens is much wider than Open Phil's? I think that would imply some disagreement with Luke's guess. Following his 2017 report on consciousness and moral patienthood, Luke guessed in 2018 a chicken life-year to be worth 0.00005 to 10 human life-years ("80% prediction interval"; upper bound 200 k times the lower bound). This interval is 11.5 (= (10 - 0.00005)/(0.869 - 0.002)) times as wide as Rethink's on a linear scale, 2.01 (= ln(10/0.00005)/ln(0.869/0.002)) times as wide on a logarithmic scale, and Luke's interval respecting the 5th and 95th percentile would have been wider. Rethink's and Luke's intervals are not directly comparable. Rethink's refers to the welfare range, whereas Luke's refers to moral weight. However, I would have guessed Open Phil's interval for the moral weight to be narrower than Open Phil's interval for the welfare range, as Open Phil's funding suggests a comparatively low weight on direct hedonic effects (across animal and human interventions). In any case, I must note Luke was not confident about his guesses, having commented around 1.5 years ago that:

Since this exercise is based on numbers I personally made up, I would like to remind everyone that those numbers are extremely made up and come with many caveats given in the original sources. It would not be that hard to produce numbers more reasonable than mine, at least re: moral weights. (I spent more time on the "probability of consciousness" numbers, though that was years ago and my numbers would probably be different now.)

On the other hand, multiplying Luke's numbers by the 10 k ratio between the cost-effectiveness of corporate campaigns and GiveWell's top charities (for a moral weight of 1) shared in the 2016 worldview diversification post[1], one would conclude corporate campaigns for chicken welfare to be 0.5 (= 0.00005*10000) to 100 k (= 10*10000) times as cost-effective as GiveWell's top charities. In my mind, the prospect of corporate campaigns for chicken welfare being much more cost-effective at increasing welfare than GiveWell's top charities should have prompted a major investigation of the topic, and more transparent communication of Open Phil's prioritisation decisions.
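As a quick sanity check of the arithmetic in this comment (using only the numbers quoted above):

import math

luke_low, luke_high = 0.00005, 10   # Luke's 2018 80% interval for a chicken life-year
rp_low, rp_high = 0.002, 0.869      # RP's 5th/95th percentile welfare ranges for chickens

print((luke_high - luke_low) / (rp_high - rp_low))                  # ~11.5 (linear scale)
print(math.log(luke_high / luke_low) / math.log(rp_high / rp_low))  # ~2.0 (logarithmic scale)

multiplier_for_weight_1 = 10_000    # the 10,000x figure from the 2016 post, for a moral weight of 1
print(luke_low * multiplier_for_weight_1, luke_high * multiplier_for_weight_1)  # 0.5x to 100,000x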

  1. ^

    "If you value chicken life-years equally to human life-years, this implies that corporate campaigns do about 10,000x as much good per dollar as top charities. If you believe that chickens do not suffer in a morally relevant way, this implies that corporate campaigns do no good".

MichaelStJules @ 2024-02-19T01:14 (+6)

Hi Emily. I've written a post about how to handle moral uncertainty about moral weights across animals, including humans: Solution to the two envelopes problem for moral weights. It responds directly to Holden's writing on the topic. In short, I think Open Phil should evaluate opportunities for helping humans and nonhumans relative to human moral weights, like comparison method A from Holden's post. This is because we directly value that with which we're familiar, e.g. our own experiences, and we have just regular empirical (not moral) uncertainty about its nature and whether and to what extent other animals have similar relevant capacities.

MathiasKB @ 2023-11-19T21:35 (+135)

For those who agree with this post (I at least agree with the author's claim if you replace most with more), I encourage you to think what you personally can do about it.

I think EAs are far too willing to donate to traditional global health charities, not due to them being the most impactful, but because they feel the best to donate to. When I give to AMF I know I'm a good person who had an impact! But this logic is exactly what EA was founded to avoid.

I can't speak for animal welfare organizations outside of EA, but at least for the ones that have come out of Effective Altruism, they tell me that funding is a major issue. There just aren't that many people willing to make a risky donation to a new charity working on fish welfare, for example.

Those who would be risk-willing enough to give to eccentric animal welfare or global health interventions, tend to also be risk-willing enough with their donations to instead give it to orgs working on existential risks. I'm not claiming this is incorrect of them to do, but this does mean that there is a dearth of funding for high-risk interventions in the neartermist space.

I donated a significant part of my personal runway to help fund a new animal welfare org, which I think counterfactually might not have gotten started if not for this. If you, like me, think animal welfare is incredibly important and previously have donated to Givewell's top charities, perhaps consider giving animal welfare a try!

Angelina Li @ 2023-11-28T19:08 (+20)

I donated a significant part of my personal runway to help fund a new animal welfare org, which I think counterfactually might not have gotten started if not for this.

<3 This is super awesome / inspirational, and I admire you for doing this!

Cornelis Dirk Haupt @ 2023-11-22T21:11 (+15)

Given it is the Giving Season, I'd be remiss not to point out that ACE currently has donation matching for their Recommended Charity Fund.

I am personally waiting to hear back from RC Forward on whether Canadian donations can also be made for said donation matching, but for American EAs at least, this seems like a great no-brainer opportunity to dip your feet in effective animal welfare giving.

Aaron Bergman @ 2023-11-19T19:19 (+89)

Strongly, strongly, strongly agree. I was in the process of writing essentially this exact post, but am very glad someone else got to it first. The more I thought about it and researched, the more it seemed like convincingly making this case would probably be the most important thing I would ever have done. Kudos to you.

A few points to add

  1. Under standard EA "on the margin" reasoning, this shouldn't really matter, but I analyzed OP's grants data and found that human GCR has been consistently funded 6-7x more than animal welfare (here's my tweet thread this is from)
  2. @Laura Duffy's (for Rethink Priorities) recently published risk aversion analysis basically does a lot of the heavy lifting here (bolding mine):

Spending on corporate cage-free campaigns for egg-laying hens is robustly[8] cost-effective under nearly all reasonable types and levels of risk aversion considered here. 

  1. Using welfare ranges based roughly on Rethink Priorities’ results, spending on corporate cage-free campaigns averts over an order of magnitude more suffering than the most robust global health and development intervention, Against Malaria Foundation. This result holds for almost any level of risk aversion and under any model of risk aversion.

I also want to emphasize this part, because it's the kind of serious engagement with suffering that EA still fails to do enough of:

I experienced "disabling"-level pain for a couple of hours, by choice and with the freedom to stop whenever I want. This was a horrible experience that made everything else seem to not matter at all.

A single laying hen experiences hundreds of hours of this level of pain during their lifespan, which lasts perhaps a year and a half - and there are as many laying hens alive at any one time as there are humans. How would I feel if every single human were experiencing hundreds of hours of disabling pain? 

A single broiler chicken experiences fifty hours of this level of pain during their lifespan, which lasts 4-6 weeks. There are 69 billion broilers slaughtered each year. That is so many hours of pain that if you divided those hours among humanity, each human would experience about 400 hours (2.5 weeks) of disabling pain every year. Can you imagine if instead of getting, say, your regular fortnight vacation from work or study, you experienced disabling-level pain for a whole 2.5 weeks? And if every human on the planet - me, you, my friends and family and colleagues and the people living in every single country - had that same experience every year? How hard would I work in order to avert suffering that urgent?

Every single one of those chickens are experiencing pain as awful and all-consuming as I did for tens or hundreds of hours, without choice or the freedom to stop. They are also experiencing often minutes of 'excruciating'-level pain, which is an intensity that I literally cannot imagine. Billions upon billions of animals. The numbers would be even more immense if you consider farmed fish, or farmed shrimp, or farmed insects, or wild animals.

If there were a political regime or law responsible for this level of pain - which indeed there is - how hard would I work to overturn it? Surely that would tower well above my other priorities (equality, democracy, freedom, self-expression, and so on), which seem trivial and even borderline ridiculous in comparison.

Hamish McDoodles @ 2023-11-20T11:32 (+91)

I analyzed OP's grants data

FYI, I made a spreadsheet a while ago which automatically pulls the latest OP grants data and constructs summaries and pivot tables to make this type of analysis easier.

I also made these interactive plots which summarise all EA funding:

TylerMaule @ 2023-11-20T22:29 (+5)

Thanks! Small correction: Animal Welfare YTD is labeled as $53M, when it looks like the underlying data point is $17M (source and 2023 full-year projections here)

Jelle Donders @ 2023-11-19T20:36 (+60)

If OP disagrees, they should practice reasoning transparency by clarifying their views

 

OP believes in reasoning transparency, but their reasoning has not been transparent

Regardless of what Open Phil ends up doing, would really appreciate them to at least do this :)

Michael_PJ @ 2023-11-20T17:17 (+21)

I would qualify this statement by saying that it would be nice for OP to have more reasoning transparency, but it is not the most important thing and can be expensive to produce. So it would be quite reasonable for additional marginal transparency to not be the most valuable use of their staff time.

MichaelStJules @ 2023-11-21T03:48 (+22)

I think if there's anything they should bother to be publicly transparent about in order to subject to further scrutiny, it's their biggest cruxes for resource allocation between causes. Moral weights, theory of welfare and the marginal cost-effectiveness of animal welfare seem pretty decisive for GHD vs animal welfare.

Hamish McDoodles @ 2023-11-20T02:03 (+57)

RP's moral weights and analysis of cage-free campaigns suggest that the average cost-effectiveness of cage-free campaigns is on the order of 1000x that of GiveWell's top charities.[5] Even if the campaigns' marginal cost-effectiveness is 10x worse than the average, that would be 100x.

This seems to be the key claim of the piece, so why isn't the "1000x" calculation actually spelled out?

The "cage-free campaigns analysis" estimates

how many chickens will be affected by corporate cage-free and broiler welfare commitments won by all charities, in all countries, during all the years between 2005 and the end of 2018

This analysis gives chicken years affected per dollar as 9.6-120 (95%CI), with 41 as the median estimate.

The moral weights analysis estimates "welfare ranges", ie, the difference in moral value between the best possible and worst possible experience for a given species. This doesn't actually tell us anything about the disutility of caging chickens. For that you would need to make up some additional numbers:

Welfare ranges allow us to convert species-relative welfare assessments, understood as percentage changes in the portions of animals’ welfare ranges, into a common unit. To illustrate, let’s make the following assumptions:

  1. Chickens’ welfare range is 10% of humans’ welfare range.
  2. Over the course of a year, the average chicken is about half as badly off as they could be in conventional cages (they’re at the ~50% mark in the negative portion of their welfare range).
  3. Over the course of a year, the average chicken is about a quarter as badly off as they could be in a cage-free system (they’re at the ~25% mark in the negative portion of their welfare range). 

Anyway, the 95%CI for chicken welfare ranges (as a fraction of human ranges) is 0.002-0.869, with 0.332 as the median estimate.

So if we make the additional assumptions that:

  1. All future animal welfare interventions will be as effective as past efforts (which seems implausible given diminishing marginal returns)
  2. Cages cause chickens to lose half of their average welfare (a totally made up number)

Then we can multiply these out to get:

The "DALYs / $ through GiveWell charities" comes from the fact that it costs ~$5000 to save the life of a child. Assming "save a life" means adding ~50 years to the lifespan, that means $100 / DALY, or 0.01 DALYs / $.

A few things to note here:

  1. There is huge uncertainty here. The 95% CI in the table indicates that chicken interventions could be anywhere from 10,000x to 0.1x as effective as human charities. (Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I'm not sure there's a better approach without more information about the input distributions.)
  2. To get these estimates we had to make some implausible assumptions and also totally make up some numbers.

The old cortical neuron count proxy for moral weight says that one chicken life year is worth 0.003 human life years, which is about 1/100th of the RP welfare range estimate of 0.33. This number would mean chicken interventions are only 0.7x as effective as human interventions, rather than 700x as effective. [edit: oops, maths wrong here. see Michael's comment below.]

But didn't RP prove that cortical neuron counts are fake?

Hardly. They gave a bunch of reasons why we might be skeptical of neuron counts (summarised here). But I think the reasons in favour of using cortical neuron count as a proxy for moral weight are stronger than the objections. And that still doesn't give us any reason to think RP has a better methodology for calculating moral weights. It just tells us not to take cortical counts too literally.

Points in favour of cortical neuron counts as a proxy for moral weight:

  1. Neuron counts correlate with our intuitions of moral weights. Cortical counts would say that ~300 chicken life years are morally equivalent to one human life year, which sounds about right.
  2. There's a common sense story of: more neurons → more compute power → more consciousness.
  3. It's a simple and practical approach. Obtaining the moral weight of an arbitrary animal only requires counting neurons.

Compare with the RP moral weights:

  1. If we interpret the welfare ranges as moral weights, then 3 chicken life years are worth one human life year. This is not a trade I would make.
  2. If we don't interpret welfare ranges as moral weights, then the RP numbers tell us literally nothing.
  3. The methodology is complex, difficult to understand, expensive, and requires reams of zoological observation to be applied to new animals.

And let's not forget second order effects. Raising people out of poverty can increase global innovation and specialisation and accelerate economic development which could have benefits centuries from now. It's not obvious that helping chickens has any real second order effects.

In conclusion:

  1. It's not obvious to me that RP's research actually tells us anything useful about the effectiveness of animal charities compared to human charities. 
  2. There are tons of assumptions and simplifications that go into these RP numbers, so any conclusions we can draw must be low confidence.
  3. Cortical neuron counts still look like a pretty good way to compare welfare across species. Under cortical neuron counts, human charities come out on top.
Bob Fischer @ 2023-11-20T11:44 (+60)

Thanks for all this, Hamish. For what it's worth, I don't think we did a great job communicating the results of the Moral Weight Project.

  • As you rightly observe, welfare ranges aren't moral weights without some key philosophical assumptions. Although we did discuss the significance of those assumptions in independent posts, we could have done a much better job explaining how those assumptions should affect the interpretation of our point estimates.
  • Speaking of the point estimates, I regret leading with them: as we said, they're really just placeholders in the face of deep uncertainty. We should have led with our actual conclusions, the basics of which are that the relevant vertebrates are probably within an OOM of humans, and shrimps and the relevant adult insects are probably within two OOMs of the vertebrates. My guess is that you and I disagree less than you might think about the range of reasonable moral weights across species, even if the centers of my probability masses are higher than yours.
  • I agree that our methodology is complex and hard to understand. But it would be surprising if there were a simple, easy-to-understand way to estimate the possible differences in the intensities of valenced states across species. Likewise, I agree that "there are tons of assumptions and simplifications that go into these RP numbers, so any conclusions we can draw must be low confidence." But there are also tons of assumptions and biases that go into our intuitive assessments of the relative moral importance of various kinds of nonhuman animals. So, a lot comes down to how much stock you put in your intuitions. As you might guess, I think we have lots of reasons not to trust them once we take on key moral assumptions like utilitarianism. So, I take much of the value of the Moral Weight Project to be in the mere fact that it tries to reach moral weights from first principles.
  • It's time to do some serious surveying to get a better sense of the community's moral weights. I also think there's a bunch of good work to do on the significance of philosophical / moral uncertainty here. If anyone wants to support this work, please let me know!
Hamish McDoodles @ 2023-11-20T12:42 (+11)

Thanks for responding to my hot takes with patience and good humour!

Your defenses and caveats all sound very reasonable.

the relevant vertebrates are probably within an OOM of humans

So given this, you'd agree with the conclusion of the original piece? At least if we take the "number of chickens affected per dollar" input as correct?

Bob Fischer @ 2023-11-20T13:41 (+29)

I agree with Ariel that OP should probably be spending more on animals (and I really appreciate all the work he's done to push this conversation forward). I don't know whether OP should allocate most neartermist funding to AW as I haven't looked into lots of the relevant issues. Most obviously, while the return curves for at least some human-focused neartermist options are probably pretty flat (just think of GiveDirectly), the curves for various sorts of animal spending may drop precipitously. Ariel may well be right that, even if so, the returns probably don't fall off so much that animal work loses to global health work, but I haven't investigated this myself. The upshot: I have no idea whether there are good ways of spending an additional $100M on animals right now. (That being said, I'd love to see more extensive investigation into field building for animals! If EA field building in general is cost-competitive with other causes, then I'd expect animal field building to look pretty good.)

I should also say that OP's commitment to worldview diversification complicates any conclusions about what OP should do from its own perspective. Even if it's true that a straightforward utilitarian analysis would favor spending a lot more on animals, it's pretty clear that some key stakeholders have deep reservations about straightforward utilitarian analyses. And because worldview diversification doesn't include a clear procedure for generating a specific allocation, it's hard to know what people who are committed to worldview diversification should do by their own lights.

Angelina Li @ 2023-11-28T19:12 (+7)

The upshot: I have no idea whether there are good ways of spending an additional $100M on animals right now.

I haven't read this in a ton of detail, but I liked this post from last year trying to answer this exact question (what are potentially effective ways to deploy >$10M in projects for animals).

MichaelStJules @ 2023-11-20T08:57 (+35)

The old cortical neuron count proxy for moral weight says that one chicken life year is worth 0.003, which is 1/100th of the RP welfare range estimate of 0.33. This number would mean chicken interventions are only 0.7x as effective as human interventions, rather than 700x as effective.

700/100=7, not 0.7.

Hamish McDoodles @ 2023-11-20T11:09 (+24)

oh true lol

ok, animal charities still come out an order of magnitude ahead of human charities given the cage-free campaigns analysis and neuron counts

but the broader point is that the RP analyses seem far from conclusive and it would be silly to use them unilaterally for making huge funding allocation decisions, which I think still stands

Ariel Simnegar @ 2023-11-20T14:57 (+28)

Hi Hamish! I appreciate your critique.

Others have enumerated many reservations with this critique, which I agree with. Here I'll give several more.

why isn't the "1000x" calculation actually spelled out?

As you've seen, given Rethink's moral weights, many plausible choices for the remaining "made-up" numbers give a cost-effectiveness multiple on the order of 1000x. Vasco Grilo conducted a similar analysis which found a multiple of 1.71k. I didn't commit to a specific analysis for a few reasons:

  1. I agree with your point that uncertainty is really high, and I don't want to give a precise multiple which may understate the uncertainty.
  2. Reasonable critiques can be made of pretty much any assumptions made which imply a specific multiple. Though these critiques are important for robust methodology, I wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism. I believe that given Rethink's moral weights, a cost-effectiveness multiple on the order of 1000x will be found by most plausible choices for the additional assumptions.

(Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I'm not sure there's a better approach without more information about the input distributions.)

Sadly, I don't think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles---in fact, in general, it's going to be a product of much higher percentiles (20+).

To see this, imagine if a bridge is held up by 3 spokes which are independently hammered in, and each spoke has a 5% chance of breaking each year. For the bridge to fall, all 3 spokes need to break. That's not the same as the bridge having a 5% chance of falling each year--the chance is actually far lower (0.01%). For the bridge to have a 5% chance of falling each year, each spoke would need to have a 37% chance of breaking each year.
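In code, the bridge example's numbers are just:

p_spoke = 0.05
print(p_spoke ** 3)       # 0.000125, i.e. ~0.01% chance that all three spokes break
print(0.05 ** (1 / 3))    # ~0.37: per-spoke probability needed for a 5% chance the bridge falls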

As you stated, knowledge of distributions is required to rigorously compute percentiles of this product, but it seems likely that the 5th percentile case would still have the multiple several times that of GiveWell top charities.

let's not forget second order effects

This is a good point, but the second order effects of global health interventions on animals are likely much larger in magnitude. I think some second-order effects of many animal welfare interventions (moral circle expansion) are also positive, and I have no idea how it all shakes out.

kierangreig @ 2023-11-20T15:41 (+20)

Sadly, I don't think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles---in fact, in general, it's going to be a product of much higher percentiles (20+).

As something of an aside, I think this general point was demonstrated and visualised well here

Disclaimer: I work at RP so may be biased.

Hamish McDoodles @ 2023-11-21T05:13 (+7)

wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism

I wasn't familiar with these other calculations you mention. I thought you were just relying on the RP studies which seemed flimsy. This extra context makes the case much stronger.

Sadly, I don't think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles---in fact, in general, it's going to be a product of much higher percentiles (20+).

I don't think that's true either. 

If you're multiplying normally distributed variables, the general rule is that you add the percentage variances in quadrature.

Which I don't think converges to a specific percentile like 20+. As more and more uncertainties cancel out the relative contribution of any given uncertainty goes to zero.

IDK. I did explicitly say that my calculation wasn't correct. And with the information on hand I can't see how I could've done better. Maybe I should've fudged it down by one OOM.

Ariel Simnegar @ 2023-11-21T16:26 (+9)

This extra context makes the case much stronger.

Thanks for being charitable :)

On the percentile of a product of normal distributions, I wrote this Python script which shows that the 5th percentile of a product of normally distributed random variables will in general be a product of much higher percentiles (in this case, the 16th percentile):

import random

MU = 100
SIGMA = 10
N_SAMPLES = 10 ** 6
TARGET_QUANTILE = 0.05
# 5th percentile of a single N(100, 10) variable, for reference
INDIVIDUAL_QUANTILE = 83.55146375  # From Google Sheets NORMINV(0.05,100,10)

samples = []
for _ in range(N_SAMPLES):
    r1 = random.gauss(MU, SIGMA)
    r2 = random.gauss(MU, SIGMA)
    r3 = random.gauss(MU, SIGMA)
    sample = r1 * r2 * r3
    samples.append(sample)

samples.sort()
# The sampled 5th percentile of the product
product_quantile = samples[int(N_SAMPLES * TARGET_QUANTILE)]
# The individual value whose cube equals that product percentile
implied_individual_quantile = product_quantile ** (1/3)
print(implied_individual_quantile)  # ~90, which is the *16th* percentile by the empirical rule

I apologize for overstating the degree to which this reversion occurs in my original reply (which claimed an individual percentile of 20+ to get a product percentile of 5), but I hope this Python snippet shows that my point stands.

I did explicitly say that my calculation wasn't correct. And with the information on hand I can't see how I could've done better.

This is completely fair, and I'm sorry if my previous reply seemed accusatory or like it was piling on. If I were you, I'd probably caveat your analysis's conclusion to something more like "Under RP's 5th percentile weights, the cost-effectiveness of cage-free campaigns would probably be lower than that of the best global health interventions".

MHR @ 2023-11-20T12:45 (+10)

I think your BOTEC is unlikely to give meaningful answers because it treats averting a human death as equivalent to moving someone from the bottom of their welfare range to the top of their welfare range. At least to me, this seems plainly wrong - I'd vastly prefer shifting someone from receiving the worst possible torture to the greatest possible happiness for an hour to extending someone's ordinary life for an hour. 

The objections you raise are still worth discussing, but I think the best starting place for discussing them is Duffy (2023)'s model (Causal model, report), rather than your BOTEC.

MichaelStJules @ 2023-11-20T07:33 (+8)

But didn't RP prove that cortical neuron counts are fake?

Hardly. They gave a bunch of reasons why we might be skeptical of neuron count (summarised here). But I think the reasons in favour of using cortical neuron count as a proxy for moral weight are stronger than the objections.

I don't think the reasons in favour of using neuron counts provide much support for weighing by neuron counts or any function of them in practice. Rather, they primarily support using neuron counts to inform missing data about functions and capacities that do determine welfare ranges (EDIT: or moral weights), in models of how welfare ranges (EDIT: or moral weights) are determined by functions and capacities. There's a general trend that animals with more neurons have more capacities and more sophisticated versions of some capacities.

However, most functions and capacities seem pretty irrelevant to welfare ranges, even if relevant for what welfare is realized in specific circumstances. If an animal can already experience excruciating pain, presumably near the extreme of their welfare range, what do humans have that would make excruciating pain far worse for us in general, or otherwise give us far wider welfare ranges? And why?

NickLaing @ 2023-11-20T08:32 (+12)

"If an animal can already experience excruciating pain, presumably near the extreme of their welfare range, what do humans have that would make excruciating pain far worse for us in general, or otherwise give us far wider welfare ranges? And why?"

We have a far more advanced consciousness and self awareness, that may make our experience of pain orders of magnitude worse (or at least different) than for many animals - or not. 

I think there is far more uncertainty in this question than many acknowledge - RP acknowledge the uncertainty but I don't think present it as clearly as they could. Extreme pain for humans could be a wildly different experience than it is for animals, or it could be quite similar. Even if we assume hedonism (which I don't), we can oversimplify the concepts of "Sentience" and "welfare ranges" to feel like we have more certainty over these numbers than we do.

MichaelStJules @ 2023-11-20T10:14 (+10)

We have a far more advanced consciousness and self awareness, that may make our experience of pain orders of magnitude worse (or at least different) than for many animals - or not. 

I agree that that's possible and worth including under uncertainty, but it doesn't answer the "why", so it's hard to justify giving it much or disproportionate weight (relative to other accounts) without further argument. Why would self-awareness, say, make being in intense pain orders of magnitude worse?

And are we even much more self-aware than other animals when we are in intense pain? One of the functions of pain is to take our attention, and it does so more the more intense the pain. That might limit the use of our capacities for self-awareness: we'd be too focused on and distracted by the pain. Or, maybe our self-awareness or other advanced capacities distract us from the pain, making it less intense than in other animals.

(My own best guess is that at the extremes of excruciating pain, sophisticated self-awareness makes little difference to the intensity of suffering.)

Hamish McDoodles @ 2023-11-20T11:17 (+1)

by that logic, two chickens have the same moral weight as one chicken because they have the same functions and capacities, no?

MichaelStJules @ 2023-11-20T22:02 (+11)

They won't be literally identical: they'll differ in many ways, like physical details, cognitive expression and behavioural influence. They're separate instantiations of the same broad class of functions or capacities.

I would say the number of times a function or capacity is realized in a brain can be relevant, but it seems pretty unlikely to me that a person can experience suffering hundreds of times simultaneously (and hundreds of times more than chickens, say). Rethink Priorities looked into these kinds of views here. (I'm a co-author on that article, but I don't work at Rethink Priorities anymore, and I'm not speaking on their behalf.)

FWIW, I started very pro-neuron counts (I defended them here and here), and then others at RP, collaborators and further investigation myself moved me away from the view.

Hamish McDoodles @ 2023-11-21T04:54 (+8)

FWIW, I started very pro-neuron counts (I defended them here and here), and then others at RP, collaborators and further investigation myself moved me away from the view.

 

Oh, interesting. That moves my needle.

Hamish McDoodles @ 2023-11-20T11:17 (+2)

As I see it, we basically have a choice between:

  1. simple methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (cortical neuron count)
  2. complex methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (other stuff)

I much prefer the simple methodology where we can clearly see what assumptions we're making and how that propagates out.

MichaelStJules @ 2023-11-21T00:29 (+13)

There are other simple methodologies that make vaguely plausible guesses (under hedonism), like:

  1. welfare ranges are generally similar or just individual-relative across species capable of suffering and pleasure (RP's Equality Model), 
  2. the intensity categories of pain defined by Welfare Footprint Project (or some other functionally defined categories) have similar ranges across animals that have them, and assign numerical weights to those categories, so that we should weigh "disabling pain" similarly across animals, including humans,
  3. the pain intensity scales with the number of just-noticeable differences in pain intensities away from neutral across individuals, so we just weigh by their number (RP's Just Noticeable Difference Model[1]).

In my view, 1, 2 and 3 are more plausible and defensible than views that would give you (cortical or similar function) neuron counts as a good approximation. I also think the actually right answer, if there's any (so excluding the individual-relative interpretation for 1), will look like 2, but more complex and with possibly different functions. RP explicitly considered 1 and 3 in its work. These three models give chickens >0.1x humans' welfare ranges:

  1. Model 1 would give the same welfare ranges across animals, including humans, conditional on capacity for suffering and pleasure.
  2. Model 2 would give the same sentience-conditional welfare ranges across mammals (including humans) and birds, at least. My best guess is also the same across all vertebrates. I'm less sure that invertebrates can experience similarly intense pain even conditional on sentience, but it's not extremely unlikely.
  3. Model 3 would probably pretty generally give nonhuman animals welfare ranges at least ~0.1x humans', conditional on sentience, according to RP.[2]

You can probably come up with some models that assign even lower welfare ranges to other animals, too, of course, including some relatively simple ones, although not simpler than 1.

Note that using cortical (or similar function) neuron counts also makes important assumptions about which neurons matter and when. Not all plausibly conscious animals have cortices, so you need to identify which structures have similar roles, or else, chauvinistically, rule these animals out entirely regardless of their capacities. So this approach is not that simple, either. Just counting all neurons would be simpler.

(I don't work for RP anymore, and I'm not speaking on their behalf.)

  1. ^

    Although we could use a different function of the number instead, for increasing or diminishing marginal returns to additional JNDs.

  2. ^

    Maybe lower for some species RP didn't model, e.g. nematodes, tiny arthropods?

David Mathers @ 2023-11-22T09:57 (+7)

'There's a common sense story of: more neurons → more compute power → more consciousness.'

I think it is very unclear what "more consciousness" even means. "Consciousness" isn't "stuff" like water that you can have a greater weight or volume of. 

Vasco Grilo @ 2023-11-22T14:08 (+7)

Hi David,

Relatedly, readers may want to check Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight. Here are the key takeaways:

  • Several influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.
  • We take the following ideas to be the strongest reasons in favor of a neuron count proxy:
    • neuron counts are correlated with intelligence and intelligence is correlated with moral weight,
    • additional neurons result in “more consciousness” or “more valenced consciousness,” and
    • increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.
  • However:
    • in regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight; 
    • many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and
    • there is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.
  • Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.
NickLaing @ 2023-11-22T13:21 (+4)

I think it's very unclear for sure. Why could consciousness not be like water that you could have more or less volume of? When I was a child I was perhaps conscious but less so than now?

Could a different species with a different brain structure have a different "nature" of consciousness while not necessarily being more or less conscious?

I agree it's very unclear, but there could be directionality unless I'm missing some of the point of the concept...

David Mathers @ 2023-11-23T11:43 (+4)

I'm not saying it's impossible to make sense of the idea of a metric of "how conscious" something is, just that it's unclear enough what this means that any claim employing the notion without explanation is not "commonsense". 

NickLaing @ 2023-11-23T13:27 (+2)

100% agree nice one

David Mathers @ 2023-11-23T13:38 (+7)

Also part (although not all) of the attraction of "more neurons=more consciousness" is I think a picture that comes from "more input=more of a physical stuff", which is wrong in this case. I actually do (tentatively!) think that consciousness is sort of a cluster-y concept, where the more of a range of properties a mind has, the more true* it is to say it is conscious, but none of those properties definitively is "really" what being conscious requires. (i.e. sensory input into rational belief, ability to recognize your own sensory states, some sort of raw complexity requirement to rule out very simple systems with the previous 2 features etc.) And I think larger neuron counts will roughly correlate with having more of these sorts of properties. But I doubt this will lead to a view where something with a trillion neurons is a thousand times more conscious than something with a billion. 
*Degrees of truth are also highly philosophically controversial though. 

David_Moss @ 2023-11-21T11:47 (+7)

Points in favour of cortical neuron counts as a proxy for moral weight:

  1. Neuron counts correlate with our intuitions of moral weights. Cortical counts would say that ~300 chicken life years are morally equivalent to one human life year, which sounds about right.

 

That neuron counts seem to correlate with intuitions of moral weight is true, but potentially misleading. We discuss these results, drawing on our own data here

I would quite strongly recommend that more survey research be done (including more analysis, as well as additional surveys: we have some more unpublished data of our own on this) before taking the correlations as a reason to prefer neuron counts as a proxy (in contrast to a holistic assessment of different capacities).

Jack Malde @ 2023-11-19T19:07 (+54)

Strong upvoted. I think this is correct, important and well-argued, and I welcome the call to OP to clarify their views. 

This post is directed at OP, but this conclusion should be noted by the EA community as a whole, which still prioritises global poverty over all else.

The only caveat I would raise is that we need to retain some focus on global poverty in EA for various instrumental reasons: it can attract more people into the movement, lets us show concrete wins, etc.

Benny Smith @ 2023-11-19T20:58 (+3)

Yeah, I think this caveat is important.

At the same time, GiveWell will continue to work on global poverty regardless of what OP does, right?

Gabriel Mukobi @ 2023-11-19T21:43 (+7)

GiveWell seems pretty dependent on OP funding, such that it might have to change its work with significantly less OP money.
An update on GiveWell’s funding projections — EA Forum (effectivealtruism.org)
Open Philanthropy's 2023-2025 funding of $300 million total for GiveWell's recommendations — EA Forum (effectivealtruism.org)

Jack Malde @ 2023-11-20T00:26 (+4)

I’m a bit confused by this. Presumably GiveWell doesn’t need that much money to function. Less Open Phil money probably won’t affect GiveWell; instead, it will affect GiveWell’s recommended charities, which will of course still receive money from other sources, in part due to GiveWell’s recommendation.

Gabriel Mukobi @ 2023-11-20T03:17 (+1)

Ah right, I was conflating GiveWell's operating costs (I assume not too high?) and the funding they direct to other charities, both as "GiveWell continuing to work on global poverty." You're right that they'll probably still work on it and not collapse without OP; they just might direct much less to other charities.

MarcusAbramovitch @ 2023-11-22T00:18 (+43)

I strongly agree with this post and strongly upvoted it. I also talked a lot with Ariel in the making of this post. I think the arguments are good, and I think EA in general should be focusing a lot more on animal welfare than on GHW.

That said, I think it's important to note that "EA" doesn't own the money being given away by Open Phil. It's Dustin/Cari's money that is being given away; Open Phil was set up (by them, in a joint venture between GiveWell and Good Ventures) to advise them on where their money should go, and they are inspired by EA principles and wish to give their money away accordingly.

The people at Open Phil are heavily influenced by Dustin/Cari's values, so it isn't surprising that they might value animals less than the general movement; and if Dustin/Cari don't want to give their money to non-human animal causes, that's well within their rights. The "EA movement", however you define it, doesn't get to control the money and there are good reasons for this.

Like @MathiasKB, I want to generally encourage people to see how they can affect the funding landscape, primarily via their own donations as opposed to simply telling other people how they should donate. A very unstable equilibrium would result from a bunch of people steering and not a lot of people rowing.

Will Aldred @ 2023-11-23T19:12 (+17)

The "EA movement", however you define it, doesn't get to control the money and there are good reasons for this.

I disagree, for the same reasons as those given in the critique to the post you cite. Tl;dr: Trades have happened, in EA, where many people have cast aside careers with high earning potential in order to pursue direct work. I think these people should get a say over where EA money goes.

Vasco Grilo @ 2023-11-22T16:21 (+8)

Hi Marcus,

The people at Open Phil are heavily influenced by Dustin/Cari's values, so it isn't surprising that they might value animals less than the general movement; and if Dustin/Cari don't want to give their money to non-human animal causes, that's well within their rights.

Do you have a source for this? My sense was that Dustin and Cari mostly deferred to Open Phil's researchers. I think Dustin tweeted about this at some point.

MvK @ 2023-11-23T02:03 (+13)

This is from 2016, but worth looking into if you're curious how this works:

"At least 50% of each program officer’s grantmaking should be such that Holden and Cari understand and are on board with the case for each grant. At least 90% of the program officer’s grantmaking should be such that Holden and Cari could easily imagine being on board with the grant if they knew more, but may not be persuaded that the grant is a good idea. (When taking the previous bullet point into account, this leaves room for up to 40% of the portfolio to fall in this bucket.) Up to 10% of the program officer’s grantmaking can be done without meeting either of the above two criteria, though there are some basic checks in place to avoid grantmaking that creates risks for Open Philanthropy. We call this “discretionary” grantmaking. Grants in this category generally follow a different, substantially abbreviated approval process. Some examples of discretionary grants are here and here."

(https://www.openphilanthropy.org/research/our-grantmaking-so-far-approach-and-process/)

Vasco Grilo @ 2023-11-23T13:49 (+13)

Thanks for sharing, MvK!

In general, I would still say Open Phil's grantmaking process is very opaque, and I think it would be great to have more transparency about how grants are made, including the influence of Dustin and Cari, at least for big ones. Just to illustrate how little information is provided, here is the write-up of a grant of 10.7 M$ to Redwood Research in 2022:

Open Philanthropy recommended a grant of $10,700,000 over 18 months to Redwood Research for general support. Redwood Research is a nonprofit research institution focused on aligning advanced AI with human interests.

This follows our 2021 support and falls within our focus area of potential risks from advanced artificial intelligence.

There was nothing else. Here is the write-up regarding the 2021 support, 9.42 M$, mentioned just above:

Open Philanthropy recommended four grants totaling $9,420,000 to Redwood Research for general support. Redwood Research is a new research institution that conducts research to better understand and make progress on AI alignment in order to reduce global catastrophic risks.

This falls within our focus area of potential risks from advanced artificial intelligence.

MarcusAbramovitch @ 2023-11-22T18:57 (+6)

Correct. Dustin and Cari mostly defer to OP. But the people at OP aren't random: the selection of OP's leadership (Holden/Alex) is very much down to Dustin/Cari. FWIW, I'm very thankful for them. Without them, EA would look quite a lot worse on the whole, including for animals.

Seth Ariel Green @ 2023-11-19T20:21 (+41)

I think this post is on the right track, the request for reasoning transparency especially so. 

I personally worry about how weird effective altruism will seem to the outside world if we focus exclusively on topics that most people don't think are very important. A sister comment argues that the average person's revealed preference about the value of a hen's life relative to a human's is infinitesimal. Likewise, however much people say they worry about AI (as a proxy for longtermism, which isn't really on people's radar in general), in practice, it tends to be relatively low on their list of concerns, even among potential existential threats.

If our thinking takes us in weird directions, that's not inherently a reason to shy away. But I think there's something to be said for considering the  implications of having increasingly niche opinions, priorities, and epistemology. A movement that's a little more humble/agnostic about what the most important cause is might broadly be able to devote more resources, on net, to a wider range of causes, including the ones we think most important.

(For context, I am a vegan who believes that animal welfare is broadly neglected; I recently wrote something on the case for veganism for domesticated dogs.)

Scott Smith @ 2023-11-20T12:02 (+29)

I also worry about the weirdness. As Ariel themselves said:

When I started as an EA, I found other EAs' obsession with animal welfare rather strange. How could these people advocate for helping chickens over children in extreme poverty? I changed my mind for a few reasons.

This might not be realistic for Ariel, but it would have been ironic if this obsession had been even greater, enough to cause Ariel to shy away from EA so that they never contributed to shifting priorities more toward animal welfare.

But I also agree this isn't necessarily a reason to shy away. Being disingenuous about our personal priorities to seem more mainstream seems wrong, like a bait-and-switch or the cult-like tactic of getting people in the door and introducing heavier stuff as they get more emotionally invested. I like the framing of being more humble/agnostic, but maybe we (speaking as individuals) need to be careful that it is genuine epistemological humility and not an act.

Michael_PJ @ 2023-11-20T17:20 (+24)

100% agree. I think it is almost always better to be honest, even if that makes you look weird. If you are worried about optics, "oh yeah, we say this to get people in but we don't really believe it" looks pretty bad.

David_Moss @ 2023-11-20T13:31 (+13)

I think that revealed preference can be misleading in this context, for reasons I outline here.

It's not clear that people's revealed preferences are what we should be concerned about, compared to, for example, what value people would reflectively endorse assigning to animals in the abstract. People's revealed preference for continuing to eat meat may be influenced by akrasia or other cognitive distortions which aren't relevant to assessing how much they actually endorse animals being valued.[1] We may care about the latter, not the former, when assessing how much we should value animals (i.e. by taking into account folk moral weights) or how much the public are likely to support/oppose us allocating more aid to animals.

But on the specific question of how the public would react to us allocating more resources to animals: this seems like a directly tractable empirical question, i.e. it would be relatively straightforward, through surveys/experiments, to assess whether people would be more or less hostile towards us if we spent a greater share on animals, if we spent much more on the long-run future vs supporting a more diverse portfolio, or more/less on climate change, etc.

 

  1. ^

    Though of course we also need to account for potential biases in the opposite direction as well.

Corentin Biteau @ 2023-11-20T16:09 (+25)

Thanks a lot for this post!

I was thinking of doing something similar myself.

And I must admit I agree with the conclusion, especially as I have trouble seeing how their capacity to suffer could be much lower than ours (we have a lot of evolutionary history in common; I can't really justify how my cat could feel ten times less pain than I do).

Since animals are far more numerous than humans, have much worse living conditions, and receive far less spending on their welfare than humans do, and since animal charities are more funding-constrained, it's hard to see how working on them could be less cost-effective.

Jack Malde @ 2023-11-20T17:53 (+13)

In fact, Richard Dawkins has suggested that less intelligent animals might experience greater suffering, as they might require more intense pain to elicit a response; the evolutionary process would have ensured they feel sufficient pain.

MichaelDickens @ 2024-03-18T23:52 (+6)

Agreed. I disagree with the general practice of capping the probability distribution over animals' sentience at 1x that of humans'. (I wouldn't put much mass above 1x, but it should definitely be more than zero mass.)
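
(A toy illustration, with made-up numbers, of why leaving even a little probability mass above 1x matters for the expected value; these are not anyone's actual estimates.)

```python
# Toy illustration with made-up numbers: expected sentience multiplier for an
# animal, with and without allowing some probability mass above 1x humans.
# These are not anyone's actual estimates.

capped   = [(0.5, 0.1), (0.4, 0.5), (0.10, 1.0)]              # (probability, multiplier)
uncapped = [(0.5, 0.1), (0.4, 0.5), (0.08, 1.0), (0.02, 3.0)]

def expected(dist):
    return sum(p * m for p, m in dist)

print(f"capped at 1x:   {expected(capped):.2f}")    # 0.35
print(f"small tail >1x: {expected(uncapped):.2f}")  # 0.39
```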

NickLaing @ 2023-11-21T08:47 (+3)

Why would they need more intense pain to elicit a response? Intuitively, to me at least, with less "reasoning" ability the slightest bit of pain would likely elicit a response away from said pain.

Jack Malde @ 2023-11-21T10:35 (+5)

Well, you might need a reasoned response, i.e. "it seems that when I do X a bad thing happens to me, therefore I should endeavor not to do X."

Here is the quote from Richard Dawkins:

“If you think about what pain is for biologically speaking, pain is a warning to the animal, ‘don’t do that again’.

“If the animal does something which results in pain, that is a kind of ritual death – it is telling the animal, ‘if you do that again you might die and fail to reproduce’. That’s why natural selection has built the capacity to feel pain into our nervous systems.

“You could say since pain is there to warn the animal not to do that again… an animal that is a slow learner or less intelligent might need more intense pain in order to deter [them] from doing it again, than a human who is intelligent enough to learn quickly.

“So it’s possible non-human animals are capable of feeling more intense pain than we are.”

https://plantbasednews.org/culture/richard-dawkins-animals-feel-more-intense-pain-than-humans/#:~:text=“You could say since pain,intelligent enough to learn quickly.

NickLaing @ 2023-11-21T11:17 (+2)

That does seem plausible, but I think the opposite is more likely. Of course you need a reasoned response, but I'm not sure the magnitude of pain would necessarily help the association with the reasoned response.

Harmful action leads to a negative stimulus (perhaps painful), which leads to withdrawal and future cessation of that action. It seems unlikely to me that increasing the magnitude of that pain would make a creature more likely to stop doing an action. Rather, the memory and higher functions would need to be sufficient to associate the action with the painful stimulus, and then some form of memory needs to be there to allow the creature to avoid the action in future.

It is unintuitive to me that the "amount" of negative stimulus (pain) would be what matters; more important is the strength of the connection between the pain and the action, which would allow future avoidance of the behaviour.

I use "negative stimuli" rather than pain, because I still believe we heavily anthropomorphise our own experience of pain onto animals. Their experience is likely to be so wildly different from ours (whether "better" or "worse") that I think even using the word pain might be misleading sometimes.

More intelligent beings shouldn't necessarily need pain at all to avoid actions which could cause them to "die and fail to reproduce". I wouldn't need pain to avoid actions that could lead to that, or would need only a very minor stimulus as a reminder.

Actually, it does seem quite complex the more I think about it.

It's an interesting discussion anyway.

Corentin Biteau @ 2023-11-20T20:30 (+3)

Ah, that's interesting. I didn't know that. 

I had in mind that maybe the power of thought could allow us to put things into perspective and better tolerate pain (as can be experienced through meditation). However, this can go both ways, as negative thoughts can cause additional suffering.

But I shall check the suggestion by Dawkins, that sounds interesting.

joshcmorrison @ 2023-11-20T07:00 (+22)

Thanks for writing this post! I think it's thoughtful and well-reasoned, and I think public criticism of OP (and of leading institutions in effective altruism in general) is good and undersupplied, so I feel like this writeup is commendable. I work at a global health nonprofit funded by OP, so I should say I'm strongly biased against moving lots of the money to animal welfare.

An argument I've heard in the past (not the point of your post, I know) is that because humans (often) eat factory-farmed animals, expanding human lifespan is net negative from a welfarist perspective (because it increases the net amount of suffering in the world). 1. Is this argument implausible (i.e. is there a good way to disprove it)? And 2. If the argument were true, would it imply OP should not fund global health work at all (or should restrict it very seriously)?

MichaelStJules @ 2023-11-20T07:19 (+21)

There's a related tag Meat-eater problem, with some related posts. I think this is less worrying in low-income countries where GiveWell-recommended charities work, because animal product consumption is still low and factory farming has not yet become the norm. That being said, factory farming is becoming increasingly common, and it could be common for the descendants of the people whose lives are saved.

Then, there are also complicated wild animal effects from animal product consumption and generally having more humans that could go either way morally, depending on your views.

Vasco Grilo @ 2023-11-26T08:53 (+9)

Thanks for elaborating, Michael! Readers might want to check these BOTECs on the meat-eater problem.

These results suggest accounting for poultry does not matter much for GHD interventions. Among the countries targeted by GW’s top charities, the relative reduction in the cost-effectiveness of saving lives ranges from 0.0253 % for the Democratic Republic of Congo to 7.99 % for South Sudan.

Nevertheless, I believe the results above underestimate the reduction in cost-effectiveness, because:

  • I have not accounted for other farmed animals. From my estimates here, the negative utility of farmed chickens is only 30.6 % (= 1.42/4.64) of that of all farmed animals globally. This suggests accounting for all farmed animals would lead to a reduction in cost-effectiveness for the mean country of 8.72 % (= 0.0267/0.306), which is not negligible. So accounting for the effects of GHD interventions on farmed animals may lead to targeting different countries.
  • I have used the current consumption of poultry per capita, but this, as well as that of other farmed animals, will tend to increase with economic growth. I estimated the badness of the experiences of all farmed animals alive is 4.64 times the goodness of the experiences of all humans alive, which suggests saving a random human life results in a nearterm increase in suffering.

[...]

All in all, I can see the impact on wild animals being anything from negligible to all that matters in the nearterm. So, as for farmed animals, I think more research is needed. For example, on forecasting net change in forest area in low-income countries.
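
(To make the quoted arithmetic easier to follow, here is a minimal sketch that simply recombines the headline ratios from the excerpt above; it is not the original BOTEC model, and the inputs are only the figures quoted there.)

```python
# Minimal sketch recombining the headline ratios quoted above.
# Not the original BOTEC; the inputs are only the figures from the excerpt.

neg_utility_farmed_chickens = 1.42  # badness of farmed chickens' experiences (relative units)
neg_utility_all_farmed = 4.64       # badness of all farmed animals' experiences (same units)

# Share of farmed-animal disutility coming from chickens (quoted as 30.6%).
chicken_share = neg_utility_farmed_chickens / neg_utility_all_farmed

# Mean-country reduction in cost-effectiveness when only poultry is counted
# (the 0.0267 that appears in the quoted ratio).
reduction_poultry_only = 0.0267

# Scaling the poultry-only reduction up to all farmed animals (quoted as ~8.72%).
reduction_all_farmed = reduction_poultry_only / chicken_share

print(f"Chicken share of farmed-animal disutility: {chicken_share:.1%}")
print(f"Implied reduction for all farmed animals:  {reduction_all_farmed:.1%}")
```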

I tend to agree with Michael that the meat-eater problem is currently not a major concern in low-income countries, but that it will tend to become one in the next few decades, such that I would not be surprised if saving lives increased net suffering. It is also worth noting that the people targeted by Open Phil's new GHW areas[1] may have greater per capita consumption of farmed animals with bad lives than people in the countries targeted by GiveWell, such that the meat-eater problem is more problematic there.

Personally, I also worry about the meat-eater problem in the context of global catastrophic risks. In my mind, if the catastrophe is sufficiently severe, saving humans will have a positive longterm effect which outweighs the potential suffering inflicted on animals. However, for small catastrophes, I am open to arguments that saving humans has a negligible longterm effect, and may well increase net suffering due to the greater consumption of farmed animals with bad lives linked to the saved humans.

  1. ^
Richard Y Chappell @ 2023-12-04T18:14 (+21)

One thought is that it may be a mistake to categorize GHD work as purely "neartermist". As Nick Beckstead flagged in his dissertation, the strongest reason for favoring GHD over animal welfare is that the former, by increasing overall human capacity, seems more likely to have positive "ripple effects" beyond the immediate beneficiaries.

One may object that GHD has lower expected value than explicitly longtermist work. But GHD may be more robustly good, with less risk of proving long-term counterproductive. So it may help to think of the GHD component of Worldview Diversification as stemming from a concern for robustness, rather than a concern for the nearterm per se.

Bob Fischer @ 2023-12-05T10:14 (+20)

Admittedly, we weren't factoring in the (ostensible) ripple effects, but our modeling indicates that if we're interested in robust goodness, we should be spending on chickens.

Also, for the reasons that @Ariel Simnegar already notes, even if there are unappreciated benefits of investing in GHD, there would need to be a lot of those benefits to justify not spending on animals. Could work out that way, but I'd like to see the evidence. (When I investigated this myself, making the case seemed quite difficult.)

Richard Y Chappell @ 2023-12-05T15:51 (+2)

That's all assuming neartermism, right?  I agree that neartermism plausibly entails prioritizing non-human animals. But neartermism seems very arbitrary, and should plausibly not receive nearly as much weight as GHD currently receives in OP's portfolio.*

Rather, my suggestion was that the current explicitly "longtermist" bucket should be thought of as something like "highly speculative longtermism", and the current GHD bucket should be thought of as something like "improving the long-term via robustly good methods".

It's harder to see how helping animals would fit that latter description. (Maybe the best case would be the classic Kantian argument that abusing animals less would help make us morally better.)

*: Insofar as the GHD bucket is really just motivated by something like "sticking close to common sense", and people started using "neartermism" as a slightly misnamed label for this, the current proposal to shift to very counterintuitive priorities whilst doubling down on the unprincipled rejection of longtermism seems kind of puzzling to me!

We shouldn't just assume that neartermism is the principled alternative to speculative longtermism, and then stick to that assumption come what may (i.e., even if it leads to the result that we should "mostly" ignore poor people). Rather, I think we should be more open-minded about how we should think of the different "buckets" in a Worldview-Diversified portfolio, and cautious of completely dismissing common-sense priorities (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas).

Bob Fischer @ 2023-12-05T15:56 (+2)

Nope, not assuming neartermism. The report has the details. Short version: across a range of decision theories, chickens look really good.

That said, I totally agree that from a purely conceptual perspective, we should "be more open-minded about how we should think of the different 'buckets' in a Worldview-Diversified portfolio, and cautious of completely dismissing common-sense priorities (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas)." 

Richard Y Chappell @ 2023-12-05T16:19 (+5)

Ok, thanks for clarifying!

Edit to add: I think you might mean something different by "robust goodness" than what I had in mind. From a quick look at your link, you're considering a range of different decision theories, risk-weightings, etc., and noting that chickens do at least moderately well on a wide range of theoretical assumptions.

I instead meant to be talking about empirical robustness: roughly, "helping the long-term via methods that are especially likely to do some immediate good, and with less risk of proving long-term counterproductive." Or, more concisely, "longtermism via nearterm goods with positive ripple effects". And then assessing what does best via this particular theoretical standard (to make up one bucket in our portfolio).

Since your report doesn't consider ripple effects, it doesn't address the kind of "robust longtermism" bucket I have in mind.

Halffull @ 2023-11-19T18:44 (+15)

one values humans 10-100x as much

 

This seems quite low, at least from the perspective of revealed preferences. If one indeed rejects unitarianism, I suspect that actual willingness to pay to prevent the death of a human vs. an animal is something like 1,000x to 10,000x.

MichaelStJules @ 2023-11-20T01:54 (+30)

Also, if we defer to people's revealed preferences, we should dramatically discount the lives and welfare of foreigners. I'd guess that Open Philanthropy, being American-funded, would need to reallocate much or most of its global health and development grantmaking to American-focused work, or to global catastrophic risks.

EDIT: For those interested, there's some literature on valuing foreign lives, e.g. https://scholar.google.ca/scholar?hl=en&as_sdt=0%2C5&q="valuing+foreign+lives"+OR+"foreign+life+valuation"

Richard Y Chappell @ 2023-11-19T20:45 (+22)

But isn't the relevant harm here animal suffering rather than animal death?  It would seem pretty awful to prefer that an animal suffer torturous agony rather than a human suffer a mild (1000x less bad) papercut.

Jeff Kaufman @ 2023-11-20T13:28 (+23)

I think comparisons to paper cuts and other minor harms don't work very well with people's intuitions: a lot of people feel like (and sometimes explicitly endorse that) no number of paper cuts can outweigh torturous agony. See this old LW post and the disagreements around it.

Instead, my experience is people's intuitions work better when thinking in probabilities or quantities: what chance of suffering for a human would balance against that for a chicken? Or how many chickens suffering in that way would be equivalent to one human?

Richard Y Chappell @ 2023-11-20T15:32 (+6)

Fair point, thanks!

Karthik Tadepalli @ 2023-11-24T01:30 (+14)

Revealed preference is a good way to get a handle on what people value, but its normative foundation is strongest when the tradeoff is internal to people. E.g. when we value lives vs income, we would want to use people's revealed preferences for how they trade those off, because those people are the most affected by our decisions and we want to incorporate their preferences. That normative foundation doesn't really apply to animal welfare, where the trade-offs are between people and animals. You may as well use animals' revealed preferences for saving humans (i.e. not at all) and conclude that humans have no worth; it would be nonsensical.

MichaelStJules @ 2023-11-19T19:28 (+11)

I think that's basically right, but also: rejecting unitarianism and discounting other animals in this way seems to me like saying the interests of some humans matter less in themselves (ignoring instrumental reasons) just because of their race, gender, or intelligence, which is very objectionable.

People discount other animals because they're speciesist in this way, although also for instrumental reasons.

Gage Weston @ 2023-11-23T18:58 (+11)

Very good points! One objection I think you didn't mention, which might be on OP's mind in neartermist allocations, has to do with population ethics. One reason many people are neartermist is that they subscribe to a person-affecting view, whereby the welfare of "merely potential" beings does not matter. Since basically all animal welfare interventions either 1. cause fewer animals to exist, or 2. change welfare conditions for entire populations of animals, it seems extremely unlikely that the animals who would otherwise have lived the higher-suffering lives will have the same identity (e.g. same genes) as the higher-welfare ones. On a person-affecting view, this implies animal welfare interventions like corporate campaigns or alt-protein investment merely change who or how many animals there are, but don't benefit any animal in particular, and thus have no value on this moral view. I personally don't subscribe to this view, and I am not sure whether most people at OP with a person-affecting view have taken this idea seriously, although it does seem like the right conclusion from that view.

Dustin Crummett @ 2023-11-30T23:54 (+10)

Generally, people with person-affecting views still want it to be the case that we shouldn't create individuals with awful lives, and probably also that we should prefer the creation of someone with a life that is net-negative by less over someone with a life that is net-negative by more. (This relates to the supposed procreation asymmetry, where, allegedly, that a kid would be really happy is not a reason to have them, but that a kid would be in constant agony is a reason not to have them.) One way to justify this would be the thought that, if you don't create a happy person, no one has a complaint, but if you do create a miserable person, someone does have a complaint (i.e., that person).

Where factory-farmed animals have net-negative lives, I'm not sure person-affecting views would justify neglecting animal welfare, then. (Similarly, re: longtermism, they might justify neglecting long-term x-risks, but not s-risks.)

Gage Weston @ 2023-12-01T18:01 (+1)

I agree many people believe in the asymmetry, and that is likely one reason people care about animal welfare but not longtermism. However, I think you're conflating a person-affecting view with the asymmetry, which are separate views. I hate to argue semantics here, but the person-affecting view is concerned only with the welfare of existing beings, not with the creation of negative lives, no matter how bad they are. Again, neither of these is my view, but they likely belong to some people.

Dustin Crummett @ 2023-12-01T20:31 (+6)

They are separate views, but related: people with person-affecting views usually endorse the asymmetry, people without person-affecting views usually don't endorse the asymmetry, and person-affecting views are often taken to (somehow or other) provide a kind of justification for the asymmetry. The upshot here is that it wouldn't be enough for people at OP to endorse person-affecting views: they'd have to endorse a version of a person-affecting view that is rejected even by most people with person-affecting views, and that independently seems gonzo--one according to which, say, I have no reason at all not to push a button that creates a trillion people who are gratuitously tortured in hell forever.

Very roughly, how this works: person-affecting views say that a situation can't be better or worse than another unless it benefits or harms someone. (Note that the usual assumption here is that, to be harmed or benefited, the individual doesn't have to exist now, but they have to exist at some point.) This is completely compatible with thinking it's worse to create the trillion people who suffer forever: it might be that their existing is worse for them than not existing, or harms them in some non-comparative way. So it can be worse to create them, since it's worse for them. And that should also be enough to get the view that, e.g., you shouldn't create animals with awful lives on factory farms.

Of course, usually people with person-affecting views want it to be neutral to create happy people, and then there is a problem about how to maintain that while accepting the above view about not creating people in hell. So somehow or other they'll need to justify the asymmetry. One way to try this might be via the kind of asymmetrical complaint-based model I mentioned above: if you create the people in hell, there are actual individuals you harm (the people in hell), but if you don't create people in heaven, there is no actual individual you fail to benefit (since the potential beneficiaries never exist). In this way, you might try to fit the views together. Then you would have the view that it's neutral to ensure the awesome existence of future people who populate the cosmos, but still important to avoid creating animals with net-negative lives, or future people who get tortured by AM or whatever.

Now, it is true that people with person-affecting views could instead say that there is nothing good or bad about creating individuals either way--maybe because they think there's just no way to compare existence and non-existence, and they think this means there's no way to say that causing someone to exist benefits or harms them. But this is a fringe view, because, e.g., it leads to gonzo conclusions like thinking there's no reason not to push the hell button.

I think all this is basically in line with how these views are understood in the academic literature, cf., e.g., here.

MichaelStJules @ 2023-12-01T20:34 (+5)

There are multiple views considered "person-affecting views", and I think the asymmetry (or specific asymmetric views) is often considered one of them. What you're describing is a specific narrow/strict person-affecting restriction, also called presentism. I think it has been called the person-affecting view or the person-affecting restriction, which is of course confusing if there are multiple views people consider person-affecting. The use of "person-affecting" may have expanded over time.

Ariel Simnegar @ 2023-11-23T23:13 (+3)

Thanks Gage!

That's a good point I hadn't considered! I don't think that's OP's crux, but it is a coherent explanation of their neartermist cause prioritization.

Vasco Grilo @ 2023-11-26T17:21 (+4)

It really is a nice point, Gage! Like Ariel, I also guess it is not driving OP's neartermist prioritisation, as OP has funded lots of longtermist work, and this is also significantly less valuable under person-affecting views (unless OP thinks most of the benefits of longtermist interventions come from reducing the deaths of people currently alive).

Jeroen Willems @ 2023-11-21T10:43 (+11)

I haven't read the other comments yet but I just want to share my deep appreciation for writing this post! I've always wondered why animal welfare gets so little funding compared to global health in EA. I'm thankful you're highlighting it and starting a discussion, whether or not OP's reasons might be justified.

Rakefet Cohen Ben-Arye @ 2024-05-26T11:57 (+7)

This is a masterpiece.
These were the key points for me from the article:

Vasco Grilo @ 2023-12-04T17:35 (+6)

Hi Ariel,

Not strictly related to this post, but just in case you need ideas for further posts ;), here are some very quick thoughts on 80,000 Hours.

I wonder whether 80,000 Hours should present "factory farming" and "easily preventable [human] diseases" as having the same level of pressingness.

80,000 Hours' current view that the above have similar pressingness seems in tension with a list they did in 2017, in which factory farming came out 2 points above (i.e. 10 times as pressing as) developing world health.

It is also interesting that 3 of 80,000 Hours' current top 5 most pressing problems came out as similarly pressing as, or less pressing than, factory farming. More broadly, it would be nice if 80,000 Hours were more transparent about how their rankings of problems and careers are produced, as I guess these have a significant impact on shaping the career choices of many people. I will post a question about this on the EA Forum in a few weeks.

Vasco Grilo @ 2023-11-22T14:56 (+6)

Thank you so much for putting this together, Ariel!

Ariel Simnegar @ 2023-11-22T15:09 (+6)

Absolutely! Most of what's important in this essay is just a restatement of your inspiring CEA from months ago :)

David van Beveren @ 2023-11-27T07:28 (+3)

Ariel, thank you for taking the time to put this together. It's encouraging to see constructive and meaningful conversations unfolding around a topic that I believe is essential if we're to see a shift in both OP's and EA's FAW funding priorities.

Most points I had in mind have been covered by others in this thread already, but I wanted to extend my support either way.

Ariel Simnegar @ 2023-11-27T13:26 (+3)

Thanks so much David! :)

DanteTheAbstract @ 2023-11-23T10:43 (+3)

In the summary you mention that "Skepticism of formal philosophy is not enough". I'm new to the forum; could you (or anyone else) clarify what is meant by formal philosophy? Is the statement equivalent to just saying "Skepticism of philosophy is not enough" or "Skepticism of philosophical reasoning is not enough"?

Also, in the section "Increasing Animal Welfare Funding would Reduce OP’s Influence on Philanthropists", you make a comparison between AI x-risk and FAW. While AI x-risk reduction is also a niche cause area, I think you underestimate how niche FAW is relative to AI x-risk. The potential risk of alienation from a significant allocation to x-risk isn't the same as that of FAW, since AI x-risk is still largely a story about the impact this would have on humans and their societies.

I’m not saying this is the correct view, just the one that would generally be held by most potential funders.

 

In general, the utilitarian case for your main points seems strong. Great post.

Ariel Simnegar @ 2023-11-24T02:09 (+4)

Thanks for the compliment :)

When I write "skepticism of formal philosophy", I more precisely mean "skepticism that philosophical principles can capture all of what's intuitively important". Here's an example of skepticism of formal philosophy from Scott Alexander's review of What We Owe The Future: 

I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity...I realize this is “anti-intellectual” and “defeating the entire point of philosophy”.

You make a good point regarding the relative niche-ness of animal welfare and AI x-risk. I agree that my post's analogy is crude and there are many reasons why people's dispositions might favor AI x-risk reduction over animal welfare.

Alex Mallen @ 2023-12-03T21:26 (+1)

It is unclear in the first figure whether to compare the circles by area or diameter. I believe the default impression is to compare area, which I think is not what was intended and so is misleading.

Ariel Simnegar @ 2023-12-03T22:06 (+2)

Comparing area was intended :)

If it's unclear, I can add a note which says the circles should be compared by area.

Alex Mallen @ 2023-12-03T22:12 (+1)

Thanks!