What are the strongest arguments for the side you voted against in the AW vs GH debate?

By Will Howard🔹 @ 2024-10-08T19:45 (+51)

The Animal Welfare vs Global Health debate week is turning out to be pretty one-sided so far.

The wording of the question this time was chosen to be a bit more resistant to nitpicks (vs "...should be an EA priority" last time), which may also have made the result appear more polarised in one direction. For me, voting strongly on the animal welfare side was not an endorsement of animal welfare being definitely more effective forever, but just that moving a chunk of money on the margin would be good, given that it currently appears more cost-effective by most counts.

So, I'm interested in hearing arguments for the other side (whichever way you voted) that you find persuasive, but not enough to fully persuade you.


Will Howard🔹 @ 2024-10-08T19:50 (+33)

My personal reasons favouring global health:

  1. I'm sceptical of Rethink's moral weight numbers[1], and am more convinced of something closer to anchoring on neuron counts (and even more convinced by extreme uncertainty). This puts animal charities more like 10x ahead rather than 1000 or 1 million times. I'm also sceptical of very small animals (insects) having a meaningful probability/degree of sentience.
  2. I am sceptical of suffering focused utilitarianism[2], and am worried that animal welfare interventions tend to lean strongly in favour of things that reduce the number of animals, on the assumption that their lives are net negative. Examples of this sort of mindset include this, this, and this.

    Not all of these actively claim the given animals' lives must be net negative, but I'm concerned about this being seen as obviously true and baked into the sorts of interventions that are pursued. I'm especially concerned about the idea that the question of whether animals' lives are net-negative is not relevant (see first linked comment), because the way in which it is relevant is that it favours preventing animals from coming into existence (this is more commonly supported than actively euthanising animals).

    Farmed animals are currently the majority of mammal + bird biomass, and so ending the (factory) farming of animals is concomitant with reducing the total mammal + bird population[3] by >50%, and this is not something that I see talked about as potentially negative.

    That said, if pushed I would still fairly strongly predict that farmed chickens' lives, at least, are net negative, which is why on net I support the pro animal welfare position.
  3. I think something like worldview diversification is essentially a reasonable idea, for reasons of risk aversion and optimising under expected future information. The second is an explore/exploit tradeoff take (which often ends up looking suspiciously similar to risk aversion 🧐).

    In the case where there is a lot of uncertainty on the relative value of different cause areas (not just in rough scale, but that things we think are positive EV could be neutral or very negative), it makes sense to hedge and put a few eggs into each basket so that you can pivot when new important information arises. It would be bad to, for instance, spend all your money euthanising all the fish on the planet and then later discover this was bad and that also there is a new much more effective anti-TB intervention.

    Of course, this favours doing more research on everything more than it favours pouring a lot of exploit-oriented money into Global Health, but in practice I think some degree of following through on interventions is necessary to properly explore (plus you can throw in some other considerations like time preference/discount rates), and OpenPhil isn't spending money overall at a rate that implies reckless naive EV maximising (over-exploitation).

    Some written-down ideas in this direction: We can do better than argmax, Tyranny of the Epistemic Majority, In defense of more research and reflection.
  4. I believe something like "partiality shouldn't be a (completely) dirty word". At the extremes, most people accept some concessions to partiality. For instance, it's generally considered a bad strategic move to pressure people into giving so much of their income that they can't live comfortably, even though for a sufficiently motivated moral actor this would likely still be net positive. Most people also would not jump at the chance to be replaced by a species that has 10% higher welfare.

    I think it's wrong to apply this logic only at the extremes; there should be some consideration of what the market will bear when considering more middle-of-the-road sacrifices. For instance, a big factor in the cost-effectiveness of lead elimination is that it can be happily picked up by more mainstream funders.

(I realise a lot of these are not super well justified; I'm just trying to get the main points across.)

  1. ^

    I'm planning to publish a post this week addressing one small part of this, although it's a pretty complicated topic, so I don't expect to get that far in justifying the position.

  2. ^

    Not meant in a very technical sense, just as the idea that there is probably more suffering relative to positive wellbeing, or that it's easier to prevent it. Again, this is for reasons that are beyond the scope of this post. But two factors are:
    1) I think common sense reasoning about the neutral point of experience is overly pessimistic
    2) I am sceptical of the intensity of pain and pleasure being logarithmically distributed (severe pain ~100x worse than moderate pain), and especially of this being biased in the negative direction. One reason for this is that I find the "first story" for interpreting Weber's law in this post much more intuitive, i.e. that logarithmically distributed stimuli get compressed to a more linear range of experience.
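
    As a toy illustration of the two stories (my own numbers, not from the linked post): under the "first story", experience is a compressed, roughly logarithmic function of the stimulus, so a stimulus 100x stronger feels only a few steps worse; under the alternative, experience tracks the stimulus itself and severe pain really is ~100x worse.

```python
import math

# Toy illustration (mine, not from the linked post): how a 100x stronger
# stimulus maps to experience under two interpretations of Weber's law.

stimulus_moderate = 1.0
stimulus_severe = 100.0  # two orders of magnitude stronger

# "First story": experience is a compressed (logarithmic) function of the
# stimulus, so severe pain feels a few steps worse, not 100x worse.
def experience_compressed(s):
    return 1 + math.log10(s)

# Alternative story: experience scales with the stimulus itself, so severe
# pain really is ~100x as bad as moderate pain.
def experience_linear(s):
    return s

ratio_compressed = experience_compressed(stimulus_severe) / experience_compressed(stimulus_moderate)
ratio_linear = experience_linear(stimulus_severe) / experience_linear(stimulus_moderate)
print(ratio_compressed)  # 3.0: severe is only ~3x moderate under compression
print(ratio_linear)      # 100.0
```

    The constants here are arbitrary; the point is only the shape of the mapping, not the specific ratio.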

  3. ^

    Weighted by biomass obviously. The question of actual moral value falls back to the moral weights issue above. A point of reference on the high-moral-weights-sceptical end of the spectrum is this table @Vasco Grilo🔸 compiled of aggregate neuron counts (although, as mentioned, I don't actually think neuron counts are likely to hold up in the long run)

Vasco Grilo🔸 @ 2024-10-08T21:54 (+6)

Thanks for sharing your thoughts, Will!

I am sceptical of suffering focused utilitarianism[2], and am worried that animal welfare interventions tend to lean strongly in favour of things that reduce the number of animals, on the assumption that their lives are net negative.

You could donate to organisations improving instead of decreasing the lives of animals. I estimated a past cost-effectiveness of Shrimp Welfare Project’s Humane Slaughter Initiative (HSI) of 43.5 k times the marginal cost-effectiveness of GiveWell’s top charities.

I'm sceptical of Rethink's moral weight numbers[1], and am more convinced of something closer to anchoring on neuron counts (and even more convinced by extreme uncertainty). This puts animal charities more like 10x ahead rather than 1000 or 1 million times. I'm also sceptical of very small animals (insects) having a meaningful probability/degree of sentience.

I agree with the last sentence. Using Rethink Priorities' welfare range for chickens based on neurons, I would conclude corporate campaigns for chicken welfare are 11.1 (= 1.51*10^3*0.00244/0.332) times as cost-effective as GiveWell's top charities.

Rethink Priorities' median welfare range for shrimps of 0.031 is 31 k (= 0.031/10^-6) times their welfare range based on neurons of 10^-6. For you to get to this super low welfare range, you would have to justify putting a very low weight on all the other 11 models considered by Rethink Priorities. In general, justifying a best guess so many orders of magnitude away from that coming out of the most in-depth research on the matter seems very hard.
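
The arithmetic in this and the previous paragraph can be reproduced directly (input figures are copied from the comment; the variable names are mine, not from any shared spreadsheet):

```python
# Chicken corporate campaigns vs GiveWell's top charities, using RP's
# neuron-count-based welfare range for chickens instead of the RP median.
campaign_multiplier = 1.51e3              # multiplier before welfare weighting
chicken_range_neurons = 0.00244           # neuron-count-based welfare range
chicken_range_rp_median = 0.332           # RP's median welfare range
chicken_ratio = campaign_multiplier * chicken_range_neurons / chicken_range_rp_median
print(round(chicken_ratio, 1))            # 11.1

# Shrimp: gap between RP's median welfare range and a neuron-count-based one.
shrimp_rp_median = 0.031
shrimp_range_neurons = 1e-6
print(round(shrimp_rp_median / shrimp_range_neurons))  # 31000
```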

2) I am sceptical of the intensity of pain and pleasure being logarithmically distributed (severe pain ~100x worse than moderate pain), and especially of this being biased in the negative direction. One reason for this is that I find the "first story" for interpreting Weber's law in this post much more intuitive, i.e. that logarithmically distributed stimuli get compressed to a more linear range of experience

Assuming in my cost-effectiveness analysis of HSI that disabling and excruciating pain are as intense as hurtful pain (setting B2 and B3 of tab "Types of pain" to 1), and maintaining the other assumptions, 1 day of e.g. "scalding and severe burning" would be neutralised by 1 day of fully healthy life. I think this massively underestimates the badness of severe suffering. Yet, even then, I conclude the past cost-effectiveness of HSI is 2.17 times the marginal cost-effectiveness of GiveWell's top charities.

I think something like worldview diversification is essentially a reasonable idea, for reasons of risk aversion and optimising under expected future information.

Farmed animals are neglected, so I do not think worldview diversification would be at risk due to moving 100 M$ to animal welfare instead of global health and development. I calculated that 99.9 % of annual philanthropic spending is on humans.

In contrast, based on Rethink Priorities' median welfare ranges, the annual disability of farmed animals is much larger than that of humans.

We can do better than argmax

I agree one should not put all resources into the best option, but we are very far from this (see 1st graph above).

Will Howard🔹 @ 2024-10-09T14:00 (+7)

Thanks Vasco, I did vote for animal welfare, so on net I agree with most of your points. On some specific things:

You could donate to organisations improving instead of decreasing the lives of animals

This seems right, and is why I support chicken corporate campaigns which tend to increase welfare. Some reasons this is not quite satisfactory:

  1. It feels a bit like a "helping slaves to live happier lives" intervention rather than "freeing the slaves"
  2. I'm overall uncertain about whether animals' lives are generally net positive, rather than strongly thinking they are
  3. I'd still be worried about donations to these things generally growing the AW ecosystem as a side effect (e.g. due to fungibility of donations, training up people who then do work with more suffering-focused assumptions)

But these are just concerns and not deal breakers.

Rethink Priorities' median welfare range for shrimps of 0.031 is 31 k (= 0.031/10^-6) times their welfare range based on neurons of 10^-6. For you to get to this super low welfare range, you would have to justify putting a very low weight in all the other 11 models considered by Rethink Priorities.

I am sufficiently sceptical to put a low weight on the other 11 models (or at least withhold judgement until I've thought it through more). As I mentioned I'm writing a post I'm hoping to publish this week with at least one argument related to this.

The gist of that post will be: it's double counting to consider the 11 other models as separate lines of evidence, and similarly double counting to consider all the individual proxies (e.g. "anxiety-like behaviour" and "fear-like behaviour") as independent evidence within the models.

Many of the proxies (I claim most) collapse to the single factor of "does it behave as though it contains some kind of reinforcement learning system?". This itself may be predictive of sentience, because this is true of humans, but I consider this to be more like one factor, rather than many independent lines of evidence that are counted strongly under many different models.

Because of this (a lot of the proxies looking like side effects of some kind of reinforcement learning system), I would expect we will continue to see these proxies as we look at smaller and smaller animals, and this wouldn't be a big update. I would expect that if you look at a nematode worm for instance, it might show:

  1. "Taste-aversion behaviour": Moving away from a noxious stimulus, or learning that a particular location contains a noxious stimulus
  2. "Depression-like behaviour": Giving up/putting less energy into exploring after repeatedly failing
  3. "Anxiety-like behaviour": Being put on edge or moving more quickly if you expose it to a stimulus which has previously preceded some kind of punishment
  4. "Curiosity-like behaviour": Exploring things even when it has some clearly exploitable resource

It might not show all of these (maybe a nematode is in fact too small, I don't know much about them), but hopefully you get the point that these look like manifestations of the same underlying thing such that observing more of them becomes weak evidence once you have seen a few.

Even if you don't accept that they are all strictly side effects of "a reinforcement learning type system" (which seems reasonable), I still believe this idea of there being common explanatory factors for different proxies, not necessarily related to sentience, should be factored in.

(RP's model does do some non-linear weighting of proxies at various points, but not exactly accounting for this thing... hopefully my longer post will address this).

On the side of neuron counts, I don't think this is particularly strong evidence either. But I see it as evidence on the side of a factor like "their brain looks structurally similar to a human's", vs the factor of "they behave somewhat similarly to a human" for which the proxies are evidence.

To me neither of these lines of evidence ("brain structural similarity" and "behavioural similarity") seems obviously deserving of more weight.

Farmed animals are neglected, so I do not think worldview diversification would be at risk due to moving 100 M$ to animal welfare

I definitely agree with this, I would only be concerned if we moved almost all funding to animal welfare.

Vasco Grilo🔸 @ 2024-10-09T16:10 (+5)

I'd still be worried about donations to these things generally growing the AW ecosystem as a side effect (e.g. due to fungibility of donations, training up people who then do work with more suffering-focused assumptions)

Without more information, I would guess that funding work on improving rather than decreasing animal lives will, at the margin, incentivise people to follow the funding, and therefore to skill up to work on improving rather than decreasing animal lives.

I am sufficiently sceptical to put a low weight on the other 11 models (or at least withhold judgement until I've thought it through more). As I mentioned I'm writing a post I'm hoping to publish this week with at least one argument related to this.

I am looking forward to the post. Thanks for sharing the gist and some details. You may want to share a draft with people from Rethink Priorities.

To me neither of these lines of evidence ("brain structural similarity" and "behavioural similarity") seems obviously deserving of more weight.

I find it hard to come up with other proxies.

Jason @ 2024-10-08T23:24 (+7)

Farmed animals are neglected, so I do not think worldview diversification would be at risk due to moving 100 M$ to animal welfare instead of global health and development. I calculated 99.9 % of the annual philanthropic spending is on humans.

I think it would be more appropriate to use something like human welfare spending for low-income countries rather than counting ~all charitable activity as in a broad "human" bucket. That is to maintain parity with the way you've sliced off a particularly effective part of the animal-welfare pie (farmed animal welfare). E.g., some quick Google work suggests animal shelters brought in 3.5B in 2023 in just the US (although a fair portion of that may be government contracts).

Companion animal shelters may be the animal-welfare equivalent of opera for human-focused charities (spending lots on relatively few individuals who are relatively privileged in a sense). While deciding not to give to farmed-animal charities because of dog shelter spending doesn't make much sense, I would submit that not giving to bednets because of opera spending poses much the same problem.

I don't think that changes your underlying point much at all, though!

NickLaing @ 2024-10-09T05:17 (+6)

Thanks Jason, I would say that giving to animal shelters might be more like giving to the cancer society, or even world vision, rather than opera, but that's a fairly minor point.

MichaelStJules @ 2024-10-10T05:28 (+2)

Farmed animals are currently the majority of mammal + bird biomass, and so ending the (factory) farming of animals is concomitant with reducing the total mammal + bird population[3] by >50%, and this is not something that I see talked about as potentially negative.

Presumably counterfactual reductions in animal agriculture result in counterfactual reductions in land use for agriculture, and so counterfactual increases in wild habitat, allowing more wild animals to be born and live. Animal agriculture is responsible for a disproportionate share of land use.

Source: https://ourworldindata.org/global-land-for-agriculture


As someone suffering-focused, I see this as reason to not work on diet change and reducing animal agriculture, because increasing wild animal populations seems bad. I mostly support welfare reforms and reducing the use of very small animals in particular.

Will Howard🔹 @ 2024-10-10T07:55 (+4)

I was assuming that a reduction in agriculture would result in an overall reduction in the biomass (and "neuron count"[1]) of birds and mammals, because:

  1. Currently the biomass of farmed birds + mammals is about 10x that of wild birds + mammals (source, not sure how marine mammals are counted but only the ballpark is needed), and this is only using 45% of the habitable land as you say
  2. Logically it makes sense that farming aims to efficiently convert land area into animal biomass, and has more of a top down ability to achieve this than nature does. The animals that are most widely farmed are partly chosen for having food chains of only one step, and not needing to run around a lot expending energy.
    1. A point against this is that animals are slaughtered earlier than their natural lifespan, which would result in fewer days experienced per unit of feed input. But given the numbers above I don't think this is an offsetting factor
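
As a rough check on point 1, using approximate biomass figures in gigatonnes of carbon from the commonly cited Bar-On, Phillips & Milo (2018) estimates (treat these as ballpark, not precise):

```python
# Approximate biomass figures in gigatonnes of carbon (Bar-On et al. 2018).
livestock_mammals = 0.10
poultry = 0.005
wild_mammals = 0.007
wild_birds = 0.002
humans = 0.06

farmed = livestock_mammals + poultry
wild = wild_mammals + wild_birds

print(farmed / wild)                      # ~12x: consistent with "about 10x"
print(farmed / (farmed + wild + humans))  # ~0.60: majority of mammal + bird biomass
```

The second line also roughly checks the earlier claim that farmed animals are the majority of mammal + bird biomass, once humans are included in the denominator.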

Of course by number of individuals it would go the other way, which is presumably why you are concerned about reducing farming from a suffering-focused perspective. So I think this comes back to the issue of moral weights for small animals (as usual 😌).

...

I'm now trying to inhabit a position that I don't exactly believe, but is interesting and that I do find somewhat persuasive.

From a bigger picture perspective, you can imagine someone trying to derive the optimal arrangement of civilisation according to hedonic utilitarianism, where they accept something closer to my end of the suffering-focused and logarithmic-intensity axes[2]. Suppose they have a good model of evolutionary theory and economics but lack the details of how life on earth currently looks.

They might think something like the following: "In order to expect any kind of top down control over the outcome you need an intelligent species + culture that can coordinate over the use of large areas (geographical, or in whatever relevant space). This species will probably be high maintenance, because they will need to have had very complex and demanding needs and wants in order for them to develop the necessary culture in the first place.

The ideal scenario would be for a relatively small population of this high maintenance species to act as stewards for a much larger population of creatures that are very low maintenance, in order to achieve a high total utility with the resources available. These low maintenance creatures should be chosen to not require a lot of energy, be easily satisfied with simple and cheap pleasures, and have simple social structures such that you can scale the number of individuals without too many side effects.

Of course, this is just a pipe dream, because this would require the advanced species to have some kind of intrinsic preference for stewarding this larger population. Among the set of all goals it seems unlikely they would have this specific one".

If you look at the actual world it is quite striking how close it is to this vision. Humanity does maintain large populations of low maintenance animals, using a large proportion of the resources that are available to do so, at minimum economic cost. The difference is that we currently torture them.

If you were to accept the vision above, it looks like an easier move from "maintaining large population and torturing them" to "maintaining large population and trying to give them happy lives", than it is from "large population + torturing them" to "90% smaller population of domesticated animals" to "later maybe we make the population large again for morally motivated reasons".

...

Anyway, sorry for getting on a tangent from directly replying to your comment, but this long term picture is the thing that makes me actually uneasy about going hard on interventions to end factory farming. That is, on the margin currently I'm pretty happy with a best guess of it being positive expected value to reduce the amount of animal farming, but would be more hesitant about ending farming overnight because of the potential for irreversible effects.

I would expect that if non-animal protein sources become clearly superior to animal sources, this would result in a very rapid collapse in the number of farmed animals, and that once this has happened it could be a lot harder to move towards the "high population, high welfare" world (because we would start using all the land for something else, and the idea of using a large fraction of the land on earth for managed populations of animals would come to be seen as weird).

I think it's not widely conceptualised that potentially "PTC-dominant alternative protein => >50%[3] collapse in the welfare-range-weighted population of creatures within 10 years".

  1. ^

    Used as a stand-in for some more accurate proxy for sentience, but which scales predominantly with brain size/complexity rather than number of individuals

  2. ^

    I.e. they think extremely bad experiences are not orders of magnitude worse than simply quite bad experiences

  3. ^

    Using ">50%" as a stand-in for "a quite surprising amount of the total fraction" and welfare-range-weighted as a stand-in for "weighted by the delta in welfare that humans could reasonably expect to achieve with some degree of confidence (e.g. without it being in animals that are so different from humans that their sentience is highly questionable)"

MichaelStJules @ 2024-10-11T02:30 (+13)

(Edited)

I favour animal welfare, but some (near-term future) considerations that I'm most sympathetic to that could favour global health are:

  1. I'm not a hedonist. I care about every way any being can care consciously and terminally about anything. So, I care about others' (conscious or dispositionally conscious) hedonic states, desires, preferences, moral intuitions and other attitudes on their behalf. I'd guess that humans are much more willing to endure suffering, including fairly intense suffering, for their children and other goals than other animals are for anything. So human preferences might often be much stronger than other animals', if we normalize preferences by preferences about one's own suffering, say.
    1. This has some directly intuitive appeal, but my best guess is that this involves some wrong or unjustifiable assumptions, and I doubt that such preferences are even interpersonally comparable.[1]
    2. This reasoning could lead to large discrepancies between humans, because some humans are much more willing to suffer for things than others. The most fanatical humans might dominate. That could be pretty morally repugnant.
  2. Arguments for weighing ~proportionally with neuron counts:
    1. The only measures of subjective welfare that seem to me like they could ground interpersonal comparisons are based on attention (and alertness), e.g. how hard attention is pulled towards something important (motivational salience) or "how much" attention is used. I could imagine the "size" of attention, e.g. the number of distinguishable items in it, to scale with neuron counts, maybe even proportionally, which could favour global health on the margin.
      1. But probably with decreasing marginal returns to additional neurons, and I give substantial weight to the number of neurons not really mattering at all, once you have the right kind of attention.
    2. Some very weird and speculative possibilities of large numbers of conscious or value-generating subsystems in each brain could support weighing ~proportionally with neuron counts in expectation, even if you assign the possibilities fairly low but non-negligible probabilities (Fischer, Shriver & St. Jules, 2022).
      1. Maybe even faster scaling than proportional in expectation, but I think that leads to double counting I'd reject if it's even modestly faster than proportional.
  3. Animal welfare work has more steeply decreasing marginal cost-effectiveness.
  4. Cost-effectiveness estimates for marginal animal welfare work are more speculative than GiveWell's (RCT- and meta-analysis-based) estimates, at least for the more direct impacts considered. Maybe we're not skeptical enough of the causal effects of animal welfare work, and the welfare reforms would have happened soon anyway or aren't as likely to actually materialize as we think. I'm also inclined to give less weight to more extreme impacts when they're more ambiguous/speculative, similar to difference-making ambiguity aversion.
  5. I worry about lots of animal welfare work backfiring, and support for apparently safer work funging with work that backfires, so also backfiring.
    1. My best guess is that animal agriculture is good for wild animals, especially invertebrates, because it reduces their populations and I have very asymmetric views. So plant-based substitutes, cultured meat and other diet change work could backfire, if and because it harms wild invertebrates more than it helps animals used for food.
    2. I worry that nest deprivation for caged laying hens could be much less intensely painful than the long-term pain from keel bone fractures, so cage-free could be worse because of the apparent increase in keel bone fractures.
      1. I think we should support more work to reduce keel bone fractures in laying hens, and CE/AIM wants to start a new charity for this.
  6. Saving human lives, e.g. through AMF, probably reduces wild animal populations, so seems good for animals overall if you care enough about invertebrates (relative to animals used for food) and think they'd be better off not existing.
    1. Maybe farmed insect welfare work is even better, though.
  1. ^

    People probably just have different beliefs/preferences about how much their own suffering matters, and those preferences are plausibly not interpersonally comparable at all.

    Some people may find it easier to reflectively dismiss or discount their own suffering than others for various reasons, like particular beliefs or greater self-control. If interpersonal comparisons are warranted, it could just mean these people care less about their own suffering in absolute terms on average, not that they care more about other things than average. Other animals probably can't easily dismiss or discount their own suffering much, and their actions follow pretty directly from their suffering and other felt desires, so they might even care more about their own suffering in absolute terms on average.

    We can also imagine moral patients with conscious preferences who can't suffer at all, so we'd have to find something else to normalize by to make interpersonal comparisons with them.

    I discuss interpersonal comparisons more here.

Nathan Young @ 2024-10-10T10:51 (+10)

I'm making my way through, but so far I guess it's gonna be @Richard Y Chappell🔸's arguments around ripple effects

Animals likely won't improve the future for consciousness, but more, healthy humans might. 

I haven't read the article fully yet though.