abrahamrowe's Quick takes

By abrahamrowe @ 2021-02-24T15:18 (+5)

abrahamrowe @ 2024-08-29T02:55 (+195)

Reflections on a decade of trying to have an impact

Next month (September 2024) is my 10th anniversary of formally engaging with EA. This date marks 10 years since I first reached out to the Foundational Research Institute about volunteering, at least as far as I can tell from my emails.

Prior to that, I probably had read a fair amount of Peter Singer, Brian Tomasik, and David Pearce, who might all have been considered connected to EA, but I hadn’t actually actively tried engaging with the community. I’d been engaged with the effective animal advocacy community for several years prior, and I think I’d volunteered for The Humane League some, and had seen some of The Humane League Labs’ content online. I’m not sure if The Humane League counted as being “EA” at the time (this was a year before OpenPhil made its first animal welfare grants).

This post is me roughly trying to guess at my impact since then, and reflections on how I’ve changed as a person, both on my own and in response to EA. It’s got a lot of broad reflections about how my feelings about EA have changed. It isn’t particularly rigorously or transparently reasoned — it’s more of a reflection exercise for myself than anything else. I’m mainly trying to look at what I’ve worked on with a really critical eye. I make a lot of claims here that I don't provide evidence for.

I’m sharing this because the major update I’ve had from doing this exercise is that while I’ve generally done many of the “working-in-EA” things that are often presented as high impact, I feel the impact of my donations much more tangibly, and right now, if I ask what’s made me feel best about being in EA, it’s actions more in the earning-to-give direction than the direct work direction.

My high-level view of my impact over this period is something like:

 

Background

I became pretty convinced that factory farming was a moral tragedy as a little kid, I believe due to exposure to either PETA content or PETA Kids content. My brother was also vegetarian, which was a compelling enough reason for me to also become vegetarian. I volunteered for a lot of animal welfare organizations, especially in college. I also did a lot of direct action-type advocacy for animals in college. I was already a fairly hardcore utilitarian at that point, and had mainlined Peter Singer, David Pearce, Timothy Sprigge, and a bunch of other wacky utilitarians. I spent a significant amount of my time in college staying up until the early morning talking about wild animal suffering and other animal issues with my closest friend while playing the video game Super Monkey Ball. This did not help any animals but was incredibly important to how I think about animal issues now.

At some point around 2011 or 2012, I saw a frog that was hit by a bike and dying, and was really distraught over it. I’m not sure why, but this was oddly transformative for me, and I just internalized animal suffering from it really directly in a way I hadn’t before. I also have a fairly strong memory from when I was 20 or 21 of spending an afternoon in the rain putting worms back from the sidewalk into the grass, and feeling bad about them dying naturally. I formed fairly strong views about animals in nature living awful lives, and beliefs about my obligations to help them.

In 2014, I was targeted by a Google Ad for The Foundational Research Institute, I believe on a topic related to wild animal welfare. I think this was my first exposure to EA formally, though I’d read studies on The Humane League Labs website, had read Animal Liberation, “Famine, Affluence, and Morality,” and some other books that informed EA ideas.

I did some volunteering for FRI, read a lot of Brian Tomasik’s website, and also did some experiments at a cat shelter on reducing the impact of outdoor cats on animals. In 2016, I started working at Mercy For Animals, running corporate animal welfare campaigns. I also formally started Utility Farm, a nonprofit that would later merge with Wild-Animal Suffering Research into Wild Animal Initiative. I’ve done a bunch of other things in the EA world since.

My potential impact

My beliefs in 2014 compared to now

This is my best effort to estimate how my credences in various beliefs have changed since 2014, based on notes and exercises from that period of my life.

| Belief | 2014 | 2024 | Change |
| --- | --- | --- | --- |
| Most suffering/welfare is and will be experienced by wild animals | 90% | 90% | +0% |
| I have a deep ethical obligation to reduce as much animal suffering as possible | 95% | 60% | -35% |
| People generally have a deep ethical obligation to change their diet to help animals | 85% | 20% | -65% |
| We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2050 | 75% | 10% | -65% |
| We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2100 | 85% | 15% | -70% |
| Most experiences are negative / suffering dominates in the wild | 85% | 80% | -5% |
| I have an obligation to reduce suffering, but not to increase happiness | 75% | 65% | -10% |
| I have strong moral obligations to help whoever I can as effectively as possible, independent of location, relationship, etc. | 75% | 80% | +5% |
| I have strong moral obligations to help whoever I can as effectively as possible, independent of time | 30% | 40% | +10% |
| I have a strong moral obligation to ensure future, positive lives occur | 5% | 10% | +5% |
| I have a strong moral obligation to prevent future negative lives from occurring | 85% | 35% | -50% |
| Large scale philanthropy by individuals often threatens democratic institutions, and this is often bad, independent of the benefits | 85% | 60% | -25% |
| The best animal welfare interventions target farmed vertebrates | 95% | 10% | -85% |
| Farmed vertebrate welfare should be an EA focus | 90% | 15% | -75% |
| EA as a movement is/will be positive for the world in the long run | 90% | 30% | -60% |
| Most people interested in EA should earn to give | 60% | 85% | +25% |
| It was good for animal welfare that the EAs “won” the abolitionist/welfarist debates | 80% | 95% | +15% |

 

How my thinking about EA has changed over time

I have some long-held views that haven’t really changed:

My views have also changed in a bunch of ways:

Things that changed about me from exposure to EA

I care a lot more about money

I care a lot more about status

My commitment to doing good feels deeper

I feel more morally compromised

 

Overall, when I look at my first 10 years engaging with EA, I feel mainly like things are just ambiguous to me. I feel a lot more positive about some donations I made than anything else — in particular large donations to brand new projects that probably helped accelerate them a lot. The animal work I’ve done feels promising but ambiguous. This post feels very melancholy, but ultimately, I still feel excited about trying to do impactful work in the world.

huw @ 2024-08-29T04:14 (+23)

This is really great and I'd encourage you to convert it to a full post! It's absolutely worthy of that honor :)

Tyler Johnston @ 2024-08-29T03:29 (+17)

Thank you for writing this! Was really interesting to read. I'd love to see more posts of this nature. And it seems like you've done a lot for the world — thank you.

I have a couple questions, if you don't mind:

You write

I still generally suspect corporate campaigns are no longer particularly effective, especially those run by the largest groups (e.g. Mercy For Animals or The Humane League), and don’t think these meet the bar for being an EA giving area anymore, and haven’t in the US since around 2017, and outside the US since around 2021.

I would love to hear your reasoning (pessimism about fulfillment? WAW looking better?) and what sort of evidence has convinced you. I think this is really important, and I haven't seen an argument for this publicly anywhere. Ditto about your skepticism of the organizations leading this work.

We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2050

Did you mean to change one of the years in the two statements of this form?

Most people interested in EA should earn to give

I'd love to hear more about this. How much value do you think e.g. the median EA doing direct work is creating? Or, put another way, how significant an annual donation would exceed the value of a talented EA doing direct work instead?

abrahamrowe @ 2024-08-29T10:45 (+18)

Thanks for the questions!


Corporate campaigns

  • Seems like the majority of commitments happened in prior years and there's been a rapid decline in number of commitments.
  • Enforcement is still needed, but it isn't obvious to me that dozens of well-funded orgs are needed for it.
  • The broiler ask was not tenable from the start, and many campaigners think it'll never be fulfilled at a large scale.
  • The well-funded orgs seem like they have lots of internal issues that prevent them from being particularly effective.
  • There's been a pretty big break from the tactics I think are most effective for winning commitments, and it would be hard to get well-funded groups to go back to them.
  • On WAW specifically, my view is something like:
    • Large scale interventions we can be confident in aren't that far away.
    • The intervention space is so large and impacting animals' lives generally is so easy that the likelihood of finding really cost-effective things seems high.
    • These interventions will often not involve nearly as much "changing hearts and minds" or public advocacy as other animal welfare work, so could easily be a lot more tractable.

Did you mean to change one of the years in the two statements of this form?

  • Yes, 2100. Thanks for spotting that!

I'd love to hear more about this. How much value do you think e.g. the median EA doing direct work is creating? Or, put another way, how significant an annual donation would exceed the value of a talented EA doing direct work instead?

  • I think my view is something more like the talent pool in EA is deep enough (for most kinds of roles, especially junior ones), and the donor diversification issues are large enough that it seems like some kind of shift is warranted. I wouldn't want fewer people doing direct work — I'd want fewer people trying to.

Angelina Li @ 2024-09-01T04:12 (+4)

  • On WAW specifically, my view is something like:
    • Large scale interventions we can be confident in aren't that far away.
    • The intervention space is so large and impacting animals' lives generally is so easy that the likelihood of finding really cost-effective things seems high.
    • These interventions will often not involve nearly as much "changing hearts and minds" or public advocacy as other animal welfare work, so could easily be a lot more tractable.

I would love to hear you talk more about this :) What makes you hopeful that scalable interventions are coming, and can you say more about anything you're particularly excited about here? Also curious what "aren't that far away" cashes out to in terms of your beliefs -- in 1 year? 3?

I wonder if your opinions are related to the following, which I'd also be excited to hear more about!

  • I think that my research has generally caused the EA space to focus too much on farmed insects, and less on insecticides. I am somewhat inclined toward thinking that insecticide-caused suffering is both more tractable and larger in scale. I’m now working on an insecticide project though, so trying to correct this.

(Thanks for sharing this post Abraham, I enjoyed reading it :) )

abrahamrowe @ 2024-09-01T12:46 (+6)

Thanks for the questions!!

What makes you hopeful that scalable interventions are coming, and can you say more about anything you're particularly excited about here?

The ones that seem most likely in the near future are:

  • Insecticide interventions like alternative crop insect management approaches, including genetic ones
  • Less painful insecticides
  • Fertility control for urban wildlife
  • Probably a lot more no one has considered

Things that make me think this is on the table:

  • I think there aren't great alternative animal welfare interventions, but animal interventions have really good returns if you get them right because you can impact so many animals.
  • We've made some cool progress on validating welfare measures that might be cheap to measure, which could be useful for assessing the sign of interventions.
  • It seems generally like the academic field building project is going well, so we should expect this to accelerate. 

In terms of timelines, I think this is more like 10-15 years. But part of the reason I think that's exciting is that I used to think it would be more like 2050+ before anything like this was on the table. I think I've also just generally decreased my confidence that the problems are as difficult as I thought before (though I definitely think they are still tricky).

For insecticides, I think my view remains that we are something like 2-5 years of specific lab/field research away from plausibly having a great intervention, so it is sad that progress hasn't been made on it, and given that this also seemed like the case a few years ago, funding the research should have been a priority earlier.

MaxRa @ 2024-08-29T15:02 (+10)

Really interesting, thanks for sharing. I was particularly surprised about your changes of mind here:

We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2050: 75% → 10% (-65%)
We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2100: 85% → 15% (-70%)

E.g. some spontaneous potential cruxes that might be interesting to hear your disagreement with, in case they capture your reasons for pessimism:

  1. Plant protein sources will become price- and taste-competitive with more than half of all consumed meat products by 2040.
  2. Cultured meat will become price- and taste-competitive with more than half of all consumed meat by 2050.
  3. There is a critical mass of people caring about animal welfare at which animal rights will become major political issues in most democracies.
  4. There will be a steady increase in people caring about animal welfare in the coming decade.
    1. E.g. I expect Germany to have a >20% vegetarian population by 2040 (currently apparently 12%).
abrahamrowe @ 2024-08-29T15:59 (+8)

Nice, these are good questions, but probably don't capture all the cruxes in my view.
1. I think this seems moderately unlikely to me? I'm not sure what would drive down prices further than where they are now, as it seems like a large portion of the cost is the proteins themselves, and not production.

2. This also seems like it relies on crossing technological hurdles that are really hard.

3. I think this seems possible? But I'd put below 50% on it, and if it does happen, I'd expect something more like the climate movement, where lots of people think it is important but don't really take substantial steps to act on it.

4. I think that reaching 20% vegetarian seems possible in some countries, but I think I'm a lot more skeptical it'll go much higher.

I think it does seem plausible to me that there would be a meaningful reduction in the amount of meat consumed over this period in developed countries, but also expect that might come with more chicken/fish consumption that would offset animal welfare gains anyway.

I think another crux more important to my pessimism is that I don't feel very convinced that price/taste competitive meat alternatives will cause a significant increase in their adoption.

MaxRa @ 2024-08-29T16:56 (+11)
  1. That's interesting, based on thinking that animal protein in the end comes from plant protein, and that animals use up a lot of space, food, and extra infrastructure that is not directly involved in turning plant protein into meat, I'd've guessed that plant protein would be much cheaper than animal protein.
    1. I quickly asked chatGPT for the cheapest animal vs. plant proteins in the US:
      Chicken: Approximately 6.6 cents per gram of protein
      Lentils: Approximately 3.7 cents per gram of protein
    2. Less difference than I'd've guessed.
  2. Interesting, hard for me to judge. Reading the Innovations needed section, it seems like most hurdles are in the 3 OOM range, only the growth factor price is 6 OOM off.
    1. My naive reaction is: AI + increased wealth + generally improving science & engineering + increased caring + those are "just" engineering problems -> I'm much more bullish than the authors of the report, who are 9% for:
    2. ">50M metric tons of cellular meat will be sold at any price within a continuous 12-month span before the end of 2051": 9%
      1. (For context, the annual production of conventional meat (excluding seafood) in 2018 was 346M metric tons)
      2. I'm maybe at 40%, given that plant-based meat might just do it by itself, or that something disruptive happens that affects R&D broadly, or that it's just unintuitively difficult.
  3. I'm more hopeful this will be more comparable to civil rights / racism / sexism, as there are concrete victims who are suffering (and there is already a broad agreement that animals in human guardianship shouldn't suffer), compared to climate change, which is much more abstract and indirect.
  4. Yeah, I somewhat agree that the steady increase will probably bottom out at maybe 20%. But my hopeful vision is that at 20%, there will be critical mass effects for political action and for the demand of alternative products to lead to a much more mature industry.
    1. Plus I expect health and climate change angles on meat consumption will also more likely than not steadily increase.
       

Finally, I'm also probably more optimistic about your last point, thinking that price/taste competitive meat alternatives will be huge. I think the Beyond and Impossible "moments" were huge milestones, and a few more "moments" like that will reduce resistance against higher welfare standards & higher prices for conventional meat.

abrahamrowe @ 2024-08-29T21:16 (+7)

Nice, these are great points.

On some specifics:

  1. I think the other consideration is that for really cheap proteins (corn/soy/wheat), chickens and other animals eat much less processed versions that are cheaper than the ones humans eat. But also people seem to like products made from them less. The novel plant protein inputs are a lot more expensive as far as I can tell.
  2. Yeah, I think there is a bunch of uncertainty. My sense of the technical hurdles to cost reduction is that they are fairly large, and I'm not sure they're super solvable. But I hope I am wrong!
  3. Yeah, this seems possible too.
  4. Plus I expect health and climate change angles on meat consumption will also more likely than not steadily increase.
    1. I worry these push toward worse animal welfare (less eating of cows, more eating of chicken/fish), not better.
       
MaxRa @ 2024-08-30T08:00 (+4)

Thanks, that all makes sense and moderates my optimism a bit, and it feels like we roughly exhausted the depth of my thinking. Sigh... anyways, I'm really thankful and maybe also optimistic for the work that dedicated and strategically thinking people like you have been and will be doing for animals.

OscarD🔸 @ 2024-09-07T10:57 (+9)

Thanks for writing this, heaps of interesting points. Most surprising and saddening to me was that you think there is a 70% chance EA will be net-negative! Could you explain why you think this? Your various concerns about power centralisation and so forth make sense to me, but to my mind this isn't nearly enough to flip the sign, and EA still seems overwhelmingly good to me.

I was also struck by your melancholy tone - somehow I think I implicitly hoped that if I accomplished all the things you have I would feel more resoundingly happy with my impact! But maybe EAish people are unusually cognisant of missed opportunities and impact that could have been but wasn't.

abrahamrowe @ 2024-09-08T12:28 (+10)

I don't think it's all net-negative; I think there are lots of worlds where EA has lots of good and bad that kind of wash out, or where the overall sign is pretty ambiguous in the long run.

Here are some ways I think it's possible EA could end up causing a lot of harm. I don't really think any of these are that likely on their own; I just think it's generally easier to cause harm than to produce good, so there are lots of ways EA can accidentally fail to be overall positive, and I generally think it has an uphill road to climb to avoid ending up a neutral or ambiguous quirk in the ash heap of history.

  • The various charities don't produce enough value to offset the harms of FTX (seems likely they already have produced more to me, but I haven't thought about it)
  • Things around accidentally accelerating AI capabilities in ways that end up being harmful
  • Things around accidentally accelerating various bio capabilities in ways that end up being harmful.
  • Enabling some specific person into entering a position of power where they end up doing a lot of harm.
  • X-risk from AI is overblown, and the E/accs are right about the potential of AI, and lots of harm is caused by trying to slow AI development/regulate it.
  • There is an even stronger reactionary response to some future EA effort that makes things worse in some way.
  • Most of the risk from AI is algorithmic bias/related things, and AI folks' conflict with people in that field ends up being harmful for reducing it.
  • Using EV only for making decisions accidentally leads to a really bad world, even when all decisions made were positive EV.
  • EA crowds out other better effective giving efforts that could have arisen.
NickLaing @ 2024-09-08T13:57 (+6)

I note that these risks hardly apply to GHD work ;).

Can you explain how FTX harm could plausibly outweigh the good done by EA? I can't fathom a scenario where this is the case myself.

abrahamrowe @ 2024-09-08T16:23 (+4)

Yeah, I think there are probably parts of EA that will look robustly good in the long run, and part of the reason I think it's less likely EA as a whole will be positive (and more likely to be neutral or negative) is that actions in other areas of EA could impact those areas negatively. Though this could cut both in favor of or against GHD work. I think just having a positive impact is quite hard, even more so when doing a bunch of uncorrelated things, some of which have major downside risks.

I think it is pretty unlikely that FTX harm outweighs the good done by EA on its own. But it seems easy enough to imagine that, conditional on EA's net benefit being barely above neutral (which seems pretty possible to me for the other reasons mentioned above, along with EA increasingly working on GCRs, which directly increases the likelihood that EA work ends up net-negative or neutral even if that shift is positive in expectation), the scale of the stress and financial harm caused by EA via FTX outweighs that remaining benefit. And then there is the brand damage to effective giving, etc.

But yeah, I agree that my original statement above seems a lot less likely than FTX just contributing to an overall portfolio of harm, or of work that doesn't matter in the long run, from EA.

Joseph Lemien @ 2024-08-29T14:01 (+9)

Thank you for sharing your reflections. As I read it I found various aspects that resonated with me, and I suspect that many other people on the EA Forum will feel the same. I'd love to see more of this type of writing (contemplative, reflective, critical/skeptical while being kind) on this forum.

abrahamrowe @ 2024-09-09T17:10 (+7)

After some discussions with someone offline that were clarifying, I want to clarify my decrease in confidence in the statement, "Farmed vertebrate welfare should be an EA focus".

I think my view is slightly more complicated than this implies. Given that OpenPhil and non-EA donors are basically able to fund what seems like the entirety of the good opportunities in this space, I don't think these groups are that talent constrained, and it seems like the best bets (e.g. corporate campaigns) will continue to have decreasing cost-effectiveness. So new animal-focused talent should probably mostly be going into earning to give for invertebrates/WAW, and donations should mostly go to groups there or the EA AWF (which should in turn mostly fund invertebrates and WAW). I don't think farmed vertebrate welfare should be the default way that EAs recommend to help animals.

Ben Stevenson @ 2024-08-30T03:01 (+6)

Thanks, Abraham, I liked reading this! Good luck for an impactful decade to come

John Salter @ 2024-08-29T07:15 (+6)

This an incredible set of accomplishments. Thanks for your dedication!

Pascal Costa @ 2024-09-09T15:14 (+5)

Hello!

Thanks a lot for sharing all this knowledge. It's pretty insightful, even for people who haven't followed EA news for years.

There are several claims that surprised me a little bit. I would be pleased to have more info about these particular claims:

1. Low probability: People generally have a deep ethical obligation to change their diet to help animals. While it's pretty clear that this isn't the most efficient way to help animals, it's not so clear that we don't have a moral duty to at least not eat animal products. To my mind, we have to distinguish moral duties from questions of efficacy. Moreover, it's also close to impossible to convince people that animal welfare is a problem while eating animal products.

2. Low probability: We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2050. I would be pleased to know more about the facts that led you to think that.

3. Low probability: I have a strong moral obligation to prevent future negative lives from occurring. Same as two.

4. High probability: It was good for animal welfare that the EAs “won” the abolitionist/welfarist debates. I would be interested in the details of the historical fact (how EA "won" that debate) and also why it is good news.

I know it's a lot of questions. Feel free to answer all of them or none :).

Like others, I also feel like you had more impact than you acknowledge to yourself :).

Thanks again for the quality of the reasoning.
 

abrahamrowe @ 2024-10-13T23:46 (+4)

Sorry to just see this!

  1. I agree that for many individuals, going vegan could be a good way to help animals! It's not obvious to me that it is easier to do for most of those people than, say, donating to a charity at a rate that roughly offsets the harm. I don't really think the specific harm of "eating animals" is worse than the variety of other ways that we harm animals, so I feel pretty neutral about veganism — it seems like one of many effective things one can do personally to help animals.
  2. I basically don't know if I believe we'll find anything amazingly effective to do for farmed animals beyond cage free campaigns on this timeline, and those impact a pretty small portion of farmed animals.
  3. I feel confused by this personally - I don't think it makes sense that I'd have an obligation to bring positive lives into existence, and feel like there should be some symmetry here, but it doesn't feel exactly the same. I also don't think 35% is a small probability! It seems not unlikely to me that I have this kind of obligation.
  4. I think EAs won it because they spent a lot of money on things that actually worked (e.g. cage-free campaigns) instead of wasting money on diet change advocacy that wasn't very effective, etc. And I think it was good because it actually did something to help animals! I generally just think the EA side of the animal welfare space is more interested in evidence, and less in ideological purity. These both seem very good to me!
Vasco Grilo🔸 @ 2024-10-25T13:59 (+4)

Great reflections, Abraham!

I care a lot more about money

Say you could either hire the best candidate for a role, or the 2nd best plus receive X $/year. What is the value of X which would make you indifferent between the 2 options? Feel free to provide different answers for different roles / sets of roles and organisations/areas.

abrahamrowe @ 2024-10-25T23:49 (+12)

I think it would be pretty hard for me to make that trade-off in a workplace context (I think I'm still a deep sucker for impact, and in any real version of this, X would be whatever the organization is indifferent towards, and I'd donate it). If you forced me to in some hypothetical, I'd guess X is quite low for many junior roles (<$10k), but higher for more mid/senior roles (>$50k?). But I think something like the following are true:

  • I'm currently not doing what I suspect would be the most impactful jobs for me to do, in part because what seems reasonable to pay for them (based on market rates, etc) strikes me as being at least $30k-$40k below what I would consider.
    • As recently as a few years ago, I probably would have considered them at that level.
    • My expenses haven't changed in any meaningful way (outside inflation, etc).
    • I think the work I'm doing instead is almost certainly significantly less impactful.
    • I think this is bad, but compensation isn't the only consideration on my mind.
  • I think generally past a certain point, having (or moving) money is strongly correlated having strategic influence within certain spaces in EA, so it seems pretty important.
    • This is obviously not necessarily correlated with having strategic skill.
MichaelStJules @ 2024-10-26T00:29 (+4)

I'm currently not doing what I suspect would be the most impactful jobs for me to do, because what seems reasonable to pay for them (based on market rates, etc) strikes me as being at least $30k-$40k below what I would consider.

Out of curiosity, what would you be doing?

(My guess: running an insect welfare org, or starting another EA charity.)

abrahamrowe @ 2024-10-26T00:40 (+6)

Yeah, I think that's basically what I was thinking (specifically, starting an insecticide charity, or similar project focused on implementing a WAW intervention)

MichaelStJules @ 2024-10-26T00:50 (+4)

Have you checked with potential donors if they'd be willing to pay you at a rate you find acceptable to run such a charity?

I'd be pretty excited about improving insecticides, but I'm not sure about donating much myself in the near term, since I already feel overinvested in invertebrates recently.

MichaelStJules @ 2024-10-26T13:17 (+2)

Also, adding to this, potential donors might be willing to pay more for you, given your experience, but maybe you've accounted for this in "market rates, etc". Presumably this would increase the probability of success of the org, from their POV.

And even bumping up the costs of the whole org 2x through higher salaries still leaves an insecticide charity at least 1/2 as cost-effective as something extraordinarily cost-effective (the same org where the same people work for less), which is still extraordinarily cost-effective!

If the counterfactual is that such a charity isn't started at all, that could be much worse than you running it at higher pay.

NickLaing @ 2024-08-29T13:10 (+4)

Great post! I'm interested in how you are 80% confident that "Most experiences are negative / suffering dominates in the wild". I can understand why you would lean towards it being negative, but why so confident given how little we still understand the experience of animals? 

abrahamrowe @ 2024-08-29T13:44 (+8)

Yeah, I think this just seems pretty likely to me due to thinking that most animals are juveniles / die as juveniles, and the amount of time an animal has to be alive to accumulate good experiences to outweigh a painful death is probably higher than this. Things that have made me slightly less certain about this are me thinking it is more likely than I used to that adult animals in the wild live good lives, and me thinking that it is less likely than I used to that insects/some other invertebrates experience suffering, especially juvenile insects (though I probably still put a higher credence in this than many people).

I think it is pretty plausible I'm overconfident here though.

But, I also think this belief is mostly irrelevant to EAs / wild animal welfare advocates, unless you think there are special reasons improving welfare is easier on one side of the spectrum than the other, which I don't really have strongly held opinions on.

NickLaing @ 2024-08-29T13:57 (+6)

The juvenile animal argument is interesting, as from a total "QALY" perspective, if animals die very young then unless their deaths are extremely suffering-ful and drawn out, the total time for suffering isn't that large IMO.

Yep I completely agree that the belief is (or should be) mostly irrelevant to wild animal welfare advocates, and I think WAW might be more palatable to more people if it was emphasised less. "We have cheap and effective ways of helping wild animals live way better lives" is a better marketing tool than "Wild animals have bad lives and are suffering soooo much so we have to do something" (aware I'm strawmanning for emphasis a bit here). It only becomes relevant for arguments that look at whether the whole world is "net positive or negative", which I find a bit unhelpful as that discussion doesn't get us closer to making things better.

On that I appreciated these points

"On WAW specifically, my view is something like:

  • Large scale interventions we can be confident in aren't that far away.
  • The intervention space is so large and impacting animals' lives generally is so easy that the likelihood of finding really cost-effective things seems high.
  • These interventions will often not involve nearly as much "changing hearts and minds" or public advocacy as other animal welfare work, so could easily be a lot more tractable.

abrahamrowe @ 2024-08-29T16:16 (+4)

Yeah, I agree with everything you say here RE WAW, on both how to present it and the usefulness of the net-positive or negative debate.

Chris Leong @ 2024-08-29T05:29 (+2)

I think importantly I failed to do this well, and if I had succeeded, the animal advocacy space would have been much more likely to prevent the take off of insect farming. Other people obviously had an effect here too, but I think not being strategic about this feels to me like the biggest failure I’ve had as an EA.

 

What do you wish you'd done differently and are there any lessons for AI governance which may be in a similar stage?

abrahamrowe @ 2024-08-29T10:54 (+6)

I'm pretty uncertain, but I think my best guess is that starting a group/getting someone to start a group working directly on it at the time would have been better than lobbying people to care about it. I suspect that broadly applies.

abrahamrowe @ 2025-07-30T12:12 (+31)

Expected value maximization hides a lot of important details.  

I think a pretty underrated and forgotten part of Rethink Priorities' CURVE sequence is the risk aversion work. I think the defenses of EV against more risk-aware models seem to often boil down to EV's simplicity. But I think that EV actually just hides a lot of important detail, including, most importantly, that if you only care about EV maximization, you might be forced to conclude that worlds where you're more likely to cause harm than not are preferable.

As an example, imagine that you're considering a choice that can cause 10 equally possible outcomes. In 6 of them, you'll create -1 utility. In 3 of them, your impact is neutral. In 1 of them, you'll create 7 utility. The EV of taking the action is (-6 + 0 + 7)/10 = 0.1. This is a positive number! Your expected value is positive, even though you have a 60% chance of causing harm. You're more likely than not to cause harm, yet in expectation you should expect to increase utility a bit. This is weird.
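A few lines of Python make the two summary numbers explicit (this is just my illustration of the example above):

```python
# The ten equally likely outcomes from the example above: six create -1,
# three are neutral, one creates +7.
outcomes = [-1] * 6 + [0] * 3 + [7]

ev = sum(outcomes) / len(outcomes)                          # expected value
p_harm = sum(1 for o in outcomes if o < 0) / len(outcomes)  # chance of harm

print(ev)      # 0.1 -> positive in expectation
print(p_harm)  # 0.6 -> but a 60% chance of causing harm
```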

 

Scenario 1

More concretely, if I consider the following choices, which are equivalent from an EV perspective:

Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +10 utility

Option B. A 20% chance of causing a harmful outcome, but in expectation will cause +10 utility

It seems really bizarre to not prefer Option A. But if I prefer Option A, I'm just accepting risk aversion to at least some extent. But what if the numbers slip a little more?

 

Scenario 2

Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +9.9999 utility

Option B. A 20% chance of causing a harmful outcome, but in expectation will cause +10 utility

Do I really want to take a 20% chance on causing harm in exchange for 0.001% gain in utility caused?

 

Scenario 3

Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +5 utility

Option B. A 99.99999% chance of causing a harmful outcome, but in expectation will cause +10 utility

Do I really want to be exceedingly likely to cause harm, in exchange for a 100% gain in utility?

 

I don't know the answers to the above scenarios, but I think just saying "the EV is X" without reference to the downside risk misses a massive part of the picture. It seems much better to say "the expected range of outcomes is a 20% chance of really bad stuff happening, a 70% chance of nothing happening, and a 10% chance of a really really great outcome, which all averages out to a >0 average". This is meaningfully different from saying "no downside risk, and a 10% chance of a pretty good outcome, so a >0 average".
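As a sketch of what that richer summary could look like (the payoff numbers here are hypothetical, chosen to give a 20/70/10 lottery with EV +10):

```python
def summarize(lottery):
    """lottery: list of (probability, utility) pairs."""
    ev = sum(p * u for p, u in lottery)
    p_harm = sum(p for p, u in lottery if u < 0)
    return {"EV": ev, "P(harm)": p_harm}

# 20% chance of really bad stuff, 70% chance of nothing, 10% chance of
# something really great: EV = 0.2*(-30) + 0.7*0 + 0.1*160 = +10.
option_b = [(0.2, -30.0), (0.7, 0.0), (0.1, 160.0)]
print(summarize(option_b))  # reports both the mean and the downside risk
```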

I think that risk aversion is pretty important, but even if it isn't incorporated into people's thinking at all, it really doesn't feel like EV produces a number I can take at face value, and that makes me feel like EV isn't actually that simple.

The place where I currently see this happening the most is naive expected value maximization in reasoning about animal welfare — I feel like I've seen an uptick in "I think there is a 52% chance these animals live net negative lives, so we should do major irreversible things to reduce their population". But it's pretty easy to imagine doing those things being harmful, or your efforts backfiring, etc. in ways that cause harm.

MichaelDickens @ 2025-07-31T00:03 (+14)

I think this is wrong, and this intuition that many people have derives from a psychological mistake. Essentially everything in life has diminishing marginal utility, so it almost always makes sense to be risk averse. So it's intuitive that you should be risk averse with respect to expected utility. But that doesn't make any logical sense—by definition, you don't have diminishing marginal utility of utility. Your utility function already accounts for risk aversion. Being risk averse with respect to utility is double-counting.
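One way to see the double-counting point numerically (the square-root utility-of-money function here is my own stock example, not anything from the comment):

```python
import math

def utility(money):
    # A standard concave (diminishing-marginal-utility) function.
    return math.sqrt(money)

# A fair coin flip between $0 and $100, versus a sure $50. Both have the
# same expected *money*, but the concave utility function already makes
# the agent prefer the sure thing; no extra risk aversion is needed.
eu_gamble = 0.5 * utility(0) + 0.5 * utility(100)  # 5.0
eu_sure = utility(50)                              # ~7.07
print(eu_sure > eu_gamble)  # True
```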

Karthik Tadepalli @ 2025-08-02T19:28 (+4)

This is a valid statement but non-responsive to the actual post. The argument is that there is intuitive appeal in having a utility function with a discontinuity at zero (ie a jump in disutility from causing harm), and ~standard EV maximisation does not accommodate that intuition. That is a totally separate normative claim from arguing that we should encode diminishing marginal utility.

abrahamrowe @ 2025-08-02T13:42 (+4)

I don't think this is quite what I'm referring to, but I can't quite tell! But my quick read is we are talking about different things (I think because I used the word utility very casually). I'm not talking about my own utility function with regard to some action, but the potential outcomes of that action on others, and I don't know if I'm embracing risk aversion views as much as relating to their appeal.

Or maybe I'm misunderstanding, and you're just rejecting the conclusion that there is a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1EV and a 0% chance of causing harm / think I just shouldn't care about that difference?

MichaelDickens @ 2025-08-06T15:39 (+2)

In retrospect my comment was poorly thought-out, I think you're right that it's not directly addressing your scenarios.

I think there are two separate issues with my comment:

  1. My comment was about being risk-averse with respect to utility; your quick take was about wanting to avoid causing harm; those aren't necessarily the same thing.
  2. You can self-consistently believe in diminishing marginal utility of welfare, i.e., your utility function isn't just "utility = sum(welfare)". And the way your quick take used the word "utility", you really meant something more like "welfare" (it sounds like this is what you're saying in your reply comment).

RE #1, my sense is that "person is risk-averse with respect to utility" is isomorphic to "person disprefers a lottery with a possibility of doing harm, even if it has the same expected utility as a purely-positive lottery". Or like, I think the person is making the same mistake in these two scenarios. But it's not immediately obvious that these are isomorphic and I'm not 100% sure it's true. Now I kind of want to see if I can come up with a proof but I would need to take some time to dig into the problem.

RE #2, I do in fact believe that utility = welfare, but that's a whole other discussion and it's not what I was trying to get at with my original comment, which means I think my comment missed the mark.

Or maybe I'm misunderstanding, and you're just rejecting the conclusion that there is a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1EV and a 0% chance of causing harm / think I just shouldn't care about that difference?

Depends on what you mean by "EV". I do reject that conclusion if by EV you mean welfare. If by EV you mean something like "money", then yeah, I think money has diminishing marginal utility and you shouldn't just maximize expected money.

Ebenezer Dukakis @ 2025-07-31T06:01 (+9)

I wish people would talk more about "sensitivity analysis".

Your parameter estimates are just that, estimates. They probably result from intuitions or napkin math. They probably aren't that precise. It's easy to imagine a reasonable person generating different estimates in many cases.

If a relatively small change in parameters would lead to a relatively large change in the EV (example: in Scenario 3, just estimate the "probability of harm" a teensy bit different so it has a few more 9s, and the action looks far less attractive!) — then you should either (a) choose a different action, or (b) validate your estimates quite thoroughly since the VoI is very high, and beware of the Unilateralist's Curse in this scenario, since other actors may be making parallel estimates for the action in question.
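A sketch of that sensitivity check, with my own hypothetical payoffs calibrated so that a ~99.99999% chance of harm still gives EV +10, roughly as in Scenario 3:

```python
def ev(p_harm, harm=-1.0, benefit=1.1e8):
    # Expected value of a lottery: lose 1 with probability p_harm,
    # win `benefit` otherwise.
    return p_harm * harm + (1 - p_harm) * benefit

for p in (0.9999999, 0.99999999, 0.999999999):
    print(p, ev(p))
# Each extra 9 on the harm probability cuts the upside term by 10x:
# the EV falls from roughly +10 to roughly +0.1 to negative.
```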

Michael St Jules 🔸 @ 2025-07-30T20:23 (+8)

(I'm guessing you mean difference-making risk aversion here, based on your options being implicitly compared to doing nothing.)

When considering the potential of larger indirect effects on wild invertebrates, the far future and other butterfly effects, which interventions do you think look good (better than doing nothing) on difference-making risk aversion (or difference-making ambiguity aversion)?

(I suspect there are none for modest levels of difference-making risk/ambiguity aversion, and we should be thinking about difference-making in different ways.)

abrahamrowe @ 2025-07-31T01:26 (+6)

I think I mean something slightly different than difference-making risk aversion, but I see what you're saying. I don't even know if I'm arguing against EV maximization - more just trying to point out that EV alone doesn't feel like it fully captures the picture of the value I care about (e.g. likelihood of causing harm relative to doing nothing feels like another important thing). I think specifically, that there are plausible circumstances where I am more likely than not to cause additional harm, and in expectation that action has positive EV, feels concerning. I imagine lots of AI risk work could be like this: doing some research project has some strong chance of advancing capabilities a bit (high probability of a bit of negative value), but maybe a very small chance of massively reducing risk (low probability of tons of positive value). The EV looks good, but my median outcome will be the world being worse than it was if I hadn't done anything.

Michael St Jules 🔸 @ 2025-07-31T18:05 (+2)

Ok, that makes sense. I'd guess butterfly effects would be neutral in the median difference. The same could be the case for indirect effects on wild animals and the far future, although I'd say it's highly ambiguous (imprecise probabilities) and something to be clueless about, and not precisely neutral about.

Would you say you care about the overall distribution of differences, too, and not just the median and the EV?

abrahamrowe @ 2025-08-02T13:43 (+4)

Probably, but not sure! Yeah, the above is definitely ignoring cluelessness considerations, on which I don't have any particularly strong opinion.

Brad West🔸 @ 2025-07-30T16:19 (+7)

I think this critique misses how EV maximization works in a world with many actors taking uncorrelated risks.

Consider your Scenario 2: Individual actors choosing between Option A (0% harm chance, +9.9999 utility) vs Option B (20% harm chance, +10 utility). If we have 1000 altruistic actors each making independent choices with similar profiles, and they all choose Option B (higher EV), we'd expect:

  • 800 successful outcomes (+8000 utility)
  • 200 harmful outcomes (negative utility)
  • Net positive impact far exceeding what we'd get if everyone chose the "safe" option

This is portfolio theory applied to altruism. Just as index funds smooth returns by holding many uncorrelated assets, the altruistic community maximizes impact when individuals make risk-neutral EV calculations on independent projects.

The key caveats:

  1. For large actors (major foundations, governments, AI companies, etc.), risk aversion makes more sense since their failures aren't offset by others' successes
  2. For correlated risks (like your animal welfare example where many actors might simultaneously cause harm based on shared wrong beliefs), we need more caution

But for most EA individuals working on diverse, independent projects? Risk-neutral EV maximization is exactly what we want everyone doing. The portfolio effect means we'll get the best aggregate outcome even though some individual bets will fail.
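The portfolio claim is easy to sanity-check with a simulation (the payoffs are my own hypothetical numbers: a 20% chance of -10 and an 80% chance of +15, i.e. EV +10 per bet):

```python
import random

random.seed(0)  # deterministic for reproducibility

def option_b():
    # 20% chance of harm (-10), 80% chance of success (+15): EV = +10.
    return -10.0 if random.random() < 0.2 else 15.0

# 200 simulated "communities" of 1000 actors each taking one independent bet.
totals = [sum(option_b() for _ in range(1000)) for _ in range(200)]
print(min(totals) > 0)  # True: every simulated community comes out ahead,
                        # even though ~200 of each 1000 bets cause harm
```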

Jason @ 2025-08-01T16:10 (+6)

Are the projects of most EA individuals truly independent in the sense of their EVs being essentially uncorrelated with each other? That would be surprising to me, given that many of those projects are conditional on positive evaluation from a small number of funders, and many arose out of the same meta (so would be expected to have meaningful correlations with other active projects).

So my prediction is that most EA stuff falls into one of your two caveats. What I don't have a good sense of is how correlated the average EA work is, and thus the degree of caution / risk aversion implied by the caveats.

NickLaing @ 2025-08-01T15:41 (+4)

In theory I agree with this, but in practice I personally think "risk-neutral EV maximisation" can lead to bets which are far worse than they appear to be. This is because I think we often massively overrate the EV of "hits based approaches".

Generally I think the lower probability of a bet, the higher chance there is of that EV being wrong and lower than stated. I'm keen to see evidence of high risk bets turning out well once in a while before I'm convinced that they really do have the claimed EVs...

Brad West🔸 @ 2025-08-01T15:48 (+6)

Then your issue is with systemically flawed reasoning overestimating the likelihood of low-probability events. The solution for that would be to adjust by some factor that adjusts for this systemic epistemic bias, and then proceed with risk-neutral EV maximization (again, with the caveats that I had mentioned in my initial comment).

abrahamrowe @ 2025-08-02T13:48 (+2)

I think this is true as a response in certain cases, but many philanthropic interventions probably aren't tried enough times to get the sample size and lots of communities are small. It's pretty easy to imagine a situation like:

  • You and a handful of other people make some positive EV bets.
  • The median outcome from doing this is the world is worse, and all of the attempts at these bets end up neutral or negative.
  • The positive EV is never realized and the world is worse on average, despite both the individuals and the ecosystem being +EV.

It seems like this response would imply you should only do EV maximization if your movement is large (or that its impact is reliably predictable if the movement is large).
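A quick simulation of the small-community case (hypothetical payoffs: each bet is 90% likely to cost 1 and 10% likely to gain 20, so EV = +1.1 per bet):

```python
import random
import statistics

random.seed(1)

def bet():
    # 10% chance of +20, 90% chance of -1: EV = 0.1*20 - 0.9*1 = +1.1.
    return 20.0 if random.random() < 0.1 else -1.0

# 10,000 simulated "communities" of just 5 actors each.
totals = [sum(bet() for _ in range(5)) for _ in range(10_000)]
print(statistics.mean(totals) > 0)    # True: positive in expectation
print(statistics.median(totals) < 0)  # True: but the median community
                                      # leaves the world worse off
```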

But I do think this is a fair point overall — though you could imagine a large system of interventions with the same features I describe that would have the same issues as a whole.

abrahamrowe @ 2021-02-24T15:18 (+12)

Following up with some thoughts I originally had in response to saulius' List of ways in which cost-effectiveness estimates can be misleading. Not sure if there has been other write ups of this effect.

If we incentivize charities to act as cost-effectively as possible, and if they operate in coordination with other groups working on the same issue, it seems like we might expect in many cases that what's best for an individual charity's cost-effectiveness is bad for the overall cost-effectiveness of the space. This issue is compounded if multiple EA / highly cost-effective charities are operating in the same space.

The issue is something like: charities have relative strengths and weaknesses, and by coordinating to take advantage of those, individual charities might lose out on measured cost-effectiveness, but overall make their collective work more effective.

I think this occasionally actively happens with animal welfare campaigns, where single donors are giving to several charities doing the same thing.

An example using chicken welfare campaigns in the animal welfare space:

Charity A has 100 good volunteers in City 1, where Company X is headquartered. To run a successful campaign against them would cost Charity A $1000, and Company X uses 10M chickens. Alternatively, Charity A could run a campaign against Company Y in a different city where they have fewer volunteers for $1500 (more expensive because fewer volunteers).

Charity B has 5 good volunteers in City 1, but thinks they could secure a commitment from Company Y in City 2, where they have more volunteers, for $1000. Company Y uses 1M chickens. Or, by spending more money, Charity B could secure a commitment from Company X for $1500.

Charities A and B are coordinating, and agree that Companies X and Y committing will put pressure on a major target (Company Z), and want to figure out how to effectively campaign.

They consider three strategies:

Strategy 1: They both campaign against both targets, at half the cost it would be for them to campaign on their own, and a charity evaluator views the campaign as split evenly between them, since they put in equal effort. The cost-effectiveness of each charity is (5M + 0.5M chickens) / ($500 + $750) = 4,400 chickens / dollar, and $2,500 total has been spent.

Strategy 2: Charity A targets Company X, and Charity B targets Company Y. Charity A's cost-effectiveness is 10,000 chickens / dollar, and Charity B's is 1,000 chickens / dollar, with $2,000 total spent.

Strategy 3: Charity A targets Company Y, Charity B targets Company X. Charity A: 667 chickens / dollar, Charity B: 6,667 chickens / dollar. $3,000 total spent across all charities.
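The arithmetic for the three strategies checks out in a few lines:

```python
# Chickens used by each company, and each charity's solo campaign costs.
chickens = {"X": 10_000_000, "Y": 1_000_000}
cost = {("A", "X"): 1000, ("A", "Y"): 1500,
        ("B", "X"): 1500, ("B", "Y"): 1000}

# Strategy 1: both charities split both campaigns (half cost, half credit).
s1_cost = {c: sum(cost[(c, t)] for t in "XY") / 2 for c in "AB"}
s1_ce = (sum(chickens.values()) / 2) / s1_cost["A"]
print(s1_ce, sum(s1_cost.values()))  # 4400.0 chickens/$, $2500 total

# Strategy 2: each charity plays to its strength (A -> X, B -> Y).
print(chickens["X"] / cost[("A", "X")],      # 10000.0 chickens/$
      chickens["Y"] / cost[("B", "Y")],      # 1000.0 chickens/$
      cost[("A", "X")] + cost[("B", "Y")])   # $2000 total

# Strategy 3: each charity fights where it's weakest (A -> Y, B -> X).
print(chickens["Y"] / cost[("A", "Y")],      # ~667 chickens/$
      chickens["X"] / cost[("B", "X")],      # ~6667 chickens/$
      cost[("A", "Y")] + cost[("B", "X")])   # $3000 total
```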

These charities want to be as effective as possible — clearly, the charities should choose Strategy 2, because the least money will be spent overall (and both charities will spend less for the same outcome).

But if a charity evaluator is fairly influential, and looking at each charity individually, Charity B might push hard for less ideal Strategies 1 or 3, because those make its cost-effectiveness look much better. Strategy 2 is clearly the right choice for Charity B to make, but if they do, an evaluation of their cost-effectiveness will look much worse.

I guess a simple way of putting this is: if multiple charities are working on the same issue, and have different strengths relevant at different times, it seems likely that they often ought to make decisions that look bad for their own cost-effectiveness ratings, but are the best thing to do / right decision to make.

I can think of a few examples where charities made less effective decisions explicitly due to reasoning about their own cost-effectiveness, and not thinking about coordination, but I'm not sure how prevalent this actually is as an issue. It mainly makes me a little worried about apples-to-apples comparisons of the cost-effectiveness of charities who do the same thing, and are known to coordinate with each other.

abrahamrowe @ 2024-12-08T04:21 (+11)

Equal Hands — 2 Month Update

Equal Hands is an experiment in democratizing effective giving. Donors simulate pooling their resources together, and voting how to distribute them across cause areas. All votes count equally, independent of someone's ability to give.

You can learn more about it here, and sign up to learn more or join here. If you sign up before December 16th, you can participate in our current round. As of December 7th, 2024 at 11:00pm Eastern time, 12 donors have pledged $2,915, meaning the marginal $25 donor will move ~$226 in expectation to their preferred cause areas.
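The ~$226 figure follows from equal voting weight over the pooled total (a quick check):

```python
pledged, donors, gift = 2915, 12, 25

# A marginal donor adds $25 to the pool and then controls an equal
# 1/(n+1) share of the votes over the enlarged pool.
moved = (pledged + gift) / (donors + 1)
print(round(moved))  # 226
```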

In Equal Hands’ first 2 months, 22 donors participated and collectively gave $7,495.01 democratically to impactful charity. Including pledges for its third month, those numbers will likely increase to at least 24 donors and $10,410.01.

Across the first two months, the gifts made by cause area and their pseudo-counterfactual effect (e.g. if people had given their own money in line with their voting, rather than following the democratic outcome) have been:

Interestingly, the primary impact has been money being reallocated from animal welfare to global catastrophic risks. From the very little data that we have, this primarily appears to be because animal welfare-motivated donors are much more likely to pledge large amounts to democratic giving, while GCR-motivated donors are more likely to sign up (or are a larger population in general), but are more likely to give smaller amounts.

The total administrative time for me to operate Equal Hands has been around 45 minutes per month. I think it will remain below 1 hour per month with up to 100 donors, which is somewhat below what I expected when I started this project.

We’d love to see more people join! I think this project works best by having a larger number of donors, especially people interested in giving above the minimum of $25. If you want to learn more or sign up, you can do so here!