Linkpost: AI does not absolve us of helping animals

By Seth Ariel Green 🔸 @ 2025-10-02T21:34 (+9)

This is a crosspost from my Substack, Regression to the Meat. My typical forum posts are written in a more dispassionate, research-y style, but I'm sharing this here because it touches on something that's been discussed in a few EA Forum posts previously.

Dwarkesh Patel began his August interview of Lewis Bollard with: “At some point we'll have AGI. How do you think about the problem you're trying to solve? Are you trying to make conditions more tolerable for the next 10 years until AI solves this problem for us?”

Lewis responds basically that better technology might make animal suffering worse if we use it to do “ever more intensive” farming, and also that even if AGI invents totally excellent meat alternatives, there will still be cultural and political barriers to their adoption, and we still need to do that work.

It’s a good answer, and it keeps the conversation flowing. My less diplomatic answer would probably have been to turn it around and hammer at the premise. Dwarkesh, what is your theory of the world where something we’ve been doing for as long as we’ve been on this planet, however you define that, will suddenly wrap up? Can you think of anything, ever, that went from everywhere to nowhere in ten years?[1]

For whatever reason the exchange has been nagging at my attention. There have also been a few EA Forum posts in a similar vein. Since other people seem to find the topic interesting, I'd like to explain why it occupies zero of my professional attention. (The short answer is that I expect AI to be sharply curtailed by risk-averse regulations in my lifetime.)

This post is not precisely about animals. It's about a theory of technological change and how societies adapt to it. I'll first sketch the trajectory I expect AI to take over the next 10-50 years; then explain why we'd still need to do the hard work of persuasion under that scenario; and finally argue that working to end factory farming is worthwhile even if I'm wrong, including in worlds where AI either completely solves the lab-grown meat problem or kills us all.

I expect AI to follow a trajectory like nuclear power’s

Nuclear power is a big deal. It’s about 70 years old. There are ~440 nuclear power plants on earth which collectively generate about 9% of global electricity. Ballpark, we’d need a few thousand plants to generate all global electricity — ChatGPT says 3100-3500 1GW plants — and about 6X that to produce all ‘final energy.’ 

It costs ~$3B to build a 1GW plant in China and about twice that in the US. I'm not claiming to be an expert in this area, but apparently the US's costs could fall to about $3.5 billion/GW if we relaxed constraints like the “as low as reasonably achievable” ionizing radiation standard. Replacing all fossil fuels with nuclear power would cost between $7T and $30T at baseline. If you add 10% on top of that for transmission/infrastructure costs and assume graft/corruption will eat another 20% — who even knows — you get a number that the world can afford. Especially if we treated nuclear technology advancements as a core civilizational goal and invested accordingly.[2]
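To make the arithmetic explicit, here's a minimal back-of-the-envelope sketch in Python using the figures above (the plant counts, per-plant costs, and overhead percentages are this post's own rough estimates, not authoritative data):

```python
# Rough check of the figures above; every input is an estimate from
# this post (or ChatGPT), not authoritative data.
plants = (3100, 3500)        # 1GW plants to cover all global electricity
cost_per_plant = (3e9, 6e9)  # ~$3B/GW in China, about twice that in the US

baseline_low = plants[0] * cost_per_plant[0]    # ~$9.3T
baseline_high = plants[1] * cost_per_plant[1]   # ~$21T

# One reading of the add-ons: +10% for transmission/infrastructure,
# then another 20% lost to graft/corruption.
overhead = 1.10 * 1.20

print(f"~${baseline_low * overhead / 1e12:.0f}T "
      f"to ~${baseline_high * overhead / 1e12:.0f}T")  # ~$12T to ~$28T
```

Either way you land in the same ballpark as the $7T-$30T baseline: a huge number, but one the world could afford.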

But we’re not doing that. There are currently about 70 nuclear power plants under construction. Zero of them are in the US. Germany is denuclearizing and experiencing periodic energy shortages. We collectively lost our appetite for nuclear power because a few prominent nuclear disasters killed a few hundred people over many decades. (Air pollution from fossil fuels is thought to kill about 5 million people a year.)

I expect AI to follow a similar path. I anticipate rapid progress in LLMs for both current use cases and new ones. (A college friend is working on putting researchers like myself out of business 😉.) And then I expect a few dozen or a few hundred people to die from AI-related mishaps or terrorism. Suppose a pilot sleeps on the job while an LLM-based assistant crashes the plane, or an autonomous truck crashes into a school/hospital, or a cult starts worshipping a chatbot and does doomsday stuff. Seems pretty plausible to me! At that point I expect the west’s fundamentally lawyerly culture to take the reins and for AI to be strictly curtailed. That’s what we do when things are promising and dangerous. We do not become more utilitarian when the stakes get higher. Fear eats the soul, for people and for countries.

I’m kind of a techno-optimist, and when this happens I’ll be sad. I think the turn away from nuclear power is one of our civilization’s great mistakes. If AI can radically transform material/organic sciences, I want to see that unleashed and society radically upended. But I’m not expecting it. I am a bit baffled that other people seem to. Has anything in your lifetime, or in your parents’ lifetime, been like that? 

Also, to clarify, nuclear power has been transformative. 9-10% of global electricity production is a lot of lightbulbs! But it’s not some civilization-altering thing. It just exists in tandem with other, older things, fueling our wants and needs. We could be aiming to fuel mass desalination to terraform the American west or the Sahara, which would sequester a few decades’ worth of carbon, open a huge new frontier for productive agriculture, and dramatically lower spatial pressures on biodiversity. But we’re not doing that, because we’re scared of what it would take. That’s who we are. We get a lot of utility from arguing about things, perhaps more than from solving them. This is, to me, a civilization-defining trait.

If I’m wrong, we’d still need to talk to people

To repeat something I said to Kenny Torella, persuasion is a beautiful thing. I’m not ready to give up on it. Let’s say AI-assisted labs make huge progress on lab-grown meat. First, in practical terms, ‘progress’ here means lowering the energy costs of production: we already have lab-grown meat in Oakland, Singapore, and Tel Aviv, but it’s expensive. Meat, by contrast, is cheap and available everywhere. If you think of an industrial chicken plant as a macroorganism that converts corn to tasty, protein-rich meat, then, per Lewis’s estimate, it takes about two calories of grain to produce one calorie of chicken, which is incredible.[3] Let’s say AI leads to breakthroughs that give lab-grown meat similar efficiency and therefore a similar price. Great! Now we’ll have a bunch of problems that are fundamentally about people, i.e. matters of persuasion.
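As a toy illustration of why that conversion ratio matters for price parity, here's a sketch of the feed-cost logic; the grain price per calorie is a made-up placeholder, purely for shape, not real data:

```python
# Toy model: the feed-cost floor of one output calorie scales with the
# conversion ratio. The grain price below is a made-up placeholder.
GRAIN_COST_PER_KCAL = 0.0001  # hypothetical $/kcal of feed grain

def feed_cost_floor(kcal_in_per_kcal_out: float) -> float:
    """Minimum feed cost per kcal of output, ignoring labor, capital, etc."""
    return kcal_in_per_kcal_out * GRAIN_COST_PER_KCAL

for label, ratio in [("chicken at Lewis's ~2:1", 2.0),
                     ("chicken at Gastfriend's ~4:1", 4.0)]:
    print(f"{label}: ${feed_cost_floor(ratio):.4f} per kcal")
```

If AI pushed a lab-grown process's inputs-per-output into that range, its cost floor would approach chicken's, at which point the remaining barriers are the people problems above.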

I see an obvious role for advocates and researchers. So does Lewis. (My colleague Jacob Peacock provides a nice overview of consumer attitudes towards plant-based meat in “Price-, taste-, and convenience-competitive plant-based meat analogues would not currently replace the majority of meat consumption: A narrative review.”)

A lot of AI scenarios are orthogonal to animal welfare issues

A friend once posited that AI doesn’t need to literally kill us all for it to be a big problem. His example was AI agents capturing like 20% of global electricity, and we just have to live with it, the way that Mexico has to live with parasitic, seemingly ineradicable cartels. That sure would suck! But I don’t see the implications for animal welfare one way or the other.

Or imagine, as Neal Stephenson does in Fall, that AI generates endless addictive slop, and the “Five companies/Running everything I see around me” continue to improve at beaming it directly to our eyeballs, and eventually most people just end up spending all day staring at nonsense on their AI goggles, human relations wither, we’re all sad, etc. Again, very bad! But unclear how this affects animals. Probably factory farming would just continue onwards. In which case, we’re back to where we were, which is needing to do the work. 

Suppose the worst (or best) happens

Personally I view the “we’re all going to die” or “we’re all going to live in utopia” scenarios as very unlikely.[4] But I might be wrong. So back to Dwarkesh’s problem: Let’s say that by 2035, it’s either all going to be over or we’ll have infinity lab-grown meat for $0. Suppose those were the only two possible outcomes. Why continue working on ending factory farming in the meantime?

Because factory farming is very, very bad. It is many holocausts’ worth of bad. Stopping it even a day sooner than it would otherwise end is good. I very much doubt that you have something else more important to work on. Maybe, like Aella, you “like throwing weird orgies” and you’re “like — well, we’re going to die. What’s a weirder, more intense, crazier orgy we can do? Just do it now.” That’s great, spend your evenings doing that! But I still think you can find time to work on solving something really bad in the morning.

Whether anything we do actually works is a separate problem. But we’re a lot more likely to find something that works if we’re actually trying. I much prefer that, in any world, to waiting for a deus ex machina.

  1. ^

     Some animal advocates would reach here for the comparison to slavery, whose legal status changed dramatically over the 19th century. To which I would say that tens of millions of people are slaves today, compared to about 12.5 million people enslaved in the Atlantic slave trade. You can ‘win’ the moral fight in some places and still be nowhere close to getting the job done.

  2. ^

     I know nothing about fusion but here is some evidence it’s happening.

  3. ^

Eric Gastfriend — who, it should be said, is smart and has pivoted to AI safety, which is some evidence in favor of its being worth doing — once said the ratio was more like 4 to 1, but either way, it’s way more efficient than a herd of cows or any extant lab-grown alternatives.

  4. ^

     My probability of any global catastrophe killing >50% of the human population in a single year over the next 200 years is probably 1%, which is very high! But I think the most likely culprit is war.


Ben_West🔸 @ 2025-10-03T17:02 (+8)

Thanks for writing this Seth! I agree it's possible that we will not see transformative effects from AI for a long time, if ever, and I think it's reasonable for people to make plans which only pay off on the assumption that this is true. More specifically: projects which pay off under an assumption of short timelines often have other downsides, such as being more speculative, which means that the expected value of the long timeline plans can end up being higher even after you discount them for only working on long timelines.[1]

That being said, I think your post is underestimating how transformative truly transformative AI would be. As I said in a reply to Lewis Bollard, who made a somewhat similar point:

If I'm assuming that we are in a world where all of the human labor at McDonald's has been automated away, I think that is a pretty weird world. As you note, even the existence of something like McDonald's (much less a specific corporate entity which feels bound by the agreements of current-day McDonald's) is speculative.

But even if we grant its existence: a ~40% egg price increase is currently enough that companies feel they have cover to abandon their cage-free pledges. Surely "the entire global order has been upended and the new corporate management is robots" is an even better excuse?

And even if we somehow hold McDonald's to their pledge, I find it hard to believe that a world where McDonald’s can be run without humans does not quickly lead to a world where something more profitable than battery cage farming can be found. And, as a result, the cage-free pledge is irrelevant because McDonald’s isn’t going to use cages anyway. (Of course, this new farming method may be even more cruel than battery cages, illustrating one of the downsides of trying to lock in a specific policy change before we understand what the future will be like.)

  1. ^

Although I would encourage people to actually try to estimate this, and to pressure-test the assumption that their work can't pay off on a shorter timeline.

Seth Ariel Green 🔸 @ 2025-10-07T14:50 (+2)

Hi Ben, I agree that there are a lot of intermediate weird outcomes that I don't consider, in large part because I see them as less likely than (I think) you do. I basically think society is going to keep chugging along as it is, in the same way that life with the internet is certainly different than life without it but we basically all still get up, go to work, seek love and community, etc.

However I don't think I'm underestimating how transformative AI would be in the section on why my work continues to make sense to me if we assume AI is going to kill us all or usher in utopia, which I think could be fairly described as transformative scenarios ;) 

If McDonald's becomes human-labor-free, I am not sure what effect that would have on cage-free campaigns. I could see it going many ways, or no ways. I still think persuading people that animals matter, and that they should give cruelty-free options a chance, is going to matter under basically every scenario I can think of, including that one.

SummaryBot @ 2025-10-03T15:48 (+2)

Executive summary: This exploratory essay argues that AI is unlikely to quickly end factory farming due to regulatory, cultural, and political barriers, and that regardless of AI’s trajectory—whether it stalls, transforms food production, or even ends civilization—we still need to actively pursue persuasion and advocacy to reduce animal suffering.

Key points:

  1. The author challenges the idea that AI will soon “solve” animal farming, arguing that technological revolutions rarely eliminate entrenched practices within a decade.
  2. AI is expected to follow a path similar to nuclear power: significant potential, but likely hampered by risk-averse regulation after inevitable accidents or abuses.
  3. Even if AI produces cheap, efficient lab-grown meat, societal adoption will hinge on persuasion—convincing regulators, consumers, and cultural groups to embrace it.
  4. Many AI futures (e.g. energy consumption, addictive media) may have little bearing on animal welfare, leaving factory farming to continue largely unchanged.
  5. In extreme scenarios—whether AI delivers utopia or catastrophe—reducing factory farming remains valuable, since ending suffering earlier is inherently worthwhile.
  6. The post frames animal advocacy as robust to AI uncertainty: persuasion and direct work against factory farming matter in all plausible futures.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.