Linkpost: AI does not absolve us of helping animals
By Seth Ariel Green @ 2025-10-02T21:34 (+9)
This is a crosspost from my Substack, Regression to the Meat. My typical forum posts are written in a more dispassionate, research-y style, but I'm sharing this here because it touches on something that's been discussed in a few EA forum posts previously.
Dwarkesh Patel began his August interview of Lewis Bollard with: "At some point we'll have AGI. How do you think about the problem you're trying to solve? Are you trying to make conditions more tolerable for the next 10 years until AI solves this problem for us?"
Lewis responds, basically, that better technology might make animal suffering worse if we use it to do "ever more intensive" farming, and that even if AGI invents totally excellent meat alternatives, there will still be cultural and political barriers to their adoption, and we will still need to do that work.
It's a good answer, and it keeps the conversation flowing. My less diplomatic answer would probably have been to turn it around and hammer at the premise. Dwarkesh, what is your theory of the world where something we've been doing for as long as we've been on this planet, however you define that, will suddenly wrap up? Can you think of anything, ever, that went from everywhere to nowhere in ten years?[1]
For whatever reason, the exchange has been nagging at my attention. There have also been a few EA forum posts in a similar vein. If other people find the topic interesting, I'd like to explain why it occupies zero of my professional attention. (The short answer is that I expect AI to be sharply curtailed by risk-averse regulations in my lifetime.)
This post is not precisely about animals. It's about a theory of technological change and how societies adapt to it. I'll first sketch the trajectory I expect AI to take over the next 10-50 years; then explain why we still need to do the hard work of persuasion under that scenario; and finally argue that working to end factory farming is worthwhile even if I'm wrong, including in worlds where AI either completely solves the lab-grown meat problem or kills us all.
I expect AI to follow a trajectory like nuclear power's
Nuclear power is a big deal. It's about 70 years old. There are ~440 nuclear power plants on earth, which collectively generate about 9% of global electricity. Ballpark, we'd need a few thousand plants to generate all global electricity (ChatGPT says 3,100-3,500 1GW plants) and about 6x that to produce all "final energy."
It costs ~$3B to build a 1GW plant in China and about twice that in the US. I'm not claiming to be an expert in this area, but apparently the US's costs could fall to about $3.5 billion/GW if we relaxed constraints like the "as low as reasonably achievable" ionizing radiation standard. Replacing all fossil fuels with nuclear power would cost between $7T and $30T at baseline. If you add 10% on top of that for transmission/infrastructure costs and assume graft/corruption will eat another 20% (who even knows), you get a number that the world can afford, especially if we treated nuclear technology advancements as a core civilizational goal and invested accordingly.[2]
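For concreteness, here is a minimal back-of-envelope sketch of that arithmetic in Python. It uses only the ballpark figures quoted above (the plant counts, per-plant costs, and the 10%/20% overheads); none of these are engineering estimates, and the exact $7-$30T range presumably folds in assumptions not spelled out here.

```python
# Back-of-envelope: cost to replace fossil-fuel electricity with nuclear,
# using only the ballpark figures quoted above. Illustrative, not rigorous.

scenarios = {
    "low (3,100 plants @ $3B each)": (3_100, 3e9),
    "high (3,500 plants @ $6B each)": (3_500, 6e9),
}

for label, (plants, cost_per_plant) in scenarios.items():
    base = plants * cost_per_plant
    # +10% for transmission/infrastructure, then +20% for graft/corruption
    loaded = base * 1.10 * 1.20
    print(f"{label}: base ${base / 1e12:.1f}T, loaded ${loaded / 1e12:.1f}T")

# Prints roughly $9.3T-$21.0T base ($12.3T-$27.7T loaded), consistent with
# the $7-$30T ballpark; multiply plant counts by ~6 to cover all "final energy".
```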
But weâre not doing that. There are currently about 70 nuclear power plants under construction. Zero of them are in the US. Germany is denuclearizing and experiencing periodic energy shortages. We collectively lost our appetite for nuclear power because a few prominent nuclear disasters killed a few hundred people over many decades. (Air pollution from fossil fuels is thought to kill about 5 million people a year).
I expect AI to follow a similar path. I anticipate rapid progress in LLMs for both current use cases and new ones. (A college friend is working on putting researchers like myself out of business.) And then I expect several dozen or several hundred people to die from AI-related mishaps or terrorism. Suppose a pilot sleeps on the job while an LLM-based assistant crashes the plane, or an autonomous truck crashes into a school or hospital, or a cult starts worshipping a chatbot and does doomsday stuff. Seems pretty plausible to me! At that point I expect the West's fundamentally lawyerly culture to take the reins and AI to be strictly curtailed. That's what we do when things are promising and dangerous. We do not become more utilitarian when the stakes get higher. Fear eats the soul, for people and for countries.
I'm kind of a techno-optimist, and when this happens I'll be sad. I think the turn away from nuclear power is one of our civilization's great mistakes. If AI can radically transform material/organic sciences, I want to see that unleashed and society radically upended. But I'm not expecting it. I am a bit baffled that other people seem to. Has anything in your lifetime, or in your parents' lifetime, been like that?
Also, to clarify, nuclear power has been transformative. 9-10% of global electricity production is a lot of lightbulbs! But it's not some civilization-altering thing. It just exists in tandem with other, older things, fueling our wants and needs. We could be aiming to fuel mass desalination to terraform the American West or the Sahara, which would sequester a few decades' worth of carbon, open a huge new frontier for productive agriculture, and dramatically lower spatial pressures on biodiversity. But we're not doing that, because we're scared of what it would take. That's who we are. We get a lot of utility from arguing about things, perhaps more than from solving them. This is, to me, a civilization-defining trait.
If I'm wrong, we'd still need to talk to people
To repeat something I said to Kenny Torella: persuasion is a beautiful thing, and I'm not ready to give up on it. Let's say AI-assisted labs make huge progress on lab-grown meat. First, in practical terms, "progress" here means lowering the energy costs of production, because we already have lab-grown meat in Oakland, Singapore, and Tel Aviv; it's just expensive. Meat, by contrast, is cheap and available everywhere. If you think of an industrial chicken plant as a macroorganism that converts corn to tasty, protein-rich meat, then, per Lewis's estimate, it takes about two calories of grain to produce one calorie of chicken, which is incredible.[3] Let's say AI leads to breakthroughs that give lab-grown meat similar efficiency and therefore a similar price. Great! Now we'll have a bunch of fundamentally people problems, i.e., matters of persuasion:
- Who will convince the FDA/EMA to permit it?
- Which restaurants will carry it?
- How will we get the MAHA movement to give it a chance given their general hesitance about highly manufactured/processed foods?
- Can we persuade Florida or Montana to permit its sale?
- Will the EU allow folks to market plant-based products with meaty labels?
I see an obvious role for advocates and researchers. So does Lewis. (My colleague Jacob Peacock provides a nice overview of consumer attitudes towards plant-based meat in "Price-, taste-, and convenience-competitive plant-based meat analogues would not currently replace the majority of meat consumption: A narrative review.")
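As an aside, here is a toy sketch of the efficiency-to-price logic above: input calories per output calorie set a floor on price. The 2:1 and 4:1 chicken ratios come from the post and footnote 3; the grain cost and the lab-grown ratio are hypothetical placeholders I've invented for illustration, not real data.

```python
# Toy model: input-calorie efficiency as a floor on price. The 2:1 and 4:1
# chicken ratios come from the post and footnote 3; the grain cost and the
# lab-grown ratio are hypothetical placeholders, not real data.

GRAIN_COST_PER_KCAL = 0.0002  # hypothetical: dollars per kcal of feed/input

def input_cost_per_kcal(conversion_ratio: float) -> float:
    """Feed/energy cost alone per kcal of output; ignores labor, capital, etc."""
    return conversion_ratio * GRAIN_COST_PER_KCAL

for name, ratio in [
    ("chicken, Lewis's estimate (2:1)", 2.0),
    ("chicken, Gastfriend's estimate (4:1)", 4.0),
    ("lab-grown today (20:1, hypothetical)", 20.0),
]:
    print(f"{name}: ${input_cost_per_kcal(ratio):.4f} per output kcal")

# The "AI breakthrough" scenario amounts to pushing lab-grown's ratio down
# into chicken's 2:1-4:1 range; at that point the remaining barriers are
# the people problems listed above.
```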
A lot of AI scenarios are orthogonal to animal welfare issues
A friend once posited that AI doesn't need to literally kill us all to be a big problem. His example was AI agents capturing something like 20% of global electricity, which we'd just have to live with, the way Mexico has to live with parasitic, seemingly ineradicable cartels. That sure would suck! But I don't see the implications for animal welfare one way or the other.
Or imagine, as Neal Stephenson does in Fall, that AI generates endless addictive slop, and the "Five companies/Running everything I see around me" continue to improve at beaming it directly to our eyeballs, and eventually most people just end up spending all day staring at nonsense on their AI goggles, human relations wither, we're all sad, etc. Again, very bad! But it's unclear how this affects animals. Probably factory farming would just continue onwards. In which case, we're back to where we were: needing to do the work.
Suppose the worst (or best) happens
Personally, I view the "we're all going to die" and "we're all going to live in utopia" scenarios as very unlikely.[4] But I might be wrong. So back to Dwarkesh's problem: let's say that by 2035, it's either all going to be over or we'll have infinite lab-grown meat for $0. Suppose those were the only two possible outcomes. Why continue working on ending factory farming in the meantime?
Because factory farming is very, very bad. It is many holocausts worth of bad. Stopping it even a day sooner than it would otherwise end is good. I very much doubt that you have something else more important to work on. Maybe, like Aella, you "like throwing weird orgies" and you're "like, well, we're going to die. What's a weirder, more intense, crazier orgy we can do? Just do it now." That's great; spend your evenings doing that! But I still think you can find time to work on solving something really bad in the morning.
Whether anything we do actually works is a separate problem. But we're a lot more likely to find something that works if we're actually trying. I much prefer that, in any world, to waiting for a deus ex machina.
- ^
Some animal advocates would reach here for the comparison to slavery, whose legal status changed dramatically over the 19th century. To which I would say that tens of millions of people are slaves today, compared to about 12.5 million people enslaved in the Atlantic slave trade. You can "win" the moral fight in some places and still be nowhere close to getting the job done.
- ^
I know nothing about fusion, but here is some evidence it's happening.
- ^
Eric Gastfriend (who, it should be said, is smart and has pivoted to AI safety, which is some evidence in favor of its being worth doing) once said the ratio was more like 4 to 1, but either way, it's way more efficient than a herd of cows or any extant lab-grown alternatives.
- ^
My probability of a global catastrophe killing >50% of the human population in a single year, over the next 200 years, is about 1%, which is very high! But I think the most likely culprit is war.
Ben_West @ 2025-10-03T17:02 (+8)
Thanks for writing this, Seth! I agree it's possible that we will not see transformative effects from AI for a long time, if ever, and I think it's reasonable for people to make plans which only pay off on the assumption that this is true. More specifically: projects which pay off under an assumption of short timelines often have other downsides, such as being more speculative, which means that the expected value of long-timeline plans can end up being higher even after you discount them for only working on long timelines.[1]
That being said, I think your post is underestimating how transformative truly transformative AI would be. As I said in a reply to Lewis Bollard who made a somewhat similar point:
If I'm assuming that we are in a world where all of the human labor at McDonald's has been automated away, I think that is a pretty weird world. As you note, even the existence of something like McDonald's (much less a specific corporate entity which feels bound by the agreements of current-day McDonald's) is speculative.
But even if we grant its existence: a ~40% egg price increase is currently enough that companies feel they have cover to abandon their cage-free pledges. Surely "the entire global order has been upended and the new corporate management is robots" is an even better excuse?
And even if we somehow hold McDonald's to their pledge, I find it hard to believe that a world where McDonald's can be run without humans does not quickly lead to a world where something more profitable than battery cage farming can be found. And, as a result, the cage-free pledge is irrelevant because McDonald's isn't going to use cages anyway. (Of course, this new farming method may be even more cruel than battery cages, illustrating one of the downsides of trying to lock in a specific policy change before we understand what the future will be like.)
- ^
Although I would encourage people to actually try to estimate this and pressure-test the assumption that there isn't a way for their work to pay off on a shorter timeline.
Seth Ariel Green @ 2025-10-07T14:50 (+2)
Hi Ben, I agree that there are a lot of intermediate weird outcomes that I don't consider, in large part because I see them as less likely than (I think) you do. I basically think society is going to keep chugging along as it is, in the same way that life with the internet is certainly different than life without it but we basically all still get up, go to work, seek love and community, etc.
However I don't think I'm underestimating how transformative AI would be in the section on why my work continues to make sense to me if we assume AI is going to kill us all or usher in utopia, which I think could be fairly described as transformative scenarios ;)
If McDonald's becomes human-labor-free, I am not sure what effect that would have on cage-free campaigns. I could see it going many ways, or no way at all. I still think that persuading people that animals matter, and that they should give cruelty-free options a chance, is going to matter under basically every scenario I can think of, including that one.
SummaryBot @ 2025-10-03T15:48 (+2)
Executive summary: This exploratory essay argues that AI is unlikely to quickly end factory farming due to regulatory, cultural, and political barriers, and that regardless of AI's trajectory (whether it stalls, transforms food production, or even ends civilization) we still need to actively pursue persuasion and advocacy to reduce animal suffering.
Key points:
- The author challenges the idea that AI will soon "solve" animal farming, arguing that technological revolutions rarely eliminate entrenched practices within a decade.
- AI is expected to follow a path similar to nuclear power: significant potential, but likely hampered by risk-averse regulation after inevitable accidents or abuses.
- Even if AI produces cheap, efficient lab-grown meat, societal adoption will hinge on persuasion: convincing regulators, consumers, and cultural groups to embrace it.
- Many AI futures (e.g. energy consumption, addictive media) may have little bearing on animal welfare, leaving factory farming to continue largely unchanged.
- In extreme scenarios, whether AI delivers utopia or catastrophe, reducing factory farming remains valuable, since ending suffering earlier is inherently worthwhile.
- The post frames animal advocacy as robust to AI uncertainty: persuasion and direct work against factory farming matter in all plausible futures.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.