AGI & Animals: Discussion Thread

By Toby Tremlett🔹 @ 2026-03-23T14:45 (+34)

This week, we are discussing the statement: “If AGI goes well for humans, it’ll go well for animals”. The announcement post, with a bit more info and a reading list, is here.

What is this thread for?

General discussions about and reactions to the debate statement. 

Some of the comments on this thread will be populated directly from the debate banner on the homepage — these will mostly be people explaining why they voted the way they did.

However, you’re also welcome to comment here directly with any considerations you'd like to share, or questions you'd like to ask.

How should I understand the debate statement?

Again, our statement is: “If AGI goes well for humans, it’ll go well for animals”

The statement will ultimately mean whatever people interpret it to mean. The key is to explain how you are interpreting the statement in the comment that you attach to your vote. However, I can share a few notes which might pre-empt your questions:

Message me or comment in the thread with me tagged if you have any questions.

Jim Buhler @ 2026-03-24T08:04 (+7)

I think there are plenty of crucial sign-flipping considerations pointing both ways (I'll publish a post on this today), and that our takes certainly fail to account for some of them, in ways that likely make these takes irrelevant. 

And even if someone's evaluation somehow does not omit a single crucial consideration, they have to make opaque judgment calls on how to weigh up the conflicting pieces of (theoretical and empirical) evidence. I see little reason to believe such judgment calls would do better than chance.

Clarification on what my "0% Agree" means: I confidently disagree that we should believe it'd go well for animals, but I don't think we should believe the opposite either. I think our cause prio should not rely on any assumption on this question.

Matrice Jacobine🔸🏳️‍⚧️ @ 2026-03-23T19:54 (+5)

It seems unfortunately plausible that despite technological progress toward alternatives to meat, humans have a revealed terminal preference for animal suffering, which means that, short of extinction, we are on a default trajectory toward astronomical suffering.

Mjreard @ 2026-03-23T16:27 (+5)

Seems like AGI will lead to ASI and ASI will show us more valuable ways to use all the land and matter that currently support animal suffering. The ways we use those probably won't involve animals or suffering at all.

MaxReith @ 2026-03-23T21:31 (+3)

 I think this depends on whether farmed or wild animal welfare matters more. I don't have an answer, so let's treat it as 50/50. 

  1. If wild animals matter more, what could happen?  On the upside, AGI might enable us to help wild animals.  On the downside, it might lead to humans creating biospheres on other planets, which would increase the suffering of wild animals by many orders of magnitude.
  2. If farmed animals matter more, the upside could be that AGI enables us to substitute farmed animals completely (cultivated meat, etc.). The downside could be that people get richer and want to eat more meat, or that AGI changes the production of farmed animals in a way that increases suffering. 

Again, I don't know whether the upside or downside in each scenario is more likely.  Let's say each is 50/50 again.  I think this makes 1) EV negative and 2) EV positive, with the aggregate being slightly EV negative.
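The two-branch calculation above can be made concrete with a quick sketch. All magnitudes below are hypothetical placeholders; only the 50/50 splits, the signs, and the "many orders of magnitude" asymmetry come from the reasoning in the comment:

```python
# Hypothetical expected-value sketch of the two 50/50 branches.
# Welfare magnitudes are made-up illustrative numbers, not claims.
p_wild_matters = 0.5   # wild vs. farmed animal welfare, treated as 50/50
p_upside = 0.5         # upside vs. downside within each branch, also 50/50

# Branch 1: wild animals matter more.
wild_upside = 1.0      # AGI lets us help wild animals (assumed scale)
wild_downside = -100.0 # off-world biospheres multiply wild suffering by orders of magnitude
ev_wild = p_upside * wild_upside + (1 - p_upside) * wild_downside

# Branch 2: farmed animals matter more.
farmed_upside = 1.0    # complete substitution of farmed animals (cultivated meat, etc.)
farmed_downside = -0.5 # richer people eat more meat (assumed smaller in scale)
ev_farmed = p_upside * farmed_upside + (1 - p_upside) * farmed_downside

ev_total = p_wild_matters * ev_wild + (1 - p_wild_matters) * ev_farmed
print(ev_wild, ev_farmed, ev_total)  # branch 1 negative, branch 2 positive, total negative
```

With any numbers respecting that asymmetry, branch 1 comes out EV negative, branch 2 EV positive, and the aggregate negative, matching the conclusion above.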

Jim Buhler @ 2026-03-24T07:34 (+2)

If farmed animals matter more, the upside could be that AGI enables us to substitute farmed animals completely (cultivated meat, etc.).

Nitpick, but it seems unfair to count this as an upside rather than the mere absence of a downside, since the relevant counterfactual scenario, in expectation (absent AI safety work), is a misaligned AI that takes over and probably ends animal farming as it kills or disempowers humans.

AI safety cannot take the credit for a potential future reduction or end of farmed animal suffering if it preserves humanity, without which animal farming would not exist to begin with.

Toby Tremlett🔹 @ 2026-03-24T10:19 (+2)

Reminder that you can kick off sub-debates within this discussion thread. Just highlight some text in a comment and then click 'Insert poll'.

This'd be especially useful as a way to find (and discuss) the cruxes that are driving the different views on this debate. 

Kevin Xia 🔸 @ 2026-03-24T10:18 (+2)

Very uncertain on this one, mainly a matter of "I just don't see why it would" and a strong default to "technological progress has largely been bad for animals."

I do think the "better" AI goes for humans (or broadly, the more "extreme" the outcome is), the more likely it is that factory farming would basically disappear incidentally.

However, I think a large range of possible futures where AI goes well for humans are (comparably) normal scenarios, in which I just don't have any strong reason to believe that they would go well for animals.

Aaron Bergman @ 2026-03-23T19:58 (+2)

Vibes, I have no idea, I hope someone convinces me with good takes

NickLaing @ 2026-03-23T19:24 (+2)

I think the answer to this question is too many branches down a tree of possible futures to meaningfully predict. What happens at multiple branch points could swing this either way. If I have time I'll share more about what I mean.

alene @ 2026-03-23T16:00 (+2)

The good news is that life on Earth has been going better and better for humans over the millennia. For instance, we have technologies that make it easy to grow tons and tons of food, so lots of people can eat as much as they want. We have cures for lots of previously deadly diseases, so lots of us humans can live a very long time. And lots of people live in countries that recognize their rights. We also have a robust international economy that makes it really easy for a large number of people to buy the goods and services they want—and for lots of other people to get paid producing those goods and services! 

The bad news is that none of this has translated to things going well for animals. :-( In fact, it has translated to the opposite. Things have been going worse and worse for animals over the millennia. For instance, factory farming, which causes a HUGE amount of suffering for animals, developed very recently in human history, and it developed as a byproduct of humans getting the things they want most (like a great economy, and the ability to produce food cheaply). So we have seen that humans getting more and more of what we want doesn't translate to animals getting what they need. Of course, humans do also want animals to be treated well, on some level! But humans' main goals are human-oriented goals. And so when we get more and more ability to achieve our goals, we put those human-oriented goals first, resulting in negative externalities for animals.

If AI goes well for humans, it'll go well for humans. It'll be aligned with what humans want. And that will mean it's aligned with prioritizing human interests over all others. Sure, it'll care about animals a little, the way humans care about animals a little. But it will continue to put human interests first. And that will continue to result in externalities for animals. 

The same way people harm animals now (e.g. for food, entertainment, fashion, science, etc.) may continue. And new ways to harm animals may develop that we never could have imagined before AI. For instance, people love having pet dogs. When their pet dogs die, people are sad. People may want to be able to upload their pet dog's brain to the cloud to hang out with the pet dog when their dog dies. But trying to develop this technology may be a lot of work. AI may do the work by uploading 100,000 dog brains, or 100,000 copies of the same dog brain, to the cloud, and running various tests to see what works best. Perhaps a lot of these dogs will suffer immensely due to some mistake AI made in an early draft or some feature AI failed to include. And perhaps the suffering will be made worse because the dogs don't have bodies and cannot even express their suffering without vocal cords or paws. Eventually, AI may work out the kinks before it rolls out the final keep-your-dead-pet-alive-as-an-app-on-your-phone product. But there's all that behind-the-scenes suffering in the meantime. Humans care about animals a little. But humans love to turn a blind eye to behind-the-scenes suffering, so humans won't be too upset about this situation.

Then maybe AI realizes humans would like an upgrade to their pet-on-your-phone product. And that means AI needs 100,000 more copies of dog brains to do more experiments. AI that is fully aligned with human interests would realize humans would like the upgrade more than humans would be bothered by the suffering inherent in creating the upgrade. So AI will create the upgrade. This is just an example to illustrate my point. But I think there are lots of ways animals can be caused to suffer that we can't even imagine right now.

What animals need is for AI to be aligned with animal interests, too—not just human interests.

Kestrel🔸 @ 2026-03-23T15:21 (+2)

Hi! There are no labels on the slider bar, so it's initially unclear which side is agree vs disagree.

Sarah Cheng 🔸 @ 2026-03-23T18:50 (+2)

Oh no, thanks so much for flagging this! Toby was on holiday today unfortunately, so I've just updated it.

NickLaing @ 2026-03-23T19:26 (+2)

Fair call disappearing after dropping the debate slider to avoid the upcoming bedlam...

PabloAMC 🔸 @ 2026-03-23T14:52 (+2)

AGI could, in principle, find solutions for the key problems that animals face, but I would argue the main issue is that it won't automatically enlighten humans.

JessMasterson @ 2026-03-24T09:52 (+1)

So far, much of technological development seems to have gone well for humans - for example, in developed nations, we have never had to do less hard manual labour, or had access to more information. That has not led to an improvement in the quality of non-human animal lives. In fact, we have seen exactly the opposite. AGI is likely to amplify this effect unless we make a significant conscious and coordinated effort to steer it in another direction.

Jens Nordmark @ 2026-03-24T08:09 (+1)

Slightly leaning toward the view that moral progress in that area would become so cheap that people would accept it.

Catherine Low🔸 @ 2026-03-24T00:54 (+1)

I'd really like it if AI resulted in amazing plant based or cultured meat, and that the general abundance coming from AI means that people can focus their thinking on morality, not just making their lives go okay. 

BUT, so far, new tech and improved economic conditions have caused farmed animal suffering to get worse.

So I have a big uncertainty, but lean disagree. 

Steven Rouk @ 2026-03-23T20:18 (+1)

I'm quite uncertain, but in general I don't think it's been the case that "if X technology goes well for humans, it'll go well for animals". I think in some key cases, it's been the exact opposite, actually—e.g., industrialization leading to the rise of factory farming and killing/causing suffering to many more animals.

However, I also think that AGI is going to be quite different from most technologies, at least in some ways (and definitely as it goes past AGI to ASI), and so I'm quite uncertain about how "going well for humans" might positively impact "going well for animals" in this specific case.

But I still see AGI as mostly being a technology developed by humans for human purposes, so it will be guided as such. And humans still predominantly use other animals as resources (for food, testing, raw materials, etc.). So, I think the default trajectory would probably be negative unless there is significant effort invested in helping AGI go well for nonhumans specifically.

shepardriley @ 2026-03-23T19:59 (+1)

No particular strong reason, this is my intuition but curious to see people's reasoned takes.

Hazo @ 2026-03-23T15:25 (+1)

A couple of different potential mechanisms could help farmed animals:

More abstractly, people generally care about welfare, so it will be one of the things that an aligned AGI optimizes for. However, the outcome won't be optimal for animals, because AGI won't be directly optimizing for their welfare. For example, most people don't think it's wrong to eat meat, and we might still not want to do things like beneficial vaccines or genetic edits.

For wild animals, it's less clear though!