Animal advocates should respond to transformative AI maybe arriving soon

By Jamie_Harris @ 2025-08-02T14:27 (+90)

AI is advancing incredibly fast. We might see AI systems that are better than most humans at many tasks within a few years. This would change things drastically for animals in factory farms, in the wild, and beyond… and therefore animal advocates’ strategies should change, too.

In this post, I argue:

  1. Based on recent trends in AI capabilities and advances in training techniques, truly transformative AI could arrive soon, e.g. by 2030.
  2. This matters for animal advocates (if you agree it could arrive soon), because transformative AI will change the game for animals—for better or worse.
  3. Animal advocates might reasonably:
    1. Optimise harder for immediate results (not results in e.g. 5+ years’ time)
    2. Predict how AI will change things, and try to make that go well for animals
    3. Try to increase the concern that AIs or their controllers show for animals
    4. Focus on building capacity to prepare for TAI
    5. Shift to AI welfare, to protect potential sentient AIs from suffering
    6. Shift towards all-inclusive AI safety

This is not something that animal advocates can afford to just ignore. You can change your own strategies and next steps in light of it.

This post is intended as a bit of a wake-up call. For more measured, sensible posts, see here and here instead.

Written in a personal capacity; I’m not speaking for the views of others at the organisations I work at. Initially prepared as an impromptu talk at the AI, Animals, & Digital Minds unconference. Thanks to Amber Ace for doing much of the writing. Thanks to Lizka Vaintrob, Engin Arıkan, Constance Li, Max Taylor, Neil Dullaghan, Kevin Xia, Lauren Mee, Renata Scarellis, James Ozden, Michael St Jules, and Ben West for feedback and comments on the draft. All mistakes are my own.

Transformative AI may arrive soon

'Transformative AI' (TAI) refers to AI that is so broadly skilled that its use would drastically alter global economic, political, and social structures, potentially far exceeding human-level intelligence at many (or most) valuable tasks.

We could have AI like this soon. I think this because:

Of course, there are many reasons that it might not pan out this way. We could hit a bottleneck or a plateau soon, policymakers might impose more onerous regulations, or states might sabotage each others’ AI infrastructure. But none of this is guaranteed. TAI soon is a real possibility.

If you want to ‘feel the AGI’ a bit more, and soak up some of the ‘shit, things are happening now’ vibes, I recommend Situational Awareness for the basic case and AI 2027 for a very concrete story of how this might unfold.[1]

This matters for animals and their advocates

I think a lot of animal advocates assume that AI issues are just for nerds, sci-fi fans, and SF tech bros. They’re not. AI could shape the lives and experiences of all future humans, animals, and other sentient beings—and this could happen soon.

For animals, it could mean, for instance:

I think animal advocates should take this seriously. Maybe some animal advocacy strategies that didn't make sense 5 years ago do make sense now, and vice versa. The gameboard has flipped, and something should change — at least insofar as you agree that TAI may arrive soon (which I’ll assume you do for the rest of this post).

How might animal advocates respond to TAI soon?

We could:

Stick to business as usual

We could bury our heads in the sand, ignore the problem, and continue business as usual. I think this is the default approach even among animal advocates who have heard about AI x-risk concerns. Sometimes it comes from not fully grasping the situation, sometimes it comes from uncertainty about what to do, and sometimes it comes from cognitive dissonance—an impulse to deny that the problem matters at all, to defend and justify the actions you’ve already planned to take.

This reaction is understandable given how complicated, confusing and huge this all seems. It is also quite clearly the wrong answer. In this post I’m mostly leaving it as an open question which approach is best, but the ‘head in the sand’ approach is the exception where I want to go out on a limb and challenge you… beg you… not to take this avenue.

Even if you think that we can’t influence the outcomes of TAI, your strategies should usually still change. (Caveats in this footnote.[3])

Let’s look at some better options.

Optimise harder for immediate results

If you reason that we either can't predict the trajectory of AI, or we can predict it but not influence it, then you should focus on minimising animal suffering with high confidence on very short timeframes.

Forget speculative legislative and institutional tactics that might take years to pay off and have uncertain effects; you want high-speed, high-confidence, evidence-based tactics, like corporate campaigns for immediate welfare reforms, or—even better perhaps—cooperative outreach to producers and retailers that helps producers to implement immediate changes in their supply chain. The Shrimp Welfare Project’s Humane Slaughter Initiative is a good example; they work with shrimp farmers to promote more humane slaughter methods.

More formally, you should “only focus on the effects the intervention will have before the paradigm shift [to TAI], and discount the post-shift impacts to ~0” when choosing interventions.

Be careful not to confuse this with “business as usual”, though; many corporate campaigns would still take 3–5 years to make a difference, for example, so you really do need to ask which interventions will be fast enough to matter.

This moves in almost the opposite direction (less future-focused) to the other options in this post. It’s less ambitious, since it has lower chances of a massive payoff for animals.[4] But it’s better than working on something that never pays off at all, and I want to emphasise that I think it’s a valid, plausible response to short TAI timelines.

Or, we could plan ahead. We could instead:

Predict how AI will change things, and try to make that go well for animals

While we’re not certain how TAI will pan out, we can make educated guesses. We can then backchain from those guesses towards actions that we can take now, to capitalise on TAI’s benefits or to mitigate its harms.

For example, take cultivated meat. It seems likely that TAI—or even improvements in not-yet-transformative AI—could accelerate cultivated meat research, potentially giving us cultivated meat that is (significantly) cheaper than conventional animal products within the next few years.

But there’s a risk that we won’t be able to make the best use of this technology if (for example) animal agriculture corporations lobby to ban cultivated meat, or if people refuse to even try it because of some public health scandal. Benjamin Hilton, in this profile on factory farming, suggests:

You might work on finding ways to preemptively reduce barriers to the uptake of cultivated meat, such as finding ways to ensure cultivated meat adheres to religious dietary restrictions, or preventing a possible EU-wide ban on cultivated meat (which, if passed, could last decades or more).[5]

A shift from campaigns focused on diet change or improving animals’ welfare to campaigns focused on increasing acceptance of cultivated meat — either publicly, or targeting policymakers — seems feasible to me for animal advocacy organisations.[6]

Or, consider space colonisation. This may be the first step towards a vast, prosperous, and extinction-resilient population. But if humans expand to other planets, there’s a risk that we’ll also import human cruelty towards animals.[7] For example, in 2022 the ‘Nuggets in Mars’ programme in North Carolina aimed to ‘equip teachers to be able to educate their students about the future of [animal] agriculture in space’[8], and there has been research and active efforts to launch aquaculture in space. This would be a devastating misstep which animal advocates might work to prevent now, for example by advocating for legislation against setting up factory farms on other planets.

Focus on building capacity to prepare for TAI

The authors of this post recommend:

These all sound reasonable to me!

I’d also encourage upskilling and education on these topics, as individuals and within organisations. There are various current efforts to build expertise in making use of AI, such as OpenPaws, AI Impact Hub, and various individuals. I expect these efforts to be useful, but they are often focused on improving efficiency along advocates’ pre-existing theories of change; I think the theories of change themselves need to shift, and I don’t think that these efforts are sufficient for adjusting to the truly transformative potential of AI (hence the 6 strategies I outline in this post!). The BlueDot Impact courses are closer to what I have in mind, with a focus on TAI.

Another idea would be to just focus harder on building decent financial reserves to give you flexibility to act decisively at pivotal moments, e.g. to worry less about the views of your funders.

Try to increase the concern that AIs or their controllers show for animals

The values and personalities of AIs might shape the future for animals. So we could also directly engage with AI development to try to make sure that AI intrinsically cares about animal welfare (just as it is now trained to be helpful and harmless towards humans). This could involve creating synthetic training data to nudge AI towards pro-animal attitudes (an approach taken by Compassion in Machine Learning), integrating animal welfare principles into AI “constitutions”, or creating and advocating for model evaluations that test for animal-sympathetic traits (as Sentient Futures are pursuing).

There are also more people-focused efforts that could help ensure animals are represented in AI development, ranging from simply befriending researchers at frontier AI companies through to creating career programmes to help animal advocates get into AI research, governance, and policy. We will want to increase pro-animal values in places of influence, and we should expect the relevant influence to concentrate in and around AI systems.

Note, though, that the ideas in this section carry significant risks if conducted poorly. For example, we might unintentionally alienate the AI researchers we most need to collaborate with, or make LLMs so excessively pro-animal that it causes public backlash (comparable to the pushback against ‘woke’ AIs).

Shift to AI welfare, to protect potential sentient AIs from suffering

AI could soon become sentient, vastly outnumber biological animals, and suffer on a large scale. You may be able to do more good by shifting your focus from helping animals to helping sentient AIs.

Since work in this space is in its early stages and could easily backfire, I think it’s important that efforts here are careful, measured, and credible. So I’m a big fan of the research-first comms strategy of Eleos AI and New York University’s Center for Mind, Ethics, and Policy. In principle though, I expect animal advocates could contribute to communications and advocacy focused on policymakers, AI companies, or the public — perhaps in collaboration with researchers or AI-focused think tanks — or to relevant research and capacity building.[9] There may be funding available for credible new initiatives in this space.

Shift towards all-inclusive AI safety

The effects of TAI could be extremely far-reaching, and lead to many crazy-seeming scenarios. All sentient life on Earth could be wiped out, or it might spread across the stars. Factory farming might be displaced by vastly improved animal product alternatives, or become even more efficient and widespread. Sentient AIs may vastly outnumber (and morally outweigh) animals, for better or for worse. We might create digital hells or transhumanist utopias.

These possibilities dwarf the current evils of factory farming, so a strong option is to drop the explicit focus on animals and shift to making TAI go well for sentient life in general.[10] 

While the average animal advocate might not have the skills to work on technical AI safety, some animal advocacy skills are transferable to AI policy or governance work. For example, people who have worked on corporate animal welfare campaigns might work on similar campaigns in the AI space. There are emerging grassroots advocacy movements like Encode and PauseAI. And of course many skills are transferable across cause areas and institutions, like marketing, operations, and management.

You can do something about this

AI is getting smarter, and we could be in a completely different reality sooner than you might think. TAI will affect everyone—humans and animals alike. It’s hard to predict the future, but the possibilities still have implications for what animal advocates do today. There are lots of good responses. Ignoring the problem doesn’t seem to be one of them.

We are not powerless; as individuals, we can make changes. I’ve been a vegan and animal advocate for over a decade, and have spent years of full-time paid work focused on farmed animals, but have shifted my career towards AI safety/governance, AI welfare, and broader principles-first EA movement building (to help with all of the above).

As an individual, you can:

  1. Evaluate the claims in this post, and consider if you agree with them.
  2. Discuss the claims and implications with animal advocates and aspiring effective altruists: on the EA Forum, at EA Globals, in the Sentient Futures (formerly AI for Animals) Slack, or within your organisation.
  3. Consider shifting your organisation’s strategy or your own advocacy to reflect the changes.
  4. Consider changing your own next career steps, volunteering, or donations to support organisations more aligned with your updated sense of the strategic priorities.
  1. ^

     See also strong writeups by 80,000 Hours and Forethought.

  2. ^

AI-enabled technologies might make it cheaper, easier, and more efficient to factory farm animals for food. AI could also accelerate the factory farming of animals for purposes other than food, making food production a smaller share of the problem, e.g. if animals are used for waste management, advanced medical treatments, combatting climate change, or terraforming.

  3. ^

Caveats: Firstly, if you’ve considered the arguments and evidence but disagree that there’s a significant chance of TAI in the next few years, then your “business as usual” strategies may still make sense under that view. Secondly, I think it could be reasonable to bet on having impact over longer timelines on comparative-advantage or portfolio-allocation grounds, e.g. if you or your org are just unusually well placed to have impact on longer timelines, relative to other animal advocates. But my guess is that this won’t be true for many animal advocates who agree that there’s a significant chance that TAI may arrive soon; being one of the relatively few who both understand the arguments and are willing to act on them arguably already gives you a comparative advantage for acting on them.

  4. ^

     I am personally least optimistic about this shift, out of the options. It seems plausibly negative to me if animal advocates shift in this direction, because it reduces the chances that factory farming is ~abolished by the time that TAI arrives. From a longtermist perspective, the effects of abolishing factory farming matter far more than reducing suffering in the next few years, and I expect those effects to be positive. Although of course this is a riskier approach, given possibly short AI timelines; your efforts might come to nothing. If you’re relatively confident in TAI arriving soon but pessimistic about our ability to forecast changes after TAI arrives, how we can influence those changes, or the all-things-considered effects of our actions, the “Optimise harder for immediate results” approach does seem most reasonable though.

  5. ^

     See also these learnings from Sentience Institute’s technology case studies.

  6. ^

     I’m not saying it will necessarily be easy — I recognise that existing staff, funders, and other stakeholders may have concerns and would need to be brought onboard with the pivot, too.

    Or maybe we can predict that animal advocacy interventions will become more cost-effective in the future due to increased wealth or more affordable alternative protein products, and therefore focus for now on capacity building and recruitment over immediate impact. (Thanks Kevin Xia and Benjamin Hilton for this argument.) I’m not sure I agree, though, since we may be in an especially influential time now for setting the trajectory after TAI.

  7. ^

     Arguably the risk is pretty small, and this isn’t a scenario I’m super worried about, mainly because I expect most future ‘humans’ to actually be ‘digital people’ and hence not in need of factory farming. Still, there are some reasons to think it might happen.

  8. ^

     Thanks to Alene for this example in her earlier Forum post.

  9. ^

     Animal advocacy organisations seriously exploring this should feel free to message me; I have additional unpublished thoughts on strategic priorities and how to mitigate backfire risks. I have done research on this topic and worked briefly on grantmaking in the area.

  10. ^

JoA🔸 @ 2025-08-02T19:16 (+8)

Thank you for this post! It's quite clear and illustrates all the different "reflexes" in the face of potential TAI development that I've observed in the movement. Since we can often jump to a mode of action and assume it's the correct path, I find it useful to get the opportunity to carefully read the assumptions and show all the possible responses. 

Right now, my decision parliament tries to accommodate "Optimise harder for immediate results" and "Focus on building capacity to prepare for TAI". Though it is frustrating to know that one of the ways of responding to AI developments you list here will be the "best" path for sentient beings... and that we can't actually be sure which one it is.

Karen Singleton @ 2025-08-03T22:08 (+4)

Thanks for this very clear wake-up call. I've been wrestling with similar questions. I think this presents measured and sensible approaches.

I fully agree with your "This matters for animals and their advocates" section. The idea that AI issues are just for tech bros is such a dangerous blind spot. The potential scenarios you outline (intensified factory farming, ecosystem lock-in, breakthrough technologies) really drive home how transformative this could be for animals specifically.

I also strongly agree with the "build capacity" approach, especially your point that theories of change themselves need to shift. I think you're right that many current AI-for-animals efforts focus on improving efficiency within existing frameworks rather than grappling with how fundamentally different the strategic landscape might become.

On AI welfare, the research-first approach you highlight makes sense, any efforts here need to be careful, measured, and credible given the risks of getting this wrong.

I'm less convinced that "optimise for immediate results" is the right response to uncertainty about AI trajectories. If we're truly in a transition period, the most important work might actually be the hardest to measure in the short term, like influencing the foundational assumptions being built into emerging systems right now.

The "predict and prepare" approach feels most promising to me, but I think we might be too focused on specific AI applications (like cultivated meat acceleration) we're likely heading into entirely new economic and governance systems where the basic rules about value, ownership, and moral consideration are being rewritten. The assumptions embedded during these transition periods could determine whether animals are treated as production units or moral patients for generations. Therefore the economic frameworks emerging alongside AI development could be just as consequential as the AI systems themselves.

Questions I'm asking myself, and feel free to posit any answers! Are animal advocates engaging enough with the economic/governance transitions happening alongside AI development or are we too narrowly focused on the technology itself? Are we thinking big enough about the window of opportunity during these transitions?

Thanks again for this piece, I think it'll make a good shareable read to help others in our organisations understand the questions we should be asking/ what we should be doing.

Jamie_Harris @ 2025-08-04T17:56 (+3)

Thanks Karen! Interested if you have specific things in mind for implications of the economic angle? I can certainly see it playing into some of the "Predict how AI will change things, and try to make that go well for animals" predictions, or leading to more of an emphasis on "Shift towards all-inclusive AI safety".

Karen Singleton @ 2025-08-07T00:25 (+1)

Great question! I'm thinking about how the economic disruptions from AI create opportunities to reshape the foundational rules before new systems crystallise.

For example, as AI automates more labour and potentially destabilises growth-oriented models, we might see experiments with post-growth economics, universal basic services or entirely new frameworks for measuring value. These transition moments are when assumptions about what "counts" economically become malleable, including whether animals are seen as production inputs or beings with inherent worth.

Right now, our economic systems have deeply embedded assumptions that treat animals as commodities, externalise ecological costs and prioritise efficiency over welfare. But during systemic transitions, these assumptions become visible and potentially changeable in ways they normally aren't.

I think this fits most naturally into your "predict and prepare" category, but with a focus on economic system design rather than just technological applications. Instead of just preparing for cheaper cultivated meat, we might also prepare for the governance frameworks that will determine how new economic models treat animals.

The policy levers might be things like: ensuring animal welfare is embedded in any new economic measurement systems, preventing harmful defaults from getting locked into emerging governance structures or influencing how post-growth economic experiments value different forms of life.

Does that distinction between technological applications and systemic economic design make sense? I suspect the latter might be more neglected right now.

I've been exploring some of these ideas in more depth [here].