Animals in AI-transformed futures: can anything be done today?

By JoA🔸 @ 2026-01-09T17:17 (+19)

Context

Skip ahead if you’ve read my other post on transformative AI and animals. If you haven't, this section may answer some additional questions.

Confidence level: this was timecapped and may be rough in places. I put low confidence in my judgment on speculative issues. This is an offshoot of a project I did during the Summer 2025 FutureKind AI Fellowship.

Intent: It seems that transformative AI x Animals (henceforth TAIA) is bottlenecked by a lack of object-level interventions.[1] This is a list of interventions in transformative AI x Animals that could make for feasible pilots (by feasible, I mean something that could be done by 2 FTEs with a year of funding, like an Charity Entrepreneurship-incubated charity). This excludes most meta-work like running fellowships and organizing conferences.[2] Most interventions here have been suggested in pre-existing posts: this is crowdsources rather than brainstormed. 

Scope: These interventions don't have to appear positive to be on the list. Many of them are already being implemented, and they’re not all animal-focused. Focusing on TAIA means choosing interventions that are grounded in scenarios where AI is transformative and probably changes our intervention levers. I don’t cover interventions that already seem promising without transformative AI, like improving welfare in precision livestock farming. I also don’t cover artificial sentience, which I consider to be a separate issue.

Hedging on timelines: While I made this list with timelines to TAI of 7 years or less, I don't think that this changes much. You may be more excited about meta-work, but even then, staying meta for too long could be negative for the cause areaThis post has broad agnosticism on what sort of TAI trajectory is most plausible. Your judgment on this will influence what interventions you think are good.

Other caveats:

Interventions

The bullet under each description reflects my first-pass judgment of the intervention.

Judgment Calls on How Present Issues Can Influence TAI

My judgment: these interventions look somewhat reactive and the case for them doesn't depend on rigorous prioritization. However, they benefit from clearer feedback loops - though it's not certain that they'd pay off in AI-transformed futures.

Developing AI-Enabled Epistemic and Coordination Tools

This involves building or deploying AI tools designed to improve human decision-making, deliberation, and compromise. The theory of change is that better epistemic environments will allow humanity to surface and act on latent preferences for animal welfare.

Preventing, or Improving, Terraforming and Similar Processes

Advocating for space governance norms that keep space pristine from interactions with earth-originating non-human biological life.

Seizing Opportunities in Current Priority Areas

Whether it is in farmed animal welfare, alternative proteins, or wild animal welfare, there has been much discussion on how TAI could offer great opportunities for leverage, if seized appropriately. Interventions there could either mean: deprioritizing current programs and pivoting to new ToCs that bet on specific opportunities coming from AI advances (eg R&D having much faster feedback loops in alt proteins); or having one researcher per area exploring a large range of potential interventions.

Judgment Calls on How TAI Trajectories Affect Animals

My judgment: We're far from reaching a consensus on what AI safety interventions most effectively achieve their goals, or even what the priority goals are (especially from the perspective of impartial welfare). Thus, crucial considerations will be subjective. 

Improving the Perception of Animal Welfare in Influential Spaces

Building credibility within tech circles through high-signal engagements, and giving a professional, transpartisan image of animal welfare. Lewis Bollard’s excellent appearance on the Dwarkesh podcast was a highlight, but there are other, smaller-scale examples of this.

AI Pause / Anti-AI advocacy

If no outcome with transformative AI (save for extinction) avoids the worst harms to animals, a logical focus could be pausing AI development, banning superintelligence, or developing strict red lines.

Support Broad AI Safety Efforts

This approach involves contributing to mainstream AI safety work to ensure TAI “goes well”. A lot of animal advocates have done this already.

Preventing Value Lock-In and Takeovers

Some assume that the status quo is so bad that a lock-in of current (or worse) values is the most important thing to prevent.

Judgment Call on AI values

My judgment: We don't yet know much of anything about AI values, especially in different worlds. Even holding certain broad positive attitudes toward “animals” may not be robust enough to bring about the best consequences: even TAI may not be able to cover all animals’ interests, and partial consideration could be awful for the majority of animals. However, at least the ToCs of the interventions are somewhat agnostic on whether TAI disempowers humans (while retaining certain values which humans built into it - which seems strange, especially in the long run); or whether humans remain in control.

Integrating Animal Welfare into AI Governance

Lobbying for the (minimal) inclusion of sentient nonhuman interests within intergovernmental frameworks, codes of practice, etc. This has already been done to an extent in the context of the EU AI Code of Practice. This could probably be done by pre-existing orgs.

Corporate Commitments & Benchmarks

Encouraging AI labs to include "sentient beings" in training specifications, mission statements, and "constitutions" to explicitly protect animal interests. Organizations like Sentient Futures are already exploring this, and some researchers appear receptive to these framing shifts. Progress could be made through scaling these orgs (or through ambitious moves by those already in the field), or through founding a new, more focused, 1 or 2-person org.

Targeted Training Data

This intervention is being pursued by Compassion in Machine Learning (CaML). They generate synthetic pretraining data oriented toward consideration for animals (and sometimes digital minds).

Animal-Inclusive Alignment Research

Starting a technical (or even foundational?) research organization to identify potential animal-harming tendencies in AIs. Something that is close to this is what CaML is doing, but it seems that many interventions and ToCs in that style could be worth pursuing. Foundational research may be harder with fear of short timelines, but there may be interest in thorough research on whether alignment to “all sentient beings” can ever be robust to the first objections that have been raised against it.

Interventions That I Find Harder to Defend

My judgment: I currently don't see why one would pursue these if they want to improve outcomes for animals.

Crazy Train Moral Circle Expansion

Some advocates may want to seize unprecedented societal shifts to raise the salience of animal sentience. If AI actually enables communication with some non-human animals, this will almost inevitably be used for narrative-building. Some have suggested using empathy towards human-like LLMs for the same purpose.

A Patient Philanthropy Fund for TAIA

Getting funders who think TAIA may be important to pool resources in a fund in order to act when we have better indications of trajectories, and when some TAI scenarios can be falsified (inspired by Founders Pledge).

Sparking Public Discourse on AI-driven Harm to Animals

Tactics could include getting high-shock stories in the media (e.g., PLF abuses or autonomous vehicle accidents involving wildlife), and trying to draw the line from that to future larger-scale AI harms. The theory of change would probably be that if there was more pressure for AI to not harm animals, this would improve the development on the margin.

Searching for a "Cause X" within TAIxAnimals

A focused research effort to identify a single, high-leverage "priority issue" that could serve as a focal point for the movement’s limited resources. Could be done by one independent researchers or members of a think tank.

Shallow takeaways and next steps

Nothing in the list emerged as strongly compelling, but I'm generally on the skeptical side. Given that launching misguided interventions could be irreversibly costly for a new cause area, it makes sense to do some red-teaming, to preserve option value and not waste resources. 

I wouldn't be surprised if some readers of the post disagree with most of my judgment. If so, I'd love to hear:

  1. Why you think some interventions here are robustly positive
  2. If you think I’m missing some robust interventions
  3. If you have plans to implement such an intervention in the future

A few weeks ago, Aidan Kankyoku from the Sandcastles blog) did something similar, and even longer, in the context of ending factory farming.[5] Since his takeaways are different from mine, you may appreciate reading it.

Acknowledgements

The initial idea for a list of intervention ideas came from Max Taylor, which does not mean that he endorses my suggestions or conclusions. Kevin Xia gave quick feedback on an earlier version of this list. Many of these interventions were implicitly suggested in previous posts on AI x Animals. LLMs were used to polish my own notes on most interventions. 80% of it was rewritten, some sentences were kept the way ChatGPT rewrote them.

  1. ^

    I don't think that resolving bottlenecks in TAIA is a priority, but I assumed this could be a good nudge to those who think differently.

  2. ^

    I have been inconsistent with the criteria in order to include interventions which I know are being considered.

  3. ^

    The implementation / outcome robustness framework justifies this well in my view:
    Outcome robustness: Intervening on X in a given direction would be net-positive. 

    Example of failure: It’s unclear whether “human” space colonization (SC) is better than misaligned AI SC, given how many systematic ways these coarse categories could differ in various directions. (Especially when we consider exotic possibilities, like interactions with alien civilizations and acausal trade.)

    Implementation robustness: Our intervention on X would (i) change X in the intended direction, and (ii) avoid changing other variables in directions that might outweigh the intended positive effect.

    Example of failure: AI safety interventions might (i) increase the risk of human disempowerment by AI, e.g., by increasing AI companies’ complacency; or (ii) increase the risk of extinction by causes other than successful AI takeover, e.g., a great power war with novel WMDs.

  4. ^

    This doesn't mean that I endorse it, it's just a blind spot.

  5. ^

    "Crazy Train Moral Circle Expansion" was inspired by a section in his post.