AI and animal welfare: am I missing something?

By SiobhanBall @ 2026-04-25T09:43 (+39)

Animal welfare is what brought me to EA. I spent several years working for animal advocacy organisations, and the EA ideals to do with rigorous thinking about where effort makes the biggest difference was something I believed in fully. 

This post is me thinking aloud, not staking a firm position. I'd genuinely welcome pushback from people who know this space better than I do.

The framing of the problem is a bit odd to me 

The AI x animals argument, as I understand it: AI systems are making decisions that affect how we use animals. Those systems don't adequately represent animal welfare. If we can get welfare into the benchmarks/constitutions of AI labs, we can shift outcomes for animals at huge scale before they get locked in. Okay. 

But nobody is 'ignoring' animal welfare; they're just indifferent. AI systems are being built to do exactly what they were designed to do, which is to faithfully execute human preferences. And those are, in aggregate, to eat cheap meat, conduct research on living organisms when it's convenient, and prioritise cost and efficiency in agricultural supply chains. AI is reflecting the values of the humans. I don't think you can sneak those values in, unless there are specific opportunities to tweak things here and there before they get cemented.

Are there such opportunities? So far, I can't break this down to anything tangible. 'If we don't do anything, the systems will become entrenched and determine animal outcomes for decades to come' - what systems? What outcomes? Who, where? Can someone give me a few clear examples of tractable situations? 

Naming the situation isn't enough. I suspect there are situations that represent a theoretical fork in the road: The EU AI Act is a real regulatory framework being implemented now. Procurement systems at major food companies are being built. Agricultural AI platforms from John Deere and Bayer are being deployed. But how is any of this tractable - what are you hoping orgs/grantees/EA people can do about those things? 

It seems like the best we could hope for is to effect a thin layer of consideration on top of a reality (the collective attitudes of humans towards animals) that will bypass our efforts the moment it conflicts with something humans really do care about. Like profit. 

The point I've seen raised about Claude's constitution only containing one line regarding animal welfare, first of all seems arbitrary (it doesn't matter how many words there are; only what the words say), and secondly, merely reflects the real situation; our attitudes, as a whole. Focusing on 'making AI go better for animals' by convincing AI labs to suddenly care seems to be addressing the symptom rather than the cause. 

Where I think AI might help

What if, instead of trying to push concern for farmed or wild animals in to AI labs situationally, we can use AI to make traditional animal agriculture obsolete? Maybe that's where funding should go; at least, there are clearly definable ways that AI could catalyse that outcome:

Cell culture optimisation is an enormous search space; finding the exact combination of nutrients, temperatures, and growth factors that make cells proliferate efficiently. AI can model and run simulated experiments at a speed that wet lab trial and error cannot match.

Scaffold design, one of the hardest unsolved problems in cultivated meat, involves getting cells to grow in three-dimensional structures that actually resemble meat texture. AI can help design and test scaffolding materials and geometries by modelling cell behaviour computationally.

I've also read that AI can optimise production processes in ways that could drive costs down dramatically.

Each of these seem like a more robust theory of change to me than 'do something to prevent detrimental lock-in'.

What I'm uncertain about

The regulatory and scaling challenges that cultivated meat faces are large and I'm not qualified to assess them fully. I'm aware cultivated meat has had a difficult few years commercially and faces active political opposition in some markets. I don't know if those are terminal or temporary problems. 

It's also possible I'm underestimating the leverage of getting welfare into AI systems; maybe one well-placed benchmark really does shift how frontier labs think, and that ripples out in ways I'm not aware of.

If that's so, then can someone tell me, in plain English, what that looks like? I.e '[lab] is currently planning [this development]. If we do [this action], we can change it to [this outcome], which will mean [x number] of animals experience [less suffering, presumably].' 

At the very least, if there is a much clearer plan for impact for 'making AI go better for animals', then I think it ought to be communicated more concretely than what I've seen so far, to avoid people in the space either writing posts like this, or just kind of going along with the trend - even if they don't understand it. 

TLDR: I don't understand the tractability of 'make AI go better for animals' except for where it may speed up our path to cultivated meat adoption, which isn't mentioned in any of the 'make AI go better for animals' stuff that I've read. 


Constance Li @ 2026-04-25T19:46 (+28)

Hi Siobhan! Great question and I'm genuinely glad you raised it. I'm the Exec Director of Sentient Futures which is trying to build out the AIxAnimals field and your confusion probably represents a failure on our part to properly communicate our ideas and the field's progress. I can mostly speak to why we as an org have decided not to work directly on AI applications or cultivated meat. 

A couple (non exhaustive) points:

  1. There are two different concept you are talking about here: Applied AI vs Frontier AI
    1. The applied side: (i.e. accelerating cultivated, replacing animal testing, etc) takes a lot of industry expertise and has been tried and is still being tried outside of the EA space. There are a lot of capital, status quo, cultural, regulatory, etc barriers on this front that would take a lot of work and funding to overcome. With the constraints of talent and funding in the AIxAnimals space, I think this would be near impossible for this group of people to pivot there, especially since with the funding constraints of alt proteins, even far more experienced food scientists are finding themselves unemployed. We've tried to support these efforts by platforming them in our conferences and newsletters, but it would be spreading ourselves quite thin and we do not have the expertise in any of these areas.
      1. There are also other applied AI interventions that we've platformed such as inter-species communication, precision livestock farming, and AI to replace animal testing. We've known for a while that working on these interventions directly is not our comparative advantage.
    2. The frontier side: (i.e. getting AI models, developers, and regulations to consider animals more. I think this is more tractable and EAs are better positioned to do this at our current stage for a couple reasons:
      1. I don't think people are indifferent: AI folks at labs care more about animals than your average person. Many millions of dollars have already been donated to animal welfare efforts from AI lab employees (not public, but verified) with potentially hundreds of millions to come. Some are internal champions who really care, but just need a robust benchmark or red teaming case to pitch to their colleagues. Many are open to incorporating consideration for animals as long as it doesn't run up against some other tradeoff like capabilities, human safety, or substantial profit. The work of the AIxAnimals field is to find these win-win situations where possible and package them up in a way that the frontier labs are happy to make a change.
      2. Specifically the EU AI Act - a group of animal folks got together and participated in the working group consultations for the code of practice and were successful in getting non-human welfare added as a systemic risk consideration in the final language. It was a short window of time and if we had waited, then we would have missed this opportunity to set a policy precedent and build upon other things like the TFEU Article 13. This was just done with a small team of mostly volunteers or people taking time out of their normal jobs to draft arguments and attend meetings. Making sure that it is enforced is going to take a lot more work that we don't currently have capacity for. But the language precedent is there to build on and use as a foundation. There was essentially no opposition push back (unlike with cultivated or any other animal protection laws) because the animal industries are not paying attention to frontier AI regulation. So this is quite tractable, but I suspect it won't be for long.
  2. Cultivated meat has a lot of enemies and things working against it - I do think fighting cultivated meat bans is one of the more important things that we could be doing for animals (see the lessons from this wargame we played) in the long term. But it seems like you can squash one and then another ban comes up, and it's just this perpetual game of whack-a-mole. There are many people who are skilled at policy who I respect and donate to that are taking care of this. I do think they could use more help, but I don't think it is as neglected or as underfunded. I think they also have more mainstream appeal and accessibility. For example, many governments and schools do give funding for alt protein R&D. Important, tractable, not as neglected. The AIxAnimals path goes to one that's less traveled and where there are not a bunch of institutions already trying to protect the status quo. I don't think it would be wise for everyone to go all in on cultivated because there is no guarantee that even if it was technically successful, it would get cultural adoption. We need to diversify our interventions.
  3. Frontier and applied AI holds the promise of solving wild animal suffering. Most animal suffering is not man-made, and the current pace of research into reducing wild animal suffering is just a drop in the bucket compared to the scale of the problem. More advanced AI does hold the promise of being able to process large amounts of information and make complicated trade-off decisions and immediate actions. Right now, AI is awful at reasoning about wild animal welfare and defaults to ideals of conservation, biodiversity, and nonintervention. One concrete thing we've been working on is increasing the training data of animal perspectives and how AI intervention could be positive for animals. See this https://hyperstition.sentientfutures.ai/
  4. If you want a concrete BOTEC about the value of aligning AI to animal welfare, check out this one by @MichaelDickens : https://mdickens.me/2026/03/24/alignment-to-animals_BOTEC/
  5. For a broad overview of what AI means for animal advocacy (both applied and frontier), I'd highly recommend watching this interview with Lewis Bollard. 

This is what I have for now. I am not sure if that is a satisfying answer for you because it is just such a nascent field and we are trying to figure it out as we go, but I really appreciate you raising it. 

SiobhanBall @ 2026-04-27T09:53 (+2)

Thanks for engaging, Constance, this does answer some of my questions.

I take your point that the window for the EU AI Act was short and that missing it would have meant missing it entirely. Are there any other win-win situations have been found and packaged so far, beyond the EU AI Act?

The BOTEC is a careful piece of reasoning, but it's a model that compares two categories of work without specifying what either involves in practice. It doesn't answer my questions about what concretely gets done, and how animals stand to benefit. 

Your homepage describes Sentient Futures as existing to ‘identify the leverage points for weaving welfare into the core of future systems and cultivate the foundational community needed to activate them.’ A casual reader can’t translate that into what work is happening in real terms. 

You acknowledged the communication gap yourself; I wonder if that's where some effort could go, given you're trying to build a field that hinges on persuading people of the gravity and impact of changes that need to happen now, with time-sensitive urgency.

Constance Li @ 2026-04-28T02:23 (+9)

I wonder if that's where some effort could go, given you're trying to build a field that hinges on persuading people of the gravity and impact of changes that need to happen now, with time-sensitive urgency.

I'm not entirely certain that this is the case. There is always a tradeoff on what you choose to spend your time on. We aren't trying to convince the public or even other EAs or animal advocates as our main ToC. We are trying to grow more fertile ground for the people who are already interested and convinced to be able to have the connections, knowledge, and resources they need in order to pursue their own interventions. A lot of our comms happens inside our Slack community (which is intentionally high friction to join) and even then, most of it is in private channels. Here are the Slack stats from the last month: 9,388 Messages from members, 4%In public channels, 32% In private channels, 64% In direct messages.

And it seems like there are already some external communication pieces coming out from groups like Animal Ethics (see this short documentary) for animal advocates and @Max Taylor is writing a book for the public. 

We are pretty heads down on the operations of field building, which is much more manageable with a smaller, niche audience to start off with. Even then, we have more inbound interest than we can handle (we were only able to accept ~100/300 applicants to our fall AIxAnimals course and then had to work pretty hard to expand that to ~200/300 for this spring). I've experienced the mistake too many times where I've tried to advertise more widely or engage with people that have lower context on what I'm working on and it is really time consuming and has less leverage than say putting on a well organized conference (see this retro) that pulls in people who already have high context and then get ideas and connections to do things like the documentary or the book I mentioned above. 

It may not seem obvious that this is a good field building technique, but I think focusing in on an existing core community rather than external communications is much higher leverage given the current capacity constraints we have. It's like making sure the grass is mature before you invite a bunch of people to come play in your park. 

You and other folks are very welcome to come to our project incubator showcase next week! For 8 weeks, ~40 groups of mentors and mentees have been working on projects to push the frontier of tech and nonhuman welfare. It is a totally new program so it has taken up a lot of our time. 

Then we are rolling right into our 6th conference and our 1st in-person residency program. 

I think when inbounds start to dry up, that would be a signal we need to focus more on external comms, but right now it seems like there are others that are happy to take on that job and there is a lot of active work being done to narrow in on AIxAnimals interventions, making any official comms about reasoning that we put out at risk of getting stale pretty fast. That's a big part of why the website sounds so generic. 

That said, once things calm down and we have stable, repeating programs, I do think our website copy could use some refreshing. 

Are there any other win-win situations have been found and packaged so far, beyond the EU AI Act?

Nothing as concrete. Just other things that build the field like people having counterfactual value or having a bunch of conversations, some of which change people's minds. Another potential policy thing (which other EA's seem to hate because they think it is low impact and not very counterfactual) is trying to make sure animals are included in safety regulations for self driving vehicles

SiobhanBall @ 2026-04-28T10:22 (+2)

Ok, my updated understanding is that Sentient Futures is primarily focused on field-building, with a view to supporting interventions as they emerge over time.

One thing I’m still trying to get a better grip on is how this translates into impact on animals, and ideally, on what timescale. I’ve had similar questions when thinking about wild animal welfare more broadly: when does investment in building a field start to produce concrete outcomes that benefit animals? 

In the AI x animals case, it seems slightly more pressing because of the time-sensitivity point. I’m trying to reconcile the idea that 'this is urgent' with an approach that is upstream and preparatory.

I’m also conscious that most of what I’m seeing is the public-facing layer, and you mentioned that a lot of the communication is happening in more private or high-context settings; so it may be that the picture looks more abstract from the outside than it does from within. 

Thanks for the invite. I’ll join the showcase. Looking forward to seeing what interventions are being worked on. 

Martijn Klop 🔸 @ 2026-04-26T17:54 (+15)

"Are there such opportunities? So far, I can't break this down to anything tangible. 'If we don't do anything, the systems will become entrenched and determine animal outcomes for decades to come' - what systems? What outcomes? Who, where? Can someone give me a few clear examples of tractable situations? 

... Agricultural AI platforms from John Deere and Bayer are being deployed. But how is any of this tractable - what are you hoping orgs/grantees/EA people can do about those things?" 

... If that's so, then can someone tell me, in plain English, what that looks like? I.e '[lab] is currently planning [this development]. If we do [this action], we can change it to [this outcome], which will mean [x number] of animals experience [less suffering, presumably].' 


I just want to express that these are really sound questions and I hope you keep asking them. I believe they are, as of yet, unanswered. I'd love to see AIxAnimals field/thought leaders work out their theories of change more clearly and concretely. 

I think you correctly point out that there are gaps, and some of the framings I've seen seem to suggest that poultry farmers will just sit idly by as their animal-friendly AI COO's will make commercially suboptimal farm-management and procurement decisions. 

But, this is still a nacent field and I think questions like these will help it along!

SiobhanBall @ 2026-04-27T09:58 (+1)

Thanks, Martijn. Yes, that's another point of confusion; someone would need to pick up the tab for the added costs associated with better welfare. 

 

Constance Li @ 2026-04-28T01:47 (+2)

Specifically about AI in farming, I talk about that a bit in this presentation about the tradeoffs between efficiency, welfare, and the middle ground of health. Also I compare 2 different scenarios where it is either the pro-animal people or the industry that gets the first mover advantage and how those might play out.

Vasco Grilo🔸 @ 2026-04-26T19:05 (+6)

Hi Siobhan. Thanks for the post. I broadly agree with the sentiment you express in it.

Itsi Weinstock @ 2026-04-26T03:09 (+5)

Sounds like this is in reaction to yesterday's launch of the Falcon Fund. I'm very excited this fund is happening, and I am personally donating to.

I appreciate you thinking out loud! As Constance said, I think this points to a lack of communication around the concrete ideas. I think Sentient Futures has done a phenomenal job in bringing attention to this whole area, and to me the whole point of the Falcon Fund is to start turning all the thinking into concrete projects.

My background is in ML in the alternative protein space. So I am also very excited about the prospect of AI helping the development of cultivated meat. However, at this stage, I think the AI-cultivated space suffers even moreso from the exact problem you're describing where there is no well-defined problem. I wrote about this here if you'd like to discuss it more. I don't think AI can model and run simulated experiments in this space. One day it might be able to, but that is much more likely going to be downstream of other general advances in the AI-sciences, and won't be developed in the cultivated industry itself. We don't have the money, talent, or data throughput to make that happen. I actually think what we should be aiming for in the short-medium term is something that looks like speeding up wet lab trial and error, I think there are a lot of gains to be had from designing experiments better and deploying something that looks like self-driving labs as soon as possible. If you have specific ideas for ideas that we can try testing now, I would love to talk.

In general I agree that we need to put more thought into the AI-cultivated plan.

On AIxAnimals I think there are some pretty clear things we need to do, which are also described on the Falcon Fund page.

Having an observatory/watchdog organization is really critical. We currently have no view on how AI is being used in industries that will impact animals. That will include when they run factory farms, do scientific research in areas like precision livestock farming, and help with ecological management. We need to see what's going on in order to make decisions.

Having benchmarks is similarly important to actually have visibility into how these systems behave that are going to be very important. Then yes, hill-climbing on these numbers would be great.

I am sympathetic to your skepticism, especially that things that limit profitability will be hard. But I think we can afford to try here. Human values are not the same things as what the markets provide. Humans stated preferences are to like animals. Every ballot measure presented in any state in the US, whether led by Republicans or Democrats, has passed. It is one of the few bipartisan issues. I think a lot of the issues of animal welfare are due to people not being informed. In which case AI can be a powerful tool in aligning people's behavior with their values, which is something we should encourage.

It is going to be very hard to be concrete about what effect we see in the real world until AI systems become industrially relevant. Until then the primary job is to observe as much as we can and make sure we're moving things in the right direction. We can do this by having a benchmark, and seeing if the change in model spec language improves the benchmark.

MichaelDickens @ 2026-04-25T22:53 (+3)

AI systems are being built to do exactly what they were designed to do, which is to faithfully execute human preferences. And those are, in aggregate, to eat cheap meat, conduct research on living organisms when it's convenient, and prioritise cost and efficiency in agricultural supply chains. AI is reflecting the values of the humans. I don't think you can sneak those values in, unless there are specific opportunities to tweak things here and there before they get cemented.

This is true in a way. A deeper problem is that we don't know what values AI is reflecting. If you talk to an LLM, it will express some values, but it gives inconsistent answers depending on what questions you ask. We have no way of knowing whether its expressed values reflect its "true" values, if it has any. And we don't know how things will change as AI becomes increasingly powerful.

Mo Putera @ 2026-04-26T12:12 (+2)

What do you think of efforts like Saffron Huang et al 2025? It's from a year ago as of this week so I'd guess Anthropic to have developed this line of work further since and integrated it into other workstreams and such.

AI assistants can impart value judgments that shape people's decisions and worldviews, yet little is known empirically about what values these systems rely on in practice. To address this, we develop a bottom-up, privacy-preserving method to extract the values (normative considerations stated or demonstrated in model responses) that Claude 3 and 3.5 models exhibit in hundreds of thousands of real-world interactions. We empirically discover and taxonomize 3,307 AI values and study how they vary by context. We find that Claude expresses many practical and epistemic values, and typically supports prosocial human values while resisting values like "moral nihilism". While some values appear consistently across contexts (e.g. "transparency"), many are more specialized and context-dependent, reflecting the diversity of human interlocutors and their varied contexts. For example, "harm prevention" emerges when Claude resists users, "historical accuracy" when responding to queries about controversial events, "healthy boundaries" when asked for relationship advice, and "human agency" in technology ethics discussions. By providing the first large-scale empirical mapping of AI values in deployment, our work creates a foundation for more grounded evaluation and design of values in AI systems.

MichaelDickens @ 2026-04-26T19:58 (+2)
  1. An LLM's expressed values are not the same thing as its actual values, insofar as it has any.
  2. This paper doesn't really tell us anything about how ASI values will work. This paper is relevant to the immediate problem of making Claude commercially useful and non-destructive, but it's not relevant to ASI.
Mo Putera @ 2026-04-27T03:32 (+2)

What kind of empirical evidence would update you positively?

MichaelDickens @ 2026-04-27T03:50 (+4)

I'm not sure if there's any. My concerns are more theoretical than empirical, so it would take theoretical work to significantly change my mind.

Empirical work can provide a small amount of information, e.g. the fact that Claude expresses concern for ethics is a slight positive update relative to the world where Claude doesn't care about ethics, and I would feel slightly better about a Claude-based ASI than a ChatGPT-based ASI. But only slightly, because I don't think empirically observable behavior is that relevant to determining whether an AI is aligned. At least not using any empirical methods that we've devised so far.

For more on this, see e.g. A central AI alignment problem: capabilities generalization, and the sharp left turn, especially the part starting from 'How is the "capabilities generalize further than alignment" problem upstream of these problems?'

(ETA: On how various plans miss the hard bits of the alignment challenge is also kind of about this...I was looking around for some writings on why current empirical work isn't that relevant but it's hard to find anything that directly makes the argument)

From my POV we are deeply confused about what it would even mean to align ASI. If I could even describe what sort of theoretical work would be good evidence of progress on alignment, we'd be in a better place than we are currently.

A significant reason for my high P(doom) is that most safety researchers at AI companies are ignoring theoretical issues and pretending that alignment is purely an engineering problem. I don't think they are institutionally capable of solving alignment.

Mo Putera @ 2026-04-27T05:40 (+3)

By empirical evidence I meant anything empirical at all, including things like emergent misalignment and what might come out of Jacob Steinhardt's interpretability program and what Ryan Greenblatt says here and whatever the right value-analogue of Anthropic's functional emotions paper is (below) and so on, not just observable behavior. Maybe I'm conflating things or overloading "empirical", in which case my apologies.

image.png

Regarding the sharp left turn, Byrnes' opinionated review is the best argument for worrying about this that I'm aware of, but he isn't talking about today's LLMs and their descendants, which rules out your last paragraph's pointer to current work. Roger Dearnaley's intuition pump behind his take that the sharp left turn might not be as hopeless as it seems is resonant with me, but his description seems vibes-based so I can't tell if he's misunderstanding the sharp left turn. I do think Dearnaley's personal "full-stack" attempt at assessing alignment progress is the sort of answer I'd want to your question re: what sort of work would be good evidence, although my impression is you disagree for high-level generator reasons that would be ~intractable to resolve within the margins of EA forum comments...