Reflections on AI Safety vs. AI x Animals without a clear conclusion

By Kevin Xia 🔸 @ 2025-10-17T15:56 (+13)

Or: What's your p(things will be kind of alright)?

Scrappy draft for EA Forum Amnesty Week (huge thanks for hosting again!), so please read it in that spirit.

Assessing AI futures from an animal perspective could give different results, and therefore different priorities

I recently read through these two AI pathways about what "desirable, realistic futures with AI could look like." As the project itself states, "AI risks and timelines get plenty of attention, for good reason" - and I agree: if things will generally be kind of alright, we don't need to pay much attention to those worlds; our focus should very much be on the extreme ends, say catastrophic risks. However, coming at this from an animal-focused perspective, in a lot of these "things will be kind of alright" scenarios, things would be very much not alright for non-human animals (think this Tool AI world, but with systems further entrenching factory farming and making it more efficient at the cost of the animals' welfare). We would expect these worlds, then, not to get much discussion in conventional AI safety regardless of their likelihood, whereas whether the animal movement should primarily discuss these "kind of alright for humans but not for animals" scenarios depends precisely on their likelihood.

P(doom) (and probably p(utopia)?) values aren't all that high

My general sense is that discussions around AI x- or s-risks don't usually claim that these risks are extremely likely to occur in absolute terms, but rather that they are far too likely relative to the small amount of resources spent on preventing them; e.g., even if we take these 1-5% numbers from here, the space is still heavily underresourced. This is an important distinction, because if p(doom) is only at 5% and, say, p(utopia) (that is, a world that is great for everyone, likely including non-human sentient beings) is also at 5%, then 90% of possible worlds are plausibly (and arguably should be!) not that important to conventional AI safety discourse. P(doom for animals), then, may still be very high - much higher than the overall p(doom).
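To make the arithmetic explicit - a minimal sketch, assuming doom, utopia, and the remaining "middle" worlds are mutually exclusive and exhaustive, using the illustrative 5% figures above rather than anyone's actual estimates, and writing p(middle) for the residual mass just for this sketch:

$$p(\text{middle}) = 1 - p(\text{doom}) - p(\text{utopia}) = 1 - 0.05 - 0.05 = 0.90$$

Even if the two extreme values were a few points higher or lower, most of the probability mass would still sit in these middle worlds - the very worlds that conventional AI safety, on this framing, deprioritizes.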

What I am curious about

If, then, the remaining 90% of worlds are beneficial for humans but not for animals, people concerned about both AI and animals might want to focus primarily on those 90%. And we'd expect the AI x Animals movement to evolve very differently from the conventional AI safety movement,[1] given our focus on very different worlds (for example, maybe alignment isn't that big of a deal for animals, or the Technical AIS vs. AI Governance vs. other stuff split should look very different from conventional AIS work), suggesting that this cause prioritization may make a big difference in the interventions and careers we pursue.

I would like to get a better sense of people's p(things will be kind of alright),[2] and how you think animals will be affected in these worlds. More precisely, I am interested in something like "how likely do you think it is that the development of AI will affect the world in a way that is quite transformative overall and generally good (enough) for humans, but remains pretty bad for non-human animals." I think this would be quite important for the cause prioritization of anyone who is really concerned about animals, but also about AI. For example, you may still prioritize conventional AIS because:

  1. ^

    When I started writing this post, my core idea was that I suspected AI x Animals would/should work on radically different problems than conventional AI Safety, assuming a high likelihood of things being alright for humans but doomed for animals. While writing, I noticed that this might only be an important case to make if I had a sense that the current AI x Animals space over-emphasizes, or is evolving to over-emphasize, conventional-AI-safety-adjacent interventions. I think the movement is too small to speak of over-emphasizing anything, really, although I do want to make sure the priorities become clearer as the movement grows and evolves.

  2. ^

    It might be more accurate to ask for your "p(doom for animals)"; I think that in most worlds where things are just "kind of alright", things go pretty badly for animals, but this might be the very crux of the argument! I decided to leave the question like this, because I am actually really curious about these "middle part" worlds!