Subagents for Shrimp
By Richard Y Chappell🔸 @ 2025-12-01T15:08 (+13)
This is a linkpost to https://www.goodthoughts.blog/p/subagents-for-shrimp
... and other good causes
It's International Shrimpact Week! My contribution offers a moderate's case for shrimp welfare, as one cause among many that shouldn't be neglected within your moral portfolio. Alas, since it is so extremely neglected by the population at large, you have an especially striking opportunity to promote balance and moderation by sparing a few dollars to save zillions of shrimp from suffering during slaughter. Donate here to support my campaign for sensible shrimp centrism against the extremists to either side (then help some people too, via my GiveDirectly fundraiser). If you're more inclined to support hegemonic shrimp-first radicalism, go use Bentham's fundraiser instead!
Introduction
A common theme of my blogging is that moral motivation is limited. No-one wants to be a totally self-sacrificing utilitarian agent. We are not so impartial as that. Some conclude from this that impartial utilitarianism must be wrong, but that seems mere wishful thinking: evaluating others' lives and basic needs as properly a higher priority than luxuries for ourselves is surely among utilitarianism's most clearly correct verdicts. The more reasonable conclusion is rather that we are all deeply morally imperfect. I add: that's OK! (Not ideal, but OK.) We shouldn't get too hung up on questions of virtue or deontic status. (You don't want to be status-obsessed, do you?) Instead ask: what low-hanging fruit can we reach to easily do more good?[1]
Something I like a lot about Effective Altruism is its relentless focus on this question. There is no more important question for you to consider than how you can do the most good (at whatever non-trivial cost you're willing to bear). Yet it's so modest! Do whatever you want with 90% of your resources; just save 10% (or whatever) for the impartial good, and you'll do immense good for others at minimal cost to your other interests! Not many people save dozens of lives (even doctors are mostly just filling a role that would be fulfilled almost as well by someone else if they weren't there). But most well-educated citizens in wealthy nations have the opportunity to do at least this much good with their lives, relatively easily, through modest but well-targeted donations.
I find it helpful to model motivation as being guided by "sub-agents" with varying priorities and worldviews.[2] We can reserve the vast majority of our resources to be governed by severely partial sub-agents, concerned to prioritize our personal projects or the well-being of family and friends, and still set aside an EA/beneficentric sub-agent with enough resources to do more good than the vast majority of people who have ever lived. It's a pretty incredible moral opportunity, when you think about it.
Or maybe it shouldn't be just one. Perhaps we should further subdivide our altruistic concern across different types of causes (human vs non-human, nearterm vs longterm, safe bets vs high-impact longshots, etc.). That's the idea I want to explore in this post.
Worldview Diversification Blocks Fanaticism
Many people intuitively recoil from "hegemonic" value systems that direct us to put all our eggs in one basket. Especially if the basket is weird and scaly.

So don't! Remember that people, not theories, should be uncertain. Some hegemonic theory may well be true, but you're probably not in a position to believe it with absolute confidence. (Even if you were, you may yet be unwilling to act accordingly, which amounts to much the same thing in practice.) We can avoid fanaticism by compartmentalizing: limiting the "reach" or power that we allow various ideas to exert over our lives, and empowering rival ideas to at least a modest extent. This naturally leads to a sensible moderate pluralism, as no single idea or worldview has dictatorial control over your life as a whole. By incorporating diverse sub-agents, each empowered to pursue their own conception of the good (with some portion of your resources), individual decision-makers can reproduce the advantages that liberal democracies have over authoritarian dictatorships. In neither society nor the individual mind should we wish to wholly banish hegemonic theories of the good. Instead, we assign them non-hegemonic representation. (Many good things work best by degrees.)
Consider "strong longtermism". It's hard to refute the argument that the interests of future generations decisively swamp those of present-day strangers. But few people are willing to fully endorse the practical implications. So don't do either of these things! Instead, create a sub-agent to represent longtermism, give them some resources, and let them do their thing.
Similarly, if there's a strong case that shrimp welfare swamps (present-day) human welfare (and there is!), you don't have to respond by never helping another human being again. Just create a subagent to speak for the shrimp within your mental economy and give them a share of your altruistically-designated resources, proportionate to your confidence in the shrimp-friendly worldview: it surely shouldn't be zero!
If you want to explicitly reserve space for a normie "global health & development" perspective, ensuring that the global poor aren't entirely left out of your decisions no matter how many zillions of future digital shrimp you find yourself in a position to help: go right ahead! Create a representative subagent; you know the drill by now.
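If it helps to make the bookkeeping concrete, the allocation rule above can be sketched as a simple proportional budget split. The worldview names, credence numbers, and the `allocate` helper below are all hypothetical illustrations, not anything from the post; the only idea taken from the text is giving each subagent a share of your altruistic resources proportionate to your confidence in its worldview.

```python
def allocate(budget, credences):
    """Split `budget` across worldviews in proportion to credence.

    Credences need not sum to 1; shares are normalized by their total.
    """
    total = sum(credences.values())
    return {view: budget * c / total for view, c in credences.items()}

# Hypothetical example: a $1,000 altruistic budget and made-up credences.
credences = {
    "global health & development": 0.5,
    "animal welfare (incl. shrimp)": 0.3,
    "longtermism": 0.2,
}
for view, amount in allocate(1000, credences).items():
    print(f"{view}: ${amount:.2f}")
```

Note that even a small credence gets a non-zero share, which is exactly the anti-fanatical point: no worldview is handed the whole budget, and none is zeroed out.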

Note that you don't have to fully endorse an idea for it to appropriately influence your actions. "Full" endorsement would require convincing every one of your subagents. But don't you contain multitudes? Shouldn't you include at least some skeptical voices, when faced with almost any significant (and hence disputable) idea?
Beware Fanatical Neglect
Missing crucial subagents can lead to moral disaster (as when people do nothing about the suffering of billions of factory-farmed animals). Expanding our moral circles does not require us to give overriding power to new beneficiaries; just adequate protection against abject moral neglect. I worry that most people are missing crucial subagents for neglected high-impact cause areas (like existential risk and animal welfare).
In "Refusing to Quantify is Refusing to Think",[3] I highlighted the implicit fanaticism in conventional dogmatism:
It's very conventional to think, "Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff." This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding.
You should have at least some moral sub-agents who are anti-speciesist, and value suffering-relief in a species-neutral way. If we can relieve the dying agony of 1000+ beings per dollar, then something has gone very wrong with the world's priorities and we should contribute non-trivially to remedying this. The Shrimp Welfare Project's humane slaughter initiative plausibly achieves this remarkable feat (by providing free electrical stunners to shrimp slaughterhouses that commit to stunning 1800+ metric tons of shrimp annually): some of your anti-speciesist subagents should be extremely enthusiastic about funding this. Not with all your money (you have other subagents, with other priorities) but with the non-trivial amount that you reasonably allot to represent this credible anti-suffering worldview.
Donation Links
If you're convinced (and sufficiently principled in your pluralism to allow your shrimp-friendly subagent to fund their favorite charity even if it isn't your all-things-considered favorite), then please use this link to donate to my Shrimp Welfare Project fundraiser (featuring a 50% match from a generous donor).[4]
Alas, notorious shrimp fanatic and friend of the blog Bentham's Bulldog is currently #1 on the Shrimpact Leaderboard. It will take a critical mass of modestly-contributing moderates for my fundraiser to overtake his, so don't miss your chance to chip in:
Save the Shrimp (in moderation)!
Alternatively: Animal Charity Evaluators' Recommended Charity Fund is also running a "matching challenge" (without the competitive element of Substack-specific fundraisers). A worthy option to effectively help a variety of animals if you're not sold on shrimp in particular.
To round out your moral portfolio, I'd suggest also finding a promising longtermist charity or grantmaking fund to support. One option is the Long-Term Future Fund.
Finally, if you'd find it reassuring to also empower a "normie" altruistic subagent who wants a safe bet to very reliably help the global poor (and who wouldn't?), I know of no safer bet than GiveDirectly (for which I also have a Substack fundraiser):
GiveDirectly to the global poor
Donating my Substack subscription revenue
I've kicked off my shrimp fundraiser by donating $2000, 50% of my revenue-to-date from paid subscriptions this year. To balance it out, at year's end I'll send GiveDirectly 100% of all subscription revenue I receive this December (including full annual subscriptions that begin this month):
Subscribe this December
Paid subscriptions unlock the full versions of paywalled posts like:
- There's No Moral Objection to AI Art
- Creepy Philosophy
- Vibe Bias
- Meta-Metaethical Realism, and
- The Best of All Possible Multiverses
Enjoy!
- ^
Once done: if youâre willing, ask it again.
- ^
See, e.g., the section on Mixed Motivations in "The Moral Gadfly's Double-Bind", and the Better Way I propose in "Limiting Reason", inspired in part by Harry Lloyd's work on bargaining approaches to moral uncertainty.
- ^
And, more recently, in "Rule High Stakes In, Not Out".
- ^
While they can be helpful for motivating new donors, I wouldn't generally recommend letting "matching funds" change your priorities for where to donate, for the sorts of reasons Holden describes here.
JoA🔸 @ 2025-12-01T18:23 (+3)
Nice post! Though it comes from another post of yours, I appreciated the paragraph about how "common sense" worldviews may suffer from fanaticism. Thank you for contributing to Shrimpact week!