12 Theses on EA
By Mjreard @ 2025-11-06T00:17 (+53)
This is a crosspost from my Substack, where people have been liking and commenting a bunch. I'm too busy during my self-imposed version of Inkhaven to engage much – yes, pity me, I have to blog – but I don't want to leave Forum folks out of the loop!
I’ve been following Effective Altruism discourse since 2014 and involved with the Effective Altruist community since 2015. My credentials are having run the Harvard Law School and Harvard University (pan-grad-school) EA groups, donating $45,000 to EA causes (eep, not 10%), working at 80,000 Hours for three years, and working at a safety-oriented AI org for 10 months after that. I’m also proud of the public comms I’ve done for EA on this blog (here, here, and here), through my 80k podcast series, current podcast series, and through EA career advice talks I’ve given at EAGs and smaller events.
With that background, you can at least be confident that I am familiar with my subject matter in the takes that follow. As before, let me know which of these seem interesting or wrong and there’s a good chance I’ll write them up with you, the commenter, very much in mind as my audience. Here they are:
1. In favor of big, pluralist EA. Effective Altruism is commendably rigorous and demanding compared to most other ideologies, but it’s easy for this to go too far and become counter-productive. Specifically, if your consensus on what is best becomes too narrow, you limit the number of people who will ever want to engage in the first place and lose the opportunity to retain people who might want to work on things you regard as second- and third-best. Similarly, if you’re too demanding in terms of what people do given that they agree with you, even most of those people will bounce off for ordinary human reasons if they feel they’ve failed to live up to the perceived standard. The way to maximize value within these constraints is to acknowledge multiple reasonable interpretations of your abstract principles and then set a demandingness bar low enough that basically anyone who understands the project can feel they belong. These are hard to operationalize, but I have ideas.
2. Strong technological determinism vindicates neartermist EA. EAs commonly believe in at least weak technological determinism: for any given place on the tech tree, it’s simply a matter of time and energy/resource/research growth until you unlock the next branch. Frequent cases of simultaneous invention provide some circumstantial evidence for this. This is one basis for worrying about AI: Moore’s Law is getting us close to the point where AIs will have as much computing power as the human brain, and beyond — once that crucial input is just lying around, many actors could potentially repurpose it into an alien or otherwise dangerous kind of mind. So let’s make sure the minds we’ll inevitably build aren’t dangerous. A stronger technological determinism tempers this optimism by saying that the kinds of minds you get will be whichever are easiest to build or maintain, and that those quite-specific minds will dominate no matter what you do. The upshot is that the world isn’t steerable and the best you can do is reallocate whatever surplus you, the actor, happen to control to those who need it more.
3. The psychology of professional sports is surprisingly healthy. EA can learn a lot from sports culture, particularly the sense that everyone is on the same page in wanting the team to win no matter what. The primary aspect of this I’m drawn to is that sports fans very rarely ask “what’s in it for me, personally?” On this analogy, I think it’s a mistake for EA orgs to present themselves as recruiting players (direct workers) rather than fans (donors). If you focus on recruiting fans, the players recruit themselves! And you have fans instead of varyingly-disgruntled passed-over players.
4. EA as an ideology of capped ambition. Some EA critics say that EA may seem all well and good with its bednets, cage-free campaigns, and AI lab transparency, but that once those problems are sufficiently addressed, the utilitarian EAs will only demand more and more until they force us to fully live out the repugnant conclusion, subsisting on potatoes and muzak with our 10 trillion barely-happy neighbors. To avoid those ends, we can’t help the EAs now, lest they grow powerful. This fear is unfounded in part because EA is not utilitarianism (it incorporates common sense morality and has side constraints), but also because EA’s social power comes from the glaring obviousness of the problems it points to. As the frontier ITN issues become less obvious, EA becomes less compelling and less socially powerful. People will disagree too much on what should be a priority at all for EA to be a single, coherent force in the world.
5. The opportunity framing of EA uber alles. I think of the obligation-vs-opportunity framing of EA as mapping onto right-left political dispositions. EA can be the obligation imposed by your unearned privilege or it can be the opportunity to do something becoming of a great man. I think the great-man theorists are more impactful and more attractive role models for the movement. Bill Gates isn’t wracked with guilt as he goes about the work of the Gates Foundation. There’s also a class of highly-talented EAs who we lose for months at a time because they break down with various forms of negative self-talk. That self-talk is particularly pernicious for very impact-sensitive people, who will notice and feel worse for letting the self-talk get them down in the first place. To the extent it’s possible, I’d love it if such people grew an ego and proudly dispensed utility only when they saw fit. I think they’d dispense more of it! I know minds and cultures are hard to change, but let’s agree this is the vision worth aiming for.
6. A rant to close out EA and self-hatred. I wrote my popular EA and self-hatred series (1, 2, 3) because EAs are the primary culprits in EA’s recent reputational dip. Not those other EAs who did bad things on the object level or mostly-fictional extremely naïve utilitarian “EAs,” but very likely you, dear reader, who blamed and hedged and jumped ship the moment the going got tough. Sure, you told yourself some story about your own views or what best served impact instrumentally, but it’s suspiciously correlated with everyone else claiming — implicitly or otherwise — to be the virtuous minority within big bad EA. I want to lay into you for a few paragraphs.
7. Product idea: profiles in altruism. Strangers Drowning is a powerful book that I highly recommend. It gives brief biographies of 12 or so ~impartial altruists who go to great lengths to alleviate suffering. While the book tries to maintain a detached, sociological perspective, one can’t help but feel the heroism of its subjects. I know a lot of people who may not go as far as those in Strangers Drowning, but who nevertheless make striking sacrifices for others without hesitation or wavering (very much) in their convictions. I’d like to do an interview series with some of them that aims to cut away as much of their modesty as possible and highlight the saints who live among us.
8. We need another GiveWell. My maybe-naïve story of how Open Philanthropy came to be is that Dustin Moskovitz & Cari Tuna were so impressed with GiveWell that they wanted GiveWell to allocate their money for them on a more strategic, long-term basis. I’m not sure how much attracting billionaires specifically was part of GiveWell’s model, but I think expert advice *for anyone* is a sought-after product in the world. Indeed this is a lot of the story of 80,000 Hours’ success. If you have a distinct brand and compelling style, SEO and AI recommendations are there for the taking, and who knows who you catch along the way. One strategy is just to expand GiveWell’s marketing budget (hopefully this has already been done to the point of diminishing returns), but I think you should also think about distinct subject matter areas aspiring givers might be interested in. For example, recommended campaigns to support to take back the House/Senate or beat Trump — it’s distinct enough from GiveWell while still attracting effectiveness-oriented people who care about large scale issues.
9. 80k AGI pivot redux. I plan to read back through Sadly, 80k a few times and pull out the key cruxes on whether the AGI pivot was a good idea, dropping the stuff about the video and my personal story. The aim is to present a clearer vision of the alternative to opinionated 80k (or similar projects) and make it easy for people to work out what they believe. Appropriately enough, the post itself will embody the ethos I support for 80k in this regard: research a bunch of reasonably-diverse, but plausibly high(est)-impact roles and causes and give an assessment, alongside the abstract framings of the Career Guide. Indicate how you, 80k, resolve the thorniest cruxes, but fundamentally leave it up to the reader on the assumption they are just as capable of resolving these things as 80k, if not more so.
10. Two modalities of meta-EA. Are you recruiting for roles at orgs or are you building a community? Per my sports idea above, I think community building (for its own sake) is neglected. When I first encountered EA, the ethos was very much focused around earning to give and where to donate. There was a sense we were fans/supporters of these orgs rather than competing for jobs at them and that all of us were on equal footing no matter how much we earned, gave, or followed the news. I think this built a deep talent pool (at least compared to what early EA might have expected), from which many great leaders and contributors were later plucked. Adjusting for EA’s larger scale, I think we’re doing worse on talent now that we’re so single-mindedly focused on it. I think it’s easier to get a fan to take a job with the team than it is to convert someone who came in as a job seeker into both a fan and a star player. It is more complicated than this: we think we can identify and reach out to stars directly. My thought is that this comes off as desperate to the stars and over-promising to the non-stars.
11. Field builders need to get comfortable with elitism. From 80k, to research fellowships, to university groups, it’s hard to deny that the dominant opening pitch for EA is “we have the jobs.” At a minimum, that’s certainly what people are hearing. On the other side of that are openings with 300:1 applicants-to-roles ratios and a lot of disaffected people taking to the forum to vent about their failure to make progress in the EA job market. The simple truth that’s hard to say is that there are in fact an unlimited number of jobs, but only for exceptional candidates. Field builders should get comfortable communicating this early on in their interactions with people. I think they’re afraid to do this both because it can seem harsh and judgmental to the listener and because they want to avoid false negatives when they’re being scored on how many people land jobs and not being scored on how few people feel like they’ve wasted their time. My view is that people will understand both that certain jobs require inordinately talented people, and that you, the field builder, don’t know them well enough to predict whether they are inordinately talented. What you can do is make the problems and the work seem interesting enough to explore despite the transparently low chance of success.
12. Two differences between EA and utilitarianism. Utilitarianism and EA share the same object (welfare), but endorse different means. Specifically, EA endorses only a subset of the means utilitarianism does (utilitarianism endorses *all* means). The first limitation is that EA is expressly non-totalizing. As a practitioner of EA, you’re meant to arbitrarily allocate some large fraction of your time/energy/resources to selfish ends rather than to welfare maximization. Second, you can’t make harsh or extreme welfare trade-offs in pursuit of greater utility. No actively killing ten to save a thousand. Yes, you’re still allowed to eat massive moral opportunity costs, but society allows and expects that. Like society in this respect, EA is not coherence-maximizing. EA is about figuring out what’s best and then taking only the easy wins. The idea that one drop of incoherence is a get-out-of-jail-free card for living however you feel like is a tack taken up by people whose firmest commitment is avoiding the question of how to live.
That’s enough for things I actually know about. Tomorrow: philosophy!
Sudhanshu Kasewa @ 2025-11-06T15:10 (+5)
Thanks Matt. Good read.
“A stronger technological determinism tempers this optimism by saying that the kinds of minds you get will be whichever are easiest to build or maintain, and that those quite-specific minds will dominate no matter what you do.”
Is there a thing you would point to that substantiates or richly argues for this claim? It seems non-obvious to me.
Mjreard @ 2025-11-06T16:23 (+2)
Specifically inspired by Mechanize's piece on technological determinism. It seems overstated, but I wonder what the altruistic thing to do would be if they were right.
James Herbert @ 2025-11-06T17:42 (+4)
“Two modalities of meta-EA. Are you recruiting for roles at orgs or are you building a community? Per my sports idea above, I think the community building (for its own sake) is neglected.”
Yes. Nicely put.
Also, if someone from the forum team reads this, I can’t figure out how to format my quote as a quote whilst using Safari on iOS.
Peter @ 2025-11-06T17:59 (+1)
For 2, what's "easiest to build and maintain" is determined by human efforts to build new technologies, cultural norms, and forms of governance.
For 11, there isn't necessarily a clear consensus on what "exceptional" means or how to measure it, and ideas about what it is are often not reliably predictive. Furthermore, organizations are extremely risk-averse in hiring and there are understandable reasons for this - they're thinking about how to best fill a specific role with someone who they will take a costly bet on. But this is rather different than thinking about how to make the most impactful use of each applicant's talent. So I wouldn't be surprised if even many talented people cannot find roles indefinitely for a variety of reasons: 1) the right orgs don't exist yet, 2) funder market lag, 3) difficulty finding opportunities to prove their competence in the first place (doing well on work tests is a positive sign but it's often not enough for hiring managers to hire on that alone), etc.
On top of that, there's a bit of a hype cycle for different things within causes like AI safety (there was an interp phase, followed by a model evals phase, etc.). Someone who didn't fit ideas of what's needed in the interpretability phase may have ended up a much better fit for model evals work when it started catching on, or for finding some new area to develop.
For 12, I think it's a mistake to bound everyone's potential here. There are certainly some people who live far more selflessly and people who become much closer to that through their own efforts. Foreclosing that possibility is pretty different than accepting where one currently is and doing the best one can each day.