Effective Altruism Will Be Great Again
By Mjreard @ 2026-03-07T16:33 (+273)
This is a linkpost to https://open.substack.com/pub/frommatter/p/effective-altruism-will-be-great
Forum note: this post embodies the spirit of a new project I—Matt Reardon—am starting to reinvigorate in-person EA communities. I'm hiring a co-founder and I'm interested in meeting others who want to collaborate on this vision. My DMs are open.
Sequence thinkers will be forgiven and rejoice
In some fleeting moments lately, I catch glimpses of 2022—the year Effective Altruism’s ascent seemed unstoppable. Universally positive (if limited) press, big groups at top universities across the world, a young EA entrepreneur who was the darling of the financial industry, the first EA candidate for Congress calling in enormous financial and personnel resources for his campaign. More striking than those public-facing facts, though, was a feeling on the ground that if you had a good grip on things and a plausible idea, you would get funding, go to the Bay, and make it happen.
I actually regret how slow I was to see it at the time. You could just do things, and yes, that’s always been true, but at that time, you didn’t even have excuses. In my first year as an advisor at 80,000 Hours, I allowed people a lot of excuses. I think this came from some emphatically non-prophetic feeling of “this is too good to be true.”
And, of course, it was. But now we’re here again: buoyed by another darling startup whose founders are committed to giving away most of their wealth to things that will do the most good™. Some of the same risks are present too.[1] The biggest difference though, is how EAs now conceive of themselves and their movement. FTX proudly brandished its heart-and-lightbulb in one of the most brazen acts of moral self-licensure in history. Anthropic hems and haws about their relationship to scope-sensitive do-gooding.
But the future belongs to those who mince words less. I would like my friends at Anthropic and my EAs across the board to win the future. And while I can’t predict the *whole* future, I do predict that Effective Altruism’s flag will fly high and proud in the next few years. New projects will launch, ambition will again be the coin of the realm, and no one will be able to deny the underlying reality that a community formed of a commitment to understanding and bettering the world at scale is the reason why. The only decision to be made is whether this will happen in spite of us or because of us.
The Retreat
The case for “in spite of” is probably straightforward to most readers of this blog. The dizzying heights we reached so quickly in the FTX era set us up for an even more disorienting fall. EA’s decline into mild disrepute is mostly a consequence of EAs themselves playing into that catastrophizing narrative and refusing to hit back at their critics out of shame. While I think this is basically correct, a more nuanced account reveals why we might struggle to meet the present ascent as well as we could.
The FTX collapse was not just a lightning bolt of humiliation that shocked EAs into shame; it also accelerated two pre-existing trends in the movement: the professionalization of EA and the growing consensus that addressing transformative AI dominated all other cause areas in expected value terms.
Professionalization entails a lot of things. For one, when you’re the tiny germ of a new idea in the head of a few undergrads in 2009, your movement is about you, your friends, and how your call to action can be made consistent with everyday life, i.e., try to be smart about how you donate 10-50% of your income. When you draw 11-figure funding and have shored up consensus about some seriously novel, globe-spanning neglected problems, you’ll need to create some institutions and build some alliances. Now, you’re no longer asking people to modify their normal lives in light of your weird ideas; you’re modifying your weird ideas to meet the expectations of their competent, would-be implementers. At a minimum, you’ll want some nice offices, salaries, and HR policies, but the urgency of drawing the best possible implementers means you also need to make it clear that the subject matter they work on is the only exotic requirement: no pledges, no veganism, no evidential decision theory.
The consensus on AI hardly needs elaboration, but I do think a neglected frame here is noticing that the consensus is a triumph of something akin to sequence thinking. You achieve your ultimate goal by succeeding at each logical step leading up to it, so you focus on whichever step is in front of you. Want to do the most good? Just work on the most important problem. Want to get more of that work done? Get more people to do it. More people do it when you don’t get bogged down in reasons and morals and world models.
These two trends complemented each other too. As consensus around AI risk grew, so did the opportunity to be more legible to professional types. We don’t have to start with your empathy for the homeless man on the street, do the shallow pond experiment, sell you on RCTs, raise the animal welfare question, and then talk about the numbers at stake in nuclear near-misses to get you bought in on AI risk if we just need you to balance our books. We’re an interest group like many others, we think this newfangled AI might be bad somehow and it’s a really hot topic right now. Interested in helping out? We pay all right for a nonprofit.
I concede that Occam’s razor is against me here.
The asceticism, veganism, group houses, and polycules sat in uneasy tension with that pitch. There was also a reasonable thought that when you start with first-principles philosophy, it’s hard to avoid people forming opinions on all kinds of things and breaking off into still-associated subgroups who present at least some risk of reputational contagion. So quite sensibly, you diversify your approach and see how promising it looks to have a clean, professional, not-very-ideological, arguably-non-committal AI safety “field” as opposed to an EA “community.”
And I want to be clear that although this isn’t my cup of tea, it is an emphatically worthwhile experiment. Relying on everyone to come around to your idiosyncratic world view is not a strategy. Everyone is not going to just. At some point, you need to play nice with people who see things very differently than you and make them feel fully respected if you want to accomplish big things. Professionalism and legibility are fine ways to do that. The worry is that you elevate means over ends. In your rush to be pragmatic and accumulate respect and power in service of a better world, you neglect the better-world part, banking on the notion that you locked it in ten years ago.
What Greatness Demands
My view is that a vision of a better world and the principles that cultivate it are really hard to pin down. They require almost all hands on deck all the time. For this, I am really glad that Forethought exists and that Will MacAskill—who will never escape the Effective Altruist label so long as he lives—is near the helm.
The issue is that while we were feeling out professionalization and AI uber alles, FTX happened and what was an experiment became the only lifeboat in a storm. Nearly all hands left the deck. Even Forethought is framed as an AI project, though I’m heartened that they draw so much from history and prioritize helping society do moral reflection.
My real issue is that EA in all its forms is still so small. MacAskill shouldn’t be near the helm. He and all the other members of the original EA student group should expect some edge from being around so long as the movement grew and from being selected for the ambition to make the movement happen at all, but the way we’ll know EA is really winning is when the Oxford crew is wholly eclipsed by a new generation of even more ambitious and clear-thinking altruists.
And we won’t find them if our exclusive offer is cushy, well-scoped research roles at buttoned-up think tanks. To lead this movement, to safeguard the future for our highest values, we need to ask people to own the whole outcome: understand things from first principles, build and articulate their own world models. Tell us we’re wrong. Compel us to see the greater good and find the better solutions to create a better world.
You can’t instruct people, or even hire them, to surpass you like that if you regard yourself as holding all the cards and doling out your specific wisdom on specific topics. And to be honest, we know that compute governance, scary demos, and RSPs (gulp) are all pretty weaksauce. Maybe these being the only tools we’ve dredged up is symptomatic of the fact that lots of people in this space can’t even articulate the central worry that well.
Perhaps there are no better ideas out there. We’re going with what we’ve got. Time is short. Understandable. My pitch is that we invest more in giving people the whole picture—from the beginning. Why are we here? What’s our best picture of the good world? How did *we* locate the problems we’re most worried about now?
When I ask the most admirable and impressive people trying to save the world at scale how they got into this line of work and what pushes them to the laudable (but so far insufficient) heights they’ve reached, the most common story is the classic EA one. “I wanted to do good; I wanted to understand how to do good.” They ran into Singer. They read old 80k. They spent time with the abstractions and debated Pascal’s Mugging. They found the best tools and they built their own models. Now they’re leaders who rely on those models and the judgment they gained building them to make a hundred decisions a day and steer their projects to value in a hostile and confusing world. Most of all, they believe in the tools more than the conclusions.
The best of us are all relatively heads-down on last-mile projects, though.[2] This might be crunch time. In expectation, this is the most essential work. If no one is doing this work and gaining traction, why would anyone feel compelled to sign on? The uneasy de-emphasis of Effective Altruism qua Effective Altruism is holding well enough, and bringing EA fully back into the arena is several full-time jobs.
We’re still small. Retreating from FTX arguably made us smaller and less able to bounce back. The baseline reality, though, is that great people still keep the good in their hearts, and despite their full plates they’re more and more willing to say so. It has always looked silly to deny it. And more than anything, there’s nothing here worth denying.
Effective Altruism is Good and Right
The point that I began to bleed into above is that sequence thinking can solve specific problems and make smart bets in context, but it doesn’t make strong people generally. Taking up the task to understand and act in the world does. That task is deeply personal and inherently open-ended. One’s orientation towards it should not be “what’s in the news lately?” or “how do I get a good job out of this?”
The orientation you want is: “what kind of person do I want to be?” I suspect that to even ask the question immediately pushes you towards some tentative answers. You want to do good. You want to help others. You want to be fair minded about that, maybe even impartial. You want to do more good rather than less. You want to believe true things and understand the world. To modern ears, it can sound cringe and over-earnest, but what other answer can you give?
And the reason EA will be great again is that the best EAs, of which there are many, all want to be EAs. It’s in their bones. If they weren’t staring down the barrel of an intelligence explosion, or if the prospect of transformative AI somehow disappeared with relative certainty, they would open their Animal Welfare for Dummies books the next day. Or they’d roll up their sleeves to defend America. Some of them are doing these things now. The point is that the EAs are ready to do what’s right with no fuss and no ego. Believe it or not, that’s what AI safety work entailed just seven or eight years ago.
I know this because I know them. And to know them is to see the earnestness and care they put into understanding our situation and letting their views be guided by sturdy conceptions of the good, deep humility, and above all regard for what’s actually true rather than what sounds good. There’s a hunger for that in people: somewhere they can do real thinking about what really matters.
Values-agnostic arguments for AI risk surely have their place in the broad sweep of public discourse. And it may even be a big place, jockeying among all the other object-level, point-scoring rhetoric out there, but a large class of special people will always have an allergy to it. They want to understand how this fits in with everything else happening in the world, how this fits in with their highest ideals and what they want their lives to be about. The METR graphs just don’t offer that. Effective Altruism does.
- ^
Even though no one suspected fraud, people were worried about a crypto bubble in 2022, just as they’re worried about an AI bubble now. Also, oh boy: the first draft of this post was written before the DoW snafu.
- ^
The reader will note that I am not the best of us.
Chris Leong @ 2026-03-08T05:28 (+35)
Have you thought about the possibility that EA may have resonated in a particular social context that no longer exists?
But a community that took twenty years to develop its particular structure of norms and mutual knowledge cannot be regrown in twenty years, because the conditions that shaped it no longer exist. The people are older, the context has changed, and the specific convergence of circumstances that brought those particular individuals together in that particular configuration at that particular time is gone. Communities are path-dependent in the strongest possible sense: their current state is a function of their entire history, and you can’t rerun the history.
The main challenge I see at the moment is that for half the potential audience, AI is clearly the biggest thing going on, while the other half sees it as clearly overhyped. And it's quite hard to construct a program or run events that will really hit it out of the park for both sides at once.
I would be keen to hear if you think you have any solutions to this bifurcation.
NickLaing @ 2026-03-08T07:53 (+13)
If they think it's overhyped that's OK; they can just join us over here helping boring-old-right-now people in GHD. We can care about different things ;).
Chris Leong @ 2026-03-09T04:34 (+5)
That's orthogonal to the point that I raised about it being hard to run a course that simultaneously manages to be a strong fit for different groups of people.
NickLaing @ 2026-03-09T05:52 (+3)
Yes it is; I was just responding to the "overhyped" comment.
Mjreard @ 2026-03-09T01:23 (+7)
The framing of your question suggests EA's role is to prescribe actions. I think EA is centrally a question and a set of abstract tools for understanding the world's needs. Using those tools will take different people in different directions. I want to support people using the tools well and I resolve not to judge how well people use the tools based on the specific conclusions they draw.
In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
I don't know of a better venue than the best pockets of the current EA community. I want to make those pockets bigger!
Chris Leong @ 2026-03-09T04:46 (+13)
The framing of your question suggests EA's role is to prescribe actions
Was I presuming this? I didn't think I was. I was just talking about how it is hard to simultaneously meet the needs of folks with very different worldviews.
In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
You can definitely run a session on this. The challenge is where do you go from there? If you take the conversation to more concrete issues you risk losing half your audience. I'm not claiming this is impossible, just that it's tricky.
I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis
I'm curious what your explanation would be. Mine is that the media landscape is filled with hype; there are all these philosophical arguments you can make that are hard to evaluate; and even if you know that some portion of predicted crises will come true, most people don't have high confidence that they could predict which ones. Even if you could, it'd take a massive amount of time, people's lives are pretty busy, and what would they do with that knowledge anyway?
Mjreard @ 2026-03-09T16:50 (+12)
We're looking at this more differently than I thought. The question "how does EA meet the needs of people with different worldviews" is strange to me. EA should be the place you go to *form* your worldview, by learning about and comparing different perspectives. Whatever has caused this framing to seem tricky/unnatural is the thing I'm pushing back against.
I have a similar take on TAI skepticism, with some added (perhaps excessively charitable) concerns around how economic value gets created in the first place and what hurdles there are between current AI systems and creating that value.
Chris Leong @ 2026-03-10T04:01 (+2)
I expect people to update somewhat. My split was more about where people end up falling after initial exposure to arguments on both sides.
In the past, AI didn't feel so pressing to the AI crowd, so they had more space to explore, rather than the discussion of animals and global poverty feeling like dead weight.
Ben_West🔸 @ 2026-03-14T23:47 (+6)
I would be keen to hear if you think you have any solutions to this bifurcation.
Huh, this feels like prime EA territory to me. We need disagreement so that people can engage in key EA activities like "making persnickety critiques of footnote #237 on someone's 10k-word forum post."
The case for EA feels much weaker to me if we are all confident that X is the best thing to do - then you should just do X and not worry about cause prio etc.
lilly @ 2026-03-09T23:56 (+25)
I don't think it's possible to create a stable EA comprised of people who (1) "believe in the tools more than the conclusions" and (2) treat AI safety as a settled conclusion.
I think the evolution of Forethought is perhaps emblematic of some of the problems with 2026 EA. Before Forethought became "a research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems," it was the Forethought Foundation for Global Priorities Research, which aimed "to promote academic work that addresses the question of how to use our scarce resources to improve the world by as much as possible." FFGPR funded an annual fellowship that dozens of doctoral students from a range of fields participated in, and funded them to spend a month together in Oxford thinking through their GPR projects together.
New Forethought seems to: (1) focus on a narrower range of projects (those related to navigating the transition to a world with superintelligent AI systems) and (2) not explicitly fund/prioritize community building (e.g., the research supported by Forethought seems to come from Forethought employees, many of whom are based at Oxford, versus academics based around the world).
These changes parallel an issue that affects the EA community more generally: talented people who don't work on issues related to AI—the very people most well-equipped to help EA course-correct—are less and less likely to be brought in. Several things have contributed to this—funding for non-AI projects has dried up; for non-AI-safety EAs, EAGs increasingly consist of conversations spent discussing one's work with people who can rarely help (or worse, look down on it); it is difficult to build connections/friendships with people whose fundamental beliefs diverge more and more from one's own. In short, the average non-AI-safety EA gains less—professionally, socially, and otherwise—from being actively engaged in the EA community in 2026 than they did in 2022. As one of these people, I want to be clear that my values and goals haven't changed—I still want to use evidence and reason to do a lot of good—but it has ceased to feel like being part of the EA community facilitates this.
EA is a philosophy trying to find the most effective ways to help others, and a social movement that aims to put those ideas into practice; the social movement follows from the philosophy. So if EA has answered the question of how to most effectively help others—work on issues related to AI safety—then why should those of us who don't focus on these issues be involved in this social movement?
When the theoretical question gets posed of whether we still need EA (if the answer to EA is AI safety, and an AI safety community already exists), people tend to point to all of the non-AI-safety EA things that still exist ("look how much money is still going to GiveWell," or "Coefficient Giving devotes a lot of resources to animal welfare," and so on). But this isn't an answer. And on a personal level, the question has already de facto been answered: as EA orgs like Forethought (and 80k, and others) increasingly shift from focusing on GPR -> AI safety, the fact that CG devotes resources to animal welfare legislation isn't very relevant to the experience that I have when I go to EAGs, or read the Forum, or try to have conversations with AI safety researchers, or peruse 80k's recent episodes, or cease to see grant opportunities relevant to my projects.
The orientation you want is: “what kind of person do I want to be?” I suspect that to even ask the question immediately pushes you towards some tentative answers. You want to do good. You want to help others. You want to be fair minded about that, maybe even impartial. You want to do more good rather than less. You want to believe true things and understand the world.
All of these things are still true about me. But EA doesn't just aim to do good; it aims to do the most good. And EA is no longer agnostic about the answer to the question of how to do the most good. The loss of that agnosticism has been—perhaps rightly—accompanied by a change in the structure of the EA community. This seems like a bullet new EA may just have to bite.
Lorenzo Buonanno🔸 @ 2026-03-14T22:49 (+24)
funding for non-AI projects has dried up
What are you basing this on? I think the opposite is going on. Some datapoints that come to mind:
- Coefficient Giving more than doubled their funding for GiveWell for 2026, adding $175M on top of the existing $100M. They also started two new funds
- GiveWell's funding from non-Coefficient Giving donors is also increasing
- Founders Pledge went from $25M money moved in 2022 → $80M in 2023 → $140M in 2024, and other major funders are emerging
- Giving Green influences >$17M/year in climate donations, and recently started research into biodiversity projects
- The EA Animal Welfare fund raised >$10M/y last year and is now targeting $20M/y
- https://jobs.probablygood.org/ has 148 roles published in the last 4 days, only 10 of which are explicitly categorized as AI safety (although a few more involve AI)
- Charity Entrepreneurship is launching more and more charities per year, and AIM as a whole has more programs
lilly @ 2026-03-15T23:25 (+2)
Yes, I overstated this a bit (“has dried up”), but I kind of think we’re both right. On a large scale, orgs like GiveWell are still getting a lot of funding. But on an individual level, the funding environment feels really different to me than it did five years ago, when there were more fellowship and grant and award opportunities than I could possibly apply to. It does not feel like that today.
Lorenzo Buonanno🔸 @ 2026-03-16T01:38 (+12)
orgs like GiveWell are still getting a lot of funding
It's not just that these orgs are still getting a lot of funding:
- their funding is significantly increasing
- there's many more of them
- many of them are making more and more varied grants themselves, e.g. GiveWell making two <$100k grants in 2026 (which they didn't use to do 5 years ago), and Founders Pledge's brand-new Catalytic Impact Fund
there were more fellowship and grant and award opportunities than I could possibly apply to. It does not feel like that today.
I'm surprised by this; I think there's a ton today. I'm not following this space actively, but besides the >100 job openings and >3 AIM programs mentioned above, here are some off the top of my head:
- High Impact Professionals Impact Accelerator Program
- CEA bootcamp (which as far as I know is not mainly about AI)
- School for Moral Ambition fellowships and circles
- Magnify Mentoring mentee applications (I think it now accepts more people than WANBAM did five years ago, but can't quickly find numbers. I see it got $371k from Coefficient Giving in August 2025, and their revenue seems to be increasing)
- Animal Advocacy Careers course and career advising
- Their Job Board has 21 job openings from last week
You can also have a look at the most recent posts tagged "opportunities to take action" and the EA opportunities board; there's lots of non-AI stuff, enough to overwhelm newcomers as much as EA in 2021 did, and likely way more than EA in 2017.
Also in general if Coefficient Giving and others are making more grants to more things, it likely means that there are more opportunities.
Mjreard @ 2026-03-10T02:02 (+10)
At the core of my project is the idea that people can disagree and still cooperate.
I agree with you that the people who currently control the talent infrastructure that flows from CG, i.e. CEA and 80k, have for the most part become uninterested in views on cause prio that don't buy into the TAI hypothesis. They are not, however, completely uninterested. As you say, they invite people working on global health, animals, and other causes to EAG; they support groups which discuss and invite speakers on these topics.
I understand that this is not much support in material terms compared to AI and this does stack the deck against non-AI causes for people making EA career choices. The question is what you want to do with it.
For my part, I am choosing to leverage that small amount of support to strengthen free-ranging discourse about how to do the most good. The bar may be higher for non-AI projects and people to get CG funding as they emerge from this discourse, but I don't think it is insurmountable. Further, I hope people who engage with my project will use it as a launching pad to reach out to non-CG funders and non-"EA" collaborators for their non-AI projects.
Both of these will be more challenging, but I personally resolve to support people doing what they endorse doing based on how thoughtful and ambitious they are, where I measure neither of those things in terms of how much they agree with me or my funders in substance. It'll be tough re bias, but idk, liberalism conquered the world last century, maybe it'll do it again this century.
lilly @ 2026-03-10T16:44 (+11)
At the core of my project is the idea that people can disagree and still cooperate.
I think this is the crux of the issue. A lot of EAs working on non-AI projects don't even disagree that AI is the most important issue of our time. The problem is relevance/interest. Many non-AI-safety people have already spent years building careers in other areas. We have PhDs, social networks, and jobs that aren't oriented around AI safety—in many cases because we were explicitly following EA principles/ideas/advice from a decade ago—and don't plan to start from scratch. (I agree that the average college student encountering EA today should focus on issues related to AI safety.)
It used to be the case that engaging with the EA community was a good way to facilitate doing important work in non-AI areas, but this is becoming less true; if the average EA has the bandwidth/resources to go to two conferences a year, the global health EA may find it higher yield to go to global health conferences, the GPR EA may find it higher yield to go to philosophy/econ conferences, and so on. Non-EA global health people (etc) are also thinking about how to do the most good within the field of global health, and global health EAs may benefit more professionally (and socially?) from engaging directly with them. It doesn't seem like a great use of our limited time (for us, or for the AI safety people, probably) to professionally network with EAs working on things totally unrelated to what we're working on.
Anyways, I'm not sure I fully understand what your proposal is, but I'm just trying to articulate what I see as a fundamental barrier to getting those of us who don't do AI safety work to be more actively engaged in EA: many of us agree that AI is the most important issue of our time, but that doesn't mean it makes sense for us to re-focus our careers on AI, given our existing backgrounds and skills. Correspondingly, it makes less and less sense for us to spend our professional time engaging with a community that is focused on AI safety. (I think there's perhaps a better case to be made that it would be socially fulfilling to do this; I don't feel socially compatible with the average AI safety EA, but maybe others do.)
Arepo @ 2026-03-12T02:52 (+13)
I agree that the average college student encountering EA today should focus on issues related to AI safety
I broadly nodded along to your OP, but strong disagree here. There are tonnes of people working in AI safety, to the extent that it's already hypercompetitive and the marginal value of one more person getting in the long queue for such a job seems very low.
Meanwhile I continue to find the case for AI safety, at least as envisioned by EA doomers, highly speculative. That's not to say it shouldn't get any attention, but there's a far better-evidenced path from e.g. 'nuclear bombs or major pandemics cause the fall of civilisation' than from 'LLMs cause the fall of civilisation'.
And if you're sufficiently pessimistic on the doomer narrative, we're all screwed and there's likely at least as much EV in short-term improvement of the lives of existing beings as in fighting an impossible struggle to prevent AGI from ever being developed. So there's a credence window in which AI safety as top priority belongs. That window might be reasonably wide, but I don't think it's anywhere near wide enough to justify abandoning all other causes.
lilly @ 2026-03-12T12:32 (+5)
I didn’t mean this to be that deep; I meant (1) the average college student EA (i.e., many EAs should still pursue other kinds of careers) and (2) AI safety broadly construed (to include issues related to biorisk, policy, and many issues unrelated to x-risk). I don’t know much about how competitive jobs are throughout this space, but at least in some spheres (eg, academic philosophy) there is growing interest in AI, so much so that it’d be prudent for a philosophy PhD student to work on issues related to AI solely to get a job (i.e., bracketing any interest in EA/having a socially valuable career). I assume that’s true in at least some other spheres as well (policy?), and while I could see that changing in the next few years, it feels like the entire job market will change a lot in the next few years, such that I doubt the advice “don’t go into AI safety because it’s oversaturated; do X instead” will be reliable advice for most X.
Mjreard @ 2026-03-10T17:38 (+8)
We might just fully agree. I don't think there were ever career-long professional benefits to EA for people specializing in specific cause areas that outweighed those available at cause-specific conferences (but please come teach community builders/members/young people about your work at EAG).
I think EA has always been for:
- figuring out where you want to specialize, and
- building/maintaining your knowledge and motivation around the world's needs generally
The first is professionally relevant early in your career (or for generalists looking to lateral), but not so much later. The second is personal/social/intellectual and perhaps a broad way that a specialist can give back by helping people working on the first thing.
If it is settled that AI is the thing to do, maybe point one has become irrelevant. I dispute this,[1] but less so than point two, which I think has strong independent value.
It may also be helpful context that I personally am not an expected utility maximizer. I'm doing my project because I want people to engage with EA arguments and then do what they want to do with them, as opposed to doing what I want them to do in a more superficial sense.
- ^
For example, it may just be critical to understand what else in the world might be ITN in order to understand why AI is important, or to think clearly about what its implications for welfare are. If those other problems aren't really in the room or fully explored, it's easy to miss crucial considerations. Similarly, what do we mean by "AI" and "settled"? Lots of EA epistemology can help here. Relatedly, the moral context of everything happening in the world can provide motivation that might otherwise be lacking.
lilly @ 2026-03-12T14:01 (+20)
Hm interesting. One reaction I have is that in-person communities have different functions, and it might be worth specifying more precisely what function you envision in-person EA communities having in 2026 (and how this has maybe changed?). Here are some different models I could see; 2 and 3 seem more promising to me than 1 or 4:
- EA as a professional community (like a professional society with professional conferences). This is historically what most in-person EA events have been, but as I’ve argued, I think this kind of in-person community makes less and less sense (though it probably continues to make sense for large sub-groups of EAs, like AI safety EAs, GPR EAs, and so on).
- EA as a moral/spiritual community (like Unitarian Universalism). I suspect some people will bristle at the word “spiritual,” but I think what you’ve said about motivation is true/important, and EA would do well to lean into this. As a kid, I always liked religious services—despite not believing in God—because I enjoyed the music, (some of) the sermons/stories, and the quiet meditation. It would be culty to lean too hard in the direction of an “EA service,” but it could be cool to design social events that explicitly try to get at this (i.e., leave people feeling hopeful/reflective/recharged, rather than doom-y). I suspect a lot of EAs—including myself, lol—would eye roll at the concept though, so it could be hard to get off the ground.
- EA as a social community organized around a shared interest (like a debate team). Debaters don’t formally debate with each other when they socialize (ie, in a structured way), but the things that make them like debate also make them socially compatible. Similarly, maybe we could think of EAs as actually practicing their EA separately, but uniting over the things that make them like EA. I suspect this is a fairly promising model.
- EA as a social community organized around a shared activity (like a hiking club). A hiking club exists so people can hike together and, as my dad often notes after spending the day with his, may not be that socially compatible in other ways. I could see EA being like this too—maybe I don’t vibe with the AI safety people, but we could have interesting/fun convos about EA? I’m also not sure this works though.
Mjreard @ 2026-03-14T23:17 (+6)
Good breakdown. I agree on 2 & 3 being promising too. One of the first event models I came up with for my project was EA reading + sermon-or-constructive debate related to the reading. It's not cultish if there are no rites/titles/statements of faith/garb/iconography.
Matrice Jacobine🔸🏳️⚧️ @ 2026-03-12T21:57 (+2)
David Mathers🔸 @ 2026-03-13T14:16 (+11)
I have much more positive feelings about EAs than rationalists, and I think this is quite normal for people who came to EA from outside rationalism. I mean, I actually liked the vast majority of rationalists I've met a lot (when I worked in a rationalist office in Prague, it had a lovely culture), but I think only about half of rationalists like EA as an idea, and my suspicion is that "dislikes EA" amongst rationalists correlates fairly heavily with "has political views that make me uncomfortable".
Andrew Roxby @ 2026-03-10T20:31 (+5)
"EA is no longer agnostic about the answer to the question of how to do the most good." I'm interested in this assertion - to what extent does it make sense to say that "EA" (the movement? Key organizations?) has taken a firm stance on the question of how the most good can be done? Is there pretty clear evidence of that, in your opinion? These matter to me quite a bit as someone who at present thinks that EA as a movement should provide epistemic tools and a community for people working on many important causes, AI safety amongst them.
Clara Torres Latorre 🔸 @ 2026-03-10T20:40 (+7)
At least the 80k pivot to narrow focus on AI seems to back this point.
Kestrel🔸 @ 2026-03-11T08:58 (+24)
I no longer tell people thinking about careers in an EA way to go to 80,000 Hours. I tell them to go to Probably Good, which has taken over the foundational generalist career guidance work.
80k has narrowed itself into increasing irrelevance to broader-tent EA. I understand that there are reasons why they believe a specialist AI safety careers navigator is a better use of their time.
Michelle_Hutchinson @ 2026-03-07T17:40 (+18)
"And the reason EA will be great again is that the best EAs, of which there are many, all want to be EAs. It’s in their bones. If they weren’t staring down the barrel of an intelligence explosion, or if the prospect of transformative AI somehow disappears with relative certainty, they would open their Animal Welfare for Dummies books the next day. Or they’d roll up their sleeves to defend America. Some of them are doing these things now. The point is that the EAs are ready to do what’s right with no fuss and no ego. Believe it or not, that’s what AI safety work entailed just seven or eight years ago."
<3
"The reader will note that I am not the best of us."
The reader notes that one day you'll have to stop pretending, and grudgingly admit that you are.
"the way we’ll know EA is really winning is when the Oxford crew is wholly eclipsed by a new generation of even more ambitious and clear-thinking altruists"
Amen
You have such a way with words.
Mjreard @ 2026-03-07T19:29 (+3)
I thought you might catch that last one. I hope you took it personally.
Michelle_Hutchinson @ 2026-03-09T12:50 (+2)
Better luck next time
Jeffrey Kursonis @ 2026-03-11T15:46 (+10)
“My real issue is that EA in all its forms is still so small.” …as a technical expert I couldn’t agree more.
Movements aren’t gated the way business or fundamentalism are. Unspoken yet de facto elitism and applications for EAGs have kept the EA movement a tiny fraction of what it could be. More funding than any movement has ever known (because science nerds feel comfy here) has kept it alive. In the past, the sense that "Wow, if I wanted to do some good this group might fund me" did launch some good stuff, but far more people hit all the de facto walls and slunk off to business or trad philanthropy.
A renewal, making it great again, will only come with some new ideas and more openness. The idea that cost-effectiveness is the main thing in social entrepreneurship, the very beating heart of ascendant EA, has to accommodate some new friends and be changed by them.
OscarD🔸 @ 2026-03-07T17:16 (+10)
Reasonable if you don't want to publicly go into internecine tensions, but the obvious question seems to be how you see this relating to principles-first EA, which is, on its face, a similar idea.
Mjreard @ 2026-03-08T01:42 (+17)
One tension that CEA laudably attempts to navigate is that EA is actually not self-recommending. There are worlds where dwelling on prioritization and personal-morality questions just isn't that impactful. We may live in such a world given the urgency of addressing transformative AI and other matters.
My read is that CEA feels compelled to take views and allocate resources based on these considerations. In part, it's important for them that users of their programs take jobs or actions from a specific subset of jobs/actions in order to count as "successes" by CEA's lights.
My tack is to really tie myself to the mast regarding getting people to engage with EA ideas for their own sake. We'll pursue this with vigor and be intellectually challenging, but when it comes to what people *do* with these ideas, the chips will fall where they may. I anticipate that I'll pay my impact bills this way, but I'm not maximizing impact. I'm maximizing EA ideas.
Mo Putera @ 2026-03-08T03:31 (+7)
I anticipate that I'll pay my impact bills this way, but I'm not maximizing impact. I'm maximizing EA ideas.
Would you mind saying more? Not a reasoning-transparent justification, more so a sketch of the high-level generators. Wondering if it's along the lines of Richard Ngo's
I think “maximize expected utility while obeying some constraints” looks very different from actually taking non-consequentialist decision procedures seriously. In principle the utility-maximizing decision procedure might not involve thinking about “impact” at all. And this is not even an insane hypothetical, IMO thinking about impact is pretty corrosive to one’s ability to do excellent research for example.
Mjreard @ 2026-03-09T01:07 (+7)
I think your read is basically right. Thinking explicitly and granularly about the direct chain from your actions to last-mile impact and being sensitive to perturbations of that measurement is one area I think many current orgs over-invest in. I believe it is inconsistent with the processes that created those orgs in the first place (which I'm now trying to replicate without much focus on the direct, measurable outputs).
The biggest issue I see is people spending up to 20% of their best hours getting bogged down in metrics and explicit planning when they could be spending much of that time doing things they're excited about which they've done a quick sense-check on.
I think this is one of the great strengths of liberal, big-tent projects. Support plausibly great people all playing to their strengths. Some of them will disappoint and under-perform your hyper-planned model, sure, but the over-performers will more than make up for it. I want to embody this principle in my org and the groups we support.
ElliotTep @ 2026-03-09T02:30 (+5)
Is the point here that you are still ultimately interested in outcomes, but that you think that the current focus on explicitly measuring and project planning hurts more than it helps, and that curiosity and a thriving intellectual scene where people are more willing to run experiments will achieve better outcomes than more explicit attempts to do so?
Mjreard @ 2026-03-09T03:07 (+10)
Yes. I am at-all interested in outcomes such that I will regard myself as having failed a sanity check if very few people go on to do ambitious, impactful work after engaging with my events/programs/groups. But I am very bound to being permissive about what counts here. Thoughtfulness about high impact is the bar, not my EV calculation of impact.
To put it in the form of a critique, I think too many community building programs adopt metrics like "number of participants who go into roles at AIM, MATS, GovAI, etc." and that this is too prescriptive and discourages people from really forming their own world models in an EA context.
My metric is whether I'm impressed with the pushback I get on my takes when I go into these spaces or whether I'm learning new and plausibly very important things about big problems.
ElliotTep @ 2026-03-09T22:22 (+2)
Makes sense. Also: great post.
andiehansen @ 2026-03-10T20:35 (+3)
I liked this, for the most part! It seems useful to push for the ambition necessary to make the most of this time. But there was one thing that seemed confusing when you mentioned EAs defending America if AI and animal welfare weren't a pressing concern.
Or they’d roll up their sleeves to defend America.
If I'm trying to be charitable, maybe this is referring to safeguarding democracy in America? But when I read "roll up their sleeves to defend America," a military career also comes to mind.
However, my immediate impression was to read it as channeling American patriotism or something. I'm guessing the former interpretation might have been more what you were going for?
Without trying to be too political, as a Canadian this first interpretation deflated a lot of the piece for me. If anything, EAs could roll up their sleeves to defend the rest of the world from America right now (if AI and animal welfare weren't priorities).
I really appreciated the vast majority of it, but I wanted to highlight that this sentence was ambiguous.
Mjreard @ 2026-03-10T21:30 (+5)
I mean defending America from Donald Trump and his forces who are currently waging war against America.
Clara Torres Latorre 🔸 @ 2026-03-10T20:43 (+1)
Non-American here.
I read that sentence as rhetorical, like "doing whatever thing is necessary," and I don't see it implying that "defending America" is necessarily even good.
However, if your read is the right one, then I find it off-putting as well.
I would appreciate @Mjreard clarifying what the intent behind that was.
Ben Anderson @ 2026-03-12T02:26 (+1)
This is a truly great post, thank you.