You can just leave EA
By Holly Elmore ⏸️ 🔸 @ 2025-12-24T07:25 (+16)
People have been reacting to my last post with #notallEAs, which, totally, but… have you considered that if you have to distinguish yourself from other EAs then maybe the label doesn’t describe you well?
I fought for EA to mean something simpler— just someone who 1. figures out the best way to improve the world and 2. does it— but I lost. EA became focused on careers and technical AI Safety. Part of that was that it kind of stopped being a thing everyone could participate in in their own way. I’m beyond thrilled that Giving What We Can has gotten more confident in itself again, but for a while there even giving money was being treated as deprecated in the core community. If you weren’t going to have an EA career, you could no longer be a real insider.
Again, this is not how I wanted it. Don’t be mad at me for just describing what happened. I wanted EA to nurture fractional participation at every level through teachings and community support, more focused on the middle of the funnel. It started much more that way. But tastes changed, and circa 2017 CEA officially changed its recruitment model to focus on making “core EAs”, and EA messaging started being more about recruiting people into a handful of careers. It’s in the second edition of the EA handbook. It was openly discussed, roughly coincident with the switch to longtermism.
Whether everyone reading knows it or not, there is a core community that calls the shots about EA. Even if you run your own group outside of an EA hub, these people tell you what’s effective and worth doing by providing the materials, controlling the money, and setting the trends. In early EA, lots of people did their own research and compared notes. Now that’s less common and there are think tanks (like Rethink Priorities, where I used to work) where Open Phil dictates what research to do and whether it can be shared. (Trying to please OP was a huge concern at RP, and it exerted a huge psychic influence even on me that affected how clearly I could think for myself.)
So what I’m saying is, if you’re not at the top calling the shots, maybe you just shouldn’t cast your lot with them. Because they are the ones controlling what “EA” means, perks and liabilities alike. If you’re all one people when it comes time to get benefits, how can everyone be distinct when it comes time to share responsibility for EA problems? Every time I engage on this I come upon a bailey of smiling people who love to identify as the same thing, only to have them retreat into the motte of “not ALL” by the time they’ve finished my post. If you don’t accept the critique and claim you don’t recognize it, maybe you also don’t really need or accept the label.
You value the friends? You can just be friends. If that doesn’t work without adopting the label, they aren’t good friends.
You like the online conversation? You can just talk.
You want an intellectual community? You don’t have to be in communion with them.
You want funding? This one’s tough but you can’t let it dictate your identity to you. Accepting money creates a bond, which you need to accept responsibility for. If you can’t, maybe it’s not worth getting EA money.
You want the EA community to be what you wish it was? Yeah, I did too. But you have to take a clear-eyed look at what it is. And if you take part in it and bear the name, you have to accept the good and the bad of how it actually is.
The last thing I wanted to do was leave EA. I wanted it to be the community it was at the beginning, and I had a lot of influence, but I couldn’t dictate what EA “really” meant in the face of the actual people and choices making up the community. I stuck around for a long time arguing that my version of EA was how it should be, and insisting that that’s how it was for me even if others were doing it differently. When I was forced to leave EA to pursue PauseAI, I could admit to myself that I was co-signing the bad stuff by being there and lending my name and work, and it was shitty of me to think I could shirk responsibility just because I wanted EA to be something else.
So, idk, if you think I’m wrong because you’re an EA and I’m not describing you— in what sense are you an EA? You can always leave.
NickLaing @ 2025-12-24T08:31 (+35)
Hey, I'm wondering what you mean by "leave EA" exactly here. First, it's not clear to me what you mean practically by "leave". Second, FWIW I call myself an effective altruist and I don't feel like I need to sign up to the extent/standards you do to carry that label.
I call myself an EA because I'm committed to "Finding the best way to help others" and "Turning good intentions into impact" (love these from the CEA website). In addition I've been impressed by the character and heart of EAs I have met who do Global Development things, and I appreciate the forum development discourse (although there is less material year on year).
I feel like people will have diverse reasons for identifying as an "EA" from your nice list, whether that's community, the mindset, the online discourse or a combination of them all. Some might have vaguer reasons which is all good too.
Also I suspect I'm just in far less deep than you were here, so it's harder for me to identify with your experience. I can also imagine the AI/GCR community and the disagreements within it are more fraught than within GHD.
Geoffrey Miller @ 2025-12-24T23:22 (+18)
Holly --
I think the frustrating thing here, for you and me, is that, compared to its AI safety fiascos, EA did so much soul-searching after the Sam Bankman-Fried fiasco with the FTX fraud in 2022. We took the SBF/FTX debacle seriously as a failure of EA people, principles, judgment, mentorship, etc. We acknowledged that it hurt EA's public reputation, and we tried to identify ways to avoid making the same catastrophic mistakes again.
But as far as I've seen, EA has done very little soul-searching for its complicity in helping to launch OpenAI, and then in helping to launch Anthropic -- both of which have proven to be far, far less committed to serious AI safety ethics than they'd promised, and far less than we'd hoped.
In my view, accelerating the development of AGI, by giving the EA seal of approval to first OpenAI and then Anthropic, has done far, far more damage to humanity's likelihood of survival than the FTX fiasco ever did. But with so many EAs going on to get lucrative jobs at OpenAI and Anthropic, and with 80,000 Hours delighted to host such job ads, EA as a career-advancement movement is locked into the belief that 'technical AI safety research' within 'frontier AI labs' is a far more valuable use of bright young people's talents than merely promoting grass-roots AI safety advocacy.
Let me know if that captures any of your frustration. It might help EAs understand why this double standard -- taking huge responsibility for SBF/FTX turning reckless and evil, but taking virtually no responsibility for OpenAI/Anthropic turning reckless and evil -- is so grating to you (and me).
Holly Elmore ⏸️ 🔸 @ 2025-12-25T01:50 (+7)
I thought EA was too eager to accept fault for a few people committing financial crimes out of their sight. The average EA actually is complicit in the safetywashing of OpenAI and Anthropic! Maybe that’s why they don’t want to think about it…
Kestrel🔸 @ 2025-12-24T10:41 (+6)
So I think the problem (?) is that nobody donates to EA infrastructure for the purpose of cultivating a nice community. They donate to EA infrastructure almost exclusively to cultivate impactful actions (that is, the actions they want to see).
I mean, I sure would like it if people donated to cultivate a nice community. However, I don't think I'm owed that from an explicitly EA funding pot. Why should EA-aligned donors spend cash on me and not on e.g. malaria prevention? Heck, I'm an EA-aligned donor, and I spend cash on malaria prevention that could have been spent on me.
Pato @ 2025-12-30T09:34 (+3)
When a lot of people (like me) say “#notallEAs” they are probably not saying it anecdotally to refer to themselves, as you are implying. They’re just pointing to the overlap. So I think that part of the post is misguided.
Even if the last question is misguided: if I, a supporter of PauseAI, were to consider myself an EA, why would that be?
There are several possible reasons: I changed my career plans because of it; I work and have been working thanks to EA funding (by working at PauseAI lmao); I’ve been in the EA hotel for a bunch of months and plan to go back to it or the PauseAI Hotel (which is right next to it); I’ve attended conferences, received grants, and read a couple of books; I check out the forum here and there, agree with the philosophy, agree with the median EA in models of the world and ethics more than with any other median X; I plan to donate to EA orgs soon and want to keep engaging with the community, etc.
The list seems pretty big in contrast to “but the core funders and leaders aren’t supporting the advocacy for an AI moratorium.”
—
I also don’t think any of your arguments are good enough to justify disengaging from the EA movement if a specific person agrees with the philosophy and has only a handful of disagreements or problems with the median EA member/movement. This applies to Rationalist spaces too, to a certain extent.
It’s not like there are better alternatives to it for people who are trying to figure out important things about the world and how to improve it.
Even if you think a lot of them have a huge bias in some specific regard, you can still interact with them with that in mind, and you are still less likely to find other biases in them than in any other big community by a large margin. You’re still much more likely to find people who are very knowledgeable + kind + smart + dedicated to doing good in EA than in any other space that I know of. People who can change your mind, fund part of your work, or help you on your path to having a better impact in the world in other ways.
It’s really good to be mindful of the ways some groups have some control over the community, and of their potential biases and personal interests. But if the response to that is disengaging from the community instead of defending your disagreements here and there, then you’re giving them more power.
PabloAMC 🔸 @ 2025-12-24T17:54 (+2)
I fought for EA to mean something simpler— just someone who 1. Figured out the best way to improve the world and 2. Does it— but I lost.
For what it is worth, this is not how I feel in my local EA community. There are people leading effective giving organisations and others who just go on with their usual lives with trial pledges, and I feel we are fairly non-judgemental.