Community building: Is optimizing for “effectiveness” the most effective approach?
By Solal 🔸 @ 2025-07-01T13:52 (+15)
TL;DR: EA optimizes for doing the most with limited resources. I argue that our notion of resources is confusing and should be restricted to EA-dedicated resources. In that paradigm, pure effectiveness optimization (mostly from a community building standpoint) is not a completely obvious choice. Paradoxically, insisting on maximum effectiveness might result in less work done overall, because of the impact of effectiveness standards on the quantity (not just use) of resources.
Thesis: EA community builders (and some other people) should consider whether our high effectiveness standards, while crucial for resource allocation, might be counterproductive when applied to community growth and engagement. The optimal approach likely involves maintaining high standards for resource deployment while being more inclusive in community participation.
(Epistemic status: This post reflects my ongoing thinking about what EA is and should be. I write primarily from a community-building perspective, based on organizing EA events and observing local group dynamics.
I have watched several promising people bounce off EA due to misaligned cause priorities or intellectual gatekeeping. I tried my best to analyze the issue rationally, but some of them were close friends, and it would be hard for me to accept a conclusion such as "They are just not welcome here", which puts me at risk of some motivated reasoning. For that reason, I won't conclude that we should do either this or that, but simply point out a question that I think deserves some thinking!
Also, several claims about "what would happen if..." and community dynamics come from personal experience, not rigorous research. They are to be taken with a grain of salt!)
A note on terminology: When I say "we" in this post, I'm primarily thinking about EA community builders, local group organizers, and those involved in EA outreach and events. These considerations may also apply in some way to career advisors and to individual EAs in their personal interactions, and probably less so to evaluators and effective giving organizations.
The Standard EA Model is "Get the most out of limited resources"
We have limited resources, and many things to do. This elegant idea is one of the cornerstones of EA: we should try to be as effective as possible to get the most impact per resource (money, time, people, careers, tools, political power...). This implies two things: act as effectively as possible, and direct the resources you can’t use by yourself toward the most effective interventions.
This framework has been enormously productive. It's given us rigorous cause prioritization, careful measurement of intervention effectiveness, and a whole vibrant community that actually takes the question "But is this actually the best use of marginal resources?" seriously.
I do want to emphasize how much this approach is in my opinion the right one (or the very best we have) in a context where the model "scarce resources & many things to do" applies. EA has directed hundreds of millions of dollars toward interventions with strong evidence bases, and built a community that consistently pushes its own limits back in pursuit of getting closer to the truth, identifying important, neglected problems, and making the world a better place.
When Optimization Optimizes Away Your Optimizers
What if the bottleneck restricting our activities was not the effectiveness of our resource use, but the amount of resources? We often talk about "limited resources" as some kind of universally available thing for us (as a movement) to use. I think we should shift our attention onto the amount of resources dedicated to EA work instead.
First piece of the puzzle: EA-dedicated resources are a better metric to track than resources in general.
Second piece of the puzzle: the EA community doesn't just evaluate interventions for effectiveness. It also, quite naturally, evaluates people, ideas, and approaches. I have often heard people (community builders, individuals) casually ask or remark:
- Is this person working on something important?
- Is this idea likely to pan out?
- Will this approach have any impact?
- That guy is nice but not aligned much
- This org is aligned on our values but not on EA's principles…
These informal evaluations determine who feels welcome in EA. They shape whose ideas get taken seriously. They influence who decides to stick around and contribute their resources.
(Side note: I want to write another fully fledged post exploring why I think EA's roots in academia and internet spheres have shaped a way of thinking about integration into the community that is tailored to a very specific set of people, but for the time being consider this claim completely unfounded.)
Thinking that our effectiveness standards only affect how we spend resources seems like a big oversight to me: they also affect how many resources we have access to in the first place. Maybe we should now (in the context of this post at least) start to separate fundraisers and evaluators from community builders and individuals in how we apply EA's principles.
Some (fictional) examples
Example 1: The Climate Entrepreneur
A mid-career professional has built a successful company reducing industrial emissions. They're making good money and want to give back. They hear about EA and like the idea of evidence-based giving.
But when they explore our community, they discover climate change ranks low on the priority list. It's important, but less neglected than AI safety or global health. Fewer clear intervention opportunities exist.
Implicit message: Your life's work isn't worthless, but it's not what serious effectiveness-minded people would focus on.
Result: Instead of becoming a major EA donor who might gradually expand their cause portfolio, they direct their philanthropy elsewhere.
Missed opportunity: Some funding, one more person giving weight to EA worldwide, and most of all, a local group gaining one altruistic member.
Example 2: The Evidence-Based Local Organization
A well-run foundation works on education in wealthy countries. They use rigorous evaluation methods, measure their impact carefully, and want to do as much good as possible with their resources. They share EA's core commitments to evidence and effectiveness.
But when they learn about EA, they discover their work doesn't align with our priorities. Their geographical scope is limited. Their cause area isn't top-priority. Their methods aren't perfectly optimized.
Implicit message: You’re not an EA org, you would not understand, keep your people busy while we look somewhere else for the real players.
Result: Instead of becoming an EA-adjacent organization that might gradually adopt EA frameworks or expand into higher-priority areas, it remains disconnected from our community.
Missed opportunity: These organizations might expand their scope, diversify into higher-priority cause areas, or adopt more effective interventions. Even if they kept their original focus, they could become more effective within their domain and increase EA awareness around them.
Why should we care?
While I have a tendency to delve into the abstract side of things, the question here is linked to several quite concrete EA concerns:
Diversity struggles: Despite years of effort, EA remains demographically quite narrow. This may be an effect of effectiveness standards systematically excluding specific groups (maybe because they hold less favored positions in society, and thus have fewer spare resources and/or less power to have an impact, or because of the intellectual gatekeeping that is necessary to maintain such standards).
Innovation from the periphery: I've heard of several high-impact organizations that emerged from outside the core of EA (GiveDirectly, FineMinds...). That such a thing can happen at all is akin to the question of "Why aren't rationalists winning?". And we only know about the organizations that we discovered after they had enough success to be measurable… How many organizations and people could have had tremendous impact but were discouraged or steered away early on, and ended up on a much less impactful trajectory? (Maybe none, but maybe many!)
What to do about all of this?
How should different parts of the EA ecosystem respond?
This post does not try to tackle in any way the question of widening the tent. Nor is it related to "overall-effectiveness versus value-effectiveness". The question here is not philosophical; it is purely an instrumental wondering about whether our choices are indeed maximizing whatever we want to maximize (overall effectiveness for many EAs, value-effectiveness for some others, from what I have gathered).
To delve deeper into the question, one should consider further arguments about:
The dilution concern: This is real, though I think it can be addressed through careful community structure rather than exclusion. For instance, having clear "core" and "exploratory" programming could maintain high standards where needed while creating inclusive entry points.
Opportunity costs: These exist, and may or may not be offset by the resources new members bring.
Different standards for different functions: Perhaps resource allocators should maintain very high effectiveness standards, while community builders and event organizers experiment with more inclusive approaches.
I am under the impression that many ideas which seem very broadly accepted in the EA community come from long posts or papers written a few years ago by people from the early days of EA. Those thoughts have tremendous value, and we should of course continue to read and listen to them. They were, however, often written under the assumption that EA was a unified movement, and most often from the perspective of someone living in an area with a high concentration of EAs (England or the United States).
EA has grown, and now seems more like a collection of friendly groups and organizations than a movement being carefully crafted from the bottom up by philosophers. To that extent, I think that most arguments of type "EA should be more accessible", "EA should...", or even "EA...", are probably built on the wrong assumption that the denomination "EA" is somehow carving reality at its joints.
In the same way that most humans are similar, and yet 80,000 Hours recommends taking a lot of time to think about your specific path, fit, and so on, each person (at least each organizer or decision maker) should take some time to build their own mental model of what EA is and should be!
I would be happy to hear your thoughts on all of this, and any feedback is welcome. In particular, I would love to know whether what I write about is common knowledge or has already been written somewhere (if it is, please tell me where to find it!) or if it may have made you update a bit on anything.
Many thanks to Capucine Griot for their precious feedback on a previous draft of this post!
Appendix A: Concrete suggestions under such uncertainty?
I have been wanting to post this for a long time, but I couldn't bring myself to send it without proper constructive suggestions. It all clicked into place during EAG London 2025, when someone (can't remember who, but thanks!) told me "It's not useful to get more people into AI Safety, but yes, you should get into AI Safety". (They meant that in their opinion, there were already almost too many junior researchers in AI Safety compared to the infrastructure to let them upskill, making any effort to "bring even more" not that impactful, but that working as an AI safety researcher was still a potentially very impactful thing to do).
Our communication should not change: trying to bring people to work on the most effective interventions and to focus on the highest-scoring causes under the ITN framework is a very strong priority. However, let's not forget that when a cause is not neglected, it is thanks to the number of people who did choose that path! While adding one more person is not very impactful at the margin, every single person who makes a non-neglected cause non-neglected is very valuable! (I adhere to Shapley values.)
If I had to cautiously give specific advice for different actors:
For community builders:
- Consider a "gradient" approach—high-effectiveness core programming alongside more inclusive entry points
- Maybe play around with not immediately redirecting people away from their current interests; help them be more effective within their domain first
- Remember that it is called effective altruism, not altruistic effectivism!
For individual EAs in conversations:
- When someone expresses interest in a non-priority cause, engage with "How could you be more effective in that area?" before "Have you considered this other cause?"
- Validate the altruistic motivation before discussing cause prioritization
- Try to restrain the urge to call out "heavy-tailed distribution!" in every sentence. Heavy-tailed distributions are a thing to be aware of, but they are also a self-fulfilling prophecy!
For new EA groups:
- Make sure that people feel they belong because of their altruistic drive and willingness to be more effective, instead of feeling that they could belong only if they agreed to change everything.
- This is actually one of the specific situations in which I am pretty convinced that being too concerned with effectiveness reduces the overall impact. Groups that stay too small do not benefit from the motivating group dynamics that groups larger than a certain threshold do.
For established EA groups:
- Maintain higher effectiveness standards for core activities
- Host broad discussion topics for newcomers ("What is doing good?" rather than "Why is doing good better done very far from here?") and try to be genuinely enthusiastic about most net-positive ideas, not just the most effective ones.
For career advisors:
- For some people, pointing out the most impactful accessible career path is the right thing to do. For others, pointing to something that is too far away will most likely drive them away... See Keeping everyone motivated (a case for effective careers) for a more detailed case.
- "How to be more effective in your current role while exploring EA ideas" vs. "Switch to AI safety immediately"
Appendix B: A simple model of the tradeoff
Note: This isn't meant to be a rigorous model—just a way to clarify my thoughts.
Let me use some simple notation to illustrate why maximizing "effectiveness" might not maximize total impact:
- R = Resources available to EA (money, time, people)
- E = Effectiveness standard (how efficiently we use each unit of resource)
- W = Total work accomplished (what we actually care about)
By definition: W = R × E
The standard EA approach assumes we should maximize E (effectiveness) to maximize W (total work). This makes perfect sense if R is fixed—if we have a set amount of resources, using them more efficiently creates more impact.
But here's the key insight: R isn't fixed.
When we raise our effectiveness standards E:
- We filter out "less effective" donors and organizations
- We make EA less accessible to people whose work doesn't perfectly align
- We shrink the pool of people who feel welcome contributing
In other words, increasing E may tend to decrease R.
This relationship likely varies by context:
- For a local EA group trying to grow, being too selective might severely limit growth (strong negative relationship)
- For a specialized AI safety research organization, high standards might be essential (weak or no negative relationship)
- For EA in a new country/culture, lower initial standards might be necessary to establish presence
The optimal balance depends on:
- Stage of development (new vs. established groups)
- Geographic context (EA hubs vs. new regions)
- Type of organization (community groups vs. direct work orgs)
- Available alternatives (are there other communities these people could join?)
This creates a tradeoff. Raising standards by a small amount:
- Increases W through the direct effect (higher E)
- Decreases W through the indirect effect (lower R)
The net effect could go either way! We might be at a point where lowering our standards slightly would actually increase total impact by bringing in more resources than we lose in effectiveness.
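One way to make "could go either way" slightly more precise (this is my own back-of-the-envelope addition, treating R as a smooth, decreasing function of E, which is of course a big simplification):

$$\frac{dW}{dE} = \frac{d}{dE}\bigl(R(E)\cdot E\bigr) = \underbrace{R(E)}_{\text{direct gain}} + \underbrace{E\,\frac{dR}{dE}}_{\text{resource loss}}$$

Raising standards increases total work only while the first (positive) term outweighs the second (negative) one; equivalently, only while the proportional loss of resources is smaller than the proportional gain in effectiveness.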
The "optimal" level of effectiveness standards is where these two effects balance out—where raising standards further would lose us more in resources than we'd gain in effectiveness. I really do not know where we are on this curve. I would love to get some insight or to find time to flesh it out!
To sum up the question: with more precisely defined terms and more data, we might confirm the validity of this relation, and in an ideal world we would dedicate some resources to finding a definitive answer on whether the net effect of raising standards is positive or negative for different parts of the EA ecosystem. 🙂
The "optimal" level of effectiveness standards is where these two effects balance out—where raising standards further would lose us more in resources than we'd gain in effectiveness. I really do not know, I would love to get some insight or to find time to flesh it out!
Solal 🔸 @ 2025-07-01T13:54 (+5)
A few minutes after hitting publish, I stumbled on this post from 2022, which tackles approximately the same issues, lines of thought, and arguments.
https://forum.effectivealtruism.org/posts/udsATFrQtc34iKs2c/doing-more-good-vs-doing-the-most-good-possible
Let's say that mine is simply a re-release!
ethai @ 2025-07-01T17:58 (+2)
I think people should keep re-releasing this idea because this community dynamic very much still exists! I've also seen extremely smart and motivated friends engage with and then "bounce off" EA because of this.