Community Polls for the Community
By Will Aldred @ 2025-05-01T14:52 (+45)
The Meta Coordination Forum (MCF) is a place where EA leaders are polled on matters of EA community strategy. I thought it could be fun (and interesting) to run these same polls on EAs at large.[1]
Note: I link to the corresponding MCF results throughout this post, but I recommend readers don’t look at those until after voting themselves, to avoid anchoring.
Edit (May 3rd): Looks like all but the first two polls are now closed. I thought I’d set them to be open for longer, but clearly I messed up. Sorry about that!
(MCF results; see also the AIS field-building survey results)
(MCF results; see also the AIS field-building survey results)
(MCF results; see also the AIS field-building survey results)
I’m sneaking in this meta-level poll to close things out. For previous discussion, see this thread; I’m defining ‘independent EAs’ as EAs outside the Open Phil umbrella / EAs who fall outside the existing invitation criteria.
The idea, in my mind, is that these independent EAs would be invited for having a track record of contributing (e.g., on this forum) to EA community discussion. The selection could be based on karma, or made by a panel, or (probably my favourite; h/t @Joseph_Chu) via election by the EA Forum community, in a kinda similar way to how we vote in debate weeks.
Benjamin M. @ 2025-05-02T02:18 (+12)
There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
Possible candidates:
- We're severely underrating tractability and importance (specifically in terms of sentience) for wild animals
- We're severely underrating neglectedness (and maybe some other criteria?) for improving data collection in LMICs
- We're severely underrating tractability and neglectedness for some category of political interventions
- Something's very off in our model of AI ethics (in the general sense, including AI welfare)
- We're severely underrating tractability of nuclear security-adjacent topics
- There's something wrong with the usual EA causes that makes them ineffective, so we get left with more normal causes
- We have factually wrong beliefs about the outcome of some sort of process of major political change (communism? anarchism? world government?)
None of these strike me as super likely, but combining them all you still get an okay chance.
NickLaing @ 2025-05-01T16:46 (+10)
I'll straight up say I think figureheads and public leaders can be huge for movement growth, even though there are risks. When Greta was front and center of the climate movement I felt the momentum was huge, and even when she decided to step back I think the momentum stall was really noticeable.
I liked having Will MacAskill to look to as a leader and high-profile example with his giving style.
I see Rutger Bregman and the attention he is getting in the media.
It might not be a comfortable thing, but I think movements can benefit greatly from figureheads, although obviously there are risks for them, and for the movement, if they fail/fall for whatever reason.
DavidNash @ 2025-05-02T05:56 (+4)
Is there any data to back up the environmental movement growing and stalling around those times? It may have got a lot of media attention but it seems like the real gains on climate change were made by people who have been working in clean tech for decades and politicians that were already lobbying for various policies in the 2000s/2010s.
NickLaing @ 2025-05-02T08:00 (+2)
I would say the whole climate movement received a huge boost through Greta leading youth protests and being super visible, including:
- Climate getting higher on the voting agenda, pushing governments to make commitments
- More funding for research and actually implementing that clean tech
- Those lobbyists you talk of having more of a wind behind them
Of course I think we can only attribute a tiny percentage of climate gains in that period to her being a figurehead front and center, but I think things have become harder since without an obvious person to rally behind.
And yes this is super subjective, just my opinion and no, I doubt there's any data to back that up unfortunately.
Will Howard🔹 @ 2025-05-02T12:40 (+6)
Some invitees to the Meta Coordination Forum (maybe like 3 out of the ~30) should be ‘independent’ EAs
This is an interesting idea that I've never heard articulated before. Seems good in principle to have some people with fewer vested interests (or at least interests different from looking after their own org).
MichaelDickens @ 2025-05-01T17:03 (+5)
The case for doing EA community building hinges on having significant probability on ‘long’ (>2040) AI timelines
>2040, no. >2030, yes.
MichaelDickens @ 2025-05-01T17:06 (+4)
Some invitees to the Meta Coordination Forum (maybe like 3 out of the ~30) should be ‘independent’ EAs
Independent as in not affiliated with any org? If that's what it means, then I probably agree.
MichaelDickens @ 2025-05-01T17:05 (+4)
There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
My top picks for small causes that should maybe receive >20% of resources:
- wild animal welfare
- post-AGI non-human welfare
Jason @ 2025-05-01T15:49 (+4)
There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
but no, I don't know what it is (or have a clear and viable plan for finding it)
GideonF @ 2025-05-02T10:03 (+3)
There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
My guess is that pesticides' impact on insect welfare probably falls into this category.
JoA🔸 @ 2025-05-07T07:35 (+1)
I thought of insect farming, but this is definitely one too!
MichaelDickens @ 2025-05-01T17:01 (+2)
We should try to make some EA sentiments and principles (e.g., scope sensitivity, thinking hard about ethics) a core part of the AI safety field
On a literal interpretation of this statement, I disagree, because I don't think trying to inject those principles will be cost-effective. But I do think people should adopt those principles in AI safety (and also in every other cause area).
Jason @ 2025-05-01T15:26 (+2)
Assuming there will continue to be three EAG-like conferences each year, these should all be replaced by conferences framed around specific cause areas/subtopics rather than about EA in general (e.g., by having two conferences on x-risk or AI-risk and a third one on GHW/FAW)
Some but not all should be replaced (low confidence)
Dr Kassim @ 2025-05-02T01:30 (+1)
Replacing general EA conferences with cause-area-specific ones would enable deeper collaboration, better-aligned networking, and more focused problem-solving by concentrating expertise and attention on the most pressing challenges within each field.
Dr Kassim @ 2025-05-02T01:28 (+1)
Field-building in areas like AI safety or global health is important, but without continued investment in building Effective Altruism as a movement and framework, we risk losing the moral clarity, intellectual rigor, and talent pipeline that make those fields effective in the first place.
JoA🔸 @ 2025-05-01T19:11 (+1)
We should promote AI safety ideas more than other EA ideas
AI Safety work is likely to be extremely important, but "other EA ideas" is too broad for me to agree. It would mean, for example, that it's more important than the "three radical ideas" and I have trouble agreeing with that.
JoA🔸 @ 2025-05-01T19:08 (+1)
We should focus more on building particular fields (AI safety, effective global health, etc.) than building EA
I don't have very specific arguments. EA community-building seems valuable, but I do think that work on specific causes can be interesting and scalable (for example, Hive, AI for Animals, or the Estivales de la question animale in France all concretely seem like good ways to draw new individuals into the EA/EA-adjacent community).
JoA🔸 @ 2025-05-01T19:04 (+1)
Most AI safety outreach should be done without presenting EA ideas or assuming EA frameworks
Agree "on principle", clueless (and concerned) on consequences.
From my superficial understanding of the current psychological research on EA (by Caviola and Althaus), a lot of core EA ideas are unlikely to really resonate with the majority of individuals, while the case for building safer AI seems to have broader appeal. Nonetheless, I do worry that AI Safety with a lack of EA ideas involved is more likely to favor an ethics of survival rather than a welfarist ethic, and is unlikely to take S-risks / digital sentience into account, so it also seems possible that scaling in that way could have very negative outcomes.
JoA🔸 @ 2025-05-01T19:00 (+1)
We should be trying to accelerate the EA community and brand’s current trajectory (i.e., ‘rowing’) versus trying to course-correct the current trajectory (i.e., ‘steering’)
Not a very developed objection, but "steering" seems to lack tractability to me, so I'd rather see the EA community scale to an extent, even though it could be perfected. Things like GWWC aiming to increase the number of pledge takers, or CEA organizing more medium-scale summits, seem more tractable to me, and potentially quite good.
JoA🔸 @ 2025-05-01T18:58 (+1)
The case for doing EA community building hinges on having significant probability on ‘long’ (>2040) AI timelines
Not sure it's okay to say this, but I simply agree with Michael Dickens on this. If we expect to have AGI by 2038, or even say, 2033 (8 years from now!), it seems like EA community building could be very important. I know people who went full-time into AI safety / governance work less than one year after discovering the issue through EA.
JoA🔸 @ 2025-05-01T18:55 (+1)
There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
Agree depending on what counts as "little attention". Wild animal welfare, perhaps S-risks, but neither of those are completely neglected.
I'd also be tempted to say "limiting the development of insect farming" as it seems likely to be very cost-effective, but I don't think the field could currently absorb that much funding.
Yarrow @ 2025-05-01T16:51 (+1)
We should promote AI safety ideas more than other EA ideas
AGI is probably a long time away. No one knows when AGI will be created. No one knows how to create AGI. AGI safety is such a vague, theoretical concept that there’s essentially nothing you can do about it today or in the near future.
Rían O'M @ 2025-05-01T16:13 (+1)
Assuming there will continue to be three EAG-like conferences each year, these should all be replaced by conferences framed around specific cause areas/subtopics rather than about EA in general (e.g., by having two conferences on x-risk or AI-risk and a third one on GHW/FAW)
Is this not already the case? I.e. don't the major EAGs already focus on specific cause areas?