Should the EA community be cause-first or member-first?
By EdoArad @ 2023-05-29T15:50 (+206)
It's really hard to do community building well. Opinions on strategy and vision vary a lot, and we don't yet know enough about what actually works and how well. Here, I'll suggest one axis of community-building strategy which helped me clarify and compare some contrasting opinions.[1]
Cause-first
Will MacAskill's proposed Definition of Effective Altruism is composed of[2]:
- An overarching effort to figure out what the best opportunities to do good are.
- A community of people that work to bring more resources to these opportunities, or work on these directly.
This suggests a "cause-first" community-building strategy, where the main goal for community builders is to get more manpower into the top cause areas. Communities are measured by the total impact produced directly through the people they engage with. Communities try to find the most promising people, persuade them to work on top causes, and empower them to do so well.
CEA's definition and strategy seem to be mostly along these lines:
Effective altruism is a project that aims to find the best ways to help others, and put them into practice.
It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.
and
Our mission is to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.
Member-first
Let's try out a different definition for the EA community, taken from CEA's guiding principles[3]:
What is the effective altruism community?
The effective altruism community is a global community of people who care deeply about the world, make helping others a significant part of their lives, and use evidence and reason to figure out how best to do so.
This, to me, suggests a subtly different vision and strategy for the community: one that is, first of all, focused on the people who live by EA principles. Such a "member-first" strategy could have supporting infrastructure focused on helping the individuals involved live according to these principles, and an outreach/growth ecosystem that works to make the principles of EA more universal[4][5].
What's the difference?
I think this dimension has important effects on the value of the community, and that both local and global community-building strategies should be aware of the tradeoffs between the two.
I'll list some examples and caricatures of the distinction between the two, in no particular order, to give a more intuitive grasp of how these strategies differ:
| Leaning cause-first | Leaning member-first |
|---|---|
| Keep EA Small and Weird | Big Tent EA |
| Current EA Handbook (focus on introducing major causes) | 2015's EA Handbook (focus on core EA principles) |
| 80,000 Hours | Probably Good |
| Wants more people doing high-quality AI safety work, regardless of their acceptance of EA principles | Wants more people deeply understanding and accepting EA principles, regardless of what they actually work on or donate to |
| Targeted outreach to students at high-ranking universities | Broad outreach with diverse messaging |
| Encourages people to change occupations to focus on the world's most pressing problems | Encourages people to use the tools and principles of EA to do more good in their current trajectory |
| Risk of people not finding useful ways to contribute to top causes | Risk of too few people wanting to contribute to the world's top causes |
| The community as a whole leads by example, taking in-depth prioritization research with the proper seriousness | Each individual focuses more on how to implement EA principles in their own life, taking their personal worldview and situation into account |
| Community members delegate to high-quality research and think less for themselves, but more people end up working on higher-impact causes | Community members think for themselves, which improves their ability to do more good, but they make more mistakes |
| The case of the missing cause prioritization research, Nobody's on the ball on AGI alignment, and many amazing object-level posts making progress on particular causes | The case against "EA cause areas", EA is three radical ideas I want to protect, "Big tent" effective altruism is very important (particularly right now), and many posts where people share their own decisions and dilemmas |
| ... | ... |
Personal takeaways
I think the EA community is leaning toward "cause-first" as the main overarching strategy. That could be the correct call. For example, my guess is that a lot of EA's success in promoting highly neglected causes[6] was due to community builders and community-focused organizations putting a large focus on spreading the relevant ideas to many promising people and helping them work on these areas.
However, there are important downsides to the "cause-first" approach, such as a possible lock-in of main causes and less diversification in the community. Many problems with the EA community are possibly explained by this decision.
It is a decision. For example, EA Israel, particularly as led by @GidiKadosh, has focused more on the "member-first" approach. This also has downsides: for instance, only in the past year or so have we really started building a network of people working in AI safety, and we are very weak on the animal welfare front.
I'm not sure which approach is best, and very likely we can have the best of both worlds most of the time. However, I am pretty sure that being more mindful of this particular dimension of community building is important, and I hope this post is a small but helpful step toward understanding how to do community building better.
Thanks to the many people I've met at EAG and discussed this topic with! I think that crystallizing this idea was one of the key outcomes of the conference for me.
- ^
I've made the two main examples a bit extreme to make the distinction clearer, but most actual opinions are a blend of the two.
- ^
I've taken some liberty with paraphrasing the original definition to make my claims clearer. This example doesn't mean that Will MacAskill is a proponent of such a "cause-first" strategy.
- ^
These haven't been updated much since 2018, so I'm not sure how representative they are. Anyway, again, I'm using this definition to articulate a possible strategy.
- ^
By this, I mean a future where principles very close to the current main "tenets" of EA are widespread and commonsense.
- ^
Maybe the focus on "making the principles of EA more universal" is more important than the focus on the community, and this section should be called something like "ideas-first". I now think these two notions should be distinguished, as they represent different goals and strategies, but I'll leave it to other people (maybe future Edo) to articulate this clearly if this post proves useful.
- ^
Say, x-risks, wild-animal suffering, and empirically-supported GH&D interventions.
Will Aldred @ 2023-05-29T18:16 (+32)
there are important downsides to the “cause-first” approach, such as a possible lock-in of main causes
I think this is a legitimate concern, and I’m glad you point to it. An alternative framing is lock-out of potentially very impactful causes. Dynamics of lock-out, as I see it, include:
- EA selecting for people interested in the already-established causes
- Social status gradients within EA pushing people toward the highest-regarded causes, like AI safety[1]
- EAs working in an already-established cause having personal and career-related incentives to ensure that their cause is kept as a top EA priority
A recent shortform by Caleb Parikh, discussing the specific case of digital sentience work, feels related. In Caleb’s words:
I think aspects of EA that make me more sad is that there seems to be a few extremely important issues on an impartial welfarist view that don’t seem to get much attention at all, despite having been identified at some point by some EAs.
- ^
Personal anecdote: Part of the reason, if I'm to be honest with myself, for my move from nuclear weapons risk research to AI strategy/governance is that it became increasingly socially difficult to be an EA working on nuclear risk. (In my sphere, at least.) Many of my conversations with other EAs, even in non-work situations and even with me trying to avoid this conversation area, turned into me having to defend my not focusing on AI risk, on pain of being seen as "not getting it".
Vaidehi Agarwalla @ 2023-05-30T14:05 (+25)
- Social status gradients within EA pushing people toward the highest-regarded causes, like AI safety.[1]
I think this is relatively underdiscussed and important. I previously wrote about the availability bias in EA job hunting and have anecdotally seen many examples of this, both in terms of social pressures and norms, but also just the difficulty of forging your own path vs. sticking to the "defaults". It's simply easier to go for EA opportunities where you have existing networks, and there are additionally several monetary, status, and social rewards for pursuing these careers.
I think it's sometimes hard for people to decouple these when making career decisions (e.g. did you take the job because it's your best option, or because it's a stable job which people think is high status?).
Caveats before I begin:
- I think it's really good for people who need to (e.g. those from low-SES backgrounds) to take financial security into consideration when making important career decisions. But I think this community also has a lot of privileged people who could afford to be a little more risk-taking.
- I don't think it's bad that these programs and resources exist - I'm excited that they exist. But we need to acknowledge how they affect the EA ecosystem. I expect the top pushback will be the standard one: that if you have very short timelines, other considerations simply don't matter if you do an E(V) calculation.
- I think that people should take more ownership of exploring other paths and trying difficult things than they currently do, but I also think it's important to consider the ecosystem impacts and how they can create lock-in effects on certain causes.
- These projects exist for a reason - the longtermist space is less funding constrained than the non-longtermist one, it's a newer field, and so many of the opportunities available are field building ones.
Here are some concrete examples of the upskilling opportunities & incentives present in the longtermist (more specifically, x-risk and AIS) space in the last 12-24 months, with comparisons of some other options and how they stack up:
(written quickly off the top of my head; I expect some specific examples may be wrong in details or exact scope. If you can think of counter-examples, please let me know!)
- Career advising resources:
- 80K has been the key career resource for over 10 years, and they have primarily invested resources in expanding their LT career profiles, resources & advice (without a robust alternative existing for several years).
- 80K made a call to get others interested in various aspects of career advising they are not covering, and posted about it in 2020, 2021, and 2022, but (as far as I can tell) with limited traction.
- There are some other career orgs - Animal Advocacy Careers and Probably Good - but they are at early stages and still ramping up (even in 2023).
- Career funding / upskilling opportunities:
- There are the Century Fellowship & early-career funding for AI/bio, & Horizon for longtermist policy careers (there is nothing similar for any other cause AFAIK). These are 1-2 year, open-ended funding opportunities. (There is the Charity Entrepreneurship incubator, which mostly funds neartermist and meta orgs and accepts about 20 applicants per round (historically one round per year; from 2023 there will be 2 rounds per year).)
- When Future Fund was running (and LTFF has also done this), there were several opportunities for people interested in AI safety (possibly other LT causes too; my guess is the bulk was AIS) to visit the Bay for the summer, do career transition grants, and so on (there was no equivalent for other causes).
- Since 2021, we now have multiple programs to skill up in AI and other x-risks (AGISF & the biosecurity program from BlueDot, SERI MATS, various other ERIx summer internships). (Somewhat similar programs with fewer resources are the alt-proteins fellowship from BlueDot, a China-based & Southeast Asia-based farmed animal fellowship in 2022, and AAC's programming.)
- There are paid general LT intro programs like the Global Challenges Project retreats, Atlas Fellowship, and Non-trivial. (There is the intro VP program, community retreats organized by some local groups, & LEAF, which have less funding / monetary compensation.)
- There are now several dedicated AIS centers at various universities (SERI @ Stanford, HAIST @ Harvard, CBAI @ Harvard/MIT) and a few x-risk-focused ones (ERA @ Cambridge (?), CHERI in Switzerland). As far as I know, there are no such centers for other causes (even non-AI x-risk causes). These centers are new, but can provide better-quality advice, resources, and guidance for pursuing these career paths over others.
- Networking: This seems roughly equal.
- The SERI conference has run since 2021 (there is EA Global and several EAGx's per year, but no dedicated conferences for other causes).
- Funding for new community projects
- The bulk (90%) of EA movement-building funding from OP comes from the longtermist team, and most university EA group funding is from the longtermist team. I'd love to know more about how those groups and projects are evaluated and how much funding ends up going to more principles-first community building, as opposed to cause-specific work.
- Most of OP's neartermist granting has gone towards effective giving (because it has the highest ROI)
- There are even incentives for infrastructure providers (e.g. Good Impressions, cFactual, EV, Rethink, etc.) to primarily support the longtermist ecosystem, as that's where the funding is. (There are a few meta orgs supporting the animal space, such as AAC, Good Growth, and 2 CE-incubated orgs - Animal Ask & Mission Motor.)
- Career exploration grants:
- At various points when Future Fund was running, there were lots of small grants for folks to spend time in the Bay (link), do career exploration, etc. The LTFF has also given x-risk grants that are somewhat similar (as far as I know, the EAIF and others have not given more generic career exploration grants, or grants for other causes).
MarcusAbramovitch @ 2023-06-02T17:26 (+26)
Thanks for writing this. After reading this, I want EA to be even more "cause-first". One of the things I worry about for EA is that it becomes a fairly diffuse "member-first" movement, not unlike a religious group that comes together and supports each other and believes in some common doctrines but, at the end of the day, doesn't accomplish much.
I look at EA now and am nothing short of stunned at how much it is accomplishing. Not in dollars spent, but in stuff done. The EA community was at the forefront of pushing AI safety into the mainstream. It creates new charities every year. It's responsible for a lot of wins for animals. It's responsible for saving hundreds of thousands of lives. It's about the only place out there that measures charities, and does so with a lot of rigor. It's produced countless reports that actually change what gets worked on. And it's good at changing what it works on.
I think caring about its members is an instrumental goal to caring about causes. The members do, after all, work on the causes. EA does recognize this though and with notable exceptions, I think it does a very good job of it.
Amber Dawn @ 2023-05-29T17:36 (+20)
Thanks Edo, I really like this distinction. In particular, your table helped me understand a bunch of seemingly-unrelated disagreements I tend to have with "mainstream EA" - I tend to lean member-first (for reasons that maybe I'll write about one day).
GidiKadosh @ 2023-05-30T07:08 (+17)
Thank you for writing this post. Even without agreeing with the exact distinction as it's made in the table, I think this is a good framing for an important problem. Specifically, I think the movement underestimates the significance of the mismatch between how it presents itself and its exact focus.
The way I think about it is:
(1) An individual encounters the movement and understands that the value they're going to gain from it is X → (2) they decide to get involved because they want X → (3) it takes quite a while (months to years, depending on their involvement) to understand that the movement actually does Y, OR they see they don't get the value X they expected → (4) there's a considerable chance they're not as interested in Y and don't get as involved as they originally thought they would.
It means that the movement: (1) missed many people who would've been interested in Y, and (2) invested its resources sub-optimally on people who seek X instead of people who seek Y.
I've experienced this on a weekly basis in EA Israel before we focused our strategy and branding on something that sounds like a member-focused approach. Even after doing that, I have dozens of stories of members being disappointed that the movement doesn't offer them concrete tools for their own social action (as much as it offers tools for choosing a cause area), or disappointed that the conferences are mostly about AI safety and biosecurity.
Even with a strong member-first approach, the movement could still invest considerable resources into organizing AI safety conferences and biosecurity conferences - which would also attract professionals from outside the movement. And the movement could still be constructed in a way that gets people from the EA movement to these other conferences and communities.
I'm a bit time limited at the moment, but would be happy to discuss this with people working on this topic. I wrote before about this mismatch as a branding problem, tried to address this through better ways to explain what EA is, and got the chance to present EA Israel's member-first approach at conferences (linked above) since CEA was interested in some different community-building results that came out of EA Israel. If you're working on this topic and think I might be helpful, feel free to get in touch!
One last thought - I think that @Will Aldred's framing in the comments is correct in describing how this approach could shape the structure of the movement. Moreover, I think this goes even beyond incentive structures - for instance, the mismatch described above between X and Y could be a good explanation for why community-building efforts lean toward "multi-session programs where people are expected to attend most sessions". This is because the current branding requires us to gradually move people from wanting X to understanding that Y is actually more important. This is kind of the opposite of product-market fit.
I'm not saying that either of the approaches is incorrect, but I think this mismatch is harmful. I hope this is resolved either way.
EdoArad @ 2023-05-30T15:40 (+8)
I'm glad that I tricked you into sharing more of your thoughts :)
I think you give good reasons for the harms of an incoherent community-building strategy.
DavidNash @ 2023-05-29T18:45 (+15)
Thanks for writing this post - I've been thinking about this framing recently, although more because I felt like I was member-first when I started community building and now I am much more cause-first when I'm thinking about how to have the most impact.
I don't agree with some of the categorisations in the table, and think there are quite a few that don't fall on the cause/member axis. For example, you could have member-first outreach that is highly deferential (GiveWell suggestions) and cause-first outreach that brings together very different people who disagree with EA.
Also, when you say the downsides of cause-first are that it led to lock-in or lack of diversification, I feel like those are more likely due to an earlier member-first focus in EA.
EdoArad @ 2023-05-30T16:02 (+2)
(I generally don't feel that happy with my proposed definitions and the categorization in the table, and I hope other people could make better distinctions and framing for thinking about EA community strategy. )
I don't quite share your intuition on the couple of examples you suggest, and I wonder whether that's because our definitions differ or because the categorization really is off/misleading/inaccurate.
For me, your first example shows that the relation to deference doesn't necessarily result from the choice of overall strategy, but I still expect it to usually be correlated (unless a strong and direct effort is made to change the focus on deference).
And for the second example, I think I view a kind of "member first" strategy as (gradually) pushing for more cause-neutrality, whereas the cause-first is okay with stopping once a person is focused on a high-impact cause.
EdoArad @ 2023-05-30T15:47 (+2)
now I am much more cause-first when I'm thinking about how to have the most impact.
Do you mean, "the most impact as a community builder"?
DavidNash @ 2023-05-30T16:02 (+2)
I guess the overlap is quite high for myself between 'impact' and 'impact as a community builder'.
EdoArad @ 2023-05-30T16:12 (+2)
Thanks, that makes sense. Can you say a bit about what has changed, and in what way you now focus more on impact?
DavidNash @ 2023-05-31T09:58 (+16)
When I started community building, I would see the 20 people who turned up most regularly, or whom I had regular conversations with, and I would focus on how I could help them improve their impact, often in relatively small ways.
Over time I realised that some of the people who were potentially having the biggest impact weren't turning up to events regularly - maybe we just had one conversation in four years, but they were able to shift into more impactful careers. Partially this was because there were many more people I had 1 chat with than people I had 5 chats with, but also because the people who are more experienced or busy with work have less time to keep turning up to EA social events, and they often already had social communities they were a part of.
It also would be surprising/suspicious if the actions that make members the happiest also happened to be the best solution for allocating talent to problems.
Chris Leong @ 2023-05-29T22:55 (+8)
I like your attempt to draw a distinction between two different ways to view community building; however, some parts of the table appear strange.
When people say that they want EA to stay weird, they mean that they want people exploring all kinds of crazy cause areas instead of just sticking to the main ones (in tension with your definition of cause-first).
Also: one of the central arguments for leaning more towards EA being small and weird is that you end up with a community more driven by principle, because a) slower growth makes it easier for new members to absorb knowledge from more experienced ones vs. from people who don't really understand the philosophy very well themselves yet, and b) lower expectations for growth make it easier to focus on people with whom the philosophy really resonates vs. marginally influencing people who aren't that keen on it.
Another point: there are two different ways to build a member-first community:
- The first is to try to build a welcoming community that best meets the needs of everyone who has an interest in the community.
- The second is to build a community that focuses on the needs of the core members and relies on them to drive impact.
These two different definitions will lead to two different types of community.
To build the first, you'd want to engage in broad outreach with diverse messaging; with the second, it would be more about finding the kinds of people who most resonate with your principles. With the first, you try to meet people where they are; with the second, you're more interested in people who will deeply adopt your principles. With the first, you want engagement with as many people as possible; with the second, you want engagement to be as deep as possible.
EdoArad @ 2023-05-30T16:11 (+2)
When people say that they want EA to stay weird, they mean that they want people exploring all kinds of crazy cause areas instead of just sticking to the main ones (in tension with your definition of cause-first).
I think this is an important point, and I may be doing a motte-and-bailey here which I don't fully understand. Under what I imagine as a "cause-first" movement strategy, you'd definitely want more people engaging in the cause-prioritization effort. However, I think I characterize it as more top-down than it needs to be.
Also: one of the central arguments for leaning more towards EA being small and weird is that you end up with a community more driven by principle, because a) slower growth makes it easier for new members to absorb knowledge from more experienced ones vs. from people who don't really understand the philosophy very well themselves yet, and b) lower expectations for growth make it easier to focus on people with whom the philosophy really resonates vs. marginally influencing people who aren't that keen on it.
This feels true to me.
Chris Leong @ 2023-05-30T16:25 (+4)
I guess a lot of the strange causes people explored weren't chosen in a top-down manner. Rather, someone just decided to start a project and seek funding for it.
This is probably changing now that Rethink is incubating new orgs and Charity Entrepreneurship is thinking further afield, but regardless, I expect most people who want EA to be weird want people doing this kind of exploration.
trevor1 @ 2023-05-29T17:00 (+7)
Really grateful for the focus on construction instead of destruction. It might not be as dramatic or exciting, but it's still kind of messed up that damaging large parts of EA counts as a costly signal of credibility, even though people other than the poster are the ones who carry the entire burden of the costs.
I think another dimension of interest for cause-first vs. member-first is how much faith you have in the people who make up the causes. If you think everyone is dropping the ball, then you focus on the cause; if you trust their expertise and skill enough to defer to them, you focus on the people.
Conor McGurk @ 2023-06-02T21:24 (+6)
Note: I'm writing this comment in my capacity as an individual, not as a representative of CEA, although I do work there. I wouldn’t be surprised if others at CEA disagree with the characterization I’m making in this comment.
I want to provide one counterexample to the conception that most of mainstream EA is leaning “cause-first” in the status quo. CEA is a large organization (by EA standards) and we definitely invest substantial resources in “member-first” style ways.[1]
To be specific, here is a sampling of major programs we run:
- Groups
- University Groups (mostly focused on the University Group Accelerator Program currently, which is a scaled program targeting a broad range of mostly non-top unis)
- City & National Groups (most of the funds go towards our top 15 city and national groups, but we also fund a long tail of other groups all around the world)
- Virtual Programs (designed to be accessible, available globally, and focused on EA fundamentals and principles, although it also covers causes)
- Events
- EAGs and EAGx's are designed to help members coordinate, and EAGx in particular is held around the world and is fairly "big tent"
- Community Health / Online
- Services offered by these programs (e.g. this forum) are basically infrastructure for community members
Some important caveats: there are other things we do; we think seriously about trying to capture the heavy tail and directing people towards specific cause areas (including encouraging groups we support to do the same); and we definitely shifted some content (like the handbook) to be more cause-area oriented. CEA is also only one piece of the ecosystem.
Overall though, I do think much of CEA's work currently represents investment that intuitively seems more "member-first" (whether or not this is the correct strategy), and we're a reasonably large part of the CB ecosystem.
- ^
Also, although I think the member/cause distinction is useful, it's sufficiently vague and "vibes-y" that many programs and organizations, like CEA, could probably be construed as focusing on either one.
EdoArad @ 2023-06-04T09:00 (+2)
Thanks for your perspective Conor! Looking into these activities in more detail, I have some notes:
- UGAP - I don't know much about this program, unfortunately. The reports I've seen seem to maybe encourage a more member-first approach, but I'm not sure. Regarding their KPIs for university groups, it seems like they used HEAs but write that they don't like that metric and want to use other ones. I'd be interested in what comes of that.
- I am also not that familiar with OpenPhil's university program, which I imagine to be mostly hands-off. My guess is that they think of community building in a more cause-oriented way, but I don't know.
- City & National Groups - I'd be interested in understanding the considerations involved in deciding which groups to fund and which activities seem most important.
- Virtual programs -
- Open programs
- The Precipice reading group (cause-first)
- Introductory EA program (follows the Handbook, which is arguably cause-first)
- In-Depth EA Program (mostly methodological, member-first)
- How to (actually) change the world (member-first, even though it's hosted by Non-Trivial which seems strongly cause-first)
- Past programs
- cause-specific (alt. proteins, animal advocacy, ML safety and AGI safety)
- career-specific: US policy (very practical, seems member-first, even though likely motivated by x-risk concerns), Law (cause-first, maybe due to good pedagogical reasons).
- Events - definitely a mix of the two. Helping members coordinate is done both for intra-cause reasons and to broadly support EAs in their EA endeavors.
- Forum - also definitely a platform for both cause-first and member-first discussions, but I think its goals are leaning more member-first.
Vasco Grilo @ 2023-06-01T19:02 (+5)
Nice post, Edo!
One seemingly important factor in deciding whether to lean cause-first or member-first is whether impact varies more across causes or across interventions within a cause. 80,000 Hours thinks the variation across causes is larger, so it leans cause-first. This recent analysis from Ben Todd suggests variations across causes are not as large as previously thought.
EdoArad @ 2023-05-29T15:54 (+4)
I'd be interested in more examples and better linking to existing written opinions on these topics. So I invite whoever is reading this to suggest some more ideas - or better, contact me to get editing permissions on the post (and co-authorship if you wish).
Vaidehi Agarwalla @ 2023-05-29T21:59 (+4)
Written quickly, prioritizing sharing information over polish. Feel free to ask clarifying qs!
Have been considering this framing for some time, and have quite a lot of thoughts. Will try to comment more soon.
Very rough thoughts are that I don't /quite/ agree with all the examples in your table, and this changes how I define the difference between the two approaches. So e.g. I don't quite think the difference you are describing is people vs. cause; it's more principles vs. cause.
Then there is a different distinction that I don't think your post really covers (or maybe it does, but not directly?), which is the difference between seeing your (a community builder's) obligation as improving the existing community vs. finding more talented / top people.
Arjun and I wrote something on this: https://forum.effectivealtruism.org/posts/PbtXD76m7axMd6QST/the-funnel-or-the-individual-two-approaches-to-understanding
Funnel model = treat people in accordance with how much they contribute (kind of cause-first)
Individual model = treat people with respect to how they are interacting with the principles and what stage they are at in their own journey (kind of member-first)
EdoArad @ 2023-05-30T15:18 (+2)
So e.g. I don't quite think the difference you are describing is people vs. cause; it's more principles vs. cause
Yea, I think I mostly agree with you. I think the main decision I had in mind is pretty much the one you describe in The funnel or the individual: Two approaches to understanding EA engagement, which makes very similar points!
Pato @ 2023-07-16T09:30 (+3)
I really liked the axis that you presented and the comparison between a version of the community that is more cause-oriented vs. member-oriented.
The only caveat I have is that I don't think we can define a neutral point between them that allows you to classify communities as one type or the other.
Luckily, I think that is unnecessary, because even though the objective of EA is to have the best impact on the world and not the greatest number of members, I think we all agree the best decision is to have a good balance between cause- and member-oriented. So the question we should ask is: should EA be MORE big tent, or weirder, or do we have a good balance right now?
And to achieve that balance, we can be more big tent in some aspects, moments, and orgs, and weirder in others.
RichardAnnilo @ 2023-06-05T09:29 (+3)
I'd love to see the results of a good experiment in the member-first approach.
I'm leaning more towards the cause-first approach, but possibly for the wrong reasons: it's easier to measure, its impact is easier to communicate and understand, the funnel feels shorter and more straightforward, and the activities and tools to achieve impact are there for me to use - I don't need to invent anything from scratch. This all might be a streetlight fallacy.
The strongest arguments for the member-first approach, for me, would be:
- After your members take a job in a high-impact position, they will continue to make decisions: decisions at their work, decisions about where to work next, etc. If they are not well equipped with the tools and knowledge to independently make good decisions that optimize for impact, their choices might be far from optimal.
- By delegating cause prioritization to a few small groups of researchers, we might succumb to echo-chamber effects and fail to identify important mistakes in our reasoning, as well as even more effective causes.
- The impact from a member-first approach might be >10-100x larger than that of a cause-first approach. It's the difference between motivating a few people to work on AI safety vs changing the societal norms themselves to be more impact-focused when doing career planning.
mhendric @ 2023-05-30T10:48 (+3)
I feel like this post introduces a helpful contrast.
I am personally partial to the member-first approach. A cause-first approach seems to place a lot of trust in the epistemics of the leaders and decision-makers who identify the correct cause. I take this to be an unhealthy strategy generally - I believe a vibrant community of smart, empirically-minded individuals can be trusted to make their own calls, and I think this may often challenge the opinion of leadership or the community at large in a healthy way. Even if many individual calls end up leading to suboptimal individual behaviour, I'd expect the epistemic benefits of a diversity of opinions and thought to outweigh this downside in the long run - even for the centrally boosted causes, which benefit from having their views challenged and questioned by people who do not share them, and from having the likelihood of groupthink significantly reduced.
On a more abstract level, I think EA is pretty unique as a community because of its open epistemics, where a variety of views can be pitched and will receive a fair hearing, often leading to positive interventions and initiatives. I worry that a cause-first approach will endanger this and turn EA into "just another" cause-specific organization, even if the selection of the cause is well-motivated at the initial point of choice.
Lauro Langosco @ 2023-05-29T21:58 (+1)
However, there are important downsides to the "cause-first" approach, such as a possible lock-in of main causes
I'm surprised by this point - surely a core element of the 'cause-first' approach is cause prioritization & cause neutrality? How would that lead to a lock-in?
Guy Raveh @ 2023-05-30T09:05 (+8)
That might be true in theory, but not in practice. People become biased towards the causes they like or understand better.
Lauro Langosco @ 2023-05-30T10:31 (+1)
Sure, but that's not a difference between the two approaches.
mhendric @ 2023-05-30T10:37 (+3)
But it'll be intensified if the community mainly consists of people who like the same causes, because the filter for membership is cause-centered rather than member-centered.
Lauro Langosco @ 2023-05-29T21:49 (+1)
Thanks for the post, it was an interesting read!
Responding to one specific point: you compare
Community members delegate to high-quality research and think less for themselves, but more people end up working on higher-impact causes
to
Community members think for themselves, which improves their ability to do more good, but they make more mistakes
I think there is actually just one correct solution here, namely thinking through everything yourself and trusting community consensus only insofar as you think it can be trusted (which is just thinking through things yourself on the meta-level).
This is the straightforwardly correct thing to do for your personal epistemics, and IMO it's also the move that maximizes overall impact. It would be kind of strange if the right move was for people to not form beliefs as best they can, or to act on other people's beliefs rather than their own?
(A sub-point here is that we haven't figured out all the right approaches yet so we need people to add to the epistemic commons.)
EdoArad @ 2023-05-30T16:26 (+2)
only insofar as you think it can be trusted
Note that if you place a high degree of trust, then the correct approach for maximizing direct impact would generally be to delegate a lot more (and, say, focus on the particularities of your specific actions). I think it makes a lot of sense to mostly trust the cause-prioritization enterprise as a whole, but maybe this comes at the expense of people doing less independent thinking - which should address your other comment.