EA movement course corrections and where you might disagree

By michel @ 2022-10-29T03:32 (+79)

This is the final post in a two-post series on EA movement strategy. The first post categorized ways in which EA could fail.

Summary

Preface

I was going to write another critique of EA. How original. I was going to write about how there’s an increasingly visible EA “archetype” (rationalist, longtermist, interested in AI, etc.) that embodies an aesthetic few people feel warmly towards on first impression, and that this leads some newcomers who I think would be a great fit for EA to bounce off the community.

But as I outlined my critique, I had a scary realization: If EA adopted my critique, I’m not confident the community would be more impactful. Maybe, to counter my proposed critique, AI alignment is just the problem of our century and we need to orient ourselves toward that unwelcome reality. Seems plausible. Or maybe EA is rife with echo chambers, EA exceptionalism[1], and an implicit bias to see ourselves as the protagonist of a story others are blind to. Also seems plausible. 

And then I thought about other EA strategy takes. Doesn’t a proposal like “make EA enormous” also rest on lots of often-implicit assumptions? Assumptions like how well current EA infrastructure and coordination systems can adapt to a large influx of people, the extent to which “Effective Altruism” as a brand can scale relative to more cause-area-specific brands, and the plausible costs of diluting EA’s uniquely truth-seeking norms. I’m not saying we shouldn’t make EA enormous; I’m saying it seems hard to know whether to make EA enormous[2] – or, for that matter, to have any strong strategy opinion.

Nevertheless, I’m glad people are thinking about course corrections to the EA movement trajectory. Why? Because I doubt the existing “business as usual” trajectory is the optimal trajectory.[3]

I’m not saying that any particular person or institution should be steering the EA movement; there are strong reasons why centralization could be very counterproductive. But if we agree that “business as usual” isn’t best, I think we should be aiming to have more productive, action-guiding conversations about EA movement strategy to get us on the right track. And that involves concretely stating which aspects of the community you think should change and why you think they should change. Hopefully, the frameworks outlined here help these discussions.

Domains in which EA could course-correct  

The best possible EA trajectory is the one that helps the most sentient beings live lives free of suffering and full of flourishing, over the long term.[4] Or something like that. Reasonable people disagree on most parts of that sentence, like how much to morally care about consequences vs. rules, how much to morally weigh different types of sentience, how much to morally weigh suffering vs. flourishing, and how much to discount the value of future lives. But I’ll use this optimum as a first approximation for the ideal movement and leave solving ethics for another write-up.

So, is EA currently on this best possible trajectory? As argued above, I doubt it. I think a fair bit of how EA has grown and identified itself is pretty unplanned (i.e., it evolved from social dynamics and people kind of just doing things that way, rather than from deliberate, debated decisions), and I don’t expect this to land the EA movement on the ideal trajectory.[5]

If we grant that EA is likely not on the ideal trajectory, in what ways could we nudge it toward that trajectory? That’s the purpose of this section. Below I identify different “domains” in which EA could course-correct.

Prefacing the domains in which EA could course-correct:

For each domain below, I give my quick take on where EA is at in that domain and one or more example updates.

Domain: Extent to which the EA community splits along philosophical or cause-area-specific brands[6]

My quick take: I think “EA” is still the overwhelming brand (e.g., EA Global, EA Forum), but there are some nested groups that form their own brands, including:

- Longtermist or explicitly x-risk circles (e.g., Forethought; Global Challenges Project)
- AI safety circles (e.g., Lightcone)
- Biosecurity circles (e.g., Boston)
- Rationality circles
- Global health and development circles (e.g., GiveWell)
- Academic philosophy circles (e.g., GPI)
- Suffering-focused ethics circles
- Animal welfare community (e.g., Animal Advocacy)

These groups vary in the degree to which they associate with EA and how much they coordinate with one another.

Example update: People who get into EA don’t really stay in “EA.” Rather, they enter a smaller sub-pocket that has its own professional network, conferences, and vibe, while still acknowledging that they are inspired by EA.

Domain: Extent to which “EA” is a whole social and intellectual identity[7]

My quick take: Many engaged people in EA, but certainly not all, identify strongly as ‘effective altruists’ or with effective altruism. I think excited newcomers especially identify as EAs, and this maybe gets weaker over time (or for older community members it’s more like, duh). Also, I think community members in politics or in established professional networks adjacent to EA sometimes avoid the EA label.

Example update: EA becomes less of something you identify as, either because it becomes more of a traditional intellectual belief (e.g., human rights) or because it is replaced by more specific identification, say with a cause-area community.

Domain: Extent to which EA outreach focuses on promoting EA principles vs. cause areas vs. philosophies vs. specific ideas[8]

My quick take: EA outreach has traditionally presented the “effectiveness mindset” in the context of charity and global health and development, and then gradually introduced x-risk and longtermism. But I think there’s been a recent uptick in groups that just draw attention to important cause areas, like AI safety groups and animal advocacy groups. I think the jury is still out on the relative impact of these groups.

Example updates: (1) Greater emphasis on cause-specific groups / “on-ramps” like Harvard-MIT X-risk that effectively promote EA work but don’t filter for being convinced by typical EA drowning-child-type arguments.[9] (2) EA comms tries to spread basic ideas of scope sensitivity and impartial altruism far and wide.

Domain: Extent to which EA mixes social and professional communities

My quick take: Quite a bit, at least in EA hubs. From Julia Wise, quoting HR staff at an EA org: “Boy, is it complicated and strange.”

Example update: EA becomes a more professional space and there are fewer EA-branded parties or hangout spaces.

Domain: Demanded hardcoreness

My quick take: EA does seem quite totalizing, but there’s sizable variance here depending on what spaces you occupy. Some smart people are REALLY into it, and it seems easy to feel off if you’re not one of those people.

Example update: The EA community becomes more welcoming to people who know they could be maximizing altruistic expected value harder but choose not to because they have other preferences.

Domain: Where EA does outreach

My quick take: A lot of outreach at universities, skewed toward top universities. Also professional groups, local and national groups, standalone online courses, and more recently mass media publicity, especially around What We Owe the Future.

Example updates: (1) EA starts doing relatively less outreach to university groups and focuses more on existing professional communities. (2) EA interfaces more with non-Western countries.

Domain: Growth rate

My quick take: Maybe around 20-30%, but low confidence here.

Example update: EA slows its growth rate to maintain a high degree of coordination and build infrastructure that can handle a later influx of people.

Domain: Amount of interaction with outside professional networks

My quick take: Increasingly more interaction with initiatives like the Future Fund’s AI Worldview Prize and the MIT Media Lab Future Fellowship, but other parts of EA direct work still seem to exclusively hire and collaborate with other EAs. Low confidence here, and likely high variance across causes.

Example updates: (1) EAs interested in biosecurity do more to make inroads with existing biosecurity researchers and policymakers as well as for-profit biotech orgs (e.g., organizing conferences). (2) EAs in AI safety try harder to understand the broader AI community and make well-reasoned, considerate appeals for alignment.

Domain: Centralization of EA funding and strategy decisions

My quick take: Funding is pretty centralized, with a majority of funding controlled by a handful of funders. But no one really ‘owns’ EA movement strategy, and there’s a strong norm of not just following a few leaders. Probably some leader coordination events.

Example update: Funding becomes more decentralized, perhaps becoming more specific to different cause areas.

Domain: Diversity of worldviews and backgrounds

My quick take: EA is predominantly endorsed by people who are white, male, upper-middle class, highly analytical, and from a Western background, but of course not exclusively. Philosophical views are predominantly consequentialist.

Example update: CEA actively tries to do more outreach to low- and middle-income countries.

Domain: Amount of association with other communities

My quick take: EA seems most heavily intertwined with the rationalist community, to a degree that some people who come from a non-rationalist background find at least mildly off-putting. Other adjacent communities include progress studies, global health and development networks, longevity (?), transhumanism (?), and parts of Silicon Valley.

Example update: EA splits into different communities, each of which associates with other EA-adjacent communities.


I’m worried that some of those domains are too abstract. Social movements are complicated, and something like “extent to which people identify with the EA brand vs. more philosophical or cause-specific brands” is a pretty fuzzy concept.

So here’s a map of some more concrete things that could lead to, or follow from, a trajectory change:

When you make an EA strategy proposal or critique, please tell the reader which domains you think need changing (including domains I missed here or that you would refactor), and consider pointing to concrete things in reality.

Where one might disagree about ideal course corrections

EA movement strategy is a complicated business. Put differently, there are lots of reasons people could disagree about the ideal trajectory change, and a lot of room for debatable assumptions to creep in.

The purpose of this section is to identify some of the reasons different people disagree – or could disagree – about the ideal course corrections. I doubt this is an exhaustive list, but I hope it illustrates the types of questions I think strategists should be asking themselves.

For each key consideration below, I give an example take and, all else equal, how I’d expect that take to influence your opinion.

Key consideration: The type of people EA needs to attract

Example takes:
(1) If you think the type of people who can make the biggest difference on the most pressing problems are technical and conceptual wizards, I expect you’d be more excited about outreach to top universities and talent clusters (e.g., math olympiads).
(2) If you think EA needs to meaningfully influence global politics, I expect you’d be cautious of anything that could sour EA’s public reputation, or more excited about creating spin-off brands that don’t associate heavily with EA.
(3) If you think it’s really hard to know what type of people EA needs to attract, I expect you think traditional EA intros like drowning child + PlayPumps aren’t a bad place to start.

Key consideration: Confidence-adjusted impact estimates for different cause areas[10]

Example take: If you’re confident that AI alignment is the most important cause under your ethical worldview, even after having engaged with the best counter-arguments, I expect you’d be more excited about AI-alignment-specific outreach that isn’t mediated by the EA brand.

Key consideration: Worldview diversification

Example take: If you think worldview diversification is super important, I expect you’d be more excited about EA resources not just pooling behind one best-guess cause area.

Key consideration: Importance of mass cultural change for your theory of change

Example take: If you think EA is going to have its biggest impact (e.g., prevent existential catastrophe, end global poverty) by invoking mass cultural change, I expect you’d be more excited about keeping EA’s growth rate ambitious and avoiding a super hardcore/totalizing impression.

Key consideration: Costs vs. benefits of insularity

Example take: If you think maintaining some degree of insularity yields big returns on trust and coordination, I expect you’re more cautious of fast growth for the EA movement.

Key consideration: Length of transformative AI timelines (and difficulty of alignment)

Example take: If you think transformative AI timelines are short, I expect you’d be more excited about effectively trading reputation for impact by doing things like talent search and associating with weird vibes.

Key consideration: Importance of diversity of opinion and backgrounds[11]

Example take: If you think that different opinions and worldviews will improve EA’s ethics and effectiveness, I expect you’d be more excited about outreach in different parts of the world and making spaces more welcoming to non-prototypical EAs.

Key consideration: Importance of collaboration with non-EA institutions

Example take: If influencing institutions like the US government, the UN, or the EU is vital for your theory of how EA has the greatest impact, I expect you’d be more excited about interacting more with outside professional networks and less excited about associating with ostensibly weird communities.

Key consideration: Cost incurred if parts of the community become disenchanted

Example take: If you think that disenchanting EAs who are primarily invested in global health and well-being by orienting more toward a longtermist worldview is not that costly, I expect you’d be more inclined to associate EA with a specifically longtermist cause prioritization.[12]

Key consideration: Relative costs vs. benefits of cause-area or philosophical silos

Example takes:
(1) If you think the benefits of grouping more along cause-prioritization lines, like increased coordination, targeted outreach, and tailored branding, outweigh costs like the possibility of becoming too locked into existing prioritization schemes, I expect you’d be more excited about splitting EA into different professional networks.
(2) If you think longtermism is plausibly true and that the idea benefits from being associated with a community that donates prolifically to global health and animal welfare, I expect you’d be more cautious of making longtermism its own community.

Key consideration: Where you think the EA community’s comparative advantage lies

Example take: If you think that much of EA’s value comes from being a Schelling point, for example, I expect you’re more excited about wide-scale outreach that makes EA approachable (i.e., not too demanding).

Key consideration: Likelihood the EA movement (or something like it) could recover from collapse

Example take: If you think that the EA movement or something like it is likely to recover or rebuild in the long term after a reputational failure (and you lean longtermist), I expect you could be more inclined to gamble EA’s reputation to prevent existential catastrophe in the near term.

Key consideration: The value of hardcoreness (vs. casual EAs)[13]

Example take: If you think that one EA who is sincerely maximizing does more good than 20+ casual EAs, I expect you’d be more inclined to keep outreach more targeted.

Key consideration: Value of transparency

Example take: If you think transparency is really important, then I expect you think an EA movement that appears open to all cause areas while many decision-makers are reasonably confident that AI is the most important cause looks shady.[14]

Key consideration: What types of communities keep up good epistemic hygiene

Example take: If you think that even communities that grow large quickly can keep up epistemic norms like truth-seeking, wise deference, and constructive criticism, then I expect you think that a higher growth rate isn’t problematic.

Key consideration: How costly it is to be associated with weird ideas[15]

Example take: If you think that EA incurs a severe cost by being associated with paperclip-maximizer AI-risk arguments and acausal trade, I expect you’re more inclined to separate things out from the EA brand and warier of associations with some adjacent communities.

Key consideration: Degree to which an approachable “big tent” movement trades off with core EA principles[16]

Example take: If you think that there is little cost to lower-fidelity outreach that encourages activism and donations even if they aren’t maximally effective, then I expect you’re more inclined to try to increase EA’s growth rate and mass appeal.

Key consideration: Need for diverse talent

Example take: If you think EA really needs diverse talent (e.g., operations people, political people, entrepreneurial people), or you’re unsure what type of talent EA needs, I expect you think EA should try to be less homogenous and tailor outreach to different communities.

Key consideration: Relative costs vs. benefits of placing greater emphasis on not-explicitly-EA brands

Example take: If you think that the costs of emphasizing EA-adjacent brands (e.g., x-risk-specific brands, GWWC, animal advocacy), like cause-area silos and less focus on EA principles, outweigh benefits like more tailored outreach, I expect you’re more inclined to try to keep one core EA movement that identifies itself with core EA principles.

Key consideration: Relative costs vs. benefits of having EA become an identity[17]

Example take: If you think that the dangers associated with EA becoming an identity (e.g., making it difficult to disentangle yourself later and adding “group belief baggage”) outweigh benefits like possible increases in inspiration, then I expect you’re more wary of people identifying as EAs and sometimes seamlessly mixing EA events and social events.

Key consideration: Relative costs vs. benefits of EA having overlapping social and professional spheres

Example take: If you think that the benefits of EAs mixing social and professional life, like increased motivation and casual networking, outweigh costs like the heightened cost of rejection, I expect you’re fine with EA staying not just a professional network.

Key consideration: Value of a good reputation

Example takes:
(1) If you think how EA is perceived is crucial to its future trajectory, then I expect you’re more inclined to make sure EA has lots of good front-facing comms and diligently avoids projects with large downside risks.
(2) If you think, for example, that technical alignment is the biggest problem and you don’t need a great brand to solve it, I expect you’re more inclined to do potentially costly things like talent-search outreach.

Key consideration: Possibility of making a community that’s about a “question”

Example take: If you think that it’s feasible to create a movement around a question like “How can we do the most good?” without becoming inseparably associated with your best-guess answers, I expect you’re more excited about maintaining a strong EA umbrella and doing outreach with the EA brand.

Key consideration: Incompatibility of different cause-area vibes

Example take: If you think that the space-colonization aesthetic is just very difficult to reconcile in the same movement as a Life You Can Save-style aesthetic, I expect you’re more inclined for EA outreach and/or professional networks to branch out and not try too hard to be under the same umbrella.

Meta considerations 

Alongside the above object-level considerations, I think other, more meta-level considerations should influence how we weigh different trajectory updates:

Next steps

Closing remarks

I write this post – and I expect others write their good-faith EA critiques – out of a genuine love for the principles of effective altruism. These ideas are so special, and so worth protecting. Yes, we’re likely messing up important things. Yes, vibes are weird sometimes. But we’re trying something big here. We’re trying to help and safeguard sentient life in a way no community has before – of course, we’ll make mistakes. Regardless of how we disagree with one another, let's acknowledge our shared goal to do. good. better. Now go fiercely debate ideas in the name of this goal :)

Acknowledgments

Thank you to Ben Hayum, Nina Stevens, Fin Moorehouse, Lara Thurnherr, Rob Bensinger, Eli Rose, Oscar Howie, and others for comments or discussions that helped me think about EA movement failures, course corrections, and key considerations. To the hopefully limited extent that I express views in this post, they’re my own. 

  1. ^

    EA exceptionalism: that oft-seen sentiment that “EA can do it better”; that we have nothing to learn from the outside world. I think I first heard this from Eli Rose. 

  2. ^

    Note that I chose “make EA enormous” rather randomly as a recent strategy proposal. I’m using it to illustrate a larger pattern I see in strategy proposals.

  3. ^

    Reasonable people might disagree here. I discuss this more under “meta considerations” in section III. 

  4. ^

     Note what I’m saying the best possible EA trajectory is not: It’s not the one that makes EAs feel most validated; it’s not the one that upsets the fewest people; it’s not necessarily the one that feels inviting to you or me. 

  5. ^

     And even if all the decisions were deliberate, what are the odds that decision-makers over the EA movement’s lifetime have made all the correct decisions? But people could still disagree with me on how likely random social pressures and local decisions – within guardrails placed by EA decision-makers – are to lead to the best possible trajectory. 

  6. ^

     See EA Culture and Causes: Less is More for example arguments for EA splitting at some level. 

  7. ^
  8. ^

     In this list of domains, I make the distinction between different EA brands in outreach and a more overarching community split. I do this because you could imagine an outreach approach that has many different “on-ramps” that effectively funnel into the same community or, vice versa, a small set of “on-ramps” that then branch off into many different communities.

  9. ^

    Other possible “on-ramps” include: 80,000 Hours promotion, Giving What We Can chapters, AI safety groups, Biosecurity groups, animal welfare groups, One For the World chapters. 

  10. ^

    See Benjamin Todd’s comment:

    If we think that some causes have ~100x the impact of others, there seem like big costs to not making that very obvious (to instead focus on how you can do more good within your existing cause).


    I’m particularly bullish on any impact estimate being confidence-adjusted because of reasoning similar to Why we can’t take expected value estimates literally (even when they’re unbiased).

  11. ^

     For people who don’t place a lot of weight on the diversity of opinions, beware of the broccoli effect! This might manifest as something like: 
    “But I don’t want EA to become less purely consequentialist, because if I find out that non-consequentialist lines of reasoning that I haven’t really looked into make sense I’ll end up being less consequentialist, and I don’t want to be less consequentialist.”

  12. ^

    Note that I don’t want to sharpen a somewhat false dichotomy between “longtermist work” and “neartermist work.” See Will MacAskill’s Twitter thread.

  13. ^

    Bad Omens in Current Community Building makes a good point that the value of hardcoreness likely varies across cause areas:

     I think the model of prioritizing HEAs does broadly make sense for something like AI safety: one person actually working on AI safety is worth more than a hundred ML researchers who think AI safety sounds pretty important but not important enough to merit a career change. But elsewhere it’s less clear. Is one EA in government policy worth more than a hundred civil servants who, though not card-carrying EAs, have seriously considered the ideas and are in touch with engaged EAs who can call them up if need be? What about great managers and entrepreneurs?

  14. ^

     Unclear to me if this is the case so don’t want to project that it is. 

  15. ^
  16. ^

    See the debate between Thomas Kwa and Luke Freeman on this.
    Note that this likely varies a great deal by which specific ideas are spread. Some ideas, like considering the moral value of our grandkids’ grandkids and scope sensitivity, seem far less controversial than others, like AI alignment arguments designed for mass appeal. 

  17. ^
  18. ^

    Switching costs are a particularly important consideration if you think the “business as usual” trajectory isn’t that bad. 

  19. ^

     Yup, this is me shamelessly hyping up my successor exec team at the University of Wisconsin–Madison. s/o Max, Eeshaan, Declan, Cian, and Meera :) 


Charlie_Guthmann @ 2022-10-29T18:30 (+4)

Good followup. Like the blunt but good-faith vibes. Establishing shared vocabulary is good. 

I also think EA is a fantastic, inspiring project but not on the optimal trajectory (which is a very high bar). Course correction makes sense as a response to this. Another option is to start a competitor movement. In either case you would want to think through all of the different domains that you listed and where their optimum lies. Below are some pros/cons, though they're far from exhaustive.

Pros: 

Cons:

Oscar Delaney @ 2022-10-30T01:51 (+3)

Thanks for this. It is interesting to me how many of the key considerations mention 'outreach' (12/24 by my count). I suppose it makes sense that choosing how, and how much, to grow is one of the foremost strategy decisions. It also shows how hard making these decisions could be, given all the different considerations to weigh up. The issue of who should do this steering and strategising does seem tricky. I share your concern about getting CEA to take on a more authoritative role, and am generally pretty happy with the somewhat anarchic norms (anyone can post more or less anything on the forum and have a chance of influencing much of the community). But then, it is just harder to make and action important trajectory-change decisions without more structured decision-making.

michel @ 2022-10-30T18:09 (+2)

Good point. It’s worth noting that ‘outreach’ is often mentioned in the examples, not in the key consideration itself. I think the key considerations whose examples mention outreach often influence more than outreach. For example, “Relative costs vs. benefits of placing greater emphasis on not-explicitly-EA brands” mentions outreach, but I think this is closely connected to how professional networks identify themselves and how events are branded.

I have a background in university community building, so I wouldn’t be surprised if that biased me to often make the examples about outreach.

Patrick Gruban @ 2022-10-29T10:03 (+1)

Thank you for this post, I find it very helpful for clarifying my thoughts when working on community building strategy.