Four Goals for EA Community Building, After Running out of Obvious Cause Areas

By Jian Xin Lim 🔸 @ 2025-10-10T17:42 (+9)

Written in the spirit of Draft Amnesty, spurred by various people posting similar things recently. I've sat on this draft for too long, but don't have time to polish it. It flips between how these principles can be applied for EA Cambridge students (where I work) vs more abstract strategy.

The format is:
1. A goal, followed by related notes and links

My general thinking is that EA-principles-first community building[1] can be boring to people, compared e.g. to AI safety field building. That doesn't mean it's not important. There are tradeoffs in making EA Cambridge principles-first (intro fellowship readings focused on the principles) vs cause-first (a research programme with tracks for specific causes, e.g. Impact Research Groups). This feels like an increasingly relevant tradeoff as the cause areas spin out of EA and become in-vogue/mainstream. For instance, a lot of “EA-y” GHD work happens outside of the EA space. I think AI safety has somewhat spun out of EA (more empirical research into whether the claims in this post are true would be great).

Principles-led community building's functions are fourfold:

  1. Guide people towards the cause areas, through a combination of career planning/networking and discussion of moral axioms (which have now spun off to a decent extent)
    1. Smart but less philosophically minded people, or those who find EA principles obvious, might prefer to jump straight to the cause areas, which is what IRG does
    2. A guided cause prioritisation flowchart
    3. The case of the missing cause prioritisation research
    4. Servicing 1:
      1. Focus on being a funnel for specific causes
      2. Focus on traditional career planning + advertise the intro fellowship as using stuff like this flowchart

2. Be a hub for cross-pollination between cause areas


3. Be a set of principles you can apply within your cause area (e.g. prioritise X-risks over AI copyright). This still assumes people have succeeded at 1.

3.5. Improve the effectiveness of mainstream issues, like how an effectiveness mindset (GiveWell's) has been good for global health and animal rights.

4. Be a way to spin up new, weird cause areas. Similar to 2.

  1. This probably appeals to nerdy, philosophical people who are willing to entertain weird ideas?
  2. I can also imagine EA groups leaning into Rationalism, and focusing on bringing excellent epistemics into the world.
  3. Third Wave Effective Altruism
  4. EA as Field Incubation
  1. ^

     I find it amusing how little EA community builders seem to be able to list off the EA principles: vaguely gesturing at scope-sensitive, radically impartial, sensitive to tradeoffs, scout-mindsetty, and altruistic.