Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023)

By michel, MaxDalton, Sophie Thomson @ 2023-11-03T22:29 (+60)

This post summarizes the results of the Meta Coordination Forum (MCF) 2023 pre-event survey. MCF was a gathering of key people working in meta-EA, intended to help them make better plans over the next two years and set EA and related communities on a better trajectory. (More information here.)

About the survey  

Ahead of the event, we sent invitees a survey to understand the distribution of views on topics relevant to invitees’ plans and the future of EA. These topics included resource allocation, behavioral changes post-FTX, the future of EA and related communities, the relationship between EA and AI safety, and deference to Open Phil vs. CEA.

The results from this pre-event survey are summarized below and in this post on field-building projects. We received 41 responses (n = 41) from the 45 people invited to take the survey. Most of the people invited to MCF in time to answer this survey are listed below:

Alexander Berger

Amy Labenz

Anne Schulze

Arden Koehler

Bastian Stern

Ben West

Buddy Shah

Caleb Parikh 

Chana Messinger

Claire Zabel

Dewi Erwan

James Snowden

Jan Kulveit

Jonas Vollmer

Julia Wise

Kuhan Jeyapragasan

Lewis Bollard

Lincoln Quirk

Max Dalton

Max Daniel

Michelle Hutchinson

Nick Beckstead

Nicole Ross

Niel Bowerman

Oliver Habryka

Peter McIntyre

Rob Gledhill

Sim Dhaliwal

Sjir Hoeijmakers

William MacAskill

Zach Robinson 

 

We’re sharing the results from this survey (and others) in the hope that being transparent about these respondents' views can improve the plans of others in the community. But there are important caveats, and we think it could be easy to misinterpret what this survey is and isn’t.

Caveats

Executive Summary

This section summarizes multiple-choice questions only, not the open-text responses.

Views on Resource Allocation (see more)

Direct vs. Meta work: 

What goal should the EA community’s work be in service of? 


Behavioral Changes Post-FTX (see more)


Future of EA and Related Communities (see more)

Rowing vs. Steering 

Cause X

EA Trajectory Changes

All questions in this section were asked as agree/disagree statements, where 1 = Strongly disagree; 7 = Strongly agree.


Relationship Between EA & AI Safety (see more)

All questions in this section were asked as agree/disagree statements, where 1 = Strongly disagree; 7 = Strongly agree. We compare MCF invitees’ responses to AI safety (AIS) experts’ responses here.

Deference to Open Phil (vs. CEA) (see more)

Results from Pre-Event Survey 

Resource Allocation

Caveat for this whole section: The questions in this section were asked to get a quantitative overview of people's views coming into this event. You probably shouldn’t put much weight on the answers, since we don't think this group has particular expertise in resource allocation or cause prioritization research, and there was understandable confusion about which work fit into which categories.[2]

Meta vs Direct Work: What rough percentage of resources should the EA community (broadly construed) devote to the following areas over the next five years?

Group Summary Stats 


Summarized commentary on Meta vs Direct Work 

What/who should the EA community’s work be in service of?

What rough percentage of resources should the EA community (broadly construed) devote to the following areas over the next five years? (n = 38) 

Mean (after normalizing) for % of work that should be in service of…

The three different routes focused on reducing existential risk sum to ~65%.


Summarized commentary on What/Who the EA Community’s Resources Should Be in Service Of 

Reflections on Past Year

Changes in work-related actions post-FTX

EA and related communities are likely to face future “crunch times” and “crises”. In order to be prepared for these, what lessons do we need to learn from the last year?

The Future of EA

Summary of visions for EA

Rowing vs steering EA

To what degree do you think we should be trying to accelerate the EA community and brand’s current trajectory (i.e., “rowing”) versus trying to course-correct the current trajectory (i.e., “steering”)? (n = 39; Mean = 4.51; SD = 1.63)

Commentary on Rowing vs Steering 

Cause X

What do you estimate is the probability (in %) that there exists a cause which ought to receive over 20% of the EA community's resources but currently receives little attention? (n = 31; Mean = 26%; Median = 10%; SD = 29%)


Summary of commentary on Cause X

Problems with the question

Nuance around Cause X

Cause X candidates proposed include:

Agreement voting for EA trajectory changes

EA orgs and individuals should stop using/emphasizing the term “community.”  (n = 41, Mean = 3.98, SD = 1.49)
 

Assuming there’ll continue to be three EAG-like conferences each year, these should all be replaced by conferences framed around specific cause areas/subtopics rather than about EA in general (e.g., by having two conferences on x-risk or AI risk and a third one on GHW/FAW). (n = 40, Mean = 3.13, SD = 1.62)



Effective giving ideas and norms should be promoted independently of effective altruism. (n = 40, Mean = 4.65, SD = 1.46)
 


 

People in EA should be more focused on engaging with established institutions and networks instead of engaging with others in EA. (n = 40, Mean = 4.93, SD = 1.25)
 

There is a leadership vacuum in the EA community that someone needs to fill. (n = 39, Mean = 4.85, SD = 1.63)
 

Summary of comments about leadership vacuum question: 


We should focus more on building particular fields (AI safety, effective global health, etc.) than building EA. (n = 39, Mean = 4.72, SD = 1.38)

 


Summary of comments about agreement voting questions 

What are some mistakes you're worried the EA community might be making?

Note that respondents often disagreed with each other on this question: some thought we should do more X, and others thought we should do less X. 

Crisis response:

Leadership:

Culture:

Resource allocation:

AI:

Other:

Relationship Between EA & AI Safety 

Rating agreement to statements on the relationship between EA & AIS

Below we compare MCF invitees' responses with those of AI safety experts who were asked the same questions in the AI safety field-building survey. 

 

Summary of Commentary on EA's Relationship with AI Safety 

Relationship between meta work and AI timelines

How much does the case for doing EA community-building hinge on having significant probability on “long” (>2040) timelines? (n = 29, Mean = 3.24, SD = 1.33)
 

Commentary on relationship between meta work and AI timelines

There has been a significant increase in the number of people interested in AI safety (AIS) over the past year. What projects and updates to existing field-building programs do you think are most needed in light of that?

  1. Programs to channel interest into highly impactful roles, with a focus on mid-career community-building projects.
  2. More work on unconventional or 'weirder' elements in AI governance and safety, such as s-risk and the long reflection.
  3. Reevaluation of how much the focus on AI depends on it being a neglected field.
  4. More educational programs and introductory materials to help people get up to speed on AI safety.
  5. Enhancements to the AI Alignment infrastructure, including better peer-review systems, prizes, and feedback mechanisms.
  6. Regular updates about AIS developments and opportunities.
  7. More AI-safety-specific events.

Prioritization Deference to OP vs CEA

Note that this question was asked in an anonymous survey that accompanied the primary pre-event survey summarized above. We received 23 responses (n = 23) from the 45 people invited to take the anonymous survey.


We hope you found this post helpful! If you'd like to give us anonymous feedback, you can do so with Amy (who runs the CEA Events team) here.

  1. ^

     We said this survey would take about 30 minutes to complete, but in hindsight, we think that was a significant underestimate. Some respondents reported feeling rushed because of our incorrect time estimate.

  2. ^

    This caveat was included for event attendees when we originally shared the survey report.


michel @ 2023-11-03T23:30 (+2)

A quick note to say that I’m taking some time off after publishing these posts. I’ll aim to reply to any comments from 13 Nov.