Help us find pain points in AI safety

By Esben Kran @ 2022-04-12T18:43 (+31)

TL;DR: We want to hear your personal pain points; click here for the survey (3+ min).

Examples of pain points are: "I feel like AI safety is too pessimistic", "AI safety is not focusing nearly enough on transparency and interpretability research", or any other professional or personal experience in AI safety that you think could be better.

Additionally, we (Apart Research) hope you will book a problem interview meeting with me (U.S.) or with Jonathan (EU, jonathan@apartresearch.com). You can also write to me on Telegram or email me at esben@apartresearch.com!

đź©ą Pain points in EA & AI safety

As far as we know, there has not been any major analysis of the pain points present in EA and AI safety, and we believe such an analysis would be very valuable (see below). Our goal with this project is therefore to conduct 40+ interviews and collect 100+ survey responses about community members' pain points.

From these interviews and responses, we will maintain a list of pain points that is updated as we receive more information. Additionally, once the goal metrics have been reached, we will publish a post on the EA Forum and LessWrong summarizing these pain points and how many people experience each. Lastly, we will also summarize positive thoughts about the community, so that we can reinforce the things that already work.

In addition to summarizing the points, we will propose potential solutions and enhancements wherever we can! We have already worked on defining several AI safety-aligned for-profit ideas and are currently working on a technical AI safety ideas platform that will be published soon.

🤔 Theory of impact

By compiling this list, we can identify points of impact in the community that enable us to do mesa-cause-prioritization, i.e. figure out which projects might provide the largest beneficial impact to AI safety work for the organizations working on these issues. This becomes especially important given urgent timelines.

Examples of similar work include the big list of cause candidates, the EA Tech Initiatives list (outdated), the list of possible EA meta-charities and projects, the Nuclear risk research ideas, the EA Work Club, Impact Colabs, EA meta charities interviews, project ideas, more project ideas, even more EA org. project ideas, EA student project ideas, and many more.

The unique value of our project is that we target the EA / AI safety community and that we attempt to identify the community's pain points before proposing solutions.

🦜 What will the interview look like?

By default, the interview will be informal, open, and focused on getting to the bottom of what your pain points look like. A ~30 minute call will roughly follow this plan:

  1. Introductions (3-5 minutes)
  2. Demographics (1 minute)
    • Age, occupation, etc.
    • EA and AI safety experience
    • Where are you coming from? (country, viewpoint, earlier career)
  3. Identifying pain points (5-10 minutes)
    • You describe the pain point you’re experiencing
    • Together, we consider whether you might have additional pain points
  4. Diving deeper (10-15 minutes)
    • Debugging the actual pain point, i.e. “Why, why, why”
    • Ranking these pain points
    • Thinking about solutions
  5. Identifying positive points (3-5 minutes)
    • We talk about positive points in EA and AI safety communities
    • We discuss how these positive points can be enhanced
  6. Wrapping up (3-5 minutes)
    • Exchanging contact details so we can keep a communication channel open
    • Asking you if we can contact you again in relation to this project
    • Asking for referrals to people you think might have pain points they would like to share; this might even include people outside the community (e.g. pain points related to their non-participation in EA)

The list (stays updated)

The list is summarized in spreadsheet format here, and you're welcome to directly add any pain or positive points that are not already included. As mentioned above, we will also publish these results in a more comprehensive fashion in another post.

Pain points

Positive points

💪🏽 You reached the end!

Awesome! If you want to help study the resulting data, come up with solutions, or conduct interviews, join us in our Discord. And again:

Share your pain points in EA and AI safety with us through this survey (Google Forms) or book a Calendly meeting with me (Google Meet). You can also write to me on Telegram, email me (esben@apartresearch.com), or connect with me at EAG London and San Francisco through Swapcard.

What will Apart Research work on?

As mentioned, we will share this list together with an analysis and possible solution proposals for each pain point, with references to other posts where possible. These pain points will also inform our general work in AI safety and, hopefully, others' work as well.

Disclaimer: This was made with speed prioritized over polish, so we are very open to feedback in the comments or via the survey.


Jonathan Rystrom @ 2022-04-19T08:05 (+5)

Super interesting stuff so far! Quite a few of the worries (particularly in "Unclear definitions and constrained research thinking" and "clarity") seem to stem from AI safety currently being a pre-paradigmatic field. This might suggest that it would be particularly impactful to explore more than exploit (though this depends on just how aggressive one's timelines are). It might also suggest that having a more positive "let's try out this funky idea and see where it leads" culture could be worth pursuing (to a higher degree than is currently the case). All in all, very nice to see pain points fleshed out in this way!

(Disclaimer: I do work for Apart Research with Esben, so please adjust for that excitement in your own assessment :))

Ben_West @ 2022-04-13T08:26 (+4)

Thanks for doing this! I'm excited to see the results.

Jay Bailey @ 2022-04-16T02:03 (+3)

What level of involvement in AI Safety are you looking for as a minimum for people to:

A) Fill out the survey?
B) Sign up for an interview?

Personally, I'm trying to upskill into entering AI safety but am not yet particularly involved in the community. It would be good to know whether I or people like me are part of the intended audience for this research, or if it is focused on people with a more significant investment in, or ties to, the cause already.

Esben Kran @ 2022-04-19T03:51 (+1)

We welcome anyone to answer the survey, as well as anyone who would describe themselves as "associated with AI safety research" in any capacity.