AI safety starter pack

By mariushobbhahn @ 2022-03-28T16:05 (+126)

There are a ton of good resources for AI safety out there. However, conversations with people trying to get into the field revealed that these materials are often unknown. Therefore, I want to provide a very basic overview of how to start. My claim is that a person who invests ~2-3 hours a week to study the recommended material will have a good understanding of the AI safety landscape within one year (if you are already familiar with ML, you can get there in ~3 months, e.g. by doing the AGI safety fundamentals fellowship). This post is primarily designed for people who want to get into technical AI safety. While it might still be helpful for AI governance, it is not specifically geared towards it.

I want to thank Jaime Sevilla, Alex Lawsen, Richard Ngo, JJ Hepburn, Tom Lieberum, Remmelt Ellen and Rohin Shah for their feedback. 

I intend to update this post regularly. If you have any feedback, please reach out.

Motivation & mental bottlenecks

When I look back on my own (still very young) journey in AI safety, the biggest bottlenecks were not a lack of resources or funding. My biggest bottlenecks were my own psychological barriers. For example, I had vague thoughts like “AI safety is so important, I’m not up for the task”, “If I dive into AI safety, I’m doing it properly--fully focused with a clear mind” (which meant I always delayed it) or “AI safety is so abstract, I don’t know where to start”. To be clear, I never explicitly and rationally held these beliefs. They were mostly subconscious but nevertheless prevented me from doing more earlier.

I think there are a few simple insights that make joining the field much easier.

  1. AI safety is still in its infancy: If you believe that AI safety might be one of the biggest problems facing humanity, or even the biggest, then the “correct” number of people working on it is at least 1000x larger than it currently is. The position for which you are a good fit might not even exist yet, but it probably will in the future, possibly created by you.
  2. AI safety is multi-disciplinary: AI safety doesn’t just require technical people. It needs social scientists, it needs community builders to grow the field, it needs people in governance, it needs people thinking about the bigger picture, and many others. You don’t need to be good at math or coding to contribute meaningfully.
  3. You can contribute relatively quickly: The field is young and there is a ton of low-hanging fruit waiting to be picked. There are organizations like the AI Safety Camp that facilitate such projects, but you can usually find small online projects throughout the year.

Lastly, there are some simple tips that make the start of your journey easier and more fun:

  1. Find others & get help: Working with others is more fun, you learn faster, and it creates positive accountability. I can generally recommend it, and the AI safety community is usually very friendly and helpful as long as you are well-intentioned. If you realize that you are emotionally bottlenecked, ask others for their experiences and let them help you--you are not the only one struggling. To find a community more easily, check out the resources of AI Safety Support, such as the AI safety Slack (see below).
  2. Build habits: In general (not only for AI safety), building habits is a good way to get into a field. You could start by setting a daily alarm that reminds you to check the Alignment Forum, the EA Forum, or LessWrong. You don’t even have to read an article; just skimming the headlines is already enough to build up the habit.
  3. Don’t overload yourself: You don’t need to read everything all the time and it’s completely fine to ignore the latest trend. In many ways, this document is intended to provide a feeling of “If I do some of this, I’ll be just fine”.
  4. Choose your speed: Some people need to take it slow, others want to dive in and not think about anything else. If you want to speedrun this document, just do it. If you want to take it slow, that’s also fine.

Resources

The first five are sorted broadly by how much background knowledge you need, and the rest are harder to order.

  1. Brian Christian’s The Alignment Problem (book): beginner-friendly and very good to get a basic overview of what alignment is and why we need it.
  2. AI safety from first principles by Richard Ngo: Good dive into AI safety without requiring much background knowledge.
  3. The Alignment Newsletter by Rohin Shah: can contain technical jargon but provides a good sense of “what’s going on in AI safety”.
  4. The AGI safety fundamentals fellowship designed by Richard Ngo: It covers the most important topics in AI safety and I really liked it. It requires a bit of background knowledge but not much. If you just missed the yearly run, consider doing it outside of the official program with others. If you do just one thing on this list, it should be this program.
  5. Rob Miles’ YouTube channel: Rob claims that the videos require no background knowledge, but some probably do. The videos are really good.

Funding

If you want to contribute more seriously to a project, there is a good chance you can get funded for it, and you should have a low bar for applying. Even a rejection is helpful because it provides feedback for improvement. The funding depends a bit on your background, current career stage, and exact topic, but it’s important to know that the opportunities exist (see the funding list in Richard Ngo’s Careers in AI safety).

I hope this little overview helps. Let me know if you benefited from it or if you have suggestions for improvement. If you have more questions on AI safety, feel free to reach out or hit me up at any EA events I attend. I can’t promise that I’ll always find the time, but I’ll try my best.

If you want to be informed about new posts, you can follow me on Twitter.

L Purcell @ 2022-07-01T03:35 (+8)

Thanks mariushobbhahn, very useful. You summarised all my own hesitations. 

I have bookmarked a number of resources, signed up for a newsletter, and have more clarity over what my next steps should be (which includes seeking more help). 

Thanks for taking the time to put this together. I definitely benefitted.

mariushobbhahn @ 2022-07-01T06:39 (+2)

Great. Thanks for sharing. I hope it increases accountability and motivation!

Yonatan Cale @ 2022-03-28T16:34 (+8)

Thank you, this is very useful! 

I expect to refer lots of people to this post.

jskatt @ 2022-08-11T04:05 (+3)

This list is great. I recommend adding the new Intro to ML Safety course and the ML Safety Scholars Program. Or maybe everyone is supposed to read Charlie's post for the most up-to-date resources? It's worth clarifying.

mariushobbhahn @ 2022-08-11T07:51 (+2)

Added it. Thanks for pointing it out :) 

michaelchen @ 2022-03-30T13:24 (+3)

“you can usually find small online projects throughout the year”

Where?

Callum McDougall @ 2022-06-15T11:35 (+9)

Update on the project board thing - I'm assuming that was referring to this website, which looks really awesome!

https://aisafetyideas.com/

mariushobbhahn @ 2022-06-15T13:44 (+8)

Nice. It looks pretty good indeed! I'll submit something in the near future. 

Charles He @ 2022-06-17T00:21 (+2)

Wow, the site mentioned looks fantastic:

https://aisafetyideas.com/

(I don't know anything about AI safety or longtermism) but just to repeat the discussion above, this site looks great.

  • It has a PWA feel (if you click and interact, the response is instant, like a native app on an iPhone, not a website with a click delay).
  • The content seems really good.

There are probably considerations here, such as founder effects and seeding of content, but this implementation seems like a great pattern for community brainstorming for other causes.

Imagine getting THL staff and other EAs going on a site for animal welfare or global health.

mariushobbhahn @ 2022-03-31T11:39 (+1)

There is no official place yet. Some people might be working on a project board. See comments in my other post: https://forum.effectivealtruism.org/posts/srzs5smvt5FvhfFS5/there-should-be-an-ai-safety-project-board

Until then, I suggest you join the Slack I linked in the post and ask if anyone is currently searching. Additionally, if you are at any of the EAGs or other conferences, I recommend asking around.

Until we have something more official, projects will likely only be accessible through these informal channels. 

Rahela @ 2022-04-04T07:19 (+2)

Thanks mariushobbhahn, this is helpful. I just signed up to the newsletter.

AkshatN @ 2023-10-10T17:05 (+1)

The AGI safety fundamentals fellowship leads to a broken link, and the amount of googling I've done to find the actual page suggests that the website is actually down.

Is there any way something can be done about this?

Lorenzo Buonanno @ 2023-10-10T17:57 (+2)

Maybe it's this? https://aisafetyfundamentals.com/