List of AI safety newsletters and other resources
By Lizka @ 2023-05-01T17:24 (+49)
This is a list of AI safety newsletters (and some other ways to keep up with AI developments).[1]
- If you know of any that I’ve missed, please comment![2]
- And I'd love to hear about your experiences with or reflections on the resources listed.
Thanks to everyone who puts together resources like the ones I'm collecting here!
Just to be clear: the list includes newsletters and resources that I haven’t engaged with much.
Podcast & video channels
- Dwarkesh Podcast - deeply researched interviews
- AI Explained (YouTube) - ~weekly recaps of recent AI and AI safety news and research
- The AI Policy Podcast (CSIS) - discussions about AI policy and more, released every two weeks
- Robert Miles AI Safety (YouTube)
- AI X-risk Research Podcast with Daniel Filan (AXRP)
- The 80,000 Hours podcast often has episodes related to AI safety
- You can listen to many Forum posts via podcast feeds, including a "Curated and Popular" feed. See more info here.
- There are also the following, although I haven't personally engaged with them:
- The AI Safety Podcast
- The AI Daily Brief (YouTube) (not safety-focused)
- Future of Life Institute Podcast
Newsletters
General-audience, safety-oriented newsletters on AI and AI governance
Note that the EA Newsletter, which I currently run, also often covers relevant updates in AI safety.
AI Safety Newsletter (Center for AI Safety) (🔉)
- Stay up to date with the latest advancements in AI and AI safety with this newsletter, crafted by the experts at the Center for AI Safety. No technical background required.
- Audio available via the Forum cross-posts.
Transformer (Shakeel Hashim)
- A weekly briefing of AI and AI policy updates and media/popular coverage.
AI Policy Weekly (Center for AI Policy)
- Each week, this newsletter provides summaries of three important developments that AI policy professionals should know about, especially folks working on US AI policy. Visit the archive to read a sample issue.
More in-the-weeds safety-oriented newsletters
GovAI newsletter (Centre for the Governance of AI)
- Includes research, annual reports, and rare updates about programmes and opportunities. They also have a blog.
ChinAI (Jeffrey Ding)
- This weekly newsletter by Jeffrey Ding, a researcher at the Future of Humanity Institute, covers the Chinese AI landscape and includes translations from Chinese government agencies, newspapers, corporations, and other sources.
AI safety takes (Daniel Paleka)
- Summaries of news and research in AI safety (once every month or two).
The EU AI Act Newsletter (Future of Life Institute (FLI))
- A biweekly newsletter covering the latest developments in, and analyses of, the proposed EU AI law.
The Autonomous Weapons Newsletter (Future of Life Institute (FLI))
- Monthly updates on the technology and policy of autonomous weapons.
Other AI newsletters (not necessarily safety-oriented)
EuropeanAI newsletter (Charlotte Stix)
- This bi-monthly newsletter covers the state of European AI and the most recent developments in AI governance within the EU Member States.
Import AI (Jack Clark)
- This is a weekly newsletter about artificial intelligence, covering everything from technical advances to policy debates, as well as a weekly short story.
Policy.ai (Center for Security and Emerging Technology (CSET))
- A biweekly newsletter on artificial intelligence, emerging technology and security policy.
The AI Evaluation Substack
- A monthly digest covering the latest developments, research trends, and critical evaluations in the field of artificial intelligence.
TLDR AI
- Daily email about new AI tech.
Newsletters on related topics, or which often cover AI or AI safety
RAND newsletters (and research you can get on RSS feeds)
- E.g. Policy Currents
GCR Policy Newsletter
- A twice-monthly newsletter that highlights the latest research and news on global catastrophic risk.
Forecasting newsletter (and Alert/Sentinel minutes)
- Covers prediction markets and forecasting platforms as well as some changes in recent forecasts.
Crypto-Gram (Schneier on Security)
- A free monthly email digest of posts from Bruce Schneier's blog, Schneier on Security.
Oxford Internet Institute
- Distributed eight times a year, this newsletter provides information about the Oxford Internet Institute, a multidisciplinary research and teaching department of the University of Oxford dedicated to the social science of the Internet.
Statecraft (Santi Ruiz)
- Interviews with policymakers and others.
Don't Worry About the Vase (Zvi Mowshowitz)
- A blog "Don't Worry About the Vase" is a Substack newsletter: "Doing both speed premium short term updates and long term world model building. Currently focused on weekly AI updates. Explorations include AI, policy, rationality, medicine and fertility, education and games."
Other resources: collections, programs, reading lists, etc.
- Getting involved
- AI Safety Training - A database of training programs, conferences, and other events for AI existential safety, collected by AI Safety Support
- 80,000 Hours lists "collections" of opportunities for getting involved, like internships in ML, fellowships, and Master's options (see also the EA Opportunity Board and the overall 80,000 Hours job board).
- Emerging Technology Policy Careers compiles information about policy and public service careers in emerging tech policy.
- Recurring courses & programs
- AGI Safety Fundamentals (AGISF) - courses by BlueDot Impact on AI alignment (101 and 201) and AI governance
- MATS - the ML Alignment & Theory Scholars Program (previously SERI MATS: Stanford Existential Risks Initiative ML Alignment Theory Scholars)
- Intro to ML Safety by the Center for AI Safety (CAIS)
- I’m not sure how recurring or standardized these are:
- MLAB: Upskill in machine learning (advanced)
- ML Safety Scholars: Upskill in machine learning (beginners) (not running this year)
- Philosophy Fellowship: For grad students and PhDs in philosophy
- PIBBSS: For social scientists and natural scientists
- Lists/collections (see also reading lists from the above)
- Lots of Links by AI Safety Support
- A collection of AI Governance-related Podcasts, Newsletters, Blogs, and more (Alex Lintz, 2 Oct 2021)
- Resources that (I think) new alignment researchers should know about (LessWrong post by Akash, 29 Oct 2022)
- Resources I send to AI researchers about AI safety (LessWrong post by Vael Gates, 14 Jun 2022)
- List of AGI safety talks gathered by BlueDot Impact
- Forums
- AI Alignment Forum - quite technical, restricted posting
- LessWrong - lots of AI content, but also focuses on other topics
- Effective Altruism Forum - this platform
- A few highlighted blogs
- Cold Takes by Holden Karnofsky
- Planned Obsolescence by Ajeya Cotra and Kelsey Piper
- AI Impacts
- Epoch AI
Closing notes
- Please suggest additions by commenting!
- Please post reflections and thoughts on the different resources (or your personal highlights).
- Links to newsletters that are no longer active can be found in this footnote.[3]
- Thanks again to everyone.
- ^
The closest thing to this that I’m aware of is Lots of Links by AI Safety Support, which is great, but you can’t comment on it to add more and share reflections, which I think is a bummer. There’s probably more. (Relevant xkcd.)
- ^
Thanks to folks who directed me to some of the resources listed here!
Note also that in some cases, I'm quoting near-verbatim from assorted places that directed me to these or from the descriptions of the resources listed on their websites.
- ^
AI Safety Support (AI Safety Support)
Opportunities in AGI safety (BlueDot Impact)
ML Safety Newsletter (Center for AI Safety)
Alignment Newsletter (Rohin Shah)
This week in security (@zackwhittaker)
Jeremy @ 2023-05-01T17:59 (+10)
Zvi does a pretty in-depth AI news round up every week or two now, plus some individual posts on AI topics. Not exclusively about safety, but often gives his own safety-informed perspective on capabilities news, etc. https://thezvi.substack.com/
DLMRBN @ 2023-06-01T17:17 (+3)
On AI x-risk communication efforts: https://xriskobservatory.substack.com/
Henry Papadatos @ 2023-05-25T22:05 (+2)
The Navigating AI Risks newsletter could be relevant as well: "Welcome to Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence." https://navigatingairisks.substack.com/
EuanMcLean @ 2024-05-23T11:46 (+1)
I just finished this qualitative survey of AI safety experts; I think it might be a useful resource for people just starting their careers in AI safety! https://www.lesswrong.com/s/xCmj2w2ZrcwxdH9z3
Jakub Kraus @ 2023-06-02T04:06 (+1)
Probably worth adding a section of similar collections / related lists. For instance, see Séb Krier's post and https://aisafety.video/.
Apart Research has a newsletter that might be on hiatus.