Changes to the 80k podcast
By Michelle_Hutchinson @ 2025-11-10T18:41 (+72)
This is a brief update about what’s been happening on the 80k podcast team over the last 6 months, because we’ve undergone quite a few changes. We’re also hiring (more details below).
As background: The podcast has been running for 8(!) years, produced and predominantly hosted by Rob Wiblin. Over that time it has grown to ~127,000 subscribers across Spotify, YouTube and Apple Podcasts, our production values and knowledge of how to execute a great podcast have improved considerably, and Luisa Rodriguez joined Rob as a host. But, given capacity constraints, 80k hasn't invested significant resources into testing out ways to substantially scale the podcast's impact. Rob hosted while also managing the team and leading on strategy, and alongside this he has led projects like selecting and appointing a new board for 80,000 Hours as we spun out of EV. Over the coming year, with me now focused on strategy for the podcast, 80k aims to invest notably more in experimenting with potential ways to grow the podcast team and scale its impact.
There are a couple of reasons we think it's worth investing significant resources into growing the podcast team. The first is that the podcast has had a good track record of impact:
- In OpenPhil’s 2023 survey about what caused people to be interested in GCR work, 80k is mentioned as a leading influence, and 45% of the people who said they were significantly influenced by 80k cited the podcast as an important part of how 80k had influenced them/their trajectory. [1]
- The EA survey tells a similar story. For example, in the 2024 survey, the 80k podcast rates similarly to things like EAG in terms of influence on people's ability to have an impact.
The second reason is timing. Along with the rest of 80k, the podcast team thinks that our largest vector for improving the long-run future right now is focusing on the transition to transformative AI. AI progress is coming more into the public consciousness, and a growing number of societal decision makers need to make choices which affect, and/or are affected by, AI developments, often without a background in the technology. Having been learning about and working on this for a decade, we're relatively well placed to inform people about how the technology might affect the world, and the risks associated with it. Our aim is that the discussions and issues we highlight on the podcast will help guide the decisions people take in their current roles, as well as their decisions about where to work.
In order to capitalise on this, we'd like to increase our output of episodes and experiment with more formats. To achieve that, we need to hire additional capacity. I joined the team to push on these goals and allow Rob to focus on hosting. We started by running a hiring round for additional hosts, which we're nearing the end of, though we will likely want to hire further new hosts in 2026. A large part of what goes into an episode isn't the conversation itself or finessing the audio: it's researching topics, shaping the episode flow, and publicising the result. Right now, the hosts are mostly responsible for these tasks, but we don't think that's necessary. We're looking to hire people with complementary skills to take on a large proportion of these responsibilities, allowing the team to really ramp up our output. To give a sense of the types of work:
- Coming up with topics for episodes that are important for our audience to hear and finding potential guests to cover those topics
- Suggesting and editing questions for an interview, and editing episodes post-recording to streamline them and make them more engaging
- Figuring out titles and framings for episodes, and drafting copy to accompany them on other platforms (such as X) to help them reach the right people
If you would like to join our small, ambitious team to produce content to shape humanity's long-run trajectory as it navigates transformative AI, please consider applying. You can read more about the roles we're planning to add and what we're looking for on our website.
To galvanise our team around a central mission, and to help us remain ambitious and remember what we can achieve, I wrote a brief vision for the podcast over the coming years. I find these types of documents a bit tricky to engage with, because making them truly aspirational is often in direct tension with realism, so they always feel like a work in progress to me. But I thought people might like to get a sense of what the podcast team is currently aiming at. Since it was written as an internal doc for people with a lot of context, I've made an abridged version with some tweaks and some explanation (denoted by square brackets), which hopefully makes it more intelligible and helps avoid confusion. I wanted to get this out fairly swiftly though, so I'm afraid it still reads like an internal doc.
What does all this mean for what you might expect to be able to listen to and watch over the coming months?
- We're hoping to put out decidedly more episodes this quarter than we did last quarter, including interviews with Ajeya Cotra, Rob Long, Max Harms, Marius Hobbhahn and Dean Ball.
- Rob will be experimenting more with voice essays, starting with one about how his AGI timelines have shifted given developments over the past year.
- In the new year we hope to add a third host to the team and potentially try out other new formats such as debates.
- We don’t expect all our episodes to focus on AI, though we do expect it to be around 80% for the foreseeable future.
[1] People could cite multiple programs, but this was the highest of any 80k program, and 80k overall is consistently a leading influence on people in the OpenPhil survey. (See their public 2020 results.)
Matt_Lerner @ 2025-11-11T15:31 (+106)
It's great to see the podcast expanding. I think the ship has already sailed on this, but it feels important for me to flag two experiences I've had since the podcast's "shift to AI."
- I listen much less than I used to. This is partly because I end up thinking plenty about AI at work, but also because podcasts occupy a middle ground between entertainment and informativeness for me. Though I think AI is critically important, it is not something I get a real kick out of thinking and hearing about.
- I share episodes with non-EAs much less than I used to. Most normies I know are sick of hearing about AI and, moreover, there's no longer any content to engage people who don't want to listen to a three-hour podcast about AI. I think that's a shame, since many of those people would have happily listened to a three-hour podcast about e.g. vaccines, subscribed, and then learned about AI at a later date.
Christoph Hartmann 🔸 @ 2025-11-12T14:45 (+24)
This also applies to the 80k brand as a whole. I used to recommend it to people interested in having an impact with their career, but ever since 80k pivoted to an AI career funnel, I recommend it to fewer people, and always with the caveat of "They focus only on AI now, but there is some useful content hidden beneath."
Ozzie Gooen @ 2025-11-11T20:15 (+18)
"Though I think AI is critically important, it is not something I get a real kick out of thinking and hearing about."
Personally, I find a whole lot of non-technical AI content to be highly repetitive. It seems like a lot of the same questions are being discussed again and again with fairly little progress.
For 80k, I think I'd really encourage the team to focus a lot on figuring out new subtopics that are interesting and important. I'm sure there are many great stories out there, but I think it's very easy to get trapped into talking about the routine updates or controversies of the week, with little big-picture understanding.
Yarrow Bouchard 🔸 @ 2025-11-12T01:42 (+15)
My suggestion along these lines would be to try to get guests on who come with a different perspective on transformative AI or AGI than most of the 80,000 Hours Podcast's past guests or most people in EA. Toby Ord's episode was excellent in this respect; he's as central to EA as it gets, yet he was dumping cold water on the scaling trends many people in EA take for granted.
Some obvious big names that might be hard to get: François Chollet, Richard Sutton, and Yann LeCun (the links go to representative podcast clips for each one of them).
A semi-big name who will probably be easier to get: Jeff Hawkins of Numenta.
A less famous person who might be a good stand-in for Richard Sutton's perspective on AI is Edan Meyer, an academic AI researcher.
With some research and asking around, you could probably generate more ideas for guests along these lines.
I think one good way to get more clarity on the big picture and stimulate more creative thinking is to bring people into the conversation who have more diverse viewpoints. Even if you were to come at it from the perspective of being 95% certain that LLMs will scale to AGI within 10 years (which AFAIK is a big exaggeration of the 80,000 Hours team's real views), one really useful part of having guests like these would be prompting the hosts and the audience to think about why, exactly, those guests are wrong in their LLM skepticism.
I think even in cases where you are 95% sure you're right, talking to brilliant, eloquent experts who disagree can only serve to sharpen your thinking and put you in a better position to think about and articulate your case. Conversely, I think when you're only talking to people who agree with you, you don't develop an ability to make a persuasive case to people who don't already agree. You take for granted things other people don't take for granted, and you're maybe not even aware of other people's objections, qualms, and concerns. Maybe the most important part of persuasion is showing people you know what they have to say and that you have an answer to it.
A lot of the stated goals in the Google Doc come down to persuasion, so this seems in line with your goals.
Michelle_Hutchinson @ 2025-11-12T17:20 (+4)
Thanks for all the suggestions!
Michelle_Hutchinson @ 2025-11-12T17:20 (+4)
Thanks for the nudge. I agree it seems crucial to try to find things to cover that are actually different, both for the sake of being interesting and, more importantly, to actually have an impact. I'd love to hear any particular suggestions you have about things that seem underexplored and important to you!
FJehn @ 2025-11-12T12:31 (+13)
I had a similar experience. I recommended the podcast to dozens of people over the years, because it was one of the best podcasts for fascinating interviews with great guests on a very wide range of topics. However, since it switched to AI as the main topic, I have recommended it to zero people, and I don't expect this to change if the focus stays this way.
Michelle_Hutchinson @ 2025-11-12T16:37 (+2)
Useful to know, thanks.
Michelle_Hutchinson @ 2025-11-11T16:00 (+5)
Thanks for letting us know! That's useful data.
david_reinstein @ 2025-11-12T20:22 (+4)
Do you see other podcasts filling the long-form, serious/in-depth, EA-adjacent/aligned niche in areas other than AI? E.g., GiveWell has a podcast, but I'm not sure it's the same sort of thing. There's also Hear This Idea, and Clearer Thinking and Dwarkesh Patel often cover relevant stuff.
(As an aside, I've been thinking of potentially doing a podcast involving researchers and research evaluators linked to The Unjournal, if I thought it could fill a gap and we could do it well, which I'm not sure of.)
Matt_Lerner @ 2025-11-12T21:29 (+29)
No, I really don't. Sometimes you see things in the same territory on Dwarkesh (which is very AI-focused) or Econtalk (which is shorter, and less and less interesting to me lately). Rationally Speaking was wonderful but appears to be done. Hear This Idea is intermittent and often more narrowly focused. You get similar guests on podcasts like Jolly Swagman, but the discussion is often at too low a level, with worse questions asked. I have little hope of finding episodes like those with Hannah Ritchie, Christopher Brown, Andy Weber, or Glen Weyl anywhere else anytime soon. It's actually a big loss in my life, and (IMO) it leaves many future potential EAs and AI people on the table.
Vasco Grilo🔸 @ 2025-11-13T19:46 (+2)
Hi Matt. Since you mentioned "vaccines", you may be interested in the podcast Hard Drugs.
Hard Drugs is a show by Saloni Dattani and Jacob Trefethen about medical innovation: how to speed it up, how to scale it up, and how to make sure lifesaving tools reach the people who need them the most. It is brought to you by Works in Progress and Open Philanthropy. Listen on your favorite podcast app or subscribe to our YouTube channel.
david_reinstein @ 2025-11-12T20:24 (+2)
Here are some suggestions from 6 minutes of ChatGPT thinking. (Not all are relevant; e.g., I don't think "Probable Causation" is a good fit here.)
sn @ 2025-11-12T15:03 (+14)
I've had a drop-off in non-EA friends who are willing to listen to AI episodes. I think the AI stuff is repetitive. I liked the podcast more and recommended it more before the shift to focus on AI, and I am an AI professional interested in safety. I think the priority should be interesting conversations.
Separately, most podcast listeners I know subscribe to podcasts. There is an effect where publishing too much can cause people to unsubscribe, when it starts to feel like spam. I doubt the main thing preventing further expansion is quantity. I don't think a third host would meaningfully help things.
Michelle_Hutchinson @ 2025-11-12T17:18 (+5)
With respect to numbers of episodes:
- We're considering having potential new hosts publish on new feeds. They might try a different format, or the same format but with a different target audience.
- Over the last ~year, most of our growth has been on YouTube rather than audio feeds. As far as we can tell, on YouTube it's very uncommon to find new episodes directly via a subscription, and relatedly there seems to be much less of a drop in engagement if you publish episodes more than once per week.
This is all pretty tentative. For example, we might find that the audiences we can reach on YouTube turn out to be much worse than those we reach via other means. That might push us towards trying to increase the quality rather than quantity of episodes. But more capacity would also help us improve quality: Rob has historically felt the need to rush to get episodes out, and could usefully spend decidedly more time researching and editing episodes.
sn @ 2025-11-12T17:43 (+1)
Growth coming from YouTube recommendations makes sense. In that case I agree that more episodes is not a bad thing.
Michelle_Hutchinson @ 2025-11-12T16:37 (+2)
Thanks for the feedback!
HillaryfromCanada @ 2025-11-14T16:57 (+5)
Adding some weight to others' comments: since 80k went whole-hog for AI-more-AI-nothing-but-AI, what was initially interesting and compelling AI content, which I listened to as part of a broader repertoire of distinctly EA takes on things, has come to feel like a firehose, and there isn't interesting content I look to the podcast for now. I miss the other areas of content a lot.
Encountering those episodes, which I'd listen to in 30-45 min chunks over a few days, was indescribably useful. The ones with Ajeya Cotra on worldview diversification, Rachel Glennerster on market shaping, Karen Levy on program development and evaluation, and Hugh White on Donald Trump/US change were so genuinely novel and informative to me that the perspectives they shared are now baked into how I think about things. The podcast's shift since then to 1000 angles of AI risk has nowhere near this value.
Editing to add something less crabby:
Some areas of AI risk content that would be substantially interesting and useful, and would re-engage me, would be around building out an actual understanding of AI risk. The AI discourse given any attention here has been representative of a dangerously homogeneous group for something prioritized for its existential level of risk, global impact, etc. (mostly white men, almost entirely from W.E.I.R.D. countries, middle-class, with narrowly technical interests, etc.): more or less a mirror of the same people causing the risk. For novel and valuable content, I want to hear perspectives that can help fill out even a bit more of the ENTIRE REST OF HUMANITY's perspectives on this one: countries/regions, ethnicities, life stages, genders, walks of life, socio-economic statuses, faiths, sectors, families, education experiences. I have a sense we can't possibly have a good grasp of what the major risks are if our understanding is based exclusively on what's most valued by the most narrow group of people. It would also open up so much rich space for new problem frames, and hence new solutions. I would avidly listen to this kind of content. The podcast team expansions would ideally reflect people with the abilities to build this out...
Michelle_Hutchinson @ 2025-11-15T13:31 (+2)
Thanks for letting me know your experience with the podcast. I'm sorry you're finding it less valuable right now. To be clear, things like the Ajeya and Hugh White episodes are centrally the kinds of episodes we'll still be putting out a lot of (in fact, we've recorded another episode with Ajeya which we're getting ready to release). We'll likely still do occasional episodes like the Levy and Glennerster ones, but fewer.
I agree that AI risk is an incredibly large, complex area and that it's going to take a lot more work for us to build out a full understanding of it!
AnotherEAJobSeeker @ 2025-11-13T13:46 (+1)
While I really like the insight that can come from these deep dives and I understand the complexity of topics and risks of oversimplification, I find the podcast episodes really long and hard to digest.
This may not be the case for all listeners, but I think a lot of people listen to podcasts during their commute, or while doing other activities that do not require their full attention. For more complex topics, one may prefer to pay more attention, and maybe avoid multi-tasking, or choose to listen only while doing really "low-brain" activities. But those times are limited, and I can say for myself that finding 2 or even 4 hours where I can give my full attention to a deep dive into a specific topic can be challenging. This has at times made me zone out during 80k podcast episodes, or even avoid starting some of them entirely. Again, I understand the downsides, but I think it could be beneficial to edit conversations into more digestible lengths of 30 minutes to 1 hour, maybe per sub-topic. This would in my view make it easier for new listeners to get into the podcast, and help spread it to an audience that is less knowledgeable about the topics being covered, and that maybe is not (yet) motivated enough to dedicate 2 hours to these new topics.
Michelle_Hutchinson @ 2025-11-14T10:24 (+3)
Thanks for letting us know about your experience with it!
Yes, length is one of the things we're interested in experimenting with. We've also started putting a bit more work into producing written 'interview in a nutshell' sections on the transcripts, so that people can get the key insights swiftly (or use it to decide whether to listen to the full episode).
MM25 @ 2025-11-12T02:27 (+1)
Imbuing the values of 80,000 Hours into formats with orders of magnitude more reach would likely be extremely beneficial. Looking forward to seeing the organization evolve and adapt alongside an increasingly relevant technological landscape.
Michelle_Hutchinson @ 2025-11-12T17:22 (+2)
Are there particular formats you have in mind?
If you haven't come across it yet, you might be interested in the work 80k's video team does.