AMA: Tom Ough, Author of ‘The Anti-Catastrophe League’, Senior Editor at UnHerd

By Toby Tremlett🔹, Tom Ough @ 2025-07-31T11:24 (+58)

Tom will answer the questions in the comments between 7 and 9 pm UK time on August 6th.

Tom Ough is a Senior Editor at UnHerd, co-host of the Anglofuturism podcast, and the author of a new book, ‘The Anti-Catastrophe League’. For the last few years, he has been writing in a range of publications about global catastrophic risks, and (sometimes relatedly) how Britain can prosper by adopting new technologies. Previously, he worked as a journalist at The Telegraph, where he wrote for nearly seven years.

You can see more of his articles on his website, and a few of his most EA-flavoured pieces below:

Recently, Tom published The Anti-Catastrophe League, a book profiling the individuals and organisations working to reduce global catastrophic risks. For readers of the EA Forum, many of the people profiled will be familiar, but the range of reporting in the book ensures that you’ll learn something new.

The Anti-Catastrophe League

Over to Tom, for a summary of his book:

INTRODUCTION: Defusing Doomsday

I introduce the concepts of GCRs and X-risk, relating an interview with Andy Weber, the American diplomat who tracked down the nuclear weapons material that post-Soviet Russia had lost control of.

CHAPTER 1: The Double Asteroid Redirection Test

A history of humanity’s relationship with asteroids, beginning with a period (early 19th-century France) when it was unfashionable to believe in their existence, and continuing to the NASA-led deflection of an asteroid in 2022.

CHAPTER 2: A Raging Star

A disputed GCR: solar storms. I interview John Kappenman, a doubted prophet of doom; Kappenman thinks a bad solar storm would be very difficult for civilisation to recover from. I touch on prepping and EMPs.

CHAPTER 3: Stoppering a Supervolcano

Continuing the theme of natural risks, I broach the concept of volcano geoengineering. I interview Anders Sandberg about tampering with supervolcanoes in order to stop them causing volcanic winter.

CHAPTER 4: ‘We Will Die, But We Will Sink Them All!’

Nuclear weapons. I explain how low-budget science snowballed into the most powerful weapons ever created. I note some of the parallels and differences between the development of nuclear weapons and the development of AI. I relate, via an interview, the story of Rose Gottemoeller, who was a schoolgirl during the Cuban Missile Crisis and went on to lead the US team that negotiated the last nuclear arms deal with Russia.

CHAPTER 5: Feeding the Benighted

It might still go pear-shaped. What if there’s a nuclear, volcanic, or asteroid winter? I interview David Denkenberger of ALLFED and discuss the psychological burden of his line of work as well as the logistics of feeding a darkened world.

CHAPTER 6: Tearing off the Hairshirt

Climate. I bemoan the neglect of nuclear power. I discuss various attempts to create clean firm power. I touch on Casey Handmer’s Terraform Industries, which synthesises fuel from air, and interview Andrew Song, a renegade solar geoengineer.

CHAPTER 7: Supercritical

A deep dive, if you’ll pardon the pun, on geothermal power, which has imported some techniques and machinery from oil and gas, and finally looks like it could make meaningful contributions to baseload power. I cover the deepest hole in the world (the Kola Superdeep Borehole); the US government’s geothermal field lab; and a site in Cornwall that I visited in 2022 or so. I touch on Quaise (which aims to vaporise bedrock with gyrotron-generated millimetre waves), interviewing two key figures (its founder, a converted oil and gas man; and its guiding light, as it were, a former fusion scientist), noting the technical challenges they face.

CHAPTER 8: The Misfit Prodigy

A biography of the Future of Humanity Institute, which, from 2005, pondered more speculative risks to humanity. I interview Nick Bostrom and many others. This chapter is an extended version of the article I wrote for Asterisk in 2024.

CHAPTER 9: Disease X

I posit a scenario in which an engineered pathogen escapes a lab and causes a pandemic more devastating than Covid-19. Imagining how we might defend ourselves, I refer to far-UVC light and to self-sterilising surfaces, among other measures.

CHAPTER 10: The West Siberian Hat Factory – and Worse

Bioweapons programmes have some history, which I nod to. I also talk about diplomatic efforts to head off pandemics.

CHAPTER 11: The Race for Alignment

Here I trace the influence of FHI on early work on AI alignment. I document the intellectual evolution of the field, interviewing Lee Sharkey and Neel Nanda on mech interp, and moving on to Buck Shlegeris’ breakaway from mech interp and his work on AI control.

CHAPTER 12: Poking the Shoggoth

Relying on insider accounts from 10 Downing Street and the UK AI Security Institute, I write about the nascent field of AI governance.

CHAPTER 13: Whistleblowers

I discuss the act of whistleblowing and interview Daniel Kokotajlo, co-author of the AI 2027 scenario.

CHAPTER 14: The Methuselarity

Here I write about the one that’ll get us all in the end: ageing. I interview some of the people at the cutting edge of longevity research.

CONCLUSION: Existential Hope

Give the ending away? Not likely! :)

What might you ask Tom?

Here are a few ideas for questions to get you thinking. But as always, ask anything!


PhilZ @ 2025-08-05T09:03 (+13)

Thanks for doing the AMA, Tom! How is EA perceived in UK journalism circles (if it's perceived at all)? What did people in those circles make of you writing a book on GCRs? 

Tom Ough @ 2025-08-06T18:43 (+4)

Thanks for having me. I think it might still be the case that most British journalists don't know what EA is. Journalists are naturally sceptical and some will think immediately of SBF. There's also the contrarian school of thought that longtermists are a bunch of sci-fi fantasists doing the bidding of Big Tech. I know at least some journalists are kindly disposed towards EAs, even if they don't share the EA worldview.

As for what my fellow journalists made of the project – there's probably a bit of a selection effect here! Maybe ask them when I'm not around...

Alejandro Acelas 🔸 @ 2025-08-04T11:45 (+6)

What are some groups or communities outside mainstream EA that would be worth engaging in our efforts to reduce existential risks? What are some specific areas or projects we could work on together?

Here are a few other questions that approach that from different angles:
* Who's the audience for your new book? Within them, who would be most keen to help reduce x-risk, even if it's not through EA-affiliated orgs or projects?
* What points in common does EA have with the UK Progress community? What can we do on the margin to work more closely together on our shared priorities?

Tom Ough @ 2025-08-06T18:58 (+3)

I think that EA, to the extent it's a coherent entity, has a pretty good mental map of who its potential collaborators are. I expect that mental map is more detailed than one I could knock up off the top of my head, so it probably won't be very useful for me to speculate on EA's behalf. I'll try anyway. The task of anyone trying to do anything is usually to convince governments to take their thing seriously, e.g. by setting up wastewater monitoring. Particularly when it comes to pandemics, there's probably lots of useful stuff that just hasn't quite been invented yet. Far-UVC, for instance, is a relatively recent innovation. Perhaps there are people in university science departments sitting on similarly good ideas.

As for EA x Progress, there's certainly overlap. Of course there are lots of different kinds of EA and lots of different kinds of progress-head. Both groups think of problems in fairly numerical terms. Both are animated by big ideas. I think there's probably some EA-flavoured work to be done on how good it'd be for the world if the West got its act together in economic terms. I also think there's a lot of overlap in terms of interest in AI, and that, between them, there's a plausible EA-Progress double act that tries to make AI go well without going badly, so to speak.

SarahBluhm @ 2025-08-06T12:57 (+5)

What advice would you give to a young EA interested in journalism as a path to impact? As a sub-case, what about a journalism career for someone who wants to report on niche topics like insect welfare, digital minds, wild animal suffering, etc.?

Tom Ough @ 2025-08-06T19:28 (+5)

The single attribute most likely to make the difference between the success and failure of a journalistic career is the ability to bring in original reporting: hitherto-untold stories that editors want to commission and readers want to read. I'd advise that journalists wanting to write about the topics you refer to (I feel I'm better-placed to advise young journalists than young EAs) think about them in those terms. As an editor, I'm less interested in commissioning pieces on interesting topics than I am in commissioning original treatments of interesting topics — especially when those treatments involve new reporting.

Tom Ough @ 2025-08-06T19:29 (+1)

P.S. If you are asking on your own behalf, then: good luck!

Agnes Stenlund @ 2025-08-06T12:48 (+5)

Any tips for writing about EA ideas in general? Curious about common mistakes you see people make, like commonly used framings or word choices that don't resonate with a broader audience.

Tom Ough @ 2025-08-06T19:09 (+4)

Hmm. I think it's always important for writers to think about the ways that human brains like to ingest information: via stories, e.g. stories about unusual people whose place in the world is at stake. Steven Pinker and Will Storr are both quite illuminating on this kind of thing. EA writing, in its natural form, has lots going for it, but it can be quite prolix and abstract for lay audiences — and indeed for EAs!

A classic bit of writerly advice is to tell stories — and I use that term in the most capacious way possible — the way you would in the pub, i.e. you reduce them to the bits people find the most interesting, knowing that you've got to be brief, and you speak in a conversational way. This approach is particularly valuable when the topic is unfamiliar, complex or abstract, as many EA ideas can be at first blush.

Angelina Li @ 2025-08-06T15:58 (+4)

These chapter titles are amazing!

You cover a lot of different topics within the GCR space in your book. Has the reception to any chapter surprised you?

Tom Ough @ 2025-08-06T18:24 (+3)

Thank you! It helps to have such lively subject matter. It's pretty early in the book's life, so I expect to have a better idea of reception as time passes. I note, though, that a reviewer implied that he didn't find geothermal power as exciting as I do. He wrote (enjoyably dryly) that "Such things as the possibility of geothermal energy exist beyond my natural curiosities". It is strange to imagine that not everyone finds geothermal ravishingly exciting, but ours is a strange world.

Anna Weldon @ 2025-08-06T12:46 (+4)

What role (if any) should EA play in shaping governance norms for geoengineering? Are there strong points of leverage that could be moved now? Relatedly, should anticipatory governance for geoengineering come from existing institutions or will we need bespoke institutions with narrower mandates and deeper technocratic knowledge?

Tom Ough @ 2025-08-06T20:19 (+3)

Very good questions. They go a little beyond my expertise. It's striking that, in Britain, recent government success stories (ARIA, AISI, the vaccines taskforce) have existed outside the normal bureaucratic structures. That principle probably holds for geoengineering projects. But geoengineering projects would have the additional complexity of affecting everyone, which seems to demand accountability — though, in practice, I'm not convinced that there'll ever be a situation where everyone on the planet gets a vote on something like stratospheric aerosol injection. I gather the Degrees Initiative is trying to address that question of accountability. I also think that advocates for geoengineering should be wary of appearing astroturfed.

Oscar Sykes @ 2025-08-06T17:45 (+3)

Who is the intended audience for the book? Is there something you're hoping to achieve with the book, or did you just want to write a book?

Tom Ough @ 2025-08-06T19:20 (+4)

I'd call the book a work of journalism rather than activism, give or take the odd bit of loose talk about how stupid it is that we don't build more nuclear power stations. For that reason, I don't have any particular goals in mind. That said, I would certainly like to live in a world that's safe from GCRs! 

For the same reason, I don't have a well-formed idea of a desired audience. (Naturally, I hope that millions of highly discerning readers find something in it worthwhile.) I'm sure my publisher has ideas regarding who's most likely to buy it. It's been said it's a book for "dads", though I want to make it clear to any mothers who happen to be EA Forum users that they are very welcome to read it too!

Oscar Sykes @ 2025-08-06T17:39 (+3)

In this book you interview lots of people in AI. As somewhat of an outsider, do you have any views on things you think the AI safety community is wrong about, ideas that seem far-fetched, or things they should be focusing more on?

Tom Ough @ 2025-08-06T19:48 (+2)

Thanks for your questions. I worry that even if AI ends up being safe in an X-risk sort of way, it might nevertheless create a world we wouldn't want to bring into existence. I think recommender algorithms are a good example of this: they are already extractive, and it's easy to imagine that more powerful AI will intensify such dynamics. I would like to see more attention being given to AI that is not only "helpful, honest and harmless", but conducive to fulfilment. What that AI looks like, I have no idea, and I don't want to invoke paternalism and nannying. But I'll give a hat-tip to my friends at the Meaning Alignment Institute, who have done a lot of work on this.

Oscar Sykes @ 2025-08-06T17:29 (+3)

How did you end up interested in both Effective Altruism and Anglofuturism? They seem like pretty different communities. What are some things you think EA gets wrong and Anglofuturism gets right and vice versa?

Tom Ough @ 2025-08-06T20:55 (+4)

Let me jump straight to the more interesting conceptual question! I think that EA, for all its interest in the future, is not very good at envisaging futures people actually want to live in, or might feel at home in. I think Anglofuturism does a much better job of that. I think the more tasteful versions of Anglofuturism draw on Lindy-er aesthetics than does EA, and are more responsive to the natural human inclination towards kinship.

This might, to an EA, be what Anglofuturism gets wrong. It depends on your moral calibration. If you are truly impartial then you would probably be unimpressed by a worldview that is informed by kinship. An EA could also accuse Anglofuturism of being overly whimsical in a world that's on fire.

I'll sign off here with sincere thanks to everyone who's participated in this AMA. 

Angelina Li @ 2025-08-06T15:58 (+2)

If you were to write a second book about GCRs, what would you focus on next?

Tom Ough @ 2025-08-06T18:15 (+1)

It feels less obvious to me than it used to that the West is on the right track and that progress, whatever progress might be, is broadly deterministic. You can see evidence of this concern in my more recent writing. I don't know that 'the West not being on track' counts as a GCR in the classic taxonomy, but I think that's the most likely topic I'd next write a book about.

Toby Tremlett🔹 @ 2025-08-06T13:32 (+2)

I'll crosspost a few questions we are collecting from elsewhere on the internet in this thread:

Toby Tremlett🔹 @ 2025-08-06T13:33 (+3)
Tom Ough @ 2025-08-06T18:34 (+1)

Restored by scientists to life and long-lost vitality, it can only be Aerodactyl.

Toby Tremlett🔹 @ 2025-08-06T13:34 (+2)
Tom Ough @ 2025-08-06T18:30 (+1)

Great spot. I originally proposed to call the book 'The Doom Busters', a reference that I think only the most Boomerish readers would have picked up on. My agent wisely advised me to come up with something that doesn't require a working knowledge of Fifties war films. I imagined the assonance of 'Anti-Catastrophe' might go down better with publishers than my original pun. The resonance with 'The Anti-Death League' wasn't deliberate, but I will have seen that title before, so maybe it was rattling around my head somewhere.

Oscar Sykes @ 2025-08-06T17:32 (+1)

Do you think religion is an important part of British culture and heritage? Could I be a genuine Anglofuturist as an atheist?

Tom Ough @ 2025-08-06T20:30 (+2)

To your first question: I think it's pretty clearly the case that religion, specifically Christianity and Anglicanism, has been central to British culture and heritage for a very long time. It's only very recently that Christianity has lost its place at the heart of British culture. (I'd like to read Bijan Omrani's book on this topic.)

To your second: I'm one of a few people who've banged on about Anglofuturism, so I won't speak for all of them. Speaking for myself, though, I'm not religious, and I have no sense that being religious is a requirement for Anglofuturism. (My podcast co-host Calum is a God-fearing man and might disagree.) Naturally, Anglofuturist depictions of the future invoke the country's heritage, including its religious heritage, but I don't think you have to be religious to enjoy the idea of a Bishopric of Mars.

Oscar Sykes @ 2025-08-06T17:30 (+1)

Someone told me you don't like reading books. Is that correct? If so, what made you want to write a book regardless?

Tom Ough @ 2025-08-06T18:21 (+2)

Ha! That's not true. I like reading books and am perpetually trying to spend more time doing it. I certainly like reading some books more than I do others! I think books are less obviously a good medium than they used to be, at least for topics that move quickly. Maybe that's where this impression came from. But I still think there's something to be said for them: once a publisher has commissioned a book, you can focus on a single project for much longer than is typically possible in journalism.