Ajeya's Quick takes

By Ajeya @ 2025-10-02T18:20 (+6)

Ajeya @ 2025-10-02T18:20 (+69)

I bet a number of generalist EAs (people who are good at operations, conceptual research / analysis, writing, generally getting shit done) should probably switch from working on AI safety and policy to working on biosecurity on the current margin.

While AI risk is a lot more important overall (on my views there's ~20-30% x-risk from AI vs ~1-3% from bio), it seems like bio is a lot more neglected right now and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem (something that's much harder to come by in AI, especially outside of technical safety).

If you're a generalist working on AI because it's the most important thing, I'd seriously consider making the switch. A good place to start could be applying to work with my colleague ASB to help our bio team seed and scale organizations working on stuff like pathogen detection, PPE stockpiling, and sterilization tech. IMO switching should be especially appealing if:

To be clear, bio is definitely not my lane and I don't have super deep thinking on this topic beyond what I'm sharing in this quick take (and I'm partly deferring to others on the overall size of bio risk). But from my zoomed-out view, the problem seems both very real and refreshingly tractable.

elifland @ 2025-10-03T16:06 (+27)

Is the 1-3% x-risk from bio including bio catastrophes mediated by AI (via misuse and/or misalignment)? Is it taking into account ASI timelines?

Also, just comparing % x-risk seems to miss out on the value of shaping AI upside / better futures, s-risks + acausal stuff, etc. (also, are you counting AI-enabled coups / concentration of power?). And relatedly, the general heuristic of working on the thing that will be the dominant determinant of the future once developed (and which might be developed soon).

Ajeya @ 2025-10-06T17:28 (+4)

Is the 1-3% x-risk from bio including bio catastrophes mediated by AI (via misuse and/or misalignment)? Is it taking into account ASI timelines?

I'm largely deferring to ASB on these numbers, so he can potentially speak in more detail, but my guess is this includes AI-mediated misuse and accident (people using LLMs or bio design tools to invent nastier bioweapons and then either deliberately or accidentally releasing them), but excludes misaligned AIs using bioweapons as a tactic in an AI takeover attempt. Since the biodefenses work could also help with the latter, the importance ratio here is probably somewhat stacking the deck in favor of AI (though I don't think it's a giant skew, because bioweapons are just one path to AI takeover).

ASB has pretty short ASI timelines that are broadly similar to mine and these numbers take that into account.

Also, just comparing % x-risk seems to miss out on the value of shaping AI upside / better futures, s-risks + acausal stuff, etc. (also, are you counting AI-enabled coups / concentration of power?). And relatedly, the general heuristic of working on the thing that will be the dominant determinant of the future once developed (and which might be developed soon).

If you feel moved by these things and are a good fit to work on them, that's a much stronger reason to work on AI over bio than most people have. But the vast bulk of generalist EAs working on AI are working on AI takeover and more mundane misuse stuff that feels like it's a pretty apples-to-apples comparison to bio.

David Thorstad @ 2025-10-05T07:32 (+11)

There's not very much evidence for existential risk from biological causes. I've had a very hard time getting anyone to even tell me what they are concerned about, and when they do, it is not very plausible.

ASB @ 2025-10-05T14:07 (+17)

Mirror life is a concrete example of something I would consider an existential risk if we were unprepared. I like Niko and Fin's writeup: https://press.asimov.com/articles/mirror-life