Forecasting Biosecurity Risks from LLMs

By Forecasting Research Institute, 2025-07-01

This is a linkpost to https://forecastingresearch.org/ai-enabled-biorisk

The Forecasting Research Institute just released a new pre-print: "Forecasting biosecurity risks from large language models and the efficacy of safeguards."

Here's an overview of the paper:

As AI capabilities improve, concerns have grown about the potential biosecurity risks posed by frontier large language models (LLMs). This study systematically assesses expert beliefs about these risks through surveys of 46 domain experts in biosecurity and biology and 22 expert forecasters (superforecasters). The median expert forecasts that if AI were to meet specific performance benchmarks, such as matching expert teams on a virology troubleshooting test, the annual risk of a human-caused epidemic causing more than 100,000 deaths would rise from 0.3% to 1.5%, a fivefold increase.
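
For intuition, here is a minimal sketch of what those figures imply. Only the 0.3% and 1.5% annual figures come from the paper; the 10-year horizon and the assumption of a constant, independent annual risk are ours, added purely for illustration:

```python
# Illustrative arithmetic on the median expert forecasts quoted above.
# Assumptions (ours, not the paper's): the annual risk is constant and
# independent across years, and a 10-year horizon is used for intuition.

baseline_annual = 0.003   # 0.3% annual risk of a >100,000-death human-caused epidemic
elevated_annual = 0.015   # 1.5% annual risk if AI meets the capability benchmarks

def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability of at least one such epidemic over `years` years,
    given a constant, independent annual probability."""
    return 1 - (1 - annual_p) ** years

print(f"Risk ratio: {elevated_annual / baseline_annual:.1f}x")                        # 5.0x
print(f"10-year risk at baseline: {cumulative_risk(baseline_annual, 10):.1%}")        # ~3.0%
print(f"10-year risk if benchmarks met: {cumulative_risk(elevated_annual, 10):.1%}")  # ~14.0%
```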

However, both experts and superforecasters significantly underestimate the pace of AI progress, predicting that these capabilities will not emerge until at least 2030. In fact, new research conducted in collaboration with SecureBio indicates that some of these capabilities have already been achieved.

Experts generally believe that mitigation measures can substantially reduce these risks. When asked to assume that mitigations such as mandatory screening of synthetic nucleic acid orders and AI model safeguards are in place, experts lower their risk forecasts back to near baseline levels. The results suggest that while the biological risks posed by LLMs may be serious, there are promising avenues for mitigation.

Read the full pre-print here: https://forecastingresearch.org/s/ai-enabled-biorisk.pdf