Assessing Near-Term Accuracy in the Existential Risk Persuasion Tournament
By Forecasting Research Institute @ 2025-09-02T12:22 (+41)
This is a linkpost to https://forecastingresearch.org/near-term-xpt-accuracy
Forecasting Research Institute just released a new report: Assessing Near-Term Accuracy in the Existential Risk Persuasion Tournament
In June–October 2022, we convened 169 people to participate in the “Existential Risk Persuasion Tournament” (XPT). The XPT participants included both superforecasters with proven forecasting track records and domain experts with subject-matter expertise. The tournament incentivized accurate forecasting and persuasive argumentation about long-term risks humanity may face, including risks from artificial intelligence (AI), climate change, nuclear war, and pandemics. This report analyzes respondents’ forecasting accuracy on 38 near-term questions that resolved by mid-2025.
The study finds overall performance parity between superforecasters and domain experts, with both groups underestimating AI progress and overestimating improvements in climate technology. Both superforecasters and domain experts substantially outperformed a baseline of educated members of the general public.
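For readers unfamiliar with how accuracy is scored in settings like this: forecasting tournaments conventionally use the Brier score (mean squared error between probability forecasts and binary outcomes). The sketch below illustrates the metric on toy data; it is an assumption for illustration, and the report's exact scoring rules may differ.

```python
# Illustrative only: Brier scoring of probabilistic forecasts on toy data.
# The XPT's actual scoring rules may differ from this simple mean Brier score.
import numpy as np

def brier_score(forecasts: np.ndarray, outcomes: np.ndarray) -> float:
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Ranges from 0 (perfect) to 1 (maximally wrong); always guessing 0.5
    scores 0.25, so lower is better.
    """
    return float(np.mean((forecasts - outcomes) ** 2))

# Toy example: three resolved questions, forecast probabilities vs. outcomes.
forecasts = np.array([0.9, 0.2, 0.6])
outcomes = np.array([1, 0, 0])
print(brier_score(forecasts, outcomes))  # (0.01 + 0.04 + 0.36) / 3 ≈ 0.137
```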
Read the full report here: https://forecastingresearch.org/near-term-xpt-accuracy
Noah Birnbaum @ 2025-09-09T06:54 (+1)
In the report, it says: "A natural question is whether more accurate near-term forecasters made systematically different long-term risk predictions. Figure 4.1 suggests that there is no meaningful relationship between near-term accuracy and long-term risk forecasts."
It then says: "Overall, our findings challenge the hope that near-term accuracy can reliably identify forecasters with more credible long-term risk predictions."
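For concreteness, here is a minimal sketch of the kind of check that claim describes: correlating each forecaster's near-term accuracy with their long-term risk forecast. The data and the choice of a rank correlation are hypothetical illustrations, not the report's actual method (which is presented in its Figure 4.1).

```python
# Hypothetical illustration: does near-term accuracy predict long-term risk
# forecasts? Toy data only; the report's actual analysis is its Figure 4.1.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_forecasters = 100

# Mean Brier score on resolved near-term questions (lower = more accurate).
near_term_brier = rng.uniform(0.05, 0.35, size=n_forecasters)

# Each forecaster's long-term existential-risk probability (e.g., by 2100),
# drawn from a skewed distribution since most such forecasts are small.
long_term_risk = rng.beta(1, 20, size=n_forecasters)

# Rank correlation is robust to the skewed scale of risk forecasts.
rho, p_value = spearmanr(near_term_brier, long_term_risk)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
# A rho near zero (as with this independent toy data) would mean near-term
# accuracy tells us little about who forecasts higher or lower long-term risk.
```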
One interpretation here (the one I take this report to be offering) is that short-term prediction accuracy doesn't extrapolate to long-term prediction accuracy in general. However, another interpretation that I see as reasonable (maybe somewhat, but not substantially, less so) is merely that superforecasters aren't very good at predicting things that require lots of technical knowledge (e.g., AI capabilities). After all, to my knowledge, very little work has been done to show that superforecasters are actually as good at predictions in technical subjects (almost all of the initial work was done in economics and geopolitics), and maybe there are some object-level reasons to think that they wouldn't be(?)
I'd be interested in hearing more thoughts, or in being corrected if I'm wrong here.
Also: "This research would not have been possible without the support of the Musk Foundation, Open Philanthropy, and the Long-Term Future Fund." Musk Foundation, huh? Interesting.