Superforecasting the premises in “Is power-seeking AI an existential risk?”
By Joe_Carlsmith @ 2023-10-18T20:33 (+114)
This is a crosspost, likely from LessWrong; the full post can be read there.
Dr. David Mathers @ 2023-10-20T09:09 (+27)
Maybe there's just nothing interesting to say (though I doubt it), but I really feel like this should be getting more attention. It's an (at least mostly; plausibly some of the supers were EAs) outside check on the views of most big EA orgs about the single best thing to spend EA resources on.
SummaryBot @ 2023-10-18T21:12 (+1)
Executive summary: Superforecasters' aggregated probabilities for the premises in the author's report on power-seeking AI as an existential risk differ from the author's original estimates, higher on some premises and lower on others.
Key points:
- Superforecasters rated premises on AI timelines and incentives higher but alignment difficulty and impact of failures lower than the original report.
- The superforecasters' overall probability of existential catastrophe by 2070 was 1%, versus the author's original 5%.
- The author has not substantially updated, finding the superforecasters' arguments unpersuasive and remaining uncertain how much to defer to aggregated group probabilities.
- How best to engage with reasoned disagreement is an open question in forecasting methodology.
- The author sees this as similar to deciding how much to update based on disagreement with markets and economists.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Paul_Christiano @ 2023-10-21T17:28 (+4)
I think the "alignment difficulty" premise was given higher probability by superforecasters, not lower probability.