Foresight for AGI Safety Strategy

By jacquesthibs @ 2022-12-05T16:09 (+14)

This post is a crosspost from LessWrong.

jacquesthibs @ 2022-12-05T20:09 (+6)

Here’s a comment I wrote on LessWrong in order to provide some clarification:

———

So, my difficulty is that my experience in government and my experience in EA-adjacent spaces have totally confused my understanding of the jargon. I'll try to clarify:

> My understanding of forecasting is that you would optimally want to predict a distribution of outcomes, i.e. the cone but weighted with probabilities. This seems strictly better than predicting the cone without probabilities, since probabilities allow you to prioritize between scenarios.

Yes, in the end, we still need to prioritize based on the plausibility of a scenario.

> I understand some of the problems you describe, e.g. that people might be missing parts of the distribution when they make predictions and should spread them wider, but I think you can describe these problems entirely within the forecasting language, and there is no need to introduce a new term.

Yeah, I care much less about the term/jargon than about the approach. In other words, what I'm hoping to see more of is coming up with a set of scenarios and forecasting across the cone of plausibility (weighted by probability, impact, etc.) so that we can create a robust plan and identify opportunities that improve our odds of success.
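
To make that concrete, here is a minimal sketch of what probability- and impact-weighted scenario prioritization could look like. This is my own illustration, not something from the original post: the scenario names, weights, and the `tractability` factor are all hypothetical placeholders, and the scoring rule is just one simple way to combine the weights.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One point in the cone of plausibility (all values hypothetical)."""
    name: str
    probability: float   # subjective credence that this scenario occurs
    impact: float        # how much is at stake if it does (0-10 scale)
    tractability: float  # how much our actions could change the outcome (0-1)

def priority(s: Scenario) -> float:
    """Naive priority score: expected stakes we can actually influence."""
    return s.probability * s.impact * s.tractability

# A hypothetical cone of scenarios, weighted rather than treated as equally likely.
cone = [
    Scenario("slow takeoff, cooperative labs", 0.30, 6.0, 0.8),
    Scenario("slow takeoff, race dynamics",    0.40, 8.0, 0.5),
    Scenario("fast takeoff",                   0.20, 9.5, 0.2),
    Scenario("long timelines / AI winter",     0.10, 2.0, 0.9),
]

# Rank scenarios so planning effort goes where it buys the most.
for s in sorted(cone, key=priority, reverse=True):
    print(f"{s.name:35s} priority={priority(s):.2f}")
```

Under this framing, a robust plan is one that scores acceptably across the whole ranked list, not just in the single most likely scenario.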