Literature review of Transformative Artificial Intelligence timelines

By Jaime Sevilla @ 2023-01-27T20:36 (+148)

Lizka @ 2023-02-08T13:12 (+33)

Some excellent content on AI timelines and takeoff scenarios has come out recently:

I'm curating this post, but encourage people to look at the others if they're interested. 

Things I really appreciate about this post: 

Other notes: 

  1. I do wish it were easier to tell how independent these different approaches/models are. I like the way model-based forecasts and judgement-based forecasts are separated, which already helps (I assume that e.g. the Metaculus estimate incorporates the others' forecasts and the models).
  2. I think some of the conversations people have about timelines focus too much on what the timelines look like and too little on what they mean for how we should act. I don't think this is a weakness of this lit review, which is very useful and does what it sets out to do (aggregate different forecasts and explain different approaches to forecasting transformative AI), but I wanted to flag it.
Jaime Sevilla @ 2023-02-08T15:25 (+4)

Thank you Lizka, this is really good feedback.

Ozzie Gooen @ 2023-01-30T00:58 (+11)

This seems pretty neat, kudos for organizing all of this! 

I haven't read through the entire report. Is there any extrapolation based on market data or outreach? I see the observation that market actors don't seem to have short timelines presented as the main argument that timelines are at least 30+ years out.

Jaime Sevilla @ 2023-01-30T13:11 (+2)

Extracting a full probability distribution from e.g. real interest rates requires multiple assumptions about e.g. GDP growth rates after TAI, so AFAIK nobody has done that exercise.

Ozzie Gooen @ 2023-01-31T21:57 (+5)

Yeah, I assume the full version is impossible. But maybe there are at least some simpler statements that can be inferred? Like, "<10% chance of transformative AI by 2030."

I'd be really curious to get a better read on what market specialists around this area (maybe select hedge fund teams around tech disruption?) would think.

Jaime Sevilla @ 2023-01-31T22:30 (+6)

I don't think it's impossible: you could start from Halperin et al.'s basic setup [1], plug in some numbers for p(doom), the long-run growth rate, etc., and get a market opinion.

I would also be interested in seeing analysis from hedge fund experts and others. In our cursory lit review we didn't come across any that was readily quantifiable (would love to learn if there is one!).

[1] https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or
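
As a very rough illustration of the kind of exercise "plugging in some numbers" would involve, here is a back-of-the-envelope sketch in Python. It is not Halperin et al.'s actual model: it just inverts a simplified Ramsey-style relation between the long-run real rate, expected growth, and an annualised extinction hazard, and every parameter value below is an illustrative assumption.

```python
import math

# Back-of-the-envelope sketch (not Halperin et al.'s actual model): invert a
# simplified Ramsey-style relation
#   r = rho + gamma * E[growth] + annualised extinction hazard
# to find the probability of TAI within the horizon that is consistent with an
# observed long-run real rate. All parameter values are illustrative assumptions.

rho = 0.005       # pure time preference (assumed)
gamma = 1.0       # elasticity of marginal utility of consumption (assumed)
g_normal = 0.01   # annual growth if TAI does not arrive within the horizon (assumed)
g_tai = 0.30      # annual growth after aligned TAI (assumed)
p_doom = 0.2      # P(unaligned outcome / extinction | TAI) (assumed)
horizon = 30      # years covered by the long-run real rate

def implied_real_rate(p_tai: float) -> float:
    """Real rate implied if the market assigns p_tai to TAI within the horizon."""
    expected_growth = (1 - p_tai) * g_normal + p_tai * (1 - p_doom) * g_tai
    extinction_hazard = -math.log(1 - p_tai * p_doom) / horizon  # annualised
    return rho + gamma * expected_growth + extinction_hazard

def market_implied_p_tai(observed_rate: float) -> float:
    """Bisect for the TAI probability consistent with the observed real rate."""
    lo, hi = 0.0, 0.999
    for _ in range(100):
        mid = (lo + hi) / 2
        if implied_real_rate(mid) < observed_rate:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# With these assumptions, an observed 30-year real rate of 2% implies roughly a
# 2% market-assigned probability of TAI within 30 years.
print(market_implied_p_tai(0.02))
```

Even this toy version shows why the exercise is assumption-heavy: the implied probability is driven almost entirely by the assumed post-TAI growth rate and p(doom).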

Daniel_Eth @ 2023-01-30T02:41 (+4)

I notice that some of these forecasts imply different paths to TAI than others (most obviously, WBE assumes a different path from the others). In that case, does taking a linear average make sense? Consider a case where you think WBE is likely moderately far away, while the other paths are more uncertain and may be very near or very far. Then a constant weight on the WBE probability wouldn't match your actual views.
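
To make the disjunctiveness worry concrete, here is a toy comparison with made-up per-path CDFs. If TAI arrives when the first path succeeds, and the paths are (assumed) independent, the aggregate is 1 - Π(1 - F_i(t)), which can differ noticeably from a constant-weight linear average of the per-path forecasts:

```python
# Toy illustration (made-up numbers): constant-weight linear average of per-path
# forecasts vs. treating TAI as arriving when the first path succeeds.
# The paths are assumed independent; neither CDF comes from the actual review.

years = [2030, 2040, 2050, 2060, 2080, 2100]

# Hypothetical P(path delivers TAI by year t):
f_other = {2030: 0.10, 2040: 0.30, 2050: 0.45, 2060: 0.55, 2080: 0.65, 2100: 0.70}
f_wbe   = {2030: 0.00, 2040: 0.02, 2050: 0.10, 2060: 0.30, 2080: 0.70, 2100: 0.90}

for t in years:
    linear_avg  = 0.5 * f_other[t] + 0.5 * f_wbe[t]
    disjunctive = 1 - (1 - f_other[t]) * (1 - f_wbe[t])
    print(f"{t}: linear average = {linear_avg:.2f}, first-path-to-succeed = {disjunctive:.2f}")
```

With these made-up numbers the disjunctive aggregate is uniformly higher than the linear average, and the gap is largest in the decades where both paths have appreciable but not overwhelming probability.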

Jaime Sevilla @ 2023-01-30T13:21 (+2)

I am not sure I follow 100%: is your point that the WBE path is disjunctive from the others?

Note that many of the other models implicitly consider WBE, e.g. the outside-view models.

Daniel_Eth @ 2023-02-01T00:00 (+4)

Yeah, my point is that it's (basically) disjunctive.

Vasco Grilo @ 2023-02-05T08:23 (+2)

Thanks, this is just great!

The medians for the model-based and judgement-based timelines are 2089 and 2045 respectively (their mean is 2067). These are 44 years apart, so I wonder whether you thought about how much weight to give to each type of model.
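
As a small illustration of why the weighting matters (and why the median of the combined forecast need not sit at the mean of the two medians), here is a sketch with stand-in distributions: normals matching the reported medians of 2045 and 2089, with spreads that are pure assumptions rather than anything taken from the review.

```python
import numpy as np

# Illustrative only: how the aggregate median moves with the weight placed on
# judgement-based vs. model-based forecasts. The component distributions are
# stand-ins (normals matching the reported medians; the spreads are assumed),
# not the actual distributions from the literature review.

rng = np.random.default_rng(0)
judgement = rng.normal(2045, 15, 100_000)   # median 2045, spread assumed
model     = rng.normal(2089, 25, 100_000)   # median 2089, spread assumed

for w in [0.25, 0.50, 0.75]:                # weight on judgement-based forecasts
    pick = rng.random(100_000) < w          # sample from the weighted mixture
    mixture = np.where(pick, judgement, model)
    print(f"weight on judgement-based = {w:.2f}: mixture median ~ {np.median(mixture):.0f}")
```

Even at equal weights the mixture median lands below 2067 with these assumed spreads, because the judgement-based distribution is tighter; the answer is sensitive to both the weights and the shapes of the distributions.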