[Link post] Paths To High-Level Machine Intelligence

By Daniel_Eth @ 2021-09-22T02:43 (+23)

This is a linkpost for https://www.lesswrong.com/posts/amK9EqxALJXyd9Rb2/paths-to-high-level-machine-intelligence

In this post, we map out cruxes of disagreement relevant for AI timelines and paths to high-level machine intelligence (HLMI).

We examine both (1) hardware progression and (2) AI progression and requirements. While (1) is relatively straightforward, the bulk of the post focuses on (2). For (2), we consider three methods of estimating AI timelines: an inside-view "gears-level" model of specific pathways to HLMI, analogies between HLMI and other developments, and extrapolations of automation and progress in AI subfields.
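As a rough illustration of the kind of hardware-trend extrapolation involved in (1), here is a minimal sketch assuming a constant doubling time for price-performance. The function name and all numeric values are hypothetical placeholders for illustration, not figures from the post.

```python
# Illustrative sketch only: projects hardware price-performance forward under
# an assumed constant doubling time. All parameters are hypothetical.

def extrapolate_flop_per_dollar(base_year: int,
                                base_flop_per_dollar: float,
                                doubling_time_years: float,
                                target_year: int) -> float:
    """Project effective FLOP per dollar assuming steady exponential growth."""
    years_elapsed = target_year - base_year
    return base_flop_per_dollar * 2 ** (years_elapsed / doubling_time_years)

if __name__ == "__main__":
    # Placeholder baseline and doubling time, chosen purely for illustration.
    projection_2040 = extrapolate_flop_per_dollar(
        base_year=2021,
        base_flop_per_dollar=1e10,
        doubling_time_years=2.5,
        target_year=2040,
    )
    print(f"Projected FLOP/$ in 2040: {projection_2040:.2e}")
```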

For the inside-view estimate, the pathways we consider include current deep learning plus "business-as-usual" advances, hybrid statistical-symbolic AI, whole brain emulation, and so on. For each pathway, we consider both hardware requirements (compared against the hardware availability projections from (1)) and software requirements (which depend on various cruxes, such as whether an adequate environment can be created or whether brain-scanning technology becomes sufficient).
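To give a flavor of how an uncertain hardware requirement for one pathway might be compared against a projected hardware supply curve, here is a toy Monte Carlo sketch. This is not the authors' model: the supply curve, the log-normal requirement distribution, and every parameter are made-up placeholders.

```python
# Toy Monte Carlo sketch (hypothetical parameters throughout): sample an
# uncertain compute requirement for one pathway and find the first year an
# extrapolated hardware supply curve would meet it.
import numpy as np

rng = np.random.default_rng(0)

def available_flop(year: np.ndarray) -> np.ndarray:
    """Hypothetical supply curve: affordable FLOP doubling every 2.5 years."""
    return 1e24 * 2 ** ((year - 2021) / 2.5)

# Placeholder log-normal uncertainty over the compute the pathway requires.
required_flop = rng.lognormal(mean=np.log(1e28), sigma=2.0, size=100_000)

years = np.arange(2021, 2101)
supply = available_flop(years)

# For each sampled requirement, the first year supply meets it (NaN if never).
first_met = np.array([years[supply >= r][0] if (supply >= r).any() else np.nan
                      for r in required_flop])

print("Median year hardware suffices:", np.nanmedian(first_met))
print("Fraction not met by 2100:", np.isnan(first_met).mean())
```

A fuller model would, of course, also condition on the software cruxes rather than treating hardware sufficiency as the only gate.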

The other two estimation methods likewise depend on further cruxes, such as whether algorithmic progress is mainly driven by hardware progress.

This post is part of a project in collaboration with David Manheim, Aryeh Englander, Sammy Martin, Issa Rice, Ben Cottier, Jérémy Perret, Ross Gruetzemacher, and Alexis Carlier.

We think three main groups of people would benefit from reading the post:

Again, here's a link to the post: https://www.lesswrong.com/posts/amK9EqxALJXyd9Rb2/paths-to-high-level-machine-intelligence