Clarifying and predicting AGI

By richard_ngo @ 2023-05-04T15:56 (+69)

This post is a slightly-adapted summary of two Twitter threads, here and here.

The t-AGI framework

As we get closer to AGI, it becomes less appropriate to treat it as a binary threshold. Instead, I prefer to treat it as a continuous spectrum defined by comparison to time-limited humans. I call a system a t-AGI if, on most cognitive tasks, it beats most human experts who are given time t to perform the task.
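To make the definition concrete, here's a minimal sketch (not from the original threads) of how the t-AGI criterion might be operationalized. It reads both instances of "most" as a strict majority, which is one plausible interpretation rather than anything the post commits to; the names TaskResult and is_t_agi are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """Scores for one cognitive task: the AI system's score, plus the
    scores of several human experts who each had time budget t."""
    ai_score: float
    expert_scores: list[float]

def is_t_agi(results: list[TaskResult]) -> bool:
    """True if, on most tasks, the system beats most human experts
    who were given time t to perform the task.

    'Most' is read as a strict majority in both places; that reading,
    and all names here, are assumptions for illustration only.
    """
    def beats_most_experts(r: TaskResult) -> bool:
        wins = sum(r.ai_score > s for s in r.expert_scores)
        return wins > len(r.expert_scores) / 2

    task_wins = sum(beats_most_experts(r) for r in results)
    return task_wins > len(results) / 2
```

On this reading, the same system can qualify as a 1-second AGI while failing to be a 1-month AGI, since the human-expert baseline strengthens as t grows.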

What does that mean in practice?

Some clarifications:

And very briefly, some of the intuitions behind this framework:

Predictions motivated by this framework

Here are some predictions—mostly just based on my intuitions, but informed by the framework above. I predict with >50% credence that by the end of 2025 neural nets will:

The best humans will still be better (though much slower) at:

FWIW my actual predictions are mostly more like 2 years out, but others will apply different evaluation standards, so 2.75 years (the time remaining until the end of 2025 as of when the thread was posted) seems more robust. Also, these predictions aren't based on any OpenAI-specific information.

Lots to disagree with here ofc. I'd be particularly interested in:


yefreitor @ 2023-05-07T05:04 (+3)

"Some projects take humans much longer (e.g. proving Fermat's last theorem) but they can almost always be decomposed into subtasks that don't require full global context (even tho that's often helpful for humans)."

At least for math, I don't think this is the right way to frame things: finding the right decomposition is often the hard part! "Average math undergrad"-level mathematical reasoning at vastly superhuman speed probably gets you a 1-year artificial mathematician, but I doubt it gets you a 50-year one.