AI may attain human level soon

By Vishakha Agrawal, Algon @ 2025-04-23T11:10 (+2)

This is a linkpost to https://aisafety.info/questions/NM3C/2:-AI-may-attain-human-level-soon

This is an article in the new intro to AI safety series from AISafety.info, which writes introductory AI safety content. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

The main companies building AI, such as OpenAI, Anthropic, and Google DeepMind, are explicitly aiming for artificial general intelligence: AI that has the same kind of general reasoning ability that humans have, and can do all the same tasks and jobs (at least the ones that don’t require a physical body).

We’re not quite there yet, and predicting technological progress is hard. But there are a few reasons to think we may see human-level AI soon, within decades or even just a few years.

Given that intuition, modeling, and experts all point in a similar direction, it’s reasonable to plan for the possibility of AI reaching human level soon. But human level isn’t the limit: systems can potentially become far smarter than us.

Yarrow @ 2025-04-23T22:54 (+1)

There’s a big difference between behaviours that, if a human can do them, indicate a high level of human intelligence, and behaviours that we would need to see from a machine to conclude that it has human-level intelligence or something close to it.

For example, if a human can play grandmaster-level chess, that indicates high intelligence. But computers have played grandmaster-level chess since the 1990s, and yet clearly artificial general intelligence (AGI) or human-level artificial intelligence (HLAI) has not existed since the 1990s.

The same idea applies to taking exams. Large language models (LLMs) are good at answering written exam questions, but their success on these questions does not indicate that they have a level of intelligence equivalent to humans who score similarly on those exams. Inferring that it does is a fundamental error, akin to saying IBM’s Deep Blue is AGI.

If you look at a test like ARC-AGI-2, frontier AI systems score well below the human average.

It doesn’t appear that AI experts do, on the whole, agree that AGI is likely to arrive within 5 or 10 years, although of course some AI experts do think that. One survey of AI experts found that their median prediction is a 50% chance of AGI by 2047 (22 years from now), which is actually compatible with the prediction from Geoffrey Hinton you cited, who has thrown out 5 to 20 years with 50% confidence as his prediction.

Another survey found an aggregated prediction that there’s a 50% chance of AI being capable of automating all human jobs by 2116 (91 years from now). I don’t know why those two predictions are so far apart. 

(Edit on 2025-05-03 at 09:21 UTC: Oops, those are actually responses to two different questions from the same survey — the 2023 AI Impacts survey — not two different surveys. The difference of 69 years between the two predictions is wacky. I don't know why there is such a huge gap.)

If it seems to you like there’s a consensus around short-term AGI, that probably has more to do with who you’re asking or listening to than with what people in general actually believe. I think a lot of AGI discourse is an echo chamber in which people continuously hear their existing views affirmed and re-affirmed, and reasonable criticism of those views, even criticism from reputable experts, is often not met warmly.

Many people do not share the intuition that frontier AI systems are particularly smart or useful. I wrote a post here that points out that, so far, AI does not seem to have had much of an impact on either firm-level productivity or economic growth, and has achieved only the most limited amount of labour automation.

LLM-based systems have multiple embarrassing failure modes that seem to reveal they are much less intelligent than they might otherwise appear. These failures appear to be fundamental problems with LLM-based systems, not something that anyone currently knows how to solve.