Human-level is not the limit

By Vishakha Agrawal, Algon @ 2025-04-23T11:16 (+3)

This is a linkpost to https://aisafety.info/questions/NM3A/3:-Human-level-is-not-the-limit

This is an article in the new intro to AI safety series from AISafety.info, which writes introductory AI safety content. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

We’ve built technologies like skyscrapers that are much larger than us, bulldozers that are much stronger, airplanes that are much faster, and bridges that are much sturdier. Similarly, there are good reasons to think that machines can eventually become much more capable than we are at general cognitive problem-solving.

On an abstract level, our intelligence came from evolution, and while evolution results in well-optimized systems, it can’t “think ahead” or deliberately plan design choices. Humans are just the first generally intelligent system that worked, and there’s no reason to expect us to be close to the most effective design. Moreover, evolution works under heavy constraints that don’t affect the creators of AI.

Advantages that AI “brains” can eventually gain over ours include:

- Thinking at much higher speeds
- Running as many copies working in parallel
- Sharing information between copies instantly
- Being deliberately redesigned and improved, rather than waiting on evolution

Adding this all up, eventually, it becomes wrong to think of an advanced AI system as if it’s a single human genius — it becomes more like a hive of thousands or millions of supergeniuses in various fields, moving with perfect coordination and sharing information instantly. A system with such advantages wouldn’t be infinitely intelligent, or capable of solving any problem. But it would hugely outperform us in many important domains, including science, engineering, economic and military strategy, and persuasion.

This is often called superintelligence. Although it might sound like a far-future concern, we could see it a short time after AI reaches human level.