Moravec’s paradox and its implications

By Vasco Grilo🔸 @ 2025-04-29T16:25 (+12)

This is a linkpost to https://epoch.ai/gradient-updates/movarec-s-paradox

This is a crosspost for Moravec’s paradox and its implications by Ege Erdil, which was originally published on Gradient Updates on 27 December 2024.

Since the birth of the field of artificial intelligence in the 20th century, researchers have observed that the difficulty of a task for humans at best weakly correlates with its difficulty for AI systems. For example, humans find it difficult to multiply ten-digit numbers in their heads but easy to draw boxes around each individual cat in a photograph. In contrast, for AI systems the difficulty is reversed: they could do the former task in the 1950s, and it took until the 2010s for segmentation algorithms to match human performance on the latter task.

The specific observation that it’s easy to build AI systems that perform formal reasoning tasks but difficult to build AI systems whose perception and motor skills are comparable to a human’s is called Moravec’s paradox. Moravec himself offered an evolutionary explanation for the paradox: we should expect cognitive skills that have been around for longer to be more difficult to reproduce in AI systems, because evolution is likely to have applied significantly more optimization pressure to older skills.

I think this explanation captures something important, but it’s also inadequate: for example, basic communication skills are quite old, but computers are much better at communicating with each other than humans are, at least if you just compare communication bandwidth. In this issue, I’ll offer a more fleshed-out speculative picture of the principles behind Moravec’s explanation and use this picture to project which tasks we can expect to be automated earlier relative to other tasks in the economy. This issue is by necessity grounded in less evidence than we might prefer, but I think the speculative picture is still valuable.

How does the brain work?

In humans, most cognitive tasks are done by the brain. Therefore, it’s important to understand how the brain works at an abstract level before we speculate about what brains would tend to find easy or hard.

The brain appears to be a neural network with around 100 billion neurons and 100 trillion synapses. We think each synapse does something similar in computational complexity to a fused multiply-add operation for each firing, though because synapse firings are discrete events they are not exact equivalents to artificial neuron connections. Each synapse fires between 1 and 100 times per second on average. This means the brain performs the equivalent of between 1e14 and 1e16 FLOP per second, similar in raw processing power to an H100 GPU. For more detailed estimates of the brain’s computational power, I recommend reading Joe Carlsmith’s report on the subject.
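As a rough illustration, here is the back-of-envelope arithmetic behind that range in a few lines of Python, using only the figures quoted above (the synapse count, firing rates, and one-FLOP-per-firing equivalence are the article's assumptions, not precise measurements):

```python
# Back-of-envelope estimate of the brain's processing power,
# using the figures quoted in the text.

SYNAPSES = 1e14           # ~100 trillion synapses
FIRING_RATE_LOW = 1.0     # average firings per second, low end
FIRING_RATE_HIGH = 100.0  # high end
FLOP_PER_FIRING = 1.0     # ~1 fused multiply-add equivalent per firing

flops_low = SYNAPSES * FIRING_RATE_LOW * FLOP_PER_FIRING    # 1e14 FLOP/s
flops_high = SYNAPSES * FIRING_RATE_HIGH * FLOP_PER_FIRING  # 1e16 FLOP/s

print(f"Brain compute: {flops_low:.0e} to {flops_high:.0e} FLOP per second")
# For comparison, an H100 delivers on the order of 1e15 dense FP16 FLOP/s,
# which sits inside this range.
```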

If we take this model seriously, then the amount of information in the brain is enormous: it’s similar to a neural network with 100 trillion parameters, and even at 8-bit precision per parameter that’s 100 TB of information. Importantly, this is far more than the amount of information present in the human genome, which is only around 3 billion base pairs, or 400 MB in total, with a significant portion shared with plants that have no nervous system, let alone a brain. Within the human range, genomes can be compressed down to 4 MB because so much of the DNA is identical across humans. This is quite small: roughly as much as 1 million English words would take up if stored in a raw text file.
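To make the size gap concrete, here is the same comparison as a quick calculation (all figures are taken from the estimates above):

```python
# How much information the brain's "weights" hold versus how much
# within-human genetic variation could plausibly encode.

params = 1e14                  # ~100 trillion synapses / "parameters"
bytes_per_param = 1            # 8-bit precision
brain_bytes = params * bytes_per_param         # 1e14 bytes = 100 TB

compressed_genome_bytes = 4e6  # ~4 MB once DNA shared across humans is removed

ratio = brain_bytes / compressed_genome_bytes
print(f"Brain 'weights': {brain_bytes / 1e12:.0f} TB")
print(f"Compressed genome variation: {compressed_genome_bytes / 1e6:.0f} MB")
print(f"Gap: ~{ratio:.1e}x")   # ~2.5e7x, far too large for the genome to encode weights
```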

In short, there’s nowhere near enough information in the genome to specifically code for the “weights” of the brain, even in a highly distilled format. Given the limited amount of space, the genome could only code for higher-level features such as the brain’s architecture, its training algorithm, and basic biological instincts that are shared across humans. This is the genome information bottleneck argument, and it’s critical because it means we can’t view optimization done by evolution as analogous to the training of a neural network. Evolution is more appropriately viewed as an optimization process over how to design and train a neural network instead of over the weights of a specific network.

If we accept the conclusion of the genome information bottleneck argument, then the brain should be thought of as a neural network that’s trained using within-lifetime learning. Because the brain only performs the equivalent of 1e14 to 1e16 FLOP per second, and a human has lived for roughly a billion seconds by the age of 30, we can deduce that the training compute required for the brain is equivalent to 1e23 to 1e25 FLOP.
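Continuing the sketch, the lifetime training-compute figure is just the per-second estimate multiplied by the number of seconds lived (the roughly-a-billion-seconds figure is the article's approximation for 30 years):

```python
# Lifetime "training compute" of the brain, per the estimate above.

flops_per_second_low, flops_per_second_high = 1e14, 1e16
seconds_by_age_30 = 1e9   # ~30 years is roughly a billion seconds

training_flop_low = flops_per_second_low * seconds_by_age_30    # 1e23 FLOP
training_flop_high = flops_per_second_high * seconds_by_age_30  # 1e25 FLOP

print(f"Brain training compute: {training_flop_low:.0e} to {training_flop_high:.0e} FLOP")
```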

What explains performance differences between AIs and the human brain?

At Epoch AI, we consider performance differences between AI systems to be driven by three factors: compute, data, and algorithms. If one system is better than another in overall performance, it must be better in some or all of these dimensions.

We have good estimates of the brain’s computational power, as seen in the previous section, and also reasonably good estimates of how much data the brain gets from the outside world in various domains. When we compare current frontier AI systems to the brain on these dimensions, they seem on par or better: they use more compute (for both training and inference) and, more importantly, much more data than the human brain does. If the brain were at algorithmic efficiency parity with current AI systems, we would already have superhuman AIs.

This means the brain must have key algorithmic advantages over current AIs. However, the brain implements multiple algorithms, and not all of them are superior to corresponding AI algorithms. Many tasks that even mid-sized LLMs can do are completely out of reach for humans. For example, many LLMs can read 50,000 words in the span of 10 seconds and then answer complex questions about the text with a reasonable degree of accuracy.

Moravec’s paradox becomes relevant when we want to understand which tasks in the human brain are performed by a highly optimized stack of algorithms with supporting hardware in the rest of the body, e.g. eyes for vision and ears for hearing. In general, these tasks will have two properties at once:

  1. They are old in evolutionary terms, so evolution has had a long time to optimize them.
  2. Performing them well conferred a meaningful selection advantage, so evolution actually applied strong optimization pressure to them.

The skills picked up by these two criteria are precisely the perception and sensorimotor skills that Moravec’s paradox was originally about. Skills like basic communication are old but have faced little optimization pressure until the beginning of rapid cultural evolution with humans, because being good at communication in a species where other members are bad at it gives you little selection advantage. On the other hand, general reasoning skills that set humans apart from chimps are too new to have been heavily optimized.

In general, given some background of compute growth, we expect tasks to be automated earlier to the extent that they have not faced much evolutionary optimization pressure and to the extent that AI systems can be trained on much more data than humans can see in a lifetime. For example, language is a new skill in evolutionary time, so it’s not surprising that we’ve built intelligent chatbots speaking fluent English before autonomous robots.

So why is Moravec’s paradox unintuitive? I think the basic reason is that human capabilities on tasks that have faced heavy optimization pressure during evolution have low variance, and we intuitively judge low-variance tasks to be easy. For example, we wouldn’t expect some people to have a metabolism that’s ten times more energy efficient than other people’s due to genetics, but their math research productivity could easily be ten times higher for genetic reasons. This makes the lower-variance tasks look “easy”, because we judge how hard a task is by how much better the best humans are at it than the median person, or by how we personally compare to the best performers. So we naively think chess is harder than language and language is harder than muscle control, while the truth is the exact opposite.

If this is true, we can use the gap between the median and the best humans, or other measures of variance, as a proxy for how much optimization a task has faced in evolutionary history, at least if we hold training time fixed to control for variance in the amount of data and training compute. This doesn’t always work well, but the basic insight still has merit if we interpret it as one signal among many: Moravec’s paradox implies that the greater the difference in performance between the median person and the best people at equivalent training time, the earlier we should expect that task to be automated by AI systems.

Which tasks will be automated next according to this picture?

Based on the above discussion, here are some tasks that Moravec’s-paradox-type logic suggests will be partially or fully automated earlier relative to other tasks in the economy:

  1. Research tasks that lean heavily on abstract reasoning.
  2. Software engineering.
  3. Digital art and illustration.

In general, I expect the tasks that will be automated in the near future to be tasks that are similar to language in how much human evolution has optimized for them, but that we’ve not yet automated because we don’t have enough high-quality training data to make up for our worse algorithms. Some combination of creating better datasets, generating more synthetic data, and improving algorithms will probably be enough to bridge the remaining gap.

In contrast, I expect any job that places high demands on perception and sensorimotor skills to be difficult to automate. This is not the same as “jobs that are done in person”: for example, teachers and pharmacists generally work in person, but their jobs don’t require complex perception or sensorimotor skills. Car mechanics, plumbers, and surgeons, on the other hand, require far more dexterity, so I expect these jobs to be automated only after the tasks I’ve listed above.

Concluding thoughts

I find the above comparison with biology quite useful for thinking about how the rate of AI progress varies across different tasks, and there are probably further insights to be squeezed out of it. For example, the fact that the brain is a neural network with roughly 1e14 parameters yet is trained on only 1e23 to 1e25 FLOP suggests the brain is a very sparse neural network. It’s plausible that much of the brain’s algorithmic advantage comes from its ability to train a big model with limited data and compute, through a mixture of synthetic data and efficient prioritization of where to spend its compute budget via sparsity and other techniques.
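As a loose illustration of the sparsity point, one can plug the figures above into the standard dense-transformer approximation of roughly 6 FLOP per parameter per training token; this scaling rule is an assumption I'm importing for the sketch, not something from the article, and treating lifetime experience as "tokens" is only an analogy:

```python
# If the brain were a *dense* network, how much "training data" would
# 1e23 to 1e25 FLOP buy? Assumes compute ~= 6 * parameters * tokens,
# the usual dense-transformer approximation (not from the article).

params = 1e14
training_flop_low, training_flop_high = 1e23, 1e25

tokens_if_dense_low = training_flop_low / (6 * params)    # ~1.7e8 "tokens"
tokens_if_dense_high = training_flop_high / (6 * params)  # ~1.7e10 "tokens"

print(f"Dense-equivalent training data: {tokens_if_dense_low:.1e} to {tokens_if_dense_high:.1e} tokens")
# Frontier LLMs with far fewer parameters train on roughly 1e13 tokens or more,
# so a dense 1e14-parameter network trained this little would be severely
# undertrained; more plausibly, only a small fraction of the brain's
# "parameters" is active for any given input.
```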

One view I want to counter is that the difficulty of tasks for humans and AIs is unrelated. I think the brain’s working principles are fundamentally similar to artificial neural networks, though there are significant differences in both hardware and software. The correlation we would naively expect between task difficulty for humans and for AIs is obscured by the slow pace and the unique priorities of natural selection.

In the end, predictions based on comparing AI systems with the human brain are always going to be imperfect, but they can still serve as a useful signal for forecasters who combine them with other methods such as extrapolating trends of recent progress in the field. Especially for tasks that AIs are currently incapable of doing at all, I think comparisons with the brain can be quite useful, as in those cases they can be the only source of information that we have.


Stuart Buck @ 2025-04-29T19:44 (+9)

I don’t think Moravec’s paradox is a real paradox in the first place. The reason we think of activities like walking, using your hands, being able to speak language, etc. as easy is because we don’t have any long-term memories from the first few years of life when we had to figure all of that out from scratch. https://x.com/stuartbuck1/status/1798547161489231928

Vasco Grilo🔸 @ 2025-04-30T07:11 (+2)

Interesting point, Stuart! I think when people say a given ability is easy to achieve they are referring to the chance of achieving it conditional on putting some effort into it. Sensorimotor tasks may be hard in the sense of requiring lots of effort (even if we forgot about it), but the vast majority of people achieve a great level conditional on the training effort of a typical upbringing. I guess there would be much greater variance in chess and maths ability conditional on a given amount of training.

Stuart Buck @ 2025-04-30T19:46 (+4)

Not sure if this is responsive, but while most people (barring disability) do reach a decent level of proficiency at navigating the world, I think the variance can still be huge. Look at the feats of Olympic gymnasts or professional basketball players--hardly anyone could reach that level no matter how much they trained. 

Vasco Grilo🔸 @ 2025-04-30T20:34 (+2)

Thanks, Stuart. I think Ege's point is that there is less variation in activities that humans have done for a long time, which does not apply to all sporty activities. I would say most of olympic gymnastics and professional basketball do not qualify, whereas walking and running do. The fastest marathon was run in around 2 h, whereas a random 30-year-old can maybe complete one in around 9 h moving at a walking pace of 4.67 km/h[1] (= 42/9), which is 4.5 (= 9/2) times as much time. In contrast, a random person would not beat the best chess players even if they had 4.5 times as much time (unless they had very little time, like 4.5 s against 1 s for the top players).

  1. ^

    Healthy young people can walk all day if needed even if they do not exercise regularly.

SummaryBot @ 2025-04-29T19:15 (+3)

Executive summary: This speculative analysis explores Moravec’s paradox—why tasks humans find easy are often hard for AI—and argues that evolutionary optimization explains this reversal; tasks with less evolutionary pressure, like abstract reasoning or language, are more amenable to near-term automation than perception and motor skills.

Key points:

  1. Moravec’s paradox highlights a key AI development pattern: tasks humans find easy (like perception) are hard for AI, and vice versa, due to differing evolutionary optimization histories.
  2. The genome information bottleneck suggests that evolution optimized not the specific “weights” of the brain but its training processes, implying that much of human intelligence arises from within-lifetime learning.
  3. The brain likely has superior algorithms compared to current AIs, which explains why humans still outperform machines in many sensorimotor tasks despite AIs having more compute and data.
  4. Tasks likely to be automated next include abstract reasoning in research, software engineering, and digital art—areas with low evolutionary optimization and abundant training data.
  5. High-variance performance among humans may signal tasks less shaped by evolution and thus more automatable; conversely, low-variance, perception-heavy tasks (like plumbing or surgery) will be harder to automate.
  6. Using biological analogies cautiously, the author encourages forecasters to combine evolutionary insights with other methods when predicting AI progress, particularly for tasks where current AI is still far behind.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.