Will explosive growth stem primarily from AI R&D automation?

By OscarD🔸 @ 2025-03-28T20:25 (+31)

Epoch researchers recently shared a great blog post titled “Most AI value will come from broad automation, not from R&D”. You should read it! This seems like a big-if-true claim given how much focus goes into the ‘intelligence explosion followed by an industrial explosion’ narrative. Below, I summarise their post and then offer some critiques and responses.

Summary of Epoch's post

In the post, Ege Erdil and @Matthew_Barnett argue (contrary to the standard narrative) that automating R&D tasks, in particular AI R&D, will not be central to the economic significance of AI. Instead, they argue that the general diffusion of AI through the economy, automating non-research jobs, will be the more important source of economic growth. (Their post gives several reasons for this.)

A key strategic implication of this is that we will likely have widespread AI diffusion contributing a significant fraction of GDP before we have recursive improvement and superintelligence.
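
To get a feel for the relative sizes involved, here is a back-of-envelope comparison (the figures are my own illustrative assumptions, not numbers from Epoch's post): in advanced economies, labour compensation is roughly half of GDP, while R&D spending is only a few percent of GDP,

$$\underbrace{\approx 50\text{–}60\%\ \text{of GDP}}_{\text{labour compensation}} \;\gg\; \underbrace{\approx 2\text{–}3\%\ \text{of GDP}}_{\text{R\&D spending}}.$$

On these rough numbers, broad labour automation directly touches around twenty times more economic activity than R&D automation does, so for R&D automation to dominate it has to act indirectly, by accelerating the growth rate, rather than by replacing spending one-for-one.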

Response to Epoch

Epistemic note: I have probably thought about this a lot less than the Epoch researchers, and I have a less relevant background, which makes me think I should defer to them at least moderately.

But it still seems valuable to give my object-level thoughts. Rather than directly disputing Epoch's points, I mainly offer countervailing considerations that push against their conclusion.

Firstly, I think AI R&D may be automated relatively early.[1] It seems very likely that many physical jobs (e.g. nursing) will be automated relatively late, but I expect AI R&D to come early even compared to other fully remote jobs.

Secondly, even if broad diffusion of AI through the economy contributes more to overall economic growth, R&D automation might matter more strategically. Tightly scoped automation of R&D in strategic sectors (AI, chip design, cyber offense/defense, military technology) would yield a larger increase in national power than broad automation of labour would, even if the latter produces more growth. Competitive geopolitical pressures will likely push countries to bolster their military-industrial might before attending to broader consumer welfare.[7]

Thirdly, the post critiques a ‘software intelligence explosion’ (IE) but does not discuss a ‘chip technology’ IE. As Davidson, Hadshar, and MacAskill argue, automating chip-technology research is another feedback loop that could, together with software, create accelerating progress. A third, ‘full stack’ IE, involving general capital accumulation and investment in semiconductor manufacturing, is closer to what Erdil and Barnett think is likely.
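
To make the disagreement about feedback loops concrete, a toy law of motion in the style of semi-endogenous growth models (the notation $S$, $k$, $\lambda$ is my own illustration, not from Epoch or Davidson et al.) is

$$\frac{dS}{dt} = k\,S^{\lambda}, \qquad k > 0,$$

where $S$ is the level of AI software capability and $\lambda$ measures how strongly current capability feeds back into the rate of further progress once AI systems do their own R&D. If $\lambda > 1$, $S$ diverges in finite time (an intelligence explosion); if $\lambda = 1$, growth is merely exponential; if $\lambda < 1$, diminishing returns win and the proportional growth rate declines over time. On this framing, a chip-technology IE or a ‘full stack’ IE amounts to additional feedback channels that raise the effective $\lambda$, which is why it matters which loops close and how quickly.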

Overall, I still think it is more likely than not that targeted AI R&D automation will be the main game, and broader labour automation will be downstream of this and less strategically important.

—

Thanks to Nathan Barnard, Ryan Greenblatt, and Rose Hadshar for helpful comments.

  1. ^

     Tom Davidson has raised some similar points here.

  2. ^

     However, inference costs tend to decline rapidly over time.

  3. ^

Other fields with huge salaries, like quant trading, will presumably also be targets of early remote-work automation. This would fit the existing trend towards increasingly automated trading.

  4. ^

@Jackson Wagner made some similar useful points in a comment on the blog post.

  5. ^

E.g. consider an architect. This job seems liable to be automated because (I think?) it can be done fully remotely. But my guess is that the architects who are most intelligent or best at abstract reasoning aren't many multiples more productive than average architects.

  6. ^

However, the compute needed to run experiments and work out which ideas are promising will be scarce, so this picture is a simplification: having lots of costly-to-rule-out bad ideas is still problematic.

  7. ^

For this factor to be important, the government would likely need to play a key role in allocating AI to industries. This level of command and control may not happen.


titotal @ 2025-03-29T10:13 (+4)

I feel like the counterpoint here is that R&D is incredibly hard. In regular development, you have established methods for doing things, established benchmarks for when things are going well, and a long period of testing to discover errors and flaws through trial and error.

In R&D, you're trying to do things that nobody has ever done before, while simultaneously establishing the methods, benchmarks, and error checks for that new work, which carries a ton of potential pitfalls. And because nobody has done it before, the AI is always operating further outside its training distribution than in regular work.

OscarD🔸 @ 2025-04-03T12:27 (+2)

Yes, this seems right; it's hard to know which effect will dominate. I'm guessing you could assemble pretty useful training data from past R&D breakthroughs, which might help, but that will only get you so far.