Report on Whether AI Could Drive Explosive Economic Growth

By Tom_Davidson @ 2021-06-25T23:02 (+63)

This is a linkpost for https://www.openphilanthropy.org/blog/report-advanced-ai-drive-explosive-economic-growth 

I've cross-posted the introduction so people can see what it's about. Happy to respond to questions and comments here (though I won't be able to respond for a week).

 

Since 1900, the global economy has grown by about 3% each year, meaning that it doubles in size every 20–30 years. I’ve written a report assessing whether significantly faster growth might occur this century. Specifically, I ask whether growth could be ten times faster, with the global economy growing by 30% each year. This would mean it doubled in size every 2–3 years; I call this possibility ‘explosive growth’.
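As a sanity check on these doubling times (this is just compound-growth arithmetic, not something from the report): at a constant annual growth rate $g$, the economy doubles every

$$T_{\text{double}} = \frac{\ln 2}{\ln(1+g)} \text{ years}, \qquad g = 0.03 \;\Rightarrow\; T \approx 23, \qquad g = 0.30 \;\Rightarrow\; T \approx 2.6,$$

matching the ranges above.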

The report builds on the work of my colleague, David Roodman. Although recently growth has been fairly steady, in the distant past it was much slower. David developed a mathematical model for extrapolating this pattern into the future; after calibration to data for the last 12,000 years, the model predicts that the global economy will grow ever faster over time and that explosive growth is a couple of decades away! My report assesses David’s model, and compares it to other methods for extrapolating growth into the future.
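To give a feel for the behaviour David's analysis points to, here is a minimal deterministic sketch of hyperbolic growth, where the growth rate rises with the level of output. (Roodman's actual model is stochastic and fitted econometrically; the functional form below is the standard deterministic simplification, and the parameter values are invented for illustration.)

```python
# Deterministic hyperbolic growth: dY/dt = a * Y**(1 + B) with B > 0.
# Unlike exponential growth, the growth rate rises with the level of Y,
# and Y reaches infinity in *finite* time; this is the qualitative
# feature behind forecasts of ever-faster growth.
import numpy as np

def hyperbolic_gwp(y0, a, B, t):
    """Closed-form solution Y(t) = (y0**(-B) - a*B*t)**(-1/B).

    The singularity sits at t* = y0**(-B) / (a*B); beyond it the
    deterministic model is undefined.
    """
    t = np.asarray(t, dtype=float)
    t_star = y0 ** (-B) / (a * B)
    inner = y0 ** (-B) - a * B * t
    y = np.where(t < t_star, np.maximum(inner, 1e-300) ** (-1.0 / B), np.inf)
    return y, t_star

# Illustrative parameters only; NOT Roodman's fitted values.
y0, a, B = 1.0, 0.03, 0.5          # normalise today's GWP to 1
years = np.arange(0, 60, 10)
gwp, t_star = hyperbolic_gwp(y0, a, B, years)
for yr, val in zip(years, gwp):
    print(f"t = {yr:2.0f} yrs: GWP = {val:6.2f}x today")
print(f"finite-time singularity at t* = {t_star:.1f} years")
```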

At first glance, it might seem that explosive growth is implausible — that it is somehow absurd or economically naive. Contrary to this view, I offer three considerations from economic history and growth theory that suggest advanced AI could drive explosive growth. In brief:

These arguments don’t prove that advanced AI would drive explosive growth, but I think they show that it is a plausible scenario.

For AI to drive explosive growth, AI systems would have to be capable enough to replace human workers in most jobs, including cutting-edge scientific research, starting new businesses, and running and upgrading factories.

We think it’s plausible that sufficiently capable AI systems will be developed this century. My colleague Joe Carlsmith’s report estimates the computational power needed to match the human brain. Based on this and other evidence, my colleague Ajeya Cotra’s draft report estimates when we’ll develop human-level AI; she finds we’re 80% likely to do so by 2100. In a previous report I took a different approach to the question, drawing on analogies between developing human-level AI and various historical technological developments. My central estimate was that there’s a ~20% probability of developing human-level AI by 2100. These probabilities are consistent with the predictions of AI practitioners.

Overall, I place at least 10% probability on advanced AI driving explosive growth this century.

The report also discusses reasons to think growth could slow; I place at least 25% probability on growth slowing such that, by 2100, the global economy is only doubling every ~50 years.

This research informs Open Phil’s thinking about what kinds of impact advanced AI systems might have on society, and when such systems might be developed. This is relevant to how much to prioritize risks from advanced AI relative to other focus areas, and also to prioritizing within this focus area.

We elicited a number of reviews of drafts of the report.

The structure of this blog post is as follows:

Note that many issues discussed in the report are not covered in this blog post. I'd recommend that readers with a background in these issues read the report instead.

(Read the rest of this post.)


alexlintz @ 2021-06-30T14:14 (+15)

I did my master's thesis evaluating Kremer's paper from the 1990s, which makes the case for the more people -> more growth -> more people feedback loop. It essentially supports Ben's post from a while ago (https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity) [FYI, I worked with Ben on this project] in arguing that, with radiocarbon data (which I hold is much better than the guesstimate data Kremer uses), the more people -> more growth relationship doesn't seem to hold. In terms of population, it seems growth was much less steady than previously assumed. There are basically a few jumps, lots of stagnation (e.g. China's population seems to have stagnated for thousands of years after the Neolithic revolution), and no clear overall pattern in long-term growth until the past few hundred years.
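To make the feedback loop being tested concrete, here is a stylised sketch of the Kremer mechanism (the functional forms follow the usual simplified presentation of his model; the parameter values are invented for illustration and are not Kremer's estimates):

```python
# Stylised Kremer (1993) loop:
#   more people -> more ideas:   dA/dt = g * P * A
#   Malthusian adjustment:       P = P0 * A**(1/(1 - alpha))
# Together these imply the population growth RATE rises with the
# population LEVEL; that is the relationship the radiocarbon data tests.

g, alpha, P0 = 2e-5, 0.5, 100.0   # idea rate, labour share, base population (made up)
A = 1.0                           # initial technology level
dt, horizon = 1.0, 200            # Euler step (years) and horizon

for t in range(horizon + 1):
    P = P0 * A ** (1 / (1 - alpha))
    if t % 50 == 0:
        rate = g * P / (1 - alpha)          # implied (dP/dt) / P
        print(f"year {t:3d}: population {P:7.1f}, growth rate {rate:.4f}/yr")
    A += g * P * A * dt                     # more people generate more ideas
```

If that relationship held, population growth should have accelerated smoothly with population size, rather than showing the jumps and long stagnations described above.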

There are tons of caveats to my results listed in the thesis, and I haven't read your paper, so I'm not sure how much it even matters, but I hope this contributes something! I'll add one more caveat: the paper is not super well done (hence my previous hesitancy to post). I was sick for much of my thesis-writing period and also working part-time, so much of it was rushed through toward the end. If it seems useful, I can dredge up my notes on what I think might be wrong with it and send you the data (I actually have decently clean replication files in R). If I remember correctly, the main results all hold; it's mostly just minor things which need fixing. I've been meaning to clean it up and post it properly, but I'm not sure whether that's ever going to happen, hence my posting it now.

With all that in mind, here's the thesis! https://docs.google.com/document/d/1pVzrTikeoRRO3WvU5x01nOEyf_USPUg-FcrqTGwUVR8/edit#

Feel free to reach out if you'd like to have a chat about this!

Ben_Snodin @ 2021-07-30T07:51 (+3)

Thanks for this, I think it's brilliant. I really appreciate how clearly the details are laid out in the blog post and report, and it's great to be able to see the external reviewers' comments too.

I found it kind of surprising that there isn't any mention of civilizational collapse etc. when thinking about growth outcomes for the 21st century (e.g. in Appendix G, but also apparently in your bottom-line probabilities, e.g. in Section 4.6 "Conclusion"; or maybe it's there and I missed it / it's not explicit).

I guess your probabilities for various growth outcomes in Appendix G are conditional on ~no civilizational collapse (from any cause) and ~no AI-triggered fundamental reshaping of society that unexpectedly prevents growth? Or should I read them more as "conditional on ~no civilizational collapse etc other than due to AI", with the probability mass for AI-triggered collapse etc being incorporated into your "AI robots don't have a tendency to drive explosive growth because none of our theories are well-suited for this situation" and/or "an unanticipated bottleneck prevents explosive growth"?

Tom_Davidson @ 2021-10-12T21:54 (+2)

Great question!

I would read Appendix G as conditional on "~no civilizational collapse (from any cause)", but not conditional on "~no AI-triggered fundamental reshaping of society that unexpectedly prevents growth". I think the latter would be incorporated in "an unanticipated bottleneck prevents explosive growth".

JuanGarcia @ 2021-06-26T20:39 (+3)

I suppose this was briefly touched upon as part of Objection number 1, but could you comment on the apparent coupling between economic growth and energy use? See for example: https://www.mckinsey.com/industries/electric-power-and-natural-gas/our-insights/the-decoupling-of-gdp-and-energy-growth-a-ceo-guide#

Is there reason to believe AI could produce a decoupling of the two?

Tom_Davidson @ 2021-07-06T17:43 (+4)

Hey - interesting question! 

This isn't something I looked into in depth, but I think that if AI drives explosive economic growth then you'd probably see large rises in both absolute energy use and in energy efficiency.

Energy use might grow via (e.g.) massively expanding solar power to the world's deserts (see this blog from Carl Shulman). Energy efficiency might grow via replacing human workers with AIs (allowing services to be delivered with less energy input), rapid tech progress further increasing the energy efficiency of existing goods and services, the creation of new valuable products that use very little energy (e.g. amazing virtual realities), or in other ways.