New 80k problem profile: extreme power concentration

By rosehadshar @ 2025-12-12T13:05 (+88)

I recently wrote 80k’s new problem profile on extreme power concentration (with a lot of help from others - see the acknowledgements at the bottom).

It’s meant to be a systematic introduction to the risk of AI-enabled power concentration, where AI enables a small group of humans to amass huge amounts of unchecked power over everyone else. It’s primarily aimed at people who are new to the topic, but I think it’s also one of the only write-ups on this overall risk,[1] so it might be interesting to others, too.

Briefly, the piece argues that:

That’s my best shot at summarising the risk of extreme power concentration at the moment. I’ve tried to be balanced and not too opinionated, but I expect many people will have disagreements with the way I’ve done it. Partly this is because people haven’t been thinking seriously about extreme power concentration for very long, and there isn’t yet a consensus way of thinking about it. To give a flavour of some of the different views on power concentration:

So you shouldn’t read the problem profile as an authoritative, consensus view on power concentration - it’s more of a waymarker: my best attempt at an interim overview of a risk which I hope we will develop a much clearer understanding of soon.

Some salient things about extreme power concentration that I wish we understood better:

(For more musings on power concentration, you can listen to this podcast, where Nora Ammann and I discuss our different takes on the topic.)

If you have thoughts on any of those things, please comment with them! And if you want to contribute to this area, consider:

Thanks to Nora Ammann, Adam Bales, Owen Cotton-Barratt, Tom Davidson, David Duvenaud, Holden Karnofsky, Arden Koehler, Daniel Kokotajlo, and Liam Patell for a mixture of comments, discussion, disagreement, and moral support.

  1. I think AI-enabled coups, gradual disempowerment and the intelligence curse are the best pieces of work on power concentration so far, but they are all analysing a subset of the scenario space. I’m sure my problem profile is, too - but it is at least trying to cover all of the ground in those papers, though at a very high level. ↩︎

  2. A few different complaints about the distinction that I’ve heard:

    • Most takeover scenarios will involve both human and AI power-seeking, so it will look blurry at the time
    • Even if it eventually becomes clear that either a human or an AI is in charge, it’s not particularly important - either way we’ve lost most of the value of the future ↩︎
  3. (This is just an opportunistic breakdown based on the papers I like. I’d be surprised if it’s actually the best way to carve up the space, so probably there’s a better version of this question.) ↩︎

  4. This is a form run by Forethought, but we’re in touch with other researchers in the power concentration space and intend to forward people on where relevant. We’re not promising to get back to everyone, but in some cases we might be able to help with funding, mentorship or other kinds of support. ↩︎


Eli Rose🔸 @ 2025-12-13T22:02 (+41)

Thanks for writing this!!

This risk seems equal to or greater than AI takeover risk to me. Historically, the EA & AIS communities have focused more on misalignment, but I'm not sure that choice has held up.

Come 2027, I'd love for it to be the case that an order of magnitude more people are usefully working on this risk. I think it will be rough going for the first 50 people in this area; I expect there's a bunch more clarificatory and scoping work to do; this is virgin territory. We need some pioneers.

People with plans in this area should feel free to apply for career transition funding from my team at Coefficient (fka Open Phil) if they think that would be helpful to them.

MichaelDickens @ 2025-12-12T14:44 (+19)

Agreed that extreme power concentration is an important problem, and this is a solid writeup.

Regarding ways to reduce risk: My favorite solution (really a stopgap) to extreme power concentration is to ban ASI [until we know how to ensure it's safe], a solution that is notably absent from the article's list. I wrote more about my views here and about how I wish people would stop ignoring this option. It's bad that the 80K article did not consider what is IMO the best idea.

rosehadshar @ 2025-12-15T08:49 (+4)

Thanks for the comment Michael.

A minor quibble is that I think it's not clear you need ASI to end up with dangerous levels of power concentration, so you might need to ban AGI, and to do that you might need to ban AI development pretty soon.

I've been meaning to read your post though, so will do that soon.

Will Aldred @ 2025-12-12T17:24 (+4)

Nice!

One quibble: IMO, the most important argument within ‘economic dominance,’ which doesn’t appear in your list (nor really in the body of your text), is Wei Dai’s ‘AGI will drastically increase economies of scale’.

rosehadshar @ 2025-12-15T08:58 (+4)

Thanks for the quibble - seems big if true! And agreed, it's not something I was tracking when writing the article.

A few thoughts:

  • I am fairly unsure whether the economies of scale point is actually right. Some reasons for doubt:
     
    • Partly I'm thinking of Drexler's CAIS arguments, and intuitions that ecosystems of different specialised systems will outcompete a monoculture
    • Partly I'm looking at AI development today
    • Partly, the form of the economies of scale argument seems to be: 'one constraint on human economies of scale is coordination costs between humans, so if those are removed, economies of scale will go to infinity!' But there may well be other trade-offs that you hit at higher levels. For example, I'd expect that you lose out on things like creativity/innovation, and that you run higher risks of correlated failures, vulnerabilities, etc.
  • Assuming it is true, it doesn't seem like the most important argument within economic dominance to me:
    • The most natural way for me to think about it is that AGI increasing economies of scale is a subset of outgrowing the world (where the general class is 'having better AI enables growing to most of the economy', and the economies of scale sub-class is 'doing that by using copies of literally the same AI, such that you get more economies of scale')
    • Put another way, I think the economies of scale thing only leads to extreme power concentration in combination with a big capabilities gap. If lots of people have similarly powerful AI systems, and can use them to figure out that they'd be best off using a single system to do everything, then I don't see any reason why one country would dominate. So it doesn't seem like an independent route to me; it's a particular form of a route that is causally driven by another factor.

Interested in your takes here!

Sharmake @ 2025-12-12T15:33 (+4)

Nice write-up on the issue.

One thing I will say is that I'm maybe unusually optimistic about power concentration compared to a lot of EAs/LWers. The main divergence I have is that I basically treat this counter-argument as decisive enough to make me think the risk of power concentration doesn't go through, even in scenarios where humanity is basically as careless as possible.

This is due to evidence on human utility functions suggesting that most people's returns on exclusive goods for personal use diminish fast enough that altruism matters much more than their selfish desires at stellar or galaxy-wide scales, combined with me being a relatively big believer that quite a few risks, like suffering risks, would be very cheap to solve via moral trade in areas where most humans are apathetic.

More generally, I've become mostly convinced that a crucial positive consideration for any post-AGI/ASI future is that it's really, really easy to prevent most of the worst things that could happen in those futures under a broad array of values, even if moral objectivism/moral realism is false and there isn't much convergence on values amongst the broad population.