Thoughts on AGI and world government

By Forethought, William_MacAskill, rosehadshar @ 2026-01-29T07:22 (+7)

This is a linkpost to https://www.forethought.org/research/agi-and-world-government

This note was written as part of a research avenue that I don’t currently plan to pursue further. It’s more of a work-in-progress than Forethought’s usual publications, but I’m sharing it because I think some people may find it useful.

Introduction

At some point a company, country, or coalition of countries will successfully build AGI. What happens then?

There are many possibilities, including:

Another possibility, if there’s a large enough intelligence explosion, is that the first project to build AGI organically becomes a de facto world government.

This possibility is worth taking pretty seriously, given the stakes and the fact that an intelligence explosion is fairly likely.

In this note, we’ll briefly outline the argument for expecting the first AGI project to evolve into a world government, and then give some weakly held implications for AGI governance. 

We argue that taking this scenario seriously makes it more desirable that:

  1. The first project to develop AGI is:
    • Government-led rather than private.
    • Multilateral rather than single-government.
    • Governed by a coalition of democratic countries rather than all countries.
    • Governed by an explicitly interim and time-bound arrangement, with definitive governance arrangements to be made at a later date.
  2. Different countries in the coalition are given fixed voting power, and neither one-person-one-vote nor one-country-one-vote is used.
  3. Countries that are not part of the project receive major benefits from the development of AGI and credible reassurances that they won’t have their sovereignty violated later on.

An important caveat is that we’re just arguing that taking the world government scenario seriously makes these features more desirable than they would otherwise be. We’re not making an argument that they are desirable all things considered (which would require taking many other factors into account).[1]

Why expect the first AGI project to evolve into a world government?

Here’s the basic argument for expecting the first AGI project to become a de facto world government:

In this intelligence explosion scenario, there is a point in time when the first project to build AGI determines what happens next for the world. The project might choose to give power back to other actors (e.g. by open sourcing the models, or giving the model weights to political leadership) — but that would be the project’s choice.

How likely this is depends on the speed, scale, and concentration of the intelligence explosion. All else equal, the leading AGI project will be more powerful relative to the rest of the world the faster AI capabilities progress, the longer that rapid progress is sustained (and so the more capable the resulting superintelligence), and the less the intelligence explosion relies on third parties outside the project. We don’t currently know how fast, sustained, or concentrated any intelligence explosion will be, but given the state of our evidence we cannot rule out that it will be very fast, very sustained, and very concentrated.

It also depends on what type of organisation develops AGI: a private company, a single government-led project, or an international consortium of governments. Of these, a private company is least likely to achieve de facto world government status, because its host government starts off with far greater hard power, can monitor the company’s activities, and, once it’s clear that the company is becoming extremely powerful, can step in and forcibly take control of it (or threaten to do so).

The same constraints do not bind government-led AGI projects. However, other countries could potentially maintain the balance of power by making credible threats against the leading country (of war, or of restricting essential semiconductor manufacturing components) and thereby gaining access to the model weights. This becomes somewhat less likely if the leading project is a multilateral consortium of governments, because such a consortium would have greater hard power, could encompass the whole semiconductor supply chain, and would reduce the number of potentially adversarial countries.

(Weakly held) implications for AGI governance

To the extent that we take the possibility that the first AGI project evolves into world government seriously, we think that the following things become more desirable:

  1. The first AGI project is government-led, rather than private.
    1. Corporate governance structures are neither designed for nor tested at governing political power.
      1. A privately-developed AGI by default will be aligned to the CEO or to the company’s governance regime. If the former, de facto autocracy is likely. If the latter, it is at least a major risk: the CEO could potentially outwit or collude with the Board and largest shareholders, or simply start ignoring their demands post-AGI, and thereby become de facto dictator. And, even if that doesn’t happen, power over the de facto world government would essentially be in the hands of the company’s largest shareholders — who probably represent a small fraction of society.
      2. In contrast, democratic governance is the best approach to political power that has actually been tried. Governments also have far more legitimacy than companies to exercise political power (though more on this below).
    2. What’s more, if the first AGI project is private, we expect that the relevant government will intervene, and we’ll end up with a government-led project anyway, but one that was set up in haste and without multilateral involvement.
  2. The project is multilateral, rather than single-government.   
    1. If the first AGI project is to evolve into a world government, then avoiding the risk of the project becoming an autocracy is extremely important.[3] Having multiple governments with some meaningful control over the project reduces this risk considerably: even if one government becomes more authoritarian, the others can oppose this.[4]
    2. Moreover, if the project becomes a world government, it seems desirable for many governments and people to have a stake in the project, and for all people to receive benefits from it.
  3. The project is governed by a coalition of democratic countries, rather than as a global democracy.  Here are the arguments for this, from least to most controversial:
    1. Global democratic governance is unlikely to be feasible, because it would involve the US giving up a lot of power. Pushing hard for global democratic governance may make a multilateral project of any kind less likely, increasing the chances that the US government goes it alone, and that we end up with something like autocracy.
    2. From the perspective of ensuring a flourishing future over the long term, the gains from global democratic governance may be quite small.
      1. For one thing, most beings with moral status wouldn’t be represented by either a coalition of democratic countries or global democratic governance — as most beings are future beings (also, animals and digital minds). So there aren’t big gains on that front.
      2. For another, going from a coalition of democratic countries to all countries matters much less than going from autocracy to a coalition of democratic countries, in terms of increased moral diversity.[5] The gain of going from hundreds of millions of people being represented to 8 billion is only an order of magnitude. In contrast, the gain from going from a single person in charge to a hundred million people being represented is 8 orders of magnitude.
    3. Global democratic governance might increase the risk of authoritarianism. In a survey of citizens from 24 countries,[6] 64% of people said that rule by a strong leader or the military would be a good way of governing their country. Among the countries most likely to take part in a multilateral AGI project,[7] only 31% of people agreed with the same claim.
  4. The project is governed under an explicitly interim arrangement.
    1. For example, the project could be governed by some time-bound governance structure, with a binding agreement that this structure will be renegotiated after a certain number of years (as was the case for Intelsat). The case for this is that designing the ideal world government post-AGI is very hard, and we’ll do a much better job of it after we’ve thought more about it, with the help of AGI and ASI.
  5. Different countries in the coalition are given fixed and weighted voting power, rather than using a one-person-one-vote or one-country-one-vote system.
    1. The reason to fix voting power is that post-AGI, rapid population growth will become possible (whether of digital citizens, or biological ones via artificial wombs and robot child-rearers). If project voting were one-person-one-vote, then whichever country grew its population the fastest could seize power.
    2. The reason to weight voting, rather than use one-country-one-vote, is that otherwise small countries would get disproportionate power, in a way that seems arbitrary and very non-democratic. For example, each of around 100 smaller countries would have at least 100 times the per-person voting power of the US. And, pragmatically, weighting would also make the arrangement more palatable to the US, making an international project more feasible.
    3. There’s some tension here: if the voting is weighted so that countries are proportionately represented, but fixed so that runaway population growth can’t be used to seize power, then the weights between countries could eventually become very disproportionate.[8] 
  6. Countries that are not part of the project receive major benefits and credible reassurances that they won’t have their sovereignty violated.
    1. The prospect of world government makes it more likely that non-participating countries will take drastic action (stealing model weights, short-cuts on safety, kinetic strikes) in order to prevent that from happening. This puts more importance on ensuring that countries that are not part of the first AGI project receive major benefits from the development of AGI and credible reassurances that they won’t have their sovereignty violated in a post-AGI world. That said, we believe we should still be reluctant to give much in the way of formal governance power to authoritarian countries.
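As a rough illustration of the voting-power disparity in point 5, here is a quick back-of-the-envelope calculation. The populations are illustrative assumptions, not figures from the note:

```python
# Under one-country-one-vote, each country gets one vote regardless of
# population, so per-person voting power is 1 / population.
# Illustrative populations (assumptions): US ~340 million; a small
# country of ~3 million people.

def votes_per_person(population: int) -> float:
    """Per-person share of a single country-level vote."""
    return 1.0 / population

us_power = votes_per_person(340_000_000)
small_power = votes_per_person(3_000_000)

ratio = small_power / us_power
print(f"Small-country citizens get ~{ratio:.0f}x the US per-person voting power")
```

Since roughly 100 countries have populations below a few million, each of them clears the 100x threshold under an unweighted one-country-one-vote rule.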

Thanks to many people for comments and discussion.

  1. ^

    For example, pushing for AGI development to be government-led might increase the chance that power becomes extremely concentrated (as governments have fewer checks than companies), or that misaligned AI takes over (if you believe that governments would handle this risk less competently than labs).

  3. ^

    How well the project manages to avoid misalignment risk is also an important design feature, but I think AI project designs vary less on this dimension than on how likely they are to become autocracies.

  4. ^

    Here’s a very simplified model: at any one time, there’s some chance of the leader of a country having authoritarian impulses, or even being a malevolent actor (like Stalin or Mao). But for democratic countries, at least, this chance is fairly low - let’s say 20%. So if there’s one political leader in charge, we have a 20% chance of that leader trying to make the AGI project autocratic. But if there are political leaders from n countries in charge, where n is the number of countries that would need to coordinate in order to make the coalition autocratic, the chance of autocracy becomes 20%^n. With 4 countries, the chance becomes much less than 1%.
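    This simplified model can be written out directly. A sketch, using the footnote’s 20% per-leader figure and its independence assumption:

```python
# Footnote's simplified model: each leader independently has a 20% chance
# of pushing for autocracy; the leaders of all n coordinating countries
# would have to do so at once, so the chance of autocracy is 0.2 ** n.

def autocracy_chance(p_per_leader: float, n_countries: int) -> float:
    """Probability that all n leaders simultaneously push for autocracy,
    assuming independent per-leader probabilities."""
    return p_per_leader ** n_countries

for n in range(1, 5):
    print(f"n = {n}: {autocracy_chance(0.2, n):.2%}")
# n = 4 gives 0.2**4 = 0.16%, i.e. much less than 1%.
```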

  5. ^

    In order to have a flourishing future, we want to have a diversity of moral views, and the ability to make compromises between these different moral views. Having the relevant decision-makers be thoughtful and morally reflective is important, too, but having a diversity of moral views ensures that at least some parties are thoughtful and morally reflective.

  6. ^

    Canada, France, Germany, Greece, Italy, Japan, the Netherlands, South Korea, Spain, Sweden, the United Kingdom, Argentina, Brazil, Hungary, India, Indonesia, Israel, Kenya, Mexico, Nigeria, Poland, and South Africa.

  7. ^

    The US, Canada, the UK, the Netherlands, Germany, Japan, South Korea, and Australia. These countries are either leading AI developers (US), key security allies of leading AI developers (Canada, UK, Australia) or critical to the semiconductor supply chain (the Netherlands, Germany, Japan, South Korea). The V-Dem Institute categorises all of these countries as liberal or electoral democracies.

  8. ^

    In general, the ideal design of a de facto world government is a very hard question, which is another reason to make sure that the initial arrangements are temporary.


Ebenezer Dukakis @ 2026-01-29T09:44 (+2)

The reason to fix voting power is that post-AGI, rapid population growth will become possible (whether of digital citizens, or biological ones via artificial wombs and robot child-rearers). If project voting were one-person-one-vote, then whichever country grew its population the fastest could seize power.

This seems like a consideration against empowering democracies more broadly, if democracies would be controlled by the internal factions which grow their populations fastest.

It seems plausible to me that if you consider the universe of modern democratic nations, the first principal component of political disagreement within that citizenry is likely to be very intranational. (People often agree more with ideologically similar foreigners than with ideologically dissimilar co-nationals.)

In the same way US citizens often view state politics with an eye to affecting federal politics, citizens in democratic nations might view their national politics with an eye to affecting global governance. You might essentially be left with a single global polity with a single point of failure.

You argue that democracies are designed and tested to govern political power. But this sort of weird hypothetical seems fairly far from the regime that democracies have been designed and tested for.

I would suggest a very different approach: trying to move away from single-point-of-failure to the greatest possible extent, and designing global governance so it can withstand as many simultaneous failures as possible. It's especially important to reduce vulnerability to correlated failures.