Soft Nationalization: How the US Government Will Control AI Labs

By Deric Cheng, Corin Katzke @ 2024-08-27T15:10 (+98)

This is a linkpost to https://www.convergenceanalysis.org/publications/soft-nationalization-how-the-us-government-will-control-ai-labs

Crossposted to LessWrong.

We have yet to see anyone describe a critical element of effective AI safety planning: a realistic model of the upcoming role the US government will play in controlling frontier AI.

The rapid development of AI will lead to increasing national security concerns, which will in turn pressure the US to progressively take action to control frontier AI development. This process has already begun,[1] and it will only escalate as frontier capabilities advance.

However, we argue that existing descriptions of nationalization[2] along the lines of a new Manhattan Project[3] are unrealistic and reductive. The state of the frontier AI industry — with more than $1 trillion[4] in private funding, tens of thousands of participants, and pervasive economic impacts — is unlike nuclear research or any previously nationalized industry. The traditional interpretation of nationalization, which entails bringing private assets under the ownership of a state government,[5] is not the only option available. Government consolidation of frontier AI development is legally, politically, and practically unlikely.

We expect that AI nationalization won't look like a consolidated government-led “Project”, but rather like an evolving application of US government control over frontier AI labs. The US government can select from many different policy levers to gain influence over these labs, and will progressively pull these levers as geopolitical circumstances, particularly around national security, seem to demand it.

Government control of AI labs will likely escalate as concerns over national security grow.  The boundary between "regulation" and "nationalization" will become hazy. In particular, we believe the US government can and will satisfy its national security concerns in nearly all scenarios by combining sets of these policy levers, and would only turn to total nationalization as a last resort.

We’re calling the process of progressively increasing government control over frontier AI labs via iterative policy levers soft nationalization. 

It’s important to clarify that we are not advocating for a national security approach to AI governance, nor endorsing any individual policy actions. Instead, we are describing a model of US behavior that we believe is likely to be accurate, in order to improve the effectiveness of AI safety agendas.

Part 1: What is Soft Nationalization?

We’d like to define a couple of terms used in this article:

We argue that soft nationalization is a useful model to characterize the upcoming involvement of the US government in frontier AI labs, based on the following observations:

  1. Private US AI labs are currently the leading organizations pushing the frontier of AI development, and will be among the first to develop AI with transformative capabilities.
  2. Advanced AI will have significant impacts on national security and the balance of global power.
  3. A key priority for the US government is to ensure global military and technological superiority – in particular, relative to geopolitical rivals such as China.
  4. Hence, the US government will begin to exert greater control and influence over the shape, ownership, and direction of frontier AI labs in national security use-cases.

1. Private US labs are currently the leading organizations pushing the frontier of AI development, and will be among the first to develop AI with transformative capabilities.[6]

Substantial evidence points towards the current and continued dominance of US AI labs such as OpenAI, Anthropic, Google, and Meta in developing frontier AI.[7] 

The strongest competitors to private US AI labs are Chinese AI labs, which have strong government support but are limited by Chinese politics,[8] as well as US export controls[9] stymying access to cutting-edge AI chips.

Metrics predicting the gap between US and Chinese AI technological development vary:

2. Advanced AI will have significant impacts on national security and the balance of global power.[13] 

Upcoming Capabilities: Experts forecast that advanced AI will enable a number of capabilities that have significant implications for national security[14], such as:

National Security Outcomes: Transformative capabilities such as these may lead to outcomes that the US would view as critically detrimental for national security[15], such as:

Economic Outcomes: Additionally, advanced AI systems could also result in significant negative outcomes for the US and global economies, including:

3. A key priority for the US government is to ensure global military and technological superiority.

The US government has for decades operated on the assumption that the existing world order depends on its military and technological dominance, and that it is a top national priority to maintain that order.[18] As a result, it views any challenge to this dominance as an unacceptable threat to its national security.

As AI system capabilities are demonstrated to matter for national security, the US government will likely continue to escalate its involvement in AI technologies to maintain this superiority, even at the cost of exacerbating its AI arms race with China.[19]

A key takeaway from this observation is that the US government will not choose to slow the pace of frontier AI development absent an international agreement that includes geopolitical adversaries like China. The US may choose to moderate certain aspects of AI that demonstrate substantial risk with little advantage, but by default it will avoid actions that inhibit American R&D in AI. Today, unilaterally pausing AI development[20] would be in opposition to the US government’s current goals.

Finally, a relevant priority of the US government is maintaining social and economic stability. As has been demonstrated in numerous economic crises,[21] the US is willing to take drastic action to ensure the stability of the US economy, including the takeover and bailout of multi-billion dollar private corporations.[22] Though it seems to us this priority is of less relevance to the policy levers for soft nationalization, there are plausible scenarios where the US may choose to enact these levers to preserve social and economic stability.

4. Hence, the US government will begin to exert greater control and influence over the shape, ownership, and direction of frontier AI labs in national security use-cases.

The US has already demonstrated that it is pursuing greater control over AI chip distribution: in 2022, nearly a year before signing the Executive Order on AI, the Biden administration began enforcing export controls limiting Chinese access to cutting-edge semiconductors.

We believe that this process of exerting greater control can take many possible paths, with the US progressively utilizing a wide range of policy levers. These levers will likely be applied to satisfy national security concerns in response to technological and geopolitical developments. Though the total nationalization of frontier AI labs is one possible outcome, we don’t think it is the most likely one.

Why Total Nationalization Is Not The Most Likely Model

In a recent example of AI scenario modeling, Leopold Aschenbrenner’s “Situational Awareness” describes a plausible scenario involving an extremely rapid timeline to superintelligence. He describes superintelligence’s likely impact on the geopolitical landscape, concluding with the prediction that a “Manhattan Project for AI” will soon be organized by the US government. He argues that this project will consolidate and nationalize all existing frontier AI research due to the national security implications of superintelligence.

We argue that “The Project”[23] and other similar descriptions of nationalization[24] represent only a narrow subset of possible scenarios modeling US involvement, and are not the most likely scenarios.

Total nationalization is not the most likely scenario for a few reasons:

  1. American policymakers would likely believe that total nationalization would undermine the US’ technological lead in AI and broader economic interests.
    1. Nationalizing frontier AI development could be seen as jeopardizing the pace of innovation and R&D currently driven by the private sector. It would remove competitors, incentives, and a diversity of approaches from the US AI landscape.
      1. The American model of innovation is built on free-market private competition, and is arguably one of the reasons the US is leading the AI race today.[25] 

      2. Since the 1980s, the United States has seen a significant trend towards increased private sector involvement in various industries,[26] driven by factors such as:

        1. A perception among policymakers that market-based solutions can be more efficient than direct government management.

        2. The belief that private sector competition could foster greater innovation and cost reduction.

      3. US policymakers generally endorse free-market competition as a driver of innovation and are reluctant to regulate the AI industry.[27] It would require a massive ideological shift for the US government to nationalize an industry with such critical consequences for the US economy.

  2. The total nationalization of frontier AI labs would face unprecedented practical, legal, and political challenges.
    1. Organizations in control of frontier AI labs such as Microsoft, Google, and Meta are among the largest corporations in the world today, with market capitalizations over $1 trillion each.[28] 

      1. Practically, total nationalization of these corporations is financially and logistically implausible.

      2. Nationalization of only their frontier AI labs is more plausible. However, these corporations are developing their long-term strategies around frontier AI models, and their frontier AI labs are tightly integrated with the rest of their business.

      3. Any form of nationalization would undermine their long-term business models, cause shareholder value to plummet, and upend the global tech industry. It would provoke massive legal and political resistance.

    2. Nvidia, the leading AI chip manufacturer and a primary enabler of frontier AI research through its control of 80% of the AI chip market,[29] has a current market capitalization of $3 trillion.[30]

      1. Many total nationalization scenarios would involve government ownership of Nvidia. However, it’s challenging to imagine a legally and financially feasible pathway for the US government to gain full ownership of a public corporation of this size.

  3. The US may be able to achieve its national security goals with substantially less overhead than total nationalization via effective policy levers and regulation.
    1. We argue that various combinations of the policy levers listed below will likely be sufficient to meet US national security concerns, while allowing for more minimal governmental intrusion into private frontier AI development.
    2. We expect that such an approach would likely be more appealing for the US government, due to the challenges of total nationalization described above.

Despite these arguments, it’s still possible that the US government may eventually choose total nationalization given the right set of circumstances. We don’t believe that it is possible yet to confidently predict a future set of outcomes, and we think that over-indexing on any single scenario is a mistake.

Rather than committing to a specific model of the future, we believe the most effective analysis today will consider a wide range of scenarios that describe actions the US government will take in response to global circumstances. By enumerating many of the plausible scenarios regarding soft nationalization, we believe AI governance researchers can better ground our research in likely futures and design better interventions.

Upcoming Projects on Soft Nationalization

We are conducting scenario modeling and governance research to describe how upcoming national security concerns will lead to greater US governmental control over frontier AI development. We expect this research will ground AI governance discourse in a realistic understanding of plausible scenarios involving US control of frontier AI.

To execute, we’re spearheading a collaborative research project with the following three parts:

  1. Describing Soft Nationalization: Describe the policy levers and scenarios that encompass soft nationalization
  2. Conducting Further Scenario Research: Evaluate the implications of this research on further scenario modeling topics
  3. Aligning AI Safety with Soft Nationalization: Research how this process can be shaped to achieve the broader goals of AI safety organizations

If you’re interested in collaborating or receiving updates on any of this work, shoot us a message at research@convergenceanalysis.org.

1. Describing Soft Nationalization

In the upcoming quarter, we will publish a report exploring the following:

2. Conducting Further Scenario Research

The results of our soft nationalization report will inform further scenario modeling that builds on our research, on questions such as:

3. Aligning AI Safety with Soft Nationalization

A clear set of scenarios implied by soft nationalization will enable further research into how these outcomes can be shaped to achieve the broader goals of AI safety organizations, such as:

Part 2: Policy Levers for Soft Nationalization

We describe thirteen preliminary sets of policy levers the US government might pull to exert control over frontier AI. Each set of levers offers a series of options that afford the government increasing influence, on a spectrum ranging from standard regulations to more comprehensive government control.

We envision that certain policy levers will be combined and deployed by the US government given a particular societal environment. That is, we believe that given a certain scenario, the US will choose a strategy involving policy levers that exert enough control to sufficiently protect its national security, and that is also legally, politically, and practically feasible.

This list of policy levers is an active work in progress and will be explored in detail in a report we’ll publish in the upcoming quarter, considering aspects such as:

Author’s Note: We do not advocate for or recommend the application of any of these policy levers. This section is informative in nature – it is intended solely to describe the space of plausible policy levers that may be deployed. In the future, we may recommend certain levers after conducting further research.

Management & Governance Mechanisms

Government Oversight

The US may seek to implement better tools to monitor the day-to-day operations of key AI labs, including policy levers such as:

Government Management

The US may seek to have direct control over the day-to-day operations of key AI labs, including policy levers such as:

Government Projects & Integrations

The US may seek to integrate the R&D and output of AI labs with its national security goals. This could look like any of these policy levers (in order of increasing interventionism):

Operational Control

Development Limitations

The US may decide to set limitations on large-scale AI R&D for frontier AI labs:

Customer Limitations

The US may require that AI labs report, vet, or restrict their customers to prevent the use of frontier AI by adversaries:

Deployment / Use Limitations

The US may limit the availability of specific use cases of frontier AI models:

Compute Usage Limitations

The US may decide to influence AI development via control over the allocation and availability of compute resources:

Security & Containment Measures

Personnel Requirements

The US may seek to control key personnel within AI labs, by limiting their ability to disseminate sensitive information or work for geopolitical rivals, or, in extreme cases, by requiring that they work for the US government:

Research & Information Controls

The US may seek to control the classification or distribution of AI research developed by private AI labs:

Cybersecurity Requirements

The US may require specific digital or physical cybersecurity practices for highly capable AI models to protect against malicious exploitation:

Containment Requirements

The US may require certain practices that allow AI labs or federal agencies to protect, contain, or restrict deployed AI models:

Financial Ownership & Control

Shareholding Scenarios

The US government may consider acquiring stakes in private AI labs, achieving control through market-based mechanisms.

Profit Regulation and Unique Tax Treatment

It’s plausible that leading AI labs may eventually control a sizable percentage of the revenue and valuation of private companies in the US. If this were the case, the US may seek to treat these leading AI labs uniquely from traditional corporations in pursuit of more equitable or economically beneficial outcomes, using levers such as:

Part 3: Scenarios Illustrating Soft Nationalization

In this section, we describe a few preliminary scenarios in which the US exerts control over frontier AI development in response to national security concerns. For each scenario, we illustrate in broad strokes the circumstances that may arise. Then, we describe a plausible package of “soft nationalization” policy levers that the US would be likely to deploy as a comprehensive strategic response.

We present three scenarios with three different “levels” of relative governmental control: low, medium, and high. We will be exploring scenarios such as these in more detail via a report we’ll publish in the upcoming quarter.

It’s important to note that these are hypothetical, illustrative scenarios to demonstrate that our model of soft nationalization may be an effective tool for describing US national security concerns. We do not propose that any of these scenarios are likely to happen, nor do we advocate for any of the suggested policy levers. We don't necessarily believe securitization is the ideal outcome, and we think there are still possible scenarios involving international cooperation.

US “Brain Drain”

Governmental Control: Low

In early 2027, China and Saudi Arabia launch motivated, well-funded governmental initiatives to compete for AI technological superiority. In particular, one key branch of these initiatives focuses on financial compensation: they offer hugely lucrative compensation packages to top AI researchers, with yearly salaries in the tens of millions, paid upfront. US AI labs are unable to compete with these offers, as most of the value of their compensation packages is in illiquid equity. The US government does not offer similarly competitive packages.

These initiatives create a wave of talent migration, with hundreds of top AI researchers leaving for well-paid opportunities in countries the US considers to be geopolitical rivals. The exodus raises alarm in both Silicon Valley and Washington about maintaining US technological leadership in AI. In particular, the US government is concerned that top researchers are moving from capitalist, private AI applications to state-organized AI initiatives, which may conflict with US geopolitical goals.

US Governmental Response:

Escalation of an AI Arms Race

Governmental Control: Medium

In late 2029, US intelligence agencies obtain credible information that China has made significant breakthroughs in AI-enabled autonomous weapons systems. Satellite imagery and intercepted communications suggest that China is developing swarms of AI-controlled drones capable of coordinated combat operations without human intervention. These developments threaten to upset the global military balance, allowing the Chinese military to break through missile & air defense systems and undermining US & Taiwanese defensive capabilities. The news leaks to the press, causing public alarm and intensifying the ongoing debate about lethal autonomous weapons. The US is pressured to respond, fearing that China's advancement could embolden it to take more aggressive actions against Taiwan.

These developments occurred because China had been pursuing a tight-knit integration of its AI research labs with the Chinese defense industry, pouring tens of billions into military AI technologies. In comparison, the US government had been relatively hands-off on AI, preferring to fund exploratory research initiatives with AI labs rather than directly overseeing the development of cutting-edge AI technologies. As a result, the US is now behind in developing similar lethal autonomous weapons.

The US government recognizes that its approach to AI technologies has left it flat-footed relative to its geopolitical rivals, risking its position as the leading superpower. It commits to integrating frontier AI labs and technologies more directly into governmental initiatives and the defense industry.

US Governmental Response:

Nationalization of Bioweapon Technologies

Governmental Control: High

In 2035, a disturbing development occurs at a new biotech startup. A novel AI virus modeling technique for vaccine development has the side effect of allowing lab researchers to easily develop bioweapons of unprecedented lethality and specificity. The AI system, trained on vast datasets of genetic and epidemiological information, can design viruses tailored to target specific ethnic groups or even individuals based on their genetic makeup. These viruses are relatively feasible to produce, and knowledge of their designs would permit any of 100+ research labs worldwide to easily create such a pathogen.

The US government determines that the capabilities of this biotech startup are too risky to leave in the hands of a private corporation. Furthermore, it believes that any further research into this novel virus modeling technique is too dangerous to permit, as it could easily lead to targeted pandemics. It moves to fully nationalize the biotech startup to prevent any further consequences, and passes legislation prohibiting private research and development into similar virus modeling techniques.

US Governmental Response:

The US government performs what we might consider a Full Acquisition of the specific biotech startup described above.

Outside of this biotech startup, the US government moves quickly to create stringent national (and international) restrictions on research regarding this set of AI virus modeling techniques:

These two sets of drastic actions significantly deter US private companies from undertaking any further R&D in this area of virus and pathogen modeling. The full nationalization of a private company signals that the US is likely to take similar actions in the future.

Conclusion

National security concerns suggest the US will exert more control over frontier AI development. However, predictions of a “Manhattan Project for AI” are reductive and misleading. The US isn’t likely to “nationalize” frontier AI development, at least in the sense of bringing it under full public ownership and control all at once. Doing so would be legally, politically, and practically challenging, and it could ultimately undermine the US’ technological lead in AI.

Instead, we propose that the US government’s control over frontier AI is likely best modeled by our framework of “soft nationalization.” According to this framework, the US will exert progressively greater power over frontier AI development as national security concerns arise by employing several different policy levers. The options described by these levers constitute a spectrum from “soft touch” regulation to de facto government ownership.

This model assumes that the US will act to preserve its national security. However, exactly which combinations of options across policy levers the US will choose depends on the contingencies of global and domestic technopolitics, as well as on how the US balances national security against its other goals.

We hope our model will enable the evaluation of AI safety agendas across realistic scenarios of US involvement, and encourage further related research. In upcoming work, we intend to more rigorously describe the policy levers the US will choose to exercise such control, and the scenarios that will cause the US to deploy them.

  1. ^
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^
  8. ^
  9. ^
  10. ^
  11. ^
  12. ^
  13. ^
  14. ^
  15. ^

     Ibid.

  16. ^
  17. ^
  18. ^
  19. ^
  20. ^
  21. ^
  22. ^
  23. ^

     IV. The Project. Note that Leopold does allude to implementations that do not involve total nationalization, such as defense contracting or voluntary agreements. However, the majority of his argument is built around the idea of a fully centralized government-led research project.

  24. ^
  25. ^
  26. ^
  27. ^
  28. ^
  29. ^
  30. ^
  31. ^
  32. ^

Stefan_Schubert @ 2024-08-27T18:17 (+9)

Some of the listed policy levers seem in themselves insufficient for the government's policy to qualify as soft nationalization. For instance, that seems true of government contracts and some forms of government oversight. You might consider coming up with another term to describe policies that are towards the lower end of government intervention.

In general, you focus on the contrast between soft and total nationalization, but I think it could also be useful to make contrasts with lower levels of government intervention. In my view, there's a lot of ground between a hands-off approach and soft nationalization. Most industries (e.g. in the US) have a lot of regulation - and so the government doesn't take a hands-off approach - yet haven't been subjected to soft nationalization, as I'd use that term.

(Tbc this is a purely conceptual point and not an argument for or against any particular level of government intervention.)

Deric Cheng @ 2024-08-27T23:28 (+2)

I'd agree - for many of these individual policy levers (esp. the monitoring & oversight mechanisms), "soft nationalization" wouldn't be the best term to describe them! 

Part of our linguistic struggle here is that we're attempting to map the entire spectrum of gov. involvement and slap an overarching label on it. "Soft nationalization" gets the general point across, but definitely breaks down on a case-by-case basis.

Stefan_Schubert @ 2024-08-28T09:09 (+11)

Right. I think it could be useful to be quite careful about what terms to use since, e.g. some who might actually be fine with some level of monitoring and oversight would be more sceptical of it if it's described as "soft nationalisation".

You could search the literature (e.g. on other industries) for existing terminology.

Part of our linguistic struggle here is that we're attempting to map the entire spectrum of gov. involvement and slap an overarching label on it. 

One approach could be to use terminology that's explicit about there being a spectrum. E.g. you could use terms like "tiers", "steps", "spectrum", etc. And then you could argue that the US government's approach is unlikely to be at either end of the spectrum (hands-off or total nationalisation).

Davidmanheim @ 2024-08-28T08:17 (+9)

I think it's useful to distinguish between industrial policy, regulation, and nationalization, and your new term seems to be somewhere in between. I think your model is generally useful, but at the same time, introducing a new term without being very clear about what it means in relation to existing terms is probably more confusing than clarifying.

gergo @ 2024-08-29T06:13 (+1)

This is super interesting, thank you for posting. One minor piece of feedback: some of the graphics in the post are hard to read, due to the letters being too small. I'm not sure if you can make the pictures bigger within the post, but if not, it might be nice to share a link under each picture that takes the user somewhere it can be viewed more easily. (Personally I just zoomed in in my browser, which was also low effort, but I'm not sure everyone will do that.)