Sharing the AI Windfall: A Strategic Approach to International Benefit-Sharing

By michel @ 2024-08-16T12:54 (+67)

This is a linkpost to https://wrtaigovernance.substack.com/p/sharing-the-ai-windfall-a-strategic

Summary

If AI progress continues on its current trajectory, the developers of advanced AI systems—and the governments who house those developers—will accrue tremendous wealth and technological power.

In this post, I consider how and why the US government[1] may want to internationally share some benefits accrued from advanced AI—like financial benefits or monitored access to private cutting-edge AI models. Building on prior work that discusses international benefit-sharing primarily from a global welfare or equality lens, I examine how strategic benefit-sharing could unlock international agreements that help all agreeing states and bolster global security.

Two “use cases” for strategic benefit-sharing in international AI governance:

  1. Incentivizing states to join a coalition on safe AI development
  2. Securing the US’ lead in advanced AI development to allow for more safety work

I also highlight an important, albeit fuzzy, distinction between benefit-sharing and power-sharing.

I identify four main clusters of benefits, but these categories overlap and some benefits don’t fit neatly into any category: Financial and resource-based benefits; Frontier AI benefits; National security benefits; and ‘Seats at the table’.

I conclude with two key considerations with respect to benefit-sharing: improving credibility and mitigating risks.

Introduction

Advanced AI systems could empower their developers—and the governments who supervise those developers—with enormous benefits. For example, advanced AI systems[2] could give rise to tremendous wealth, breakthrough medical technologies, and decisive national security benefits.

This post examines how these benefits could be shared internationally. In particular, I examine how and why the US government (henceforth USG) may want to strategically share some of the benefits accrued from advanced AI to further its own interests, other states’ interests, and ultimately bring about a safer world.

Past work[3] on international benefit-sharing has primarily focused on sharing benefits to address widespread job displacement and promote welfare and equality globally. I support such altruistic benefit-sharing to remedy the uptick in global power and income inequality that AI could drive. But I want to expand discussions of international benefit-sharing to include sharing benefits as a tool for positive-sum trades.

By offering AI-derived benefits—such as economic aid, monitored frontier AI model access, or security assurances[4]—the USG could enable commitments that are in the interest of all parties at the table and promote global security. For example, the US could provide allied states with monitored access to private, cutting-edge frontier AI models. In exchange, allied states could take steps domestically to prevent the proliferation of weaponized AI systems. (I discuss other ideas on how benefit-sharing could be used in international AI governance below).

There is precedent for US-led strategic benefit-sharing. Consider the Marshall Plan. Post-WW2, the USG helped Western Europe rapidly recover by providing benefits like financial aid and modern technologies, which in turn strengthened the US’ key strategic alliances, created mutually beneficial markets for US goods and services, promoted democratic capitalism, and stabilized a region that could have otherwise triggered new conflicts.

An overview of the types of benefits that could be shared strategically. Discussed in more detail below.

In what follows, I expand on the types of international AI governance agreements that strategic benefit-sharing could unlock; the distinction between benefit-sharing and power-sharing; concrete benefits the USG could share; and two key considerations I see with respect to strategic benefit sharing (improving credibility and mitigating risks).

What types of international AI governance agreements could strategic benefit-sharing (and power-sharing) unlock?

While sharing AI-derived benefits could in principle be used to incentivize international agreements on many topics, this post focuses on benefit-sharing as a tool to unlock agreements on questions like who develops next-generation AI systems and how they’re governed.

What exactly should those international AI governance agreements look like? I’m not sure yet. With so much uncertainty about the capabilities of future AI systems, their risks, and future domestic regulation in states like the US and China, it's hard to know which international institutions and agreements are feasible and desirable.  

So, rather than proposing a specific international agreement involving benefit-sharing, I highlight two different plausibly-good outcomes that benefit-sharing could help unlock.

Benefit-sharing vs power-sharing: An important, fuzzy distinction

If the USG wants another state to sacrifice something significant (e.g., its domestic advanced AI program), the USG may need to do more than just share benefits. The USG may need to share power.

That is, the USG may need to make concessions to its own power and share technology, national security capabilities, or ‘control rights’ for its most advanced AI models in ways that significantly empower other states. I call this power-sharing and view it as distinct from benefit-sharing.[5] Note, however, that this distinction is blurry: financial investments are a prototypical example of benefit-sharing, but if the USG gives very lucrative, irrevocable financial benefits to another state, even financial investments start looking like power-sharing.

 

Power-sharing is more consequential than benefit-sharing, but it should be on the table when thinking about international agreements.

What AI-derived benefits could the USG share with other states?

I identify four main clusters of benefits, although some benefits don’t fit cleanly into these categories.  

I’ve listed many possible benefits, even if many prove unrealistic as we learn more about the trajectory of AI progress and which actors accrue the biggest benefits. I’ve also listed some benefits that are not directly derived from AI advances (e.g., US protection) but could be instrumental for international AI governance agreements.

Importantly, I focus on listing benefits that would most likely not cost the US its international lead on frontier AI development. For example, the list below doesn’t include benefits like unmonitored access to the US’ top AI models or offensive military technology. I think distributing unmonitored access to the US’ most powerful AI models, for example, should be thought of as power-sharing, which is not the focus of this list. (See discussion of power-sharing vs. benefit-sharing above.)

Financial and resource-based benefits

The most straightforward benefits that the US could give to other states are money and resources.

Frontier AI benefits

If other states see the US economy and security being supercharged by advanced AI, one of the most direct and desired benefits the USG could share may be AI technology or access.

Some frontier AI technology and inputs will naturally diffuse internationally as US companies sell AI products abroad. But other frontier AI technology, like powerful general models that can be easily misused, may not be shared with everyday consumers, especially if the USG works closely with frontier AI labs.

Below, I discuss some levers that could allow the USG (or another actor that has control over the advanced AI models) to balance sharing altruistic and strategic benefits with maintaining a technological edge and preventing the proliferation of dangerous capabilities.

  1. Whether it’s a general-purpose model or a narrow model: Instead of sharing highly capable general models, which are harder to evaluate, a US actor could share narrow AI models, tailored for specific beneficial applications like healthcare or supply chain optimization.
  2. Which generation of model is shared: A US actor could offer models that are 1-3 generations behind their cutting-edge systems. Depending on how securitized or closed-off US frontier AI development becomes, such models could still be more capable than the most capable model consumers would otherwise have access to.
  3. The extent to which model outputs are fine-tuned or restricted: A US actor could share powerful models only after certain ‘fine-tuning’ or reinforcement learning from human feedback (RLHF), so that the model doesn’t answer dangerous questions or follows certain high-level principles (e.g., non-violence). With monitored access (e.g., API access), the sharer could also make use of response filters and output classifier models that restrict what information is ultimately shown to the user.
  4. The extent to which access is monitored: Access could be closely monitored in real-time, periodically audited, or granted with minimal oversight, depending on trust levels and security concerns.
  5. The extent to which model access is revocable: By sharing API access rather than model weights, for example, a US actor could cut off model access any time.
  6. The types of ‘compute’ access that are shared, if any: Instead of sharing AI model access or weights, the US could share the computational resources (a.k.a. ‘compute’) that allow for the training and use of AI models. This comes with risk: sharing scarce computational resources with other states, especially states that want to make more powerful models than the US, could undermine US AI leadership. However, some types of monitored compute access that only allow for the use of existing models (i.e., ‘inference’ rather than training) could be realistic for US actors to share. Alternatively, US actors could share computational resources with future hardware-enabled governance mechanisms that only allow for certain kinds of training runs, like training runs below a certain compute threshold or training runs that don’t involve certain types of data.
  7. The types of AI development inputs that are shared: Computational resources, mentioned above, are an important input into AI development. But so are data, algorithms, and human capital. Plausibly some datasets, algorithmic breakthroughs, or expertise could be shared strategically.
  8. The extent to which access to the model is deliberately slowed down: Access to any model could be ‘throttled’ by restricting run speeds or the number of copies that can be run in parallel.
  9. The extent to which meta-information about the model is disclosed: Varying levels of information about the model's architecture, training process, or relationship to other frontier models could be disclosed.

‘Seats at the table’ for decision-making related to AI development

Rather than—or in addition to—providing material benefits, the US could offer invitations to join important processes or proceedings as a form of benefit.

Note that insofar as these seats hold significant decision-making power, I think it makes more sense to think of sharing these seats as power-sharing than benefit-sharing.

Empowering other states’ national security

While empowering other states’ national security could pose risks, doing so could be an important part of disincentivizing other states from racing the US to develop dangerous AI capabilities.

If allies and even adversaries of the USG can get the security benefits of advanced AI while leaving the development predominantly to the US, it may help avoid an international AI arms race where the US-led AI project and a competitor feel pressured to cut corners on guaranteeing that the most advanced systems don’t pose catastrophic risks.  

Additionally, boosting other states’ national security systems might help those states work towards goals they share with the US government. For example, sharing counter-terrorism technology could reduce the risk of bio-terrorism that spills over into the US despite not targeting the US.

Some examples of ‘national security benefits’ include:

Other benefits

Two key considerations for benefit-sharing

Credibility will be a key challenge for benefit-sharing and power-sharing agreements

Other governments need to believe that the USG’s benefit-sharing or power-sharing commitments are credible if they’re to offer something in return.[7]

In the near term, there are a number of international relations credibility tools that could help the USG make its commitments to other states more credible. The need for and efficacy of these tools will depend on the properties of the benefits in question, like their tangibility[8] and revocability[9].

However, there may come a period in late-stage AI international relations when traditional international credibility tools no longer work. This is because advanced AI systems may give a leading developer such a ‘decisive strategic advantage’ that it could defect from any existing commitment it has made and not pay any price. For example, extremely advanced AI systems could unlock new WMDs, enormous wealth and autonomy, and tech that makes the leading actor immune to nuclear strikes. If other states come to believe that the US could develop such a technological advantage and then defect from commitments at little to no cost, they may not agree to any deals with the USG, even if there are deals they would in principle want to agree on.

The challenge of making commitments that remain credible even if one party gets a decisive strategic advantage is out of the scope of this post, but it’s a formidable one.[12]

Benefit-sharing strategies should account for potential risks

While sharing AI-enabled benefits could yield strategic advantages for the US, it doesn’t come without risks.

The following risks could stem from sharing benefits like those I listed above.

In addition to these direct risks from benefit-sharing, there may also be risks that stem from the commitments other states agree to in exchange for the benefits. For example, if other states help secure the US’ lead on advanced AI development, this could risk framing AI development too much in terms of ‘who wins.’ Ultimately this could backfire if the US-led AI project races ahead to build new models faster than its safety solutions can keep up.

While I don’t yet have answers on how to mitigate all these risks, these risks seem addressable with smart policy design and strong diplomacy. As AI-driven strategic benefit-sharing draws more consideration from AI governance researchers and implementers, I think these general risks provide a reason to be vigilant for now—not a reason to drop benefit-sharing proposals altogether.  

. . .

I hope this post has begun to illuminate the role strategic benefit-sharing could play in international AI governance. If you’d like to discuss any of this more, you can reach me at michelm.justen [at] gmail [dot] com.

Thank you to Pivotal Research for running the AI governance fellowship that enabled me to conduct this research. Thank you as well to Matthew van der Merwe, Max Dalton, Oliver Guest, Bill Anderson-Samways, Claire Dennis, John Halstead, Oscar Delaney, and many others for valuable comments and feedback on this research. Feedback is not an endorsement; mistakes are my own.

  1. ^

    Although I focus on the USG in this post, it’s still an open question whether the US government (USG) will be in a position to ambitiously share AI-derived benefits. The benefits of advanced AI may instead continue to primarily accrue to US-based AI companies, like OpenAI, Google, and Anthropic. This would leave the USG with benefits that look more like heightened tax revenue than key geopolitical playing pieces. However, I think it’s premature to rule out the possibility that the USG will form some sort of close partnership with these companies in the next decade. As a result, this post includes discussion of AI diplomacy and specific benefits that are only realistic if the USG and frontier AI labs are working together closely.  

    Note that some form of strategic benefit-sharing could also be utilized by joint international projects on advanced AI development, like a ‘CERN for AI’.

  2. ^

     By advanced AI systems, I mean AI systems that can accomplish many cognitive tasks better and faster than most humans can.

  3. ^
  4. ^

 Note that security assurances need not be AI-derived. Traditional benefits and AI-derived benefits could be bundled together.

  5. ^

     Some examples of power-sharing include sharing frontier AI model weights, offensive military technology, or veto authority on key frontier AI development and deployment decisions.

  6. ^

     Insofar as these seats hold significant decision making power or ‘control rights’, I think it makes more sense to think of sharing these seats as power-sharing than benefit-sharing.

  7. ^

     The USG also needs to believe other states will uphold their side of the deal, but I focus on how to make the more empowered actor’s commitments credible.

  8. ^

     Is it clear to a state when they’ve received the benefit? States could easily tell when they’ve received cash, but knowing whether or not they’re really receiving something less tangible, like a place under the US nuclear umbrella, requires more credibility.  

  9. ^

     Is it easy for the providing actor to revoke the benefit? A financial benefit may not be easy for the US to revoke, but API access to an AI model would be. Revocable benefits require more credibility.

  10. ^

 For example, if the USG walks back NATO security assurances, its future commitments would be less credible.

  11. ^

 A speculative idea about escrow-like arrangements: parties put valuable resources (e.g., a large sum of money) into a neutral third party or smart contract. By default, they get those resources back. But if an actor defects from an agreement, they lose access to that money, and it potentially goes to the competitor. Escrow accounts have previously been used in international relations, but I haven’t looked into the details.

  12. ^

     If you’re interested in working on this, let me know. I may work on this more.