Historical Precedents for International AI Safety Collaborations
By ZacRichardson @ 2025-07-13T21:30 (+14)
About this post
This post was written as part of the SPAR Winter 2025 cohort and was supervised by Aaron Scher (MIRI).
Abstract
Historical cases of international collaboration on sensitive technologies offer key insights for technical AI safety cooperation. Two main challenges for AI safety collaboration are preventing inadvertent disclosure of sensitive information and avoiding proliferation of strategic capabilities. Drawing from historical case studies on INTELSAT's governance of communication satellites, bilateral nuclear security arrangements between rival states, and international encryption standardisation, this paper identifies patterns of successful technical collaboration that advance positive applications while limiting opportunities for subversion. Recommendations include: building institutional relationships between alignment researchers from competing nations, focusing collaboration on reducing risks from non-state actors' misuse of non-frontier models, jointly designing infrastructure that will advance domestic AI governance goals, and conducting shared research on verification measures.
Introduction
International competitive dynamics may exacerbate the risks associated with developing highly capable AI systems. One way states may wish to counteract this dynamic is by selectively sharing information which enables the safe development and beneficial applications of these systems, through joint research projects and transfers of safety-enhancing techniques. However, the inherently dual-use nature of many AI systems is a significant disincentive for states to engage in such technical collaborations in situations where there is competition over the development of these systems. Additionally, engaging in technical collaborations may introduce opportunities for espionage or accidental disclosure of sensitive information, which may advantage the competing state.
A small AI governance literature has emerged which addresses these issues by proposing specific topics for dialogues or collaborations within AI safety which are less likely to have capabilities externalities[1]. Several papers in this space draw on policy documents from the US and Chinese governments to gauge the political feasibility of collaborations on specific topics[2]. However, a key way to strengthen this literature is to provide a greater historical and empirical grounding for different proposed areas of collaboration.
Historical case studies of international collaborations on sensitive technologies provide one source of evidence for how we might better understand the proliferation and unwanted disclosure problems in AI safety collaborations. Even during periods of heated geopolitical competition, countries have historically managed to collaborate on sensitive technologies in order to improve strategic stability and help reap the benefits of dual-use technologies.
Three domains have been identified as particularly relevant examples of such collaborations:
- The development of global communication satellite networks through INTELSAT
- Bilateral exchanges of nuclear security techniques between the US and China (1994-1999) and the US and Russia (1994-2005)
- International processes for developing encryption standards
While these technologies vary in their strategic importance, all three carry national security implications and involve the selective transfer of knowledge related to dual-use technologies. In the INTELSAT case, a global partnership was founded to develop, run, and maintain a global satellite communications network while limiting its military application. The nuclear case study focuses on lab-to-lab exchanges between scientists from China, Russia, and the US focused on preventing nuclear accidents and theft. These two cases speak to the potential for international collaborations to help prevent the spread of dangerous technologies through direct involvement. Unlike the other case studies, the encryption standards case discusses the way in which countries can shape sensitive technologies through their involvement in standard setting processes.
The paper will begin with a brief discussion of the case selection methodology and a summary of the key lessons from each case study. This will be followed by an in-depth analysis of each of the three case studies followed by a concise set of lessons from each case study. The final section presents a set of recommendations for international technical collaborations on AI safety.
Methodology
In selecting case studies, I chose to focus on technologies that had dual-use capabilities or were otherwise sensitive to national security. Here, I follow the European Commission’s definition of dual-use as “goods that can be used for both civilian and military applications”[3]. I also sought to end up with a collection of case studies which involved both public and private sector actors, in order to be more suitable to a world where AI systems are controlled by parties of both types. While the United States serves as a consistent participant across all three cases, the studies represent different fora of international engagement: multilateral governance through INTELSAT, bilateral exchanges between the US and Russia and between the US and China, and international standard-setting processes involving multiple European and Asian nations.
Notably, my selection was constrained by access to publicly available information, English language sources, and the subject matter knowledge required to interpret each case study. The primary sources used in this report skew heavily towards English-language sources, which may give undue additional weight to anglophone countries’ retelling of these events.
Across this paper, I sometimes describe collaborations as successful or unsuccessful. I deem a collaboration successful if it achieved its stated goals and enabled a mutually beneficial exchange without other sensitive information being unintentionally transferred. A collaboration is unsuccessful if the desired exchange does not occur or successful espionage does. This report does not deeply investigate the long-term effects of these collaborations on the governance of the technologies in question, nor does it make strong claims about the counterfactual impact of these interventions. Undeniably, these criteria make this analysis vulnerable to confounding variables. Nonetheless, I expect this to provide a good starting point for establishing best practices in international AI safety collaborations.
To generate recommendations for AI safety collaborations based on these case studies, I draw on existing AI governance literature related to model evaluations, international governance, and verification mechanisms. This helps identify concrete problems related to unwanted disclosure and verification. For example, an international collaboration focused on evaluating frontier models would face issues around what level of model access to give evaluators in order to allow sufficient capability elicitation.
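To make the model-access problem mentioned above concrete, the sketch below shows one way a tiered access policy for international evaluators might be expressed. The tier names and task-to-tier mapping are purely illustrative assumptions, not drawn from any existing framework; a real collaboration would negotiate these categories case by case.

```python
from enum import IntEnum


class AccessTier(IntEnum):
    """Hypothetical model-access tiers, ordered from least to most sensitive."""
    BLACK_BOX = 1   # query/response access only
    LOGITS = 2      # token-level probabilities exposed
    FINE_TUNE = 3   # evaluator may adapt the model for elicitation
    WEIGHTS = 4     # full parameter access


# Illustrative mapping from evaluation task to the minimum access
# an evaluator plausibly needs for sufficient capability elicitation.
REQUIRED_TIER = {
    "refusal_rate": AccessTier.BLACK_BOX,
    "calibration": AccessTier.LOGITS,
    "elicitation_via_finetuning": AccessTier.FINE_TUNE,
}


def grant(task: str, offered: AccessTier) -> bool:
    """Return True if the offered access tier suffices for the task."""
    return offered >= REQUIRED_TIER[task]


# A collaboration offering only logits access supports calibration
# evaluations but not finetuning-based elicitation.
assert grant("calibration", AccessTier.LOGITS)
assert not grant("elicitation_via_finetuning", AccessTier.LOGITS)
```

The design choice here mirrors the disclosure tradeoff in the text: each additional tier improves capability elicitation but widens the surface for unwanted disclosure, so the mapping itself becomes a negotiation point between parties.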
The recommendations made in this report make some assumptions about the structure of frontier AI development over the coming years. First, they assume continued progress in both general-purpose foundation models (e.g., large language models) and narrow AI systems (e.g., biological design tools) in national security relevant domains. This may rise to the level of what Carlsmith defines as “advanced capability”—that is, systems that outperform the best humans on at least some strategically important tasks but still fall short of a generally super‑human intelligence[4]. Second, they assume neither party’s AI or military capabilities provide them with a decisive strategic advantage over the other, but do not assume each country has models of equal capability. Finally, the recommendations assume governments have become more involved in the governance and creation of frontier AI systems than they are at present. This need not entail full nationalisation, but may involve significant government involvement in decisions about what models are safe to develop and release, or in decisions related to sensitive use cases such as autonomous military systems or integration with critical infrastructure[5].
Executive summary of case studies and recommendations for international AI safety collaborations
INTELSAT
INTELSAT is a consortium of more than 60 governments and telecommunications companies created in the 1960s to launch, operate, and control the world’s first global satellite communications constellation. This involved building complex infrastructure in all participating countries in order to send and receive satellite communications. A key motivation for founding INTELSAT was for the US to gain a soft power victory in the Cold War by providing access to satellite communications to developing countries[6]. In part due to its substantial lead over the USSR and European countries in building communications satellites, the US was able to negotiate a collective agreement which changed how the technology was governed globally.
Setting up satellite communications infrastructure in other countries had two primary risks associated with it in the 1960s and 70s: allowing additional countries to gain access to improved military communications and proliferating satellite launch stations which could be repurposed for ICBM launch systems. However, prohibitions on military use, centralisation of satellite launch capabilities, and shared ownership of the profits derived from INTELSAT partially helped manage these risks. Through a series of workshops and training sessions, engineers from the US and UK widely disseminated information about how to build satellite Earth stations to enable states to expand their satellite communications infrastructure.
Lessons for AI safety collaborations from INTELSAT
- Ambitious collaborations may be more feasible when the capability gap between countries’ AI development programs is larger. INTELSAT’s governance structure was heavily influenced by the US having a multi-year lead in communications satellite technology over the USSR and Europe.
- Provided there is a sufficiently strong governance structure, shared benefits from AI development may limit competitive dynamics or regional fragmentation of AI governance standards.
- As technological complexity increases, exchanges must become more interactive and dialogical, and less unidirectional. Over the course of INTELSAT’s life, satellite communications infrastructure became increasingly complex. As a result, the exchanges needed to become more interactive and tailored to the specific needs of individual Earth station operators.
Nuclear Security:
Since the 1960s, countries have collaborated to share information designed to improve the security of their nuclear arsenals and reduce the risks of unauthorised launches. This report focuses on the US-China Arms Control Exchange (1994-1999) and US-Russia Warhead Safety and Security Exchange (1994-2005) programs. These involved lab-to-lab exchanges in which nuclear scientists from the US, China, and Russia worked side by side on improving the storage and monitoring of fissionable materials, and designing verification mechanisms to support disarmament efforts.
While these exchanges did not end nuclear tensions between these countries they did reduce risks of nuclear accidents and theft of nuclear weapons by non-state actors. Through a combination of pre-agreed security guidelines, a focus on sharing protocols and abstract techniques rather than specific designs, and establishing professional relationships between scientists, risks of accidental disclosure were largely mitigated.
Lessons for AI safety collaborations from nuclear security:
- Based on our case studies, collaborations focused on reducing risks from non-state actors tended to be more successful than ones focused on reducing nuclear escalation risks between major powers. This may mean international AI safety collaborations should initially focus on preventing catastrophic misuse rather than threats related to loss of control or concentration of power.
- Preexisting relationships between scientific communities are crucial determinants of successful technology transfers. We should aim to establish strong institutional relationships between key AI developers in the US and China well in advance of high stakes formal collaborations.
- Through using privacy preserving technologies, we may be able to collaborate on sensitive aspects of each country’s program, even if the techniques have not reached full technological maturity.
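As a toy illustration of the kind of privacy-preserving technique the last bullet gestures at, the sketch below shows a hash-based commit-and-reveal scheme: two labs can check whether they observed the same sensitive safety finding without either side disclosing it first. All function names here are hypothetical, and a real collaboration would rely on vetted cryptographic protocols (e.g., secure multi-party computation) rather than this minimal sketch.

```python
import hashlib
import hmac
import secrets


def commit(finding: str) -> tuple[bytes, bytes]:
    """Commit to a sensitive finding without revealing it.

    Returns (salt, commitment). The commitment can be exchanged
    immediately; the finding and salt stay private until both
    sides have committed.
    """
    salt = secrets.token_bytes(32)
    digest = hmac.new(salt, finding.encode(), hashlib.sha256).digest()
    return salt, digest


def verify(finding: str, salt: bytes, commitment: bytes) -> bool:
    """Check a revealed finding against an earlier commitment."""
    expected = hmac.new(salt, finding.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)


# Both labs commit first and exchange only the commitments, so
# neither can tailor its disclosure to what the other reported.
salt_a, com_a = commit("model fails containment eval #7")

# After both commitments are exchanged, each side reveals and verifies.
assert verify("model fails containment eval #7", salt_a, com_a)
assert not verify("a different finding", salt_a, com_a)
```

The commit-then-reveal ordering is what does the work here: it lets each party prove its disclosure was fixed in advance, partially addressing the trust problem the nuclear lab-to-lab exchanges solved through personal relationships.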
Encryption Standards:
Collaborations on encryption algorithms present a tension between transparency and obscurity. While transparency exposes systems to adversarial analysis and enables broader vulnerability discovery, most classified encryption algorithms are not made public in order to limit the knowledge of would-be attackers. Two cases of international collaboration exemplify this tradeoff: the competition to create the Advanced Encryption Standard (AES) and the subversion of the International Organization for Standardization's (ISO) encryption standardisation processes by intelligence agencies.
The AES case demonstrates how transparency and deep engagement by the global academic cryptography community can increase confidence in and boost adoption of commercial cybersecurity standards. The second case demonstrates some of the key risks for such collaborations through intentional subversion by participants.
Lessons for AI safety collaborations from encryption standards
- Collaborate on designs and proof-of-concept safety techniques, not finished products.
- Cooperation on non-frontier models still creates value even when frontier collaborations are not feasible. AES encryption came from collaboration and secures most financial transactions globally, while classified communications use separate systems.
- Sharing detailed technical safety work with the wider community can build trust and reduce risks of subversion.
- Openly distribute security-enhancing technologies widely where possible.
Factors that influence the feasibility of AI safety collaborations
By synthesising findings from our case studies, I have generated some common factors which may influence the practical feasibility of AI safety collaborations.
| Increases Feasibility of Collaboration | Decreases Feasibility of Collaboration |
| --- | --- |
| Large capability gap between participating countries | Comparable AI capability levels in each country |
| AI progress is not primarily reliant on algorithmic improvements (i.e., the scaling hypothesis holds true) | Comparable access to non-algorithmic AI progress inputs (e.g., compute and data) between countries developing frontier AI |
| Low public profile to minimise domestic political fallout | High-profile collaborations vulnerable to domestic PR attacks (e.g., the Cox Report) |
| Slower AI progress and adoption in strategically relevant domains | Faster AI progress and adoption in strategically relevant domains |
| Collaborative efforts focused on widely used, non-frontier models | Attempted collaboration on maximally sensitive AI systems |
| Strong technical expertise and existing relationships between scientists | Ambitious collaborations without previous small-scale ones |
| Existence of robust technical verification measures | Absence of robust technical verification measures |
| Few legal obstacles to discussing or transferring sensitive technologies (though other disincentives to sharing will exist) | Stringent legal obstacles to discussing or transferring sensitive technologies (e.g., non-proliferation laws related to AI development insights) |
The development of Intelsat as a global communications satellite regulatory body (1964 - 2001)
Brief Overview of INTELSAT as a sensitive technology collaboration
INTELSAT, the international organisation charged with building and maintaining the world’s first large-scale satellite communications constellation, serves as one example of how states can selectively share infrastructure to provide wide access to a beneficial technology while controlling the spread of its dual-use aspects. Concretely, this involved building out collectively owned equipment which allowed participating countries to access satellite communications for non-military purposes and sharing profits between member states, while centralising control over the launch of new satellites. Through this multilateral structure, INTELSAT enabled developing countries to gain access to global television broadcasts, international telephone calls, and data transmission – most likely much sooner and more cheaply than they would have under unregulated market dynamics.
While the primary motive behind founding INTELSAT was to use the technology to prevent the spread of communism to developing countries during the Cold War[7], it also notably increased the costs to countries looking to develop their own military satellite communications networks. Because setting up satellite communications infrastructure was so capital intensive, most countries did not have separate military and civilian infrastructure for their communication satellites in the early days of the technology[8]. As a result, commercial uses of the satellites were one way to recoup costs. However, because INTELSAT provided a means of sharing costs between member states and relatively cheap access to satellite communication, and operated monopolistically to block competitors, the effective cost of building up military constellations increased[9].
A final way INTELSAT controlled sensitive technology risks was by making Earth stations widely available while restricting access to dangerous components. They specifically centralized control of launch vehicles to prevent the proliferation of launch pads that could be repurposed for ballistic missiles.
Background on INTELSAT and the strategic potential of communications satellites
INTELSAT was initially formed in 1964 as satellite communications technology was just beginning to mature. Participating countries contributed to the satellite network through funding, maintenance, and Earth station operations in exchange for the ability to access global satellite connectivity for phone and television transmission. To this day, a significant proportion of global satellite communications is transmitted by INTELSAT-operated satellites and there are Earth stations operating in over 66 countries[10], although INTELSAT itself was privatised in 2001.
The creation and maintenance of a global satellite communications network requires significant investment and international coordination. To provide global connectivity, you not only need multiple satellites in complementary orbits, but also suitable Earth stations around the world capable of receiving and re-transmitting the signals received from the satellites to users in the area. Finally, infrastructure must be put in place to launch satellites, and plans need to be made for how to retire satellites that have outlived their useful life. Because of these costs, without international coordination market dynamics would likely have produced either a global monopoly that underserved poorer countries, or a fragmented landscape of regional providers with limited coverage[11]. While a number of European and Soviet-aligned countries were able to produce working communications satellites by the mid 1960s, the US telecommunications provider AT&T seemed to be in a position to monopolise the commercial satellite communications industry absent some intervention[12].
To avoid these outcomes, the US government led the push to create a global satellite communications network. Their primary motives were to cement their lead in communications satellites technology and project US soft power to developing nations[13].
Another motivation for creating INTELSAT was to leverage the US’ technological lead in communications satellites to gain longer-lasting control over the relevant infrastructure. While the USSR was the first country to launch a working satellite with Sputnik in 1957[14], the US had a substantial lead over the USSR in communications satellite technology and was in a substantially better position to export satellite communications infrastructure. The first ever communication satellite, SCORE, was launched by the US Air Force in 1958[15], while the USSR did not launch its own communications satellite until April 1965 with the Molniya 1[16]. In addition, US-made satellites had greater bandwidth for sending and receiving signals than their Soviet counterparts, even into the 1980s[17].
Part of what gave communication satellites so much strategic importance in the eyes of the US defense establishment was their potential military uses. During the space race, satellites were viewed as a significant new axis for military competition between the US and USSR. For example, there were fears that the USSR might use satellites to deploy weapons of mass destruction[18]. Additionally, the launchpads used for communications satellites were often repurposed ICBM launchers and could potentially be converted back into missile launchers.[19]
More generally, satellite communication could provide a more reliable alternative to radio or telephone communications in remote areas where infrastructure was scarcer. This was particularly valuable in military applications, and since no national military satellite communications network existed at the time of INTELSAT’s creation, the gains for an early mover were potentially significant. To this end, the US Army and Air Force had planned to cooperatively build and operate their own satellite constellation for military use – the first large-scale one in the world – before INTELSAT had been proposed[20]. The prospect of improving other states’ internal military communications gave at least some actors pause, since at the point of INTELSAT’s formation in 1964 there were no other large-scale communications satellites operating anywhere in the world. For example, in 1972 Defense Secretary Melvin Laird objected to the sale of Earth station equipment to China on the grounds that it would improve their internal military communications[21].
While INTELSAT’s governance structure prohibited the use of INTELSAT satellites for military purposes in its Definitive Agreement of 1971, governments still sought to use communications satellites for both military and other national security-relevant purposes. They did this both by manufacturing smaller scale networks specifically for reconnaissance use[22] and by building new governance mechanisms that would avoid undue constraints on strategic uses of communications satellites. For example, by governing communications satellites through INTELSAT and Intersputnik – the Soviet-aligned counterpart to INTELSAT – instead of the UN, both the US and USSR were able to reduce the extent to which their activities would be subject to external oversight, since the UN Committee on the Peaceful Use of Outer Space[23] had laid out stringent restrictions on how spacecraft could interact with military activities.
The creation of INTELSAT
The passage of the Communications Satellite Act of 1962 by the Kennedy administration began the creation of the first global communications satellite network[24]. The act established Comsat, a private corporation primarily composed of US telecom executives but overseen by Congress, to achieve this goal. Shortly afterwards, negotiations began to establish the network internationally, which would come to be INTELSAT.
INTELSAT was initially formed through an interim agreement made between the US and a number of chiefly US-allied European countries in 1964[25]. The interim agreement had two primary parts: an Intergovernmental Agreement and a Special Agreement. The Intergovernmental Agreement established a shared understanding and intention between signatories to develop a global satellite communications system[26]. The Special Agreement was signed by a designated telecommunications entity from each country, which would fund and implement the construction and operation of the relevant infrastructure to run the satellite network. These telecoms entities could submit bids for contracts to conduct activities under the INTELSAT umbrella, for example building or renovating Earth stations. This meant that for many countries, INTELSAT operations functioned as a public-private partnership between governments and the biggest telecoms providers in that country.
The interim agreement also proposed that Comsat would manage the initial satellite network under the direction of voting states in exchange for providing the majority of the initial capital[27]. This level of control by Comsat was a point of frustration for many European countries, who viewed Comsat as using the US’ financial and technical resources to shape INTELSAT to generate business opportunities for US aerospace firms[28].
Decision-making under the interim agreements was done via a weighted voting share method. Comsat was given 60.1% of votes and the next largest member state had just 17% of votes[29]. To counterbalance US influence, certain significant decisions, such as satellite launches and changes to INTELSAT’s governing standards, had to involve 12.5% of votes in addition to the US’ vote[30].
Between 1964 and 1973 the parties involved in INTELSAT undertook negotiations to explore ways of gradually replacing the Comsat-dominated governance structure with a more multilateral one. These agreements led to a reduction in Comsat’s voting share, from 60.1% to 40%, and an agreement for Comsat to gradually cease its management of INTELSAT[31].
Technical collaboration through INTELSAT
Particularly in its early days, INTELSAT involved in-depth technical collaborations on some aspects of communications satellite technology. Earth stations—radio facilities with equipment to catch and route satellite signals to local recipients—were widely created and maintained through collaborative efforts.
One of the main avenues of technical cooperation related to INTELSAT was a series of national technical seminars focused on Earth station operation and maintenance, organised by various trade groups throughout the 1960s and 70s. According to Evans and Lundgren, the topics ranged from how to correct for various types of signal interference to how to assemble and train technical staff[32]. While the Earth stations were owned and operated by citizens of the country in which they were located, most of the seminars primarily involved US and UK speakers giving non-interactive trainings to the receiving countries’ Earth station managers in extraordinary detail[33] – for example, dictating that Earth stations should include specific bookshelves with a pre-specified set of books on satellite communications for reference[34]. The lecture-based format of these seminars reportedly frustrated many of the Earth station operators, who wanted the chance to ask questions about their specific implementation challenges[35].
Contrary to what you might expect, the construction and operation of Earth stations is a significantly more expensive and laborious process than building the satellites themselves[36]. As a result, the seminars had to become more interactive and tailored to local systems as successively more complex modifications were needed on the Earth stations. For example, the INTELSAT-V satellites launched in 1979 and 1980 required that Earth station operators begin using dual polarization and new frequency bands – something which required that all INTELSAT Earth stations have their antennas adjusted or refitted[37].
These collaborations sometimes included sharing detailed information related to both Earth stations and INTELSAT’s broader technical standards with the Soviet Union. For example, by 1979 INTELSAT Earth stations were in use in the USSR, Cuba, and Romania[38]. One key reason for this open sharing, revealed by CIA documents from the early 1970s, is that the US viewed INTELSAT’s lead in communications infrastructure as too great for Intersputnik, the Soviet-led organisation of communication satellite operators, to compete with as a global system[39]. As a result, a key tenet of US strategy regarding the USSR in the satellite communications domain was to incorporate the USSR into INTELSAT[40] to more easily provide global coverage using INTELSAT infrastructure[41].
However, certain particularly sensitive aspects of the program were not widely shared – in particular, launch stations. Between 1965 and 1983 all of INTELSAT’s launches were done at NASA’s Cape Canaveral launch station in Florida[42]. Even by the mid 1980s, launch station proliferation was relatively controlled within INTELSAT, with the Kourou station in French Guiana being the only non-US operated launch system in use for INTELSAT launches[43]. A key reason for this caution was that ICBMs and communications satellites were launched using similar types of heavy-duty launchpads and fuel systems[44]. The Russian Dnepr[45] and Chinese Kaituozhe[46] launch systems of the 1990s, both derived from converted ballistic missiles, illustrate how interchangeable the two technologies are. Although not formally written into INTELSAT’s governing documents – likely since INTELSAT did not have strong means of enforcing compliance with its restrictions – the US applied export restrictions[47] to key components of satellite launch stations to mitigate this dual-use risk.
The fact that INTELSAT allowed countries to cheaply access satellite communications presented a disincentive for other countries to develop their own costly launch systems. However, geopolitical interests did sometimes mean that a few countries were willing to pay this price. The French government, which was one of the leading voices of opposition to US dominance in INTELSAT, developed its own satellite launchers in French Guiana in order to ensure it was not economically dependent on US launch stations for its satellites[48].
INTELSAT's governance model demonstrates a successful strategy for international collaboration on dual-use technology that balanced collective benefits with strategic control. By sharing Earth station technology widely while centralizing launch capabilities, the US was able to leverage its technical lead to shape global satellite communications while managing proliferation risks. This selective sharing approach enabled developing nations to access advanced communications infrastructure decades earlier than market forces would have allowed, while simultaneously raising the costs for countries seeking to develop parallel military satellite networks. As the technology matured and became more complex, collaboration necessarily evolved from unidirectional knowledge transfer to more interactive exchange, revealing how technical complexity shapes collaboration dynamics.
Lessons to draw from communication satellites collaboration for international AI collaborations
The INTELSAT case offers some important parallels for AI governance:
- When one country leads in both AI capabilities and safety expertise, it may be able to set stronger terms for international collaboration than if multiple countries had comparable AI development programs. INTELSAT’s creation was greatly influenced by the US’ technological lead in satellite communications over the USSR and European states. As a result, the US had more leverage over how INTELSAT would be organised, even while allowing for a more equal distribution of power over time.
- Exchanges on more complex technologies require extended dialogue between developers rather than one way information flows. As INTELSAT’s Earth stations became more complex, seminars on how to maintain and upgrade them had to change their structure from lecture-formats to interactive workshops. A similar dynamic could plausibly arise related to engineering challenges for implementing AI safety techniques. This suggests that we should structure international AI safety collaborations in such a way that the participants have many chances for open dialogue.
- Providing countries with affordable access to safe and useful AI applications may reduce the risk of competing governance coalitions. A key worry among some AI governance researchers is that regional coalitions will form divergent governance norms around AI, which could reduce interoperability and lead some shared risks to be neglected[49]. INTELSAT may provide a proof of concept for how wider benefit sharing can help standardise regulations globally.
However, in many ways analogies between INTELSAT and AI governance are flawed, which limits their applicability to the AI safety collaborations case:
- AI systems are much more dual-use than communications satellites. As a result, the downside risks of proliferating harmful uses of AI may be significantly higher than they are in the satellite case.
- The relationship between capabilities and hardware is much more complicated for AI than for satellites. For satellites, capabilities scale predictably with physical components, but AI capabilities can vary on identical hardware depending on what model is being run.
- Advanced AI systems may be inherently dangerous in a way that communications satellites are not. The primary risks related to communications satellites stemmed from the possibility that they could advance adversaries’ military effectiveness through improved communications or through launch infrastructure that could be repurposed for ICBMs. However, concerns about catastrophic misalignment and the misuse of AI by non-state actors mean that non-proliferation is not a sufficient condition for safe development.
| Key lessons for AI safety from INTELSAT collaborations | |
| --- | --- |
| Transfer scope | Designs for building and operating Earth stations to send and receive signals were shared widely, including with rival states (USSR, Cuba, etc.). Satellite launch pads, which could be repurposed for ICBM launches, were not shared. |
| Safeguards | Collective ownership of expensive, cutting-edge infrastructure; prohibition on using INTELSAT equipment for military communications; decisions about how to manage and monitor infrastructure were made by weighted voting. |
| Lessons for AI safety collaboration | A leader in both capabilities and safety expertise can set the terms of collaboration; complex technologies require interactive, two-way exchange; wider benefit sharing can help standardise governance norms globally. |
Nuclear Security Cooperation between the US, Russia, and China (1994-2007)
Brief overview of nuclear security cooperation as a sensitive technology collaboration
Preventing unauthorised use of a country’s nuclear weapons is a key point of mutual interest between competing states due to the disastrous consequences of unintended nuclear strikes. Because of this, there are numerous cases of competing states transferring nuclear security techniques to one another to reduce nuclear risk. Between the 1960s and 2000s, the US engaged the USSR/Russia, China, and Pakistan in a number of technical exchanges aimed at reducing these risks. For brevity, I will focus on the US-China ACE (1994-1999) and US-Russia WSSX (1994-2005) programs; a number of other relevant exchanges, such as the Joint Verification Experiments between the US and USSR (1987-1988), tell a similar story[52].
I categorize the shared techniques into two groups: those separate from the weapons’ launch systems and those integrated with the launch systems. Interventions not involving the launch systems aimed to reduce the risk of theft of nuclear materials by rogue actors, while those integrated into the launch systems targeted risks of unintended or unauthorised launches of nuclear weapons. For clarity, I’ll refer to these categories as launch-integrated and launch-separate technologies. These approaches showed both the potential and the challenges that arise when collaborating on sensitive technologies to reduce catastrophic risks.
The U.S.-China ACE program exemplifies exchanges on launch-separate technologies and will take up the majority of this section. Following this, I will examine the attempted transfers of launch-integrated techniques, such as Permissive Action Links (PALs) and Environmental Sensing Devices (ESDs), between the US, China, and Russia. I focus primarily on the launch-separate collaborations due to greater publicly available data and greater political feasibility.
The U.S.-China Arms Control Technical Exchange (ACE) program (1994-1999)
Overview of the ACE
The ACE was a set of technical exchanges between scientists at US National Laboratories and Chinese nuclear energy scientists and military engineers that took place between 1994-1999[53]. On the Chinese side, participants came from the China Academy of Engineering Physics (CAEP) and its Institute for Applied Physics and Computational Mathematics (IAPCM), as well as the China Institute of Atomic Energy (CIAE). The CAEP focuses on R&D for nuclear weapons for the People’s Liberation Army, while the CIAE is a civilian nuclear energy organisation[54]. On the US side, the participants included scientists at Sandia, Lawrence Livermore, and Los Alamos National Laboratories[55] – facilities owned by the US government but operated by contractors under Department of Energy oversight. While technically not government agencies, these labs function as direct extensions of US government R&D programs, with nearly all funding, research priorities, and security protocols controlled by federal authorities[56]. The exchanges were narrowly targeted at a set of nuclear materials management, treaty verification, and nuclear export control activities[57]. It should be noted that despite their broad technical remit, the exchanges were small in scale, with the funding allotted to conducting and supervising them equivalent to about 2.2 full-time staff over the program’s duration[58].
The exchanges were initially proposed by Deputy Assistant Secretary of State Robert Einhorn in July of 1994[59] and were prompted by a variety of factors on each side. For the Chinese, greater engagement with international nuclear nonproliferation activities was a means of mitigating the international isolation China experienced after the 1989 Tiananmen crisis. The crisis had also exposed severe tensions between parts of the military and the CCP leadership, which caused both US and CCP leaders to worry about the controllability of China’s nuclear weapons[60]. On the US side, the primary motivations were reducing the risk of accidental nuclear conflict involving China and incentivising China to exert diplomatic pressure on Iran’s and North Korea’s nuclear programs by integrating it into international nonproliferation standards[61].
Concretely, the ACE involved visits to US nuclear weapons facilities by scientists from CAEP and IAPCM and one visit of US nuclear scientists to Beijing. These visits were a mix of workshops on specific nuclear security practices, demonstrations of Chinese implementation of the techniques, and in some cases periods in which CAEP scientists would work out of US labs for a period of multiple months.
Going into the program, officials from both countries were aware that it introduced risks of espionage or accidental disclosure of nuclear secrets[62]. Sig Hecker, former director of Los Alamos, claimed that exchanges between the US and China were even riskier than exchanges with Russia, since the Chinese nuclear weapons program was less advanced and therefore benefited more from each exchange than the Russians could[63]. The Chinese were similarly wary of disclosing sensitive information through certain exercises.
To account for this risk, the collaborations were housed in a unique organisational structure, which aimed to balance scientific independence with political oversight. The collaboration did not directly involve politicians from either side. Instead, the collaborations were nominally unofficial but were overseen by a steering committee from each country’s government, which agreed upon high-level objectives, pre-approved topics for inclusion in the exchanges, provided funding, and selected participants[64]. On the Chinese side, the program was funded and overseen by the New Committee on Science and Technology, made up of civilian and military nuclear scientists. Likewise, the US participants were overseen by an Interagency Contact Group primarily made up of State Department and Department of Energy officials[65]. The Interagency Contact Group provided guidance on which subjects were suitable for exchange and pre-approved all topics involved in the exchange.
The steering committee also set in place several espionage-prevention measures. For example, they conducted background checks on Chinese participants to verify their roles at Chinese nuclear institutions, ran briefings for US participants on how to respond to potentially sensitive questions from Chinese participants, and designated what areas of the national labs the participants would be allowed to visit[66].
In January 1999, a report released by a House select committee chaired by US Representative Christopher Cox alleged that the lab-to-lab exchanges had enabled widespread Chinese espionage and theft of nuclear secrets[67]. Although many of the core claims of Cox’s report have since been discredited, its publication led to the cancellation of the program.
Explanation of the ACE Programs
The collaborations focused on the following objectives[68]:
- Demonstrating new techniques for the protection, control, and accounting of nuclear materials. This aimed to help prevent unintentional proliferation of nuclear weapons through a third party acquiring materials such as enriched uranium or plutonium from Chinese weapons storage sites.
- Increasing understanding of and confidence in the International Monitoring System used to verify the Comprehensive Test Ban Treaty, which was signed by both countries in 1996[69]. This would help to enable trust that signatories to the treaty were not covertly conducting nuclear tests.
- Building out cooperative monitoring technologies to improve visibility into the size and status of each other’s nuclear arsenal. Projects also aimed at more specific use cases, such as monitoring the status of shipments of nuclear material.
- Strengthening existing nuclear weapons export control regulations by improving access to technical experts.
- Sharing information related to nuclear energy safety, monitoring, and waste storage.
I will focus on the first two sets of projects for clarity and brevity.
Nuclear Management, Protection, Control, and Accounting (MPC&A) project
The MPC&A project culminated in the US and China collaboratively building a prototype MPC&A facility in the CIAE’s Nuclear Materials Safeguards Laboratory in Beijing[70]. Advancing Chinese MPC&A techniques was deemed especially important because Chinese nuclear storage facilities had received less attention from the international arms control community than those in the former USSR or Pakistan[71]. Partially as a result, fissionable materials were believed to be stored at a number of civilian facilities without military security measures[72].
The MPC&A project consisted of a series of workshops, dialogues, and equipment demonstrations primarily held in Beijing. The project saw US and Chinese scientists discussing their current methods for storing and monitoring fissionable nuclear material and deciding on key areas for improvement in the Chinese system. In 1995-96 two meetings were held between US National lab employees and Chinese scientists from the Institute for Applied Physics and Computational Mathematics in Beijing to make plans for the later workshops and demonstrations.
These two workshops were carried out in 1997 and 1998. The first, a two-week workshop run by Sandia National Laboratories at the Institute for Atomic Energy in Beijing in March 1997, focused on how to design physical protection systems for nuclear storage sites. The second workshop, in 1998, focused on detailed explanations of how to identify key places to install monitoring equipment such as measurement gauges, security cameras, and alarm systems for material held at the storage sites[73]. Importantly, this did not involve actually installing the equipment; it was instead designed to allow Chinese scientists to identify key sites at which to install their own monitoring systems. The other concrete outcome of these workshops was the development of a US-China glossary of MPC&A terminology, which would be used to guide future collaborations[74].
The flagship project of the MPC&A program was the building and demonstration of a prototype Western-style nuclear materials management site at the CIAE’s Nuclear Materials Safeguards Laboratory in Beijing. The US side provided a good deal of equipment, including metal detectors, security cameras, encrypted locks, and a barcode system for all material in the facility. The Chinese side provided the facility itself, fencing, and non-destructive assay equipment – monitors which detect fissionable nuclear material without damaging it[75]. Reportedly, both sides were pleased with the outcome of the project and met at the end of 1998 to plan additional collaborations. One involved jointly implementing these MPC&A techniques at an active Chinese nuclear fuel fabrication plant, and another involved an on-site experiment at the Northwest Institute of Nuclear Technology (NINT) in Xi’an focused on improving the monitoring of nuclear tests[76].
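The accounting half of an MPC&A system of this kind can be pictured with a toy sketch (all item IDs, masses, and tolerances below are invented for illustration; this is not a description of the actual CIAE system): every barcoded item is registered in a book inventory, and periodic physical inventories are reconciled against it so that missing, altered, or unrecorded material is flagged.

```python
# Toy materials-accounting ledger in the spirit of MPC&A "accounting":
# barcoded items are registered, and a physical inventory is reconciled
# against the book inventory to flag discrepancies. Item IDs, masses,
# and the tolerance are invented for illustration.

class MaterialsLedger:
    def __init__(self):
        self.book = {}  # barcode -> recorded mass in grams

    def register(self, barcode, mass_g):
        self.book[barcode] = mass_g

    def reconcile(self, physical_inventory, tolerance_g=0.5):
        """Compare a physical inventory {barcode: measured mass} to the books.

        Returns a list of (barcode, problem) discrepancies.
        """
        issues = []
        for barcode, recorded in self.book.items():
            if barcode not in physical_inventory:
                issues.append((barcode, "missing from physical inventory"))
            elif abs(physical_inventory[barcode] - recorded) > tolerance_g:
                issues.append((barcode, "mass discrepancy"))
        for barcode in physical_inventory:
            if barcode not in self.book:
                issues.append((barcode, "unrecorded item"))
        return issues

ledger = MaterialsLedger()
ledger.register("PU-0001", 1200.0)
ledger.register("PU-0002", 850.0)

# One registered item is absent from the physical inventory, so the
# reconciliation flags it.
print(ledger.reconcile({"PU-0001": 1199.8}))
```

The point of the sketch is that detection of diversion comes from routine reconciliation against the books rather than from any single sensor, which is why the barcode system mattered alongside the cameras and locks.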
Overall the ACE’s MPC&A program is believed to have generated meaningful improvements in the security of Chinese nuclear material storage. However, due to the prohibitive costs to China at the time and a lack of ongoing collaboration, China is still believed to lack many aspects of a modernized nuclear monitoring system[77].
Improving monitoring systems to enable verification of the Comprehensive Test Ban Treaty
Another key objective of the ACE was to improve monitoring systems that would enable CTBT signatory states to verify that others were not covertly conducting or preparing for nuclear tests. A key way this is done is through onsite inspections by staff from the Comprehensive Nuclear-Test-Ban Treaty Organisation and through monitoring seismic data for spikes caused by the detonation of nuclear weapons. As part of the ACE, US labs held a workshop and dialogue with the CAEP regarding how the data from these verification measures would be handled and verified[78], as well as how the onsite inspections could be done.
In 1996 a joint experiment was proposed by Sandia National Laboratory to support the seismic monitoring front. A key obstacle to accurate seismic data monitoring is understanding the background level of seismic activity present in a country from non-nuclear sources, such as mines, industrial plants, and natural geological activity. The details of the proposed experiment are not entirely public, but it most likely involved setting off controlled explosions of known yields near both countries’ nuclear sites, then analysing statistical differences in the monitored seismic activity to calculate the baseline level of activity in the area. NINT, which is believed to run China’s largest nuclear test site, declined to participate for fear of inadvertently revealing information which could be used to determine the yields of past nuclear tests[79]. Ostensibly, such calibration data would have allowed Sandia to look through historical seismic data from around the times of known Chinese nuclear tests and estimate the size of the yields over time.
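NINT's concern can be made concrete with a simplified sketch (all numbers invented; this is not the actual Sandia protocol). Calibration shots of known yield let one fit the standard empirical relation between yield and recorded seismic magnitude, m = a + b·log10(yield); inverting that fitted relation against archived magnitudes is precisely the inference that would reveal past test yields.

```python
import math

# Sketch of yield estimation from seismic magnitudes (invented numbers,
# not the actual experiment). Assume the empirical relation
# m = a + b*log10(yield_kt); fit a and b from calibration shots of known
# yield, then invert the relation for an observed magnitude.

def fit_magnitude_yield(calibration):
    """Least-squares fit of m = a + b*x, where x = log10(yield in kt)."""
    xs = [math.log10(y) for y, m in calibration]
    ms = [m for y, m in calibration]
    n = len(xs)
    xbar, mbar = sum(xs) / n, sum(ms) / n
    b = sum((x - xbar) * (m - mbar) for x, m in zip(xs, ms)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = mbar - b * xbar
    return a, b

def estimate_yield(magnitude, a, b):
    """Invert m = a + b*log10(yield) to recover yield in kilotons."""
    return 10 ** ((magnitude - a) / b)

# Hypothetical calibration shots: (yield in kilotons, observed magnitude),
# generated from a=4.0, b=0.75 purely for illustration.
calibration = [(1.0, 4.0), (10.0, 4.75), (100.0, 5.5)]
a, b = fit_magnitude_yield(calibration)

# An archived magnitude-5.2 event would then be attributed a yield of
# roughly 40 kt.
print(round(estimate_yield(5.2, a, b), 1))
```

The sketch shows why calibration data is dual-use: the same fit that establishes a monitoring baseline also unlocks retrospective yield estimates from historical seismograms.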
Following the formal conclusion of the ACE activities at the end of 1998, a US House Select Committee released a report which claimed the Chinese military had engaged in a massive espionage campaign against the US military[80]. The report, known as the Cox Report, claimed that the ACE program and other exchanges had enabled Chinese actors to steal designs for nuclear weapons and other military equipment through lax security practices[81]. This led to massive public outcry, both in the US and China, and a substantial souring of relations.
While fully adjudicating the Cox Report’s claims is outside the scope of this article, academic consensus points towards many of the report’s claims being inconsistent with the available evidence and in some cases fully contradicted by it. For an overview of refutations of the Cox Report, see “The Cox Committee Report: An Assessment” by May et al. 1999[82]. To give one concrete example, the US government detained a Los Alamos employee, Wen Ho Lee, for over nine months on 59 charges related to stealing nuclear secrets on China’s behalf. However, in the end all charges aside from one count of “illegal retention” of defense information were dropped, and Lee later successfully sued the Department of Justice for damages[83]. Nonetheless, the report’s publication put an end to planned future exchanges under the ACE.
Discussion of the successes and failures of the ACE programs
When assessing the success of the ACE programs, the primary factors to consider are: 1) whether the intended nuclear security techniques were transferred, 2) whether the exchange enabled espionage or other unwanted exchange, and 3) what effects the exchange had on future exchanges.
Prior to the release of the Cox Report in January 1999, the ACE programs were fairly successful on their own terms. While they did not produce groundbreaking nuclear safety breakthroughs, they did allow US and Chinese nuclear scientists to build significant trust and shared understanding relevant to future collaborations. Indeed, Jeff Ding claims that “up until the Cox report’s publication, US and Chinese scientists were still trying to speak the same language”, but that the program may have set the groundwork for more intensive exchanges later on[84]. Furthermore, the prototyping of MPC&A techniques did represent a non-negligible improvement in nuclear materials storage and monitoring.
In retrospect, it would have been unrealistic to expect the exchanges to produce groundbreaking results immediately. Part of the reason for this was that the Chinese nuclear community was relatively inexperienced in many key areas required for meeting international nonproliferation standards[85]. For example, Nancy Prindle claimed the Chinese scientists taking part in the lab-to-lab exchanges in 1997-98 were all either over 60 or under 30, the destruction of China’s scientific community during the Cultural Revolution having claimed much of the middle generation[86]. In another case, Sandia Labs chief scientist Clyde Layne claimed that Chinese scientists had confused PALs with a different nuclear security technique altogether[87].
Another modest success of the program was the prevention of unwanted knowledge exchange. Based on what information is publicly available and recent analysis of the Cox Report, the ACE program did not meaningfully enable Chinese or American espionage.
Prior to the release of the Cox Report, the ACE was largely successful in building the trust needed for future exchanges. One piece of evidence for its success in fostering shared understanding and building relationships between the two countries’ technical communities is that the workshops on export controls in particular were endorsed by senior officials from CAEP and IAPCM. One reportedly said “we now have a clear picture of the function between the U.S. government, labs, and technical experts”[88], suggesting that they had previously been unclear about what role each part of the US nuclear community played in creating and enforcing arms control regulations. This seems meaningfully relevant to today’s AI governance context, in which US and Chinese commentators frequently overstate how unified the other country’s AI development strategy is[89].
Collaboration on launch-integrated technologies between the US and China (1994-1999) and the US and Russia (1994-2005)
While the ACE program focused on launch-separate technologies, competing countries have also attempted to collaborate on launch-integrated nuclear security technologies, with mixed results. Drawing on Jeff Ding's analysis of Permissive Action Links (PALs) and Environmental Sensing Devices (ESDs) (specialized locks and sensors connected to warhead launch mechanisms[90]), I will highlight specific factors that determined whether these more sensitive technologies could be successfully transferred. The key patterns of PAL collaboration between 1960-2005 offer particularly relevant insights for AI safety collaboration because they show how technical details, institutional relationships, and design abstractions influence the feasibility of sharing sensitive safety mechanisms.
I will briefly cover two such cases: US discussions around whether to share launch-integrated technologies with China during the ACE program, and US-Russian collaboration through the WSSX program. In both cases concern over internal instability partially motivated the exchange; however, only in the WSSX case were complex launch-integrated technologies successfully shared.
US-China discussions of whether to share launch-integrated technologies
Before, during, and after the ACE program, Chinese nuclear scientists sought access to US PALs to secure their weapons systems. Danny Stillman, a Los Alamos employee who independently visited Chinese nuclear sites for decades, claims that in the early 1990s Chinese scientists requested US assistance on developing older-generation PALs[91].
These older PALs were likely no longer in use in US nuclear systems by this time and were extremely unlikely to be useful if espionage was the goal. Despite this, the assistance was never granted. One key issue was that the US was allegedly unsure about how to share information about the present generation of PALs without accidentally revealing too much information about US weapons systems[92]. Jeff Ding argues the lack of pre-existing relationships between US and Chinese scientists contributed to this dilemma due to a lack of shared tacit knowledge[93].
A key question, then, is why older-generation PALs were not transferred. Plausibly, the domestic consequences of being perceived as enabling Chinese espionage were part of the reason, given the intensely negative public perception of China following the Tiananmen crisis and how relatively easy it would have been to share basic PALs. Regarding early versions of PALs, Thomas Schelling commented, “Once you have the concept, a 12-year-old could comprehend the mechanics within minutes”[94]. This suggests the practical risks of sharing them were low. Sandia’s Nancy Prindle claimed any collaboration with the Chinese faced significant opposition from interagency processes[95], so it is possible that the transfer of old PALs was one item blocked by the ACE’s steering committee.
US-Russia Warhead Safety and Security Exchange (1994-2005)
Following the collapse of the Soviet Union, the US and Russia engaged in two primary collaborations designed to prevent former Soviet nuclear weapons from proliferating to terrorists or other governments: the Nunn-Lugar Cooperative Threat Reduction (CTR) program and the Warhead Safety and Security Exchange (WSSX). CTR focused on the broad goal of securing and destroying chemical, biological, radiological, and nuclear weapons materials across the whole of the former USSR[96]. WSSX was a much more narrowly targeted technical exchange, similar in scope to the ACE program.
The WSSX agreement was signed in December 1994[97] and involved a series of lab-to-lab exchanges based in Russia which continued until 2005. The agreement specified that the exchanges would cover techniques for safely dismantling weapons, securing buildings containing weapons, and other assorted topics related to nuclear security[98].
Two key projects were supporting the Russian scientists in building an automated system that monitored the location, status, and health of all nuclear weapons across a set of facilities called TOBOS[99], and building a mechanism for providing transparency about the warheads in Russia’s arsenal[100]. Because TOBOS was automatically connected to the warheads themselves as well as to their immediate environment, it encompassed a number of sensitive launch-integrated aspects of the weapons.
According to interviews with participants, a key factor in the ability of the collaborations to achieve this was the depth of experience and personal familiarity the scientists had with one another from previous collaborations[101]. As a result, the US scientists were often able to provide useful guidance about whether their Russian counterparts were on the right track by ruling out engineering directions that didn’t work, without explicitly telling them what they had done themselves[102].
Aside from TOBOS, a key project was building a system that would allow for increased transparency around changes in Russia’s nuclear arsenal[103]. Because a key focus of the US-Russian relationship in the late 90s was destroying excess nuclear warheads, it was necessary to devise a means of validating claims that a warhead had been destroyed. Normally the way to verify that a weapon has been dismantled is to compare measurements of the radiological signature of the weaponised material over time. Each batch of fissionable material has certain unique characteristics which change in predictable ways when it is de-weaponised. However, providing this full radiological ‘fingerprint’ also gave away information about the age, mass, and isotopic composition of the material – some of which was controlled under Russian non-proliferation laws[104]. As a result, a new mechanism needed to be created which disclosed less sensitive information.
In 2000 this mechanism was unveiled at the Fissile Materials Transparency Technology Demonstration at Los Alamos. The proposed mechanism was called the Information Barrier (IB) system. Although the IB system was never used to verify nuclear agreements in practice due to later diplomatic obstacles, it was considered by participants to be a workable privacy-preserving method for verifying information about Russian nuclear weapons.
It operated by allowing the US side to get yes/no signals to pre-defined queries about specific warheads’ radiological profiles through a series of read-only devices. For example, you might ask “does this container hold weapons-grade plutonium with a ratio of Pu-240/Pu-239 less than 0.1?” and get a ‘yes or no’ response[105]. This was used to assess the presence of high-explosive devices within a given sample. This preserved greater privacy about the container itself and helped comply with Russian non-proliferation laws since it did not reveal the isotope composition of the plutonium.
The IB system had six discrete components:
- Data barriers which physically separated the measurement devices from the output consoles to prevent tampering.
- Volatile CD-ROM drives which allowed only short-term storage of the results.
- Single-function yes/no display systems which showed the outputs to questions.
- A separate ‘security watchdog’ system which shut the IB system down if it detected physical tampering or software irregularities.
- Physical shielding to prevent electronic signals from leaking in or out.
- Procedural restrictions like metal detectors and physical monitoring of entrances and exits to the IB system container[106].
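The core privacy property of the IB design, namely that only a pre-agreed boolean ever crosses the barrier while the raw measurement stays inside, can be sketched abstractly in software (attribute names, thresholds, and query names below are invented; the real IB was a hardware system, not a Python program):

```python
# Abstract sketch of an information-barrier-style check: the inspecting
# side only ever receives True/False answers to pre-approved predicates;
# the raw measurement never crosses the barrier. Attribute names and
# thresholds are invented for illustration.

# Predicates must be agreed in advance; arbitrary queries are refused.
APPROVED_QUERIES = {
    "is_weapons_grade": lambda m: m["pu240_to_pu239_ratio"] < 0.1,
    "contains_plutonium": lambda m: m["pu_mass_g"] > 0.0,
}

class InformationBarrier:
    def __init__(self, measurement):
        # The raw measurement stays private inside the barrier object.
        self._measurement = dict(measurement)

    def query(self, name):
        if name not in APPROVED_QUERIES:
            raise ValueError("query not on the pre-approved list")
        # Only a boolean result crosses the barrier.
        return bool(APPROVED_QUERIES[name](self._measurement))

# Host side: raw radiological data measured behind the barrier.
barrier = InformationBarrier({"pu240_to_pu239_ratio": 0.06,
                              "pu_mass_g": 4500.0})

# Inspecting side: learns only yes/no, not the ratio or mass itself.
print(barrier.query("is_weapons_grade"))    # True
print(barrier.query("contains_plutonium"))  # True
```

Restricting the query vocabulary in advance mirrors the IB's single-function displays: an inspector cannot phrase a new question on the spot that would leak the controlled isotopic composition.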
Overall the WSSX is generally considered to have been a successful collaboration on complex nuclear security techniques due to the creation of the TOBOS and Information Barrier systems, which enabled improved monitoring of nuclear materials and basic verification of the presence of weapons-grade material. Importantly, actual implementation of some of these techniques in real-world verification regimes was curtailed by the non-extension of the WSSX program in 2005, caused by a mixture of diplomatic breakdown and rising project costs. Despite this, the technical achievements remain relevant as a proof of concept for privacy-preserving verification methods in international AI safety collaborations[107].
Lessons for AI safety collaborations from nuclear security
The history of nuclear security collaborations offers a number of lessons for AI safety collaborations:
- Collaborations focused on preventing theft or unauthorised use of nuclear weapons by non-state actors were on the whole more successful than ones focused on reducing nuclear escalation risks between states. This suggests a potentially promising avenue for future collaborations in the AI safety domain.
- Whether or not there were preexisting relationships between scientific communities was a key determinant of whether complex safety technologies could be successfully transferred. This was because through shared tacit knowledge and experience, techniques could be implemented with less exact guidance.
- Building small-scale, lower-priority verification systems can be useful for building trust and developing methods for larger-scale verification later on, as was the case with the Joint Verification Experiments helping to facilitate the WSSX.
However, some key disanalogies exist between these exchanges and the present context of AI safety:
- Unlike nuclear weapons, AI systems have significant commercial and scientific applications which we will want to distribute widely. As a result, the restriction of access to powerful AI models to a small number of actors may not be desirable.
- Today, the international balance of power is more complex than in the 1990s. While both the USSR and China were embroiled in crisis and diplomatic isolation, the period was relatively stable for the US.
- Algorithmic progress and the open-sourcing of AI models may mean access is not restricted to a small number of actors by default, as was the case with nuclear weapons. Therefore, non-proliferation efforts in the AI domain will begin from a very different starting point to the nuclear weapons domain.
- There is broad consensus that we should avoid using nuclear weapons where possible. There are substantial disagreements about the risks presented by advanced AI systems.
| Key lessons for AI safety from nuclear security collaborations | |
| --- | --- |
| Transfer scope | Techniques, locks, sensors, and accounting systems for monitoring and storing nuclear materials. Creation of low-level verification measures, mainly through improving the ability to detect nuclear tests through non-invasive measures (e.g., the Information Barrier system, atmospheric and seismic modelling). Requests were made to transfer launch-integrated technologies (e.g., PALs), but these were less commonly fulfilled than MPC&A-related transfers. |
| Safeguards | Pre-approval of collaboration topics and scope by national security staff; collaborations carried out solely by nuclear scientists. Participating countries consented to some counterintelligence measures being undertaken. |
| Lessons for AI safety | Lower-level confidence-building measures are important for building trust and establishing norms around how to conduct these exchanges. Preexisting relationships between scientific communities were a key determinant of whether complex safety technologies could be successfully transferred. The abstractness of the technologies involved was a key determinant of how feasibly launch-integrated technologies could be securely transferred. A similar organisational structure to the ACE program could apply to AI development, especially if collaboration is carried out primarily by private-sector researchers. |
The encryption standard setting process
Overview of the international encryption standard setting process as a sensitive technology collaboration.
One key difference between the collaborations discussed so far in this paper and AI is that model weights and algorithms can be transferred as software, while PALs and satellite launch stations cannot. A key area where states have engaged in strategic transfers of sensitive software is the crafting and standardisation of encryption algorithms. International encryption standards allow for interoperable secure communications between countries and ensure that participating countries are meeting reasonable security standards. However, subverted encryption algorithms may allow certain actors to secretly exploit known vulnerabilities and decrypt supposedly secure communications. I will consider cases that illustrate both the benefits and risks that have historically come from international collaborations on encryption algorithms: the creation of the Advanced Encryption Standard (AES) in 2000 and the controversy surrounding the NSA’s submission of compromised standards to NIST and the ISO.
The AES case represents an alternative avenue to direct government-to-government technology transfer through the use of an unusually transparent process in a strategically important domain. It also demonstrates how beneficial technological collaborations can occur through voluntary standards.
Background on the Advanced Encryption Standard
The AES was created in 1997-2000 through an international competition run by the US National Institute of Standards and Technology (NIST) to replace the Data Encryption Standard (DES) as the US’ standard means for encrypting non-classified communication. Since its adoption in the 1970s, DES was widely used among US government agencies and was also used to encrypt most financial transactions carried out through companies like Europay, Mastercard, and Visa. However, by the mid-1990s academics and cryptography activists had demonstrated that it was increasingly vulnerable to attack due to improvements in computing power[108]. As a result, in 1997 NIST sought to replace DES with a more secure alternative.
A key issue from NIST’s perspective was that it could not force government agencies or private companies to use its encryption standards, meaning that it had to win acceptance from the global cryptography community in order to make the case that redesigning products to run on the new algorithm was worthwhile[109]. However, NIST faced a number of constraints. First, there was a growing lack of trust in NIST’s independence from the NSA when it came to developing encryption algorithms[110]. Since 1993, US law enforcement had promoted a classified key-escrow algorithm called SKIPJACK, designed to let law enforcement agencies decrypt the communications of suspected criminals[111]. Second, products with the full 56-bit version of DES required a special license in order to be exported, so generally only weaker versions of DES-enabled products could be sold internationally. In a bid to gain goodwill, NIST announced in January 1997 that it would run a transparent international contest to develop DES’ replacement, in which anyone could submit an algorithm for consideration. The winning algorithm would be certified as the AES and made available on a royalty-free, unrestricted basis, so that any firm wishing to adopt the AES could implement it without worrying about export control restrictions.
Before delving into the details of the contest, it will be useful to briefly explain some key terms in cryptography. Cryptography uses mathematical techniques to protect information. Block ciphers, like AES, encrypt data in fixed-size chunks (blocks) using a key. The key scrambles the original information (called the plaintext) in a way that can only be unscrambled with the same key. The key itself is a specific string of characters generated through a complex random process specified by the block cipher algorithm. After encryption, the scrambled information is known as the ciphertext[112]. Therefore, one of a block cipher’s main jobs is to use a sufficiently random process that the plaintext cannot be deduced from the ciphertext, even if someone has a general description of how the cipher works. The strength of encryption typically depends on key length: longer keys (measured in bits) provide more security but are usually more computationally expensive. One way an algorithm’s security is measured is by the number of operations an attacker would need to perform to recover the correct plaintext by brute force.
For modern encryption, security is typically expressed as the 2^n operations needed to break a cipher through brute force, where n is the key length in bits. With AES’s standard key lengths of 128, 192, or 256 bits, breaking encryption requires astronomical numbers of operations. This makes modern encryption computationally infeasible to crack, compared to the previous DES standard’s 56-bit keys, which are costly but possible to crack through brute-force search.
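The gap between 56-bit and 128-bit keys can be made concrete with a quick back-of-the-envelope calculation (a sketch only; the assumed rate of one trillion key guesses per second is illustrative, not a claim about any real attacker):

```python
# Back-of-the-envelope brute-force cost for different key lengths.
# The guess rate below is an illustrative assumption, not real hardware data.

GUESSES_PER_SECOND = 10**12          # assumed: 1 trillion keys tried per second
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_search(key_bits: int) -> float:
    """Expected years to exhaust half the keyspace of a key_bits-bit key."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits}-bit key: ~{years_to_search(bits):.3g} years")
```

Under these assumptions a 56-bit key falls in hours, while a 128-bit key requires on the order of 10^18 years, which is why the DES-to-AES transition mattered.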
The AES contest
The criteria for inclusion in the contest were as follows, quoted from Smid 2021[113]:
“(1) be a strong block cipher that would support commonly used modes of operation;
(2) be selected in a fair and open manner;
(3) be usable by both industry and the U.S. government worldwide;
(4) have a variable key size so that security could be increased when needed;
(5) be at least as secure as Triple DES; and
(6) be significantly more efficient than Triple DES.”
In addition, submissions were to include mathematically optimized software implementations of the algorithms in ANSI C and Java, variations in three key sizes, and a design rationale.
The contest was announced in January 1997, and at the First AES Candidate Conference in August 1998 NIST announced that fifteen algorithms from twelve different countries had been submitted[114]. The submitting countries were all US allies, primarily from western Europe and North America, but also included Israeli, Japanese, and South Korean entries. A second conference was held in March 1999, after which five algorithms were selected as finalists. In the end Rijndael[115], an iterated block cipher submitted by Belgian cryptographers Joan Daemen and Vincent Rijmen, was selected as the winner of the contest.
The procedure for evaluating and selecting the winning algorithm provides a good example of a transparent international collaboration leading to strong standards. For the purposes of brevity we will focus our analysis on describing the institutional arrangements surrounding the evaluation without diving into technical specifics. For a detailed treatment of the technical evaluations see Nechvatal et al. 2000[116].
Algorithms were evaluated by a selection team of staff from NIST’s Information Technology Laboratory on the basis of three main criteria: 1) Security, 2) Cost, and 3) Algorithm and Implementation Characteristics[117]. Security was evaluated by running known block cipher attacks on the cipher, mathematically examining the randomness of the ciphertext output, and examination of the choices discussed in the design rationale submitted alongside the algorithm. Cost referred to both the intellectual property entanglements that would prevent the algorithm from being distributed royalty free and its computational costs, since it would need to be run on a wide range of devices quickly. The algorithm and implementation characteristics category included issues like the extent to which the algorithm allowed the use of larger key sizes and its compatibility with various other commonly used algorithms. These factors were important for ensuring that the winning algorithm was not only secure, but also practically useful on a wide range of devices and programs.
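To give a flavour of the statistical side of the security evaluation, the sketch below implements a monobit frequency test, which asks whether output contains roughly equal numbers of zero and one bits. This is a deliberately minimal illustration; NIST's actual evaluations applied a far more extensive battery of tests and cryptanalytic attacks:

```python
# Minimal monobit frequency test: well-mixed cipher output should contain
# roughly as many 1 bits as 0 bits. Passing this test alone proves very
# little; it is one of many statistical checks applied during evaluation.

import random

def monobit_balance(data: bytes) -> float:
    """Fraction of bits set to 1 in the input."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (8 * len(data))

# Structured, plaintext-like data is typically biased...
biased = b"AAAAAAAA" * 100           # byte 0x41 has two 1 bits out of eight
# ...while output from a well-mixed source should sit near 0.5.
random.seed(0)
mixed = bytes(random.randrange(256) for _ in range(800))

print(monobit_balance(biased))       # 0.25 exactly for repeated 0x41
print(monobit_balance(mixed))        # close to 0.5
```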
A key part of the evaluation process was the two conferences NIST ran as part of the AES competition, as well as the substantial role of public comments in developing the evaluation criteria. At the First AES Candidate Conference in August 1998, the authors of all candidate algorithms gave presentations explaining their algorithms and took live Q&A from the 200 cryptographers in attendance. A request for comments launched in the Federal Register in September allowed the cryptography community to submit their assessments of the candidate algorithms as well as fine-grained guidance on how NIST should evaluate the candidates[118]. Two concrete outcomes of these requests for comments were the discovery of successful attacks on five of the candidates and the introduction of a “safety margin” as a measure of how an algorithm trades off security and computational cost[119].
In March 1999 the Second AES Candidate Conference was held in Rome. It included further questioning of the algorithm developers, discussions of potential modifications to each candidate algorithm, and a vote by attendees on which algorithms to include as finalists. While the vote was consistent with NIST’s actual selection of finalists, NIST’s selection committee had the ultimate say over which algorithms were chosen.
The process of public comments and a conference was repeated a third time, and on October 2, 2000 the Belgian candidate, Rijndael, was selected as the winner. While all finalist algorithms were deemed sufficiently secure, Rijndael was selected as the most efficient and flexible[120].
The AES gradually replaced the DES in US federal agencies and expanded to commercial applications around the world over the following decade. An impact assessment by NIST from 2019 claims the AES had become the de facto industry standard for firms in global finance, telecommunications, healthcare, digital cinema, road-vehicle security, and home networking[121]. The report estimated that between 2001 and 2017 the standardisation and royalty-free release of the AES created $250.6 billion of counterfactual net-present value to the US economy by averting costs from cyberattacks under DES, costs from purchasing encryption licenses, and market inefficiencies resulting from the use of multiple non-interoperable encryption standards[122].
While this estimate covers only the impact of the AES in the US, it was widely used worldwide, since large US-based firms commonly used it in overseas activity. Through a pair of validation programs, firms could submit their implementations of AES to NIST-approved labs in 29 countries to gain assurance that their products were properly secured in their AES implementation[123]. This suggests it likely had a significant impact on the cybersecurity of consumers in many countries.
The AES competition highlights international collaboration's value in securing sensitive dual-use technologies. NIST's transparent, open process invited global participation and extensive public review, building trust during a period of high skepticism about government intentions. By making the winning algorithm royalty-free and unrestricted, NIST ensured widespread adoption while maintaining security. This transparent approach demonstrates a potential model for AI governance: establishing different collaborative standards based on capability levels, pursuing open international cooperation on intermediate models while maintaining stricter controls for frontier systems. While classified communications still use separate algorithms, AES is trusted for most sensitive-but-unclassified information worldwide, suggesting that transparency and inclusion can be strategic advantages rather than vulnerabilities in governing sensitive technologies.
Failures of encryption standards as a case of international collaboration
While the AES competition is one example of positive international collaboration on encryption standards, the process also comes with risks. A key case of this is the Dual EC controversy, in which the NSA subverted standardisation processes at NIST and the ISO in order to standardise an algorithm with a secret vulnerability. While this is not the only case of a government manipulating international cybersecurity standards in order to gain a strategic advantage, it is a particularly severe one. This demonstrates how international technical collaborations can be compromised through subversion by intelligence agencies and how this can have additional unintended consequences.
Dual EC
The Dual_EC_DRBG (Dual Elliptic Curve Deterministic Random Bit Generator, henceforth Dual EC) controversy came to light in 2013 as a result of the Snowden leaks[125]. The controversy surrounded an algorithm used to generate random numbers within cybersecurity systems. Random numbers are used in cybersecurity for a variety of purposes. For example, a key in a block cipher should be generated through a random process so that attackers cannot find patterns in the differences between known plaintexts and ciphertexts and use them to guess future key strings.
However, perfect mathematical randomness is impractical to achieve, so pseudo-random number generators – like Dual EC – are sometimes used instead. Pseudo-random number generators are deterministic processes, so their outputs can be predicted by anyone who knows their internal state. To stay secure they rely on keeping certain information about the algorithm secret, for example the starting point on which its mathematical operations are performed. This means that while “it should be impossible to learn anything about the internal state of the algorithm based on the outputs”[126], the outputs can be fully predicted by anyone who does know the internal state.
For Dual EC specifically, the algorithm uses two points on an elliptic curve, P and Q, as its starting parameters. From there a series of mathematical operations generate a pseudo-random output. From any given output it is extremely difficult to work backwards to the generator’s internal state. But if P and Q were chosen so that a secret mathematical relationship between them is known, whoever holds that secret can recover the internal state from an output and predict supposedly random future outputs[127].
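The general failure mode – a deterministic generator becoming fully predictable to anyone who holds its hidden parameters – can be illustrated with a toy linear congruential generator. This is a deliberately insecure stand-in for exposition, not Dual EC itself:

```python
# Toy illustration of the Dual EC failure mode using a cryptographically
# useless linear congruential generator: an attacker who knows the hidden
# parameters and observes one output can predict every future output.

class ToyPRNG:
    # The hidden parameters; in Dual EC, the analogous secret is the
    # mathematical relationship between the points P and Q.
    A, C, M = 1103515245, 12345, 2**31

    def __init__(self, seed: int):
        self.state = seed

    def next(self) -> int:
        self.state = (self.A * self.state + self.C) % self.M
        return self.state

victim = ToyPRNG(seed=42)
observed = victim.next()            # attacker sees a single output...

attacker = ToyPRNG(seed=0)
attacker.state = observed           # ...and, knowing A, C, M, clones the state
assert attacker.next() == victim.next()
assert attacker.next() == victim.next()
print("attacker predicts every subsequent 'random' output")
```

The real backdoor required elliptic-curve mathematics rather than a cloned state, but the end result was the same: supposedly random output that one party could predict.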
The NSA exploited this weakness to gain unilateral eavesdropping capabilities on communications protected by the algorithm, by generating P and Q with a relationship known only to the agency[128]. To facilitate this, the agency covertly pushed for Dual EC to become a widely used standard.
In 2003 the ISO had begun work on an international standard related to random number generators, which did not include Dual EC. During a request for public comment, representatives from the NSA argued ISO’s proposed random number generators were insufficiently secure and lobbied for Dual EC to be included in the standard to remedy this[129].
Partly to support the ISO standardisation effort, the NSA covertly cooperated with NIST and American National Standards Institute (ANSI) to standardise Dual EC. In June 2004 ANSI adopted Dual EC as one of its standard pseudo-random number generators with NIST following suit in 2006[130].
During the NIST standardisation process researchers contacted John Kelsey, the author named on the NIST proposal, for more detail on Dual EC’s security. He redirected their questions to cryptographers at the NSA who he claimed knew more about the algorithm’s workings than he did[131]. This strongly suggests the NSA’s heavy involvement in the standardisation process, despite NIST’s formal independence from the NSA.
While some cryptographers involved in the ISO process objected to Dual EC’s inclusion due to its slow performance and unusual number generation procedure, the ISO accepted the NSA’s suggestions to include Dual EC in ISO standard 18031:2005[132]. While it is not known exactly how many conversations were then vulnerable to eavesdropping, the NSA took other steps to ensure Dual EC was widely used. For example, the NSA paid RSA Cybersecurity $10 million to make Dual EC the default random number generator in their widely sold BSafe security software[133]. Another cybersecurity firm, Juniper Networks, claimed in 2015 that an attacker had modified the Q point in their Dual EC algorithm and had decrypted an unknown amount of communication sent through their VPN services[134]. This demonstrates an additional failure mode of subverted international collaborations on encryption standards.
The Dual EC controversy highlights how international technical collaborations on encryption standards can be compromised through deliberate subversion by state actors. The NSA's successful manipulation of multiple standards bodies to standardize a cryptographic algorithm with a hidden backdoor demonstrates the fragility of these collaborative processes when participants act in bad faith. Beyond the direct implications for international technical collaborations, this case reveals secondary risks, as shown by unknown attackers exploiting the same vulnerability in Juniper Networks' systems. This historical example provides a crucial warning about dual-use technologies: the very features that make international standardization valuable—widespread adoption and interoperability—can amplify vulnerabilities when the process is compromised.
Lessons for AI safety collaborations from encryption standards
A number of lessons for AI safety collaborations can be learned from the successes of the AES competition and failures of subverted international encryption standards:
- Collaborate on designs and proof-of-concept safety techniques rather than off-the-shelf implementations, to reduce risks of subversion by the transferring country. The extra flexibility this affords may increase the chances of catching or removing intentionally implanted vulnerabilities.
- Collaborations may focus on economically important technologies, but should not focus on maximally sensitive frontier models. While both the AES and the subverted-standards cases demonstrate that there can be value in public scrutiny, the fact that competing states have not collaborated on the algorithms which encrypt classified information suggests the risks outweigh the benefits. In the AES case, part of the benefit of sharing the standard widely was to encourage widespread commercial adoption; by contrast, the high-stakes nature of frontier models means that open collaboration on their design would likely produce more risks than benefits.
- In certain cases, sharing detailed technical descriptions of the collaboration with the wider AI safety community may build trust between participants and reduce the risks of subversion.
However there are also a number of disanalogies between the encryption standards and AI safety cases:
- Techniques which make AI systems safer can also increase their capabilities, while improving encryption algorithms does not. Many techniques which increase the alignment and steerability of AI models, such as interpretability and reinforcement learning from human feedback, can also serve to make AI models more efficient in general[135].
- Verifying statements about the behaviour or security of an AI system is radically more difficult than verifying an encryption algorithm’s security. Our understanding of encryption algorithms is much more advanced than our understanding of how deep learning systems behave. As a result, it is substantially more difficult to verify how an AI system will behave in advance.
Key lessons for AI safety from encryption standards collaborations

Transfer scope
- AES contest: exchange of proposed designs for an encryption standard used for unclassified government, financial, and commercial communications.
- Dual EC: global proliferation of a compromised encryption algorithm through institutional subversion of the standard-setting process.

Safeguards
- Exact implementations of the AES were not widely shared, allowing adopters to control direct access to their use case of the algorithm.
- Open scrutiny with extensive involvement of the global cryptography community increased confidence in the AES’ integrity and security.
- Technical review panels at the ISO, NIST, and ANSI were overridden by deference to the NSA’s reputation for cryptographic expertise.

Applicability to AI
- Points to the risk of AI developers or government agencies inserting vulnerabilities into an AI model during the development or collaboration process.
- Standards may play an important role in the way AI is governed and diffused globally.
- Sheds light on when public scrutiny is likely to increase confidence in and security of an AI system or risk-mitigation strategy.
Recommendations for international AI safety collaborations
Foster institutional relationships and shared understanding of certain AI research in advance of broader scale collaboration on AI safety.
All of our case studies demonstrate that preexisting relationships between scientific communities are crucial determinants of successful technology transfers. This is most clear in the case of nuclear security transfers, in which prior interactions between US and Russian nuclear scientists through the Joint Verification Experiments were credited with helping enable the transfer of complex nuclear security technologies through the WSSX[136]. Due to the importance of complex engineering work in AI safety, it seems highly plausible that a similar dynamic could occur in international AI safety collaborations.
While professional relationships take time to develop, some concrete steps can be taken to advance the shared understanding of research priorities between countries. For example, track II dialogues can issue consensus statements around risks[137], universities can expand intensive exchange programs such as the Schwarzman or Yenching Scholars programs to technical ML-related topics, and researchers can help clarify how AI is governed in each country to reduce strategic misperception. During the ACE program, one Chinese nuclear scientist noted that prior to the exchange there was not a strong understanding within China of how policymakers, national labs, and technical experts interact in US nuclear activities[138]. It is highly likely that a similar information gap exists today regarding how the US and China govern AI.
Notably, fostering this understanding may help states assess the extent to which their competitors are pursuing a unified strategy aimed at technological dominance in AI. There is some evidence that present-day US-China competition on AI is intensified by strategic misperception, exemplified by articles by both US and Chinese government-affiliated researchers characterising the other country's AI development strategy as a coordinated whole-of-government approach[139].
States should foster institutional relationships, and share research priorities between key AI labs well before they're needed, especially between academic and non-profit research institutes. These pre-existing connections are crucial for facilitating complex technology exchanges while preventing unwanted disclosures.
Establish structured collaboration frameworks with pre-approved guidelines for information sharing, dissemination controls, and technical exchange protocols.
A key determinant of how well a technical exchange goes is the institutional structure within which it is conducted. While our historical case studies do not offer a definitive answer on what the ideal structure is, they do suggest some general patterns which allow for detailed exchanges while reducing national security risks.
Establish a clearly defined scope for the collaboration and have national security staff pre-approve topics for exchange before the exchange begins. A key feature of INTELSAT Earth station conferences, the US-China ACE, and the US-Russia WSSX was that the exchanges only directly involved technical staff. Policymakers oversaw, funded, and negotiated the goals of the exchanges, but were ultimately separated from the object level work.
The ACE program has the most clearly documented structure in this regard. A steering committee of national security and diplomatic personnel negotiated the objectives, logistics, and topics of the exchange. This enabled both the US and Chinese sides to learn which participants would be involved and plan training for the technical staff around what topics would be too sensitive to discuss. The US was also able to conduct background checks on all Chinese participants and both sides were allowed to undertake basic counterintelligence measures. Further, an Interagency Contact Group made up of experts from other parts of the respective countries’ governments (e.g., embassies in the other country) were able to provide contextual information about the other country’s potential aims. It seems plausible that a similar structure could be employed for international AI safety collaborations.
Additionally, both INTELSAT and nuclear exchanges involved clear guidelines around how the outputs of the exchanges were to be handled and deployed. For example, INTELSAT’s founding documents clearly spelled out both a prohibition on military uses of the satellites and more mundane issues around how any revenue derived from the use of INTELSAT’s infrastructure would be divided between member states.
Because AI is even more deeply dual-use than the technologies in our historical cases, clear guidelines around appropriate publication norms and uses of co-developed technologies will likely be even more essential. Providing an institutional structure in which clear expectations are established in advance may reduce the risks of unintended consequences.
Where possible, orient collaborations towards mitigating threats from non-state actors to increase political viability and reduce the need to share information about frontier AI systems
Our case studies consistently revealed that international collaborations were most successful when focused on addressing risks from non-state actors through work on non-frontier technologies, rather than attempting to cooperate on the most sensitive cutting-edge systems. While other substantial risks related to advanced AI systems exist[140], actions targeting risks from non-state actors may be the most feasible ones for international AI safety collaborations.
This is well-evidenced by our case studies. For example, nuclear security exchanges focused on preventing the theft of fissionable materials were successful more often than launch-integrated ones. Likewise, the AES competition was able to be so transparent in part because the AES was primarily used to prevent ordinary cybercrime rather than defend classified information. This suggests that international AI safety collaborations focused on risks from non-state actors may be more politically feasible than ones which seek to regulate frontier capabilities or require sharing sensitive information about national security applications. Given the current rate of algorithmic progress[141], it seems highly likely that non-frontier models will be capable of enabling harmful misuse in the relatively near future.
The development of advanced AI systems may introduce national security-relevant capabilities in the near future, some of which are uniquely problematic in the hands of non-state actors. A key stream of AI governance research concerns how to deal with the potential for AI systems to uplift bad actors’ capabilities to generate biological weapons, misinformation, and novel cyberattacks[142]. This type of research is likely to be an especially good target for international AI safety collaborations due to states’ shared interest in the nonproliferation of these capabilities. Concrete research projects in this vein might include working on model-agnostic jailbreak prevention methods via research into non-finetunable models[143] or API monitoring mechanisms[144].
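As a concrete illustration of what API-level monitoring could look like, the sketch below wraps a hypothetical model endpoint in a simple misuse filter. All names, the keyword list, and the placeholder model are illustrative assumptions; real deployments would use trained classifiers rather than keyword matching:

```python
# Hypothetical sketch of API-level misuse monitoring: requests are screened
# and logged before a model response is returned. The flagging rule is a
# placeholder keyword match; production systems would use classifiers.

import logging

logging.basicConfig(level=logging.INFO)
FLAGGED_TOPICS = ("synthesize pathogen", "build explosive")  # illustrative only

def screen_request(prompt: str) -> bool:
    """Return True if the request should be blocked and logged for review."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in FLAGGED_TOPICS)

def monitored_completion(prompt: str, model_fn) -> str:
    if screen_request(prompt):
        logging.warning("blocked request logged for review: %r", prompt[:80])
        return "Request declined."
    return model_fn(prompt)

echo_model = lambda p: f"[model response to: {p}]"   # stand-in for a real model
print(monitored_completion("How do I synthesize pathogen X?", echo_model))
print(monitored_completion("What is the capital of France?", echo_model))
```

The interesting research questions sit one level up: how to make such monitoring robust to paraphrase and jailbreaks, and how to share the monitoring protocol between states without sharing the underlying models.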
Furthermore, existing AI policy documents by the US and Chinese governments suggest that both states view preventing certain types of AI misuse as high priorities for governance[145]. Concrete technical problems in this vein include methods for identifying AI-generated content, evaluating models’ cyberoffense capabilities, and auditing models for hidden objectives[146]. Since our case studies suggest that states are more likely to successfully collaborate on non-frontier technologies than frontier ones, interventions which focus on countering the risks arising from misuse of non-frontier models may be especially tractable. This is likely both to improve strategic stability by reducing overall AI misuse and to serve as a building block for future collaborations on more sensitive aspects of AI governance.
The strong historical precedent, the evidence of significant common ground between states, and the number of open technical problems suggest that collaborations focused on reducing AI misuse by non-state actors may be an especially promising avenue for international collaboration.
Collaborate on evaluation infrastructure and protocols in order to advance domestic governance initiatives
AI evaluations may play an important role in many countries’ domestic AI governance initiatives[147]. While sharing specific evaluations between countries may leak strategically relevant information, collaborating on infrastructure and high-level protocols is likely to be beneficial for both sides. Because the science of AI evaluations is not yet mature[148], there is significant room to develop best practices and better infrastructure to enable more effective evaluations. Collaborating on evaluation infrastructure and protocols is likely to be beneficial for both countries’ domestic governance with minimal risk of accidental disclosure.
Several of our case studies involve successful collaborations focused on building technical infrastructure which is then independently deployed by the receiving country. For example, once established, the AES algorithm was shared globally, but individual implementations were not controlled by NIST or the US government, allowing for greater flexibility.
It is plausible that evaluation infrastructure and protocols could proceed in a similar manner. This is partially due to shared challenges regarding how to conduct evaluations well. For instance, the UK AI Security Institute’s Inspect tool significantly speeds up the process of evaluating models on certain tests[149].
Countries are also likely to share similar challenges related to domestic evaluation-based governance. For instance, it is unclear what level of access third party auditors should have to AI models when conducting dangerous capability evaluations[150]. Likewise, sharing best practices around how to conduct model evaluations may make each country’s frontier models more robust to misuse, for example by sharing known transfer attacks or jailbreaks[151].
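A minimal sketch of what shared evaluation infrastructure of this kind might look like: the harness (dataset format, scoring rule, reporting) is common infrastructure, while each party plugs in its own model. The names below are illustrative and are not the API of Inspect or any other real tool:

```python
# Minimal sketch of a shared evaluation harness: the protocol is common
# infrastructure, while each party supplies its own model function, so
# the protocol can be shared without sharing models or sensitive test items.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    prompt: str
    target: str

def run_eval(model_fn: Callable[[str], str], dataset: list) -> float:
    """Return the fraction of samples the model answers exactly."""
    correct = sum(model_fn(s.prompt).strip() == s.target for s in dataset)
    return correct / len(dataset)

dataset = [
    Sample("2+2=", "4"),
    Sample("Capital of France?", "Paris"),
]
toy_model = lambda prompt: {"2+2=": "4", "Capital of France?": "Paris"}[prompt]
print(run_eval(toy_model, dataset))  # 1.0
```

Real infrastructure like Inspect adds model adapters, logging, and richer scorers on top of this basic shape, which is precisely the kind of tooling states could co-develop without exchanging sensitive evaluations.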
Engage in joint research projects on verification measures and privacy preserving technologies with extensive engagement from academic computer science and security researchers.
Technical measures which verify certain facts about the AI development activities may play a significant role in enabling international cooperation on AI safety. However, many of these are not yet technically mature enough to resist attempts at subversion[152]. For example, there are many unsolved technical problems around how to establish tamper resistant Trusted Execution Environments in GPUs[153]. Joint research projects on such problems are likely to be beneficial for future technical exchanges and involve limited risk of disclosing sensitive information.
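One comparatively simple building block in this space is a cryptographic commitment to development artifacts, for example hashing model weights so that a party can later prove which model was audited without revealing the weights up front. The sketch below shows the idea (the byte strings are stand-ins; this addresses only artifact identity, not the harder tamper-resistance problems above):

```python
# Sketch of a hash-based commitment to a model artifact: a developer
# publishes the digest now and can later demonstrate that an audited model
# matches it. This covers artifact identity only, not tamper resistance.

import hashlib

def commit(weights: bytes) -> str:
    """Return a SHA-256 commitment to a serialized weights blob."""
    return hashlib.sha256(weights).hexdigest()

def verify(weights: bytes, commitment: str) -> bool:
    return commit(weights) == commitment

weights_v1 = b"\x00\x01fake-serialized-weights"   # stand-in for a real blob
published = commit(weights_v1)

assert verify(weights_v1, published)                # same artifact passes
assert not verify(weights_v1 + b"\x00", published)  # any change is detected
print("commitment verified")
```

Research questions remain around where in the development pipeline such commitments should be taken and how to bind them to claims about training, which is where joint work with academic security researchers could help.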
While there are serious challenges to implementing many such verification measures, collaborating on smaller, less complex measures is likely to increase the chances of successful exchanges on later, more complex ones. For example, participants involved in setting up the Information Barrier system used during the WSSX partially attributed the project's success to pre-existing relationships formed during the Joint Verification Experiments[154]. Facilitating later collaborations may thus be a byproduct of lower-level ones.
In some cases, it may be valuable to conduct this research in a way that is transparent and open to academic scrutiny. In the case of the AES encryption algorithm, public review from the academic cryptography community was a key determinant of the security of the standard. While decisions to open-source AI research should be made carefully, open-sourcing verification mechanisms may mutually increase confidence that a mechanism does not contain means of subverting the agreement.
Conclusion
By providing detailed historical precedents of international collaborations on sensitive technologies, this paper aims to deepen our understanding of how to overcome key practical obstacles to engaging in these joint projects. The successful elements across INTELSAT, nuclear security exchanges, and encryption standards point to common factors: pre-existing relationships between technical communities, clearly defined collaboration boundaries, and organizational structures that balance political oversight with scientific independence. While no historical precedent perfectly matches the challenges of AI governance, these cases demonstrate that rivals can meaningfully collaborate on sensitive technologies when they focus on shared threats, build trust incrementally, and prioritise technical exchanges that address specific, mutually beneficial objectives.
As AI capabilities advance, strategic competition will likely intensify between major powers. However, the lessons from these case studies suggest that thoughtfully designed collaborations can still succeed even in competitive environments. By focusing initially on capable non-frontier models, developing shared evaluation infrastructure, and targeting common threats from non-state actors, states can build the foundations for more robust international safety norms while reducing risks to their strategic interests.
References
Abramson, N. 1976. “Satellite Trends and Defense Communications.” U.S. Department of Commerce, National Technical Information Service. https://scispace.com/pdf/satellite-trends-and-defense-communications-35z091m2lv.pdf.
Adan, Sumaya Nur. n.d. “The Case for Including the Global South in AI Governance Discussions.” Accessed May 17, 2025. https://www.governance.ai/analysis/the-case-for-including-the-global-south-in-ai-governance-conversations.
“Agreement Relating to the International Telecommunications Satellite Organization ‘INTELSAT.’” 1971. United Nations. https://treaties.un.org/doc/Publication/UNTS/Volume%201220/volume-1220-I-19677-English.pdf.
AI Security Institute, U. K. 2024. Inspect AI: Framework for Large Language Model Evaluations. https://github.com/UKGovernmentBEIS/inspect_ai.
Behr, Robert. 1971. “Memorandum From Robert M. Behr of the National Security Council Staff to the President’s Assistant for National Security Affairs.” United States Department of State: Office of the Historian. April 3, 1971. https://history.state.gov/historicaldocuments/frus1969-76ve01/d256.
Bereska, Leonard, and Efstratios Gavves. 2024. “Mechanistic Interpretability for AI Safety -- A Review.” arXiv [Cs.AI]. arXiv. http://arxiv.org/abs/2404.14082.
Bernstein, Daniel J., Tanja Lange, and Ruben Niederhagen. 2016. “Dual EC: A Standardized Back Door.” In The New Codebreakers, 256–81. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-49301-4_17.
Bleek, Philip. 2000. “Plutonium, Early-Warning Accords Advanced at U.S.-Russian Summit.” Arms Control Today. July 2000. https://www.armscontrol.org/act/2000-07/news/plutonium-early-warning-accords-advanced-us-russian-summit.
Buchanan, Ben. 2022. The Hacker and the State. London, England: Harvard University Press.
Bucknall, Ben, Saad Siddiqui, Lara Thurnherr, Conor McGurk, Ben Harack, Anka Reuel, Patricia Paskov, et al. 2025. “In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2504.12914.
Bukharin, Oleg. 2003. “Appendix 8A. Russian and US Technology Development in Support of Nuclear Warhead and Material Transparency Initiatives.” In Transparency in Nuclear Warheads and Materials, edited by Nicholas Zarimpas, 165–80. SIPRI Monographs. London, England: Oxford University Press.
Busch, Nathan. 2002. “China’s Fissile Material Protection, Control, and Accounting: The Case for Renewed Collaboration.” The Nonproliferation Review, 89–106. https://www.nonproliferation.org/wp-content/uploads/npr/93busch.pdf.
Carlsmith, Joseph. 2022. “Is Power-Seeking AI an Existential Risk?” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2206.13353.
Casper, Stephen, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Haupt, et al. 2024. “Black-Box Access Is Insufficient for Rigorous AI Audits.” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2401.14446.
Cheng, Deric, and Corin Katzke. n.d. “Soft Nationalization: How the US Government Will Control AI Labs.” Accessed May 17, 2025. https://www.convergenceanalysis.org/publications/soft-nationalization-how-the-us-government-will-control-ai-labs.
CIA Office of Economic Research. 1976. “The Soviet Statsionar Satellite Communications System: Implications for INTELSAT.” CIA Historical Review Program. https://www.cia.gov/readingroom/docs/DOC_0000283805.pdf.
Cliff, Roger, Chad J. R. Ohlandt, and David Yang. 2011. Ready for Takeoff. Santa Monica, CA: RAND.
“Cold War in Space: Top Secret Reconnaissance Satellites Revealed.” n.d. National Museum of the United States Air Force. Accessed May 17, 2025. https://www.nationalmuseum.af.mil/Visit/Museum-Exhibits/Fact-Sheets/Display/Article/195923/cold-war-in-space-top-secret-reconnaissance-satellites-revealed/#:~:text=Americans.
Coll, Steve. 2001. “The Man Inside China’s Bomb Lab.” Washington Post, May 16, 2001. https://www.washingtonpost.com/archive/politics/2001/05/16/the-man-inside-chinas-bomb-labs/b517231d-b91a-4c83-94a0-23f8c4516841/.
Computer Sciences Corporation. 1975. “NASA Compendium of Satellite Communications Programs.” National Aeronautics and Space Administration.
Cox, Christopher, Norm Dicks, Porter Goss, Doug Bereuter, James V. Hansen, John M. Spratt Jr, Curt Weldon, Lucille Roybal-Allard, and Bobby Scott. 1999. “House Report 105-851: U.S. NATIONAL SECURITY AND MILITARY/COMMERCIAL CONCERNS WITH THE PEOPLE’S REPUBLIC OF CHINA.” United States Congress. https://www.congress.gov/congressional-report/105th-congress/house-report/851.
“Cracking DES.” 1998. Electronic Frontier Foundation. 1998. https://w2.eff.org/Privacy/Crypto/Crypto_misc/DESCracker/.
Daemen, Joan, and Vincent Rijmen. 2003. “Note on Naming: Rijndael.” https://csrc.nist.gov/csrc/media/projects/cryptographic-standards-and-guidelines/documents/aes-development/rijndael-ammended.pdf.
Deng, Jiangyi, Shengyuan Pang, Yanjiao Chen, Liangming Xia, Yijie Bai, Haiqin Weng, and Wenyuan Xu. 2024. “SOPHON: Non-Fine-Tunable Learning to Restrain Task Transferability for Pre-Trained Models.” arXiv [Cs.LG]. arXiv. http://arxiv.org/abs/2404.12699.
Di Capua, Marco S. 1999. “The Cox Report and the US-China Arms Control Technical Exchange Program.” UCRL-ID-136042. Lawrence Livermore National Laboratory.
Ding, Jeffrey. 2024a. “Keep Your Enemies Safer: Technical Cooperation and Transferring Nuclear Safety and Security Technologies.” European Journal of International Relations 30 (4): 918–45. https://doi.org/10.1177/13540661241246622.
———. 2024b. “ChinAI #292: The Misperception Spiral in US-China Tech Policy Competition.” ChinAI Newsletter. December 16, 2024. https://chinai.substack.com/p/chinai-292-the-misperception-spiral.
Dougherty, John J. 1968. “The Communications Satellite: ‘A Faint Flutter of Wings.’” United States Naval Institute. 1968. https://www.usni.org/magazines/proceedings/1968/june/communications-satellite-faint-flutter-wings.
Doyle, S. 1972. “Permanent Arrangements for the Global Commercial Communication Satellite System of INTELSAT.” The International Lawyer 6 (2).
Einhorn, Robert. 2020. “Revitalizing Nonproliferation Cooperation with Russia and China.” Arms Control Association. November 2020. https://www.armscontrol.org/act/2020-11/features/revitalizing-nonproliferation-cooperation-russia-and-china.
“Exporting Dual-Use Items.” n.d. Trade and Economic Security. Accessed May 17, 2025. https://policy.trade.ec.europa.eu/help-exporters-and-importers/exporting-dual-use-items_en.
Ghoshal, Debalina. 2024. “The Dual-Use Nature of Space Launch Vehicles and Ballistic Missiles and the Complexities.” The SAIS Review of International Affairs - (blog). The SAIS Review of International Affairs. October 17, 2024. https://saisreview.sais.jhu.edu/the-dual-use-nature-of-space-launch-vehicles-and-ballistic-missiles-and-the-complexities/.
Government of the United States of America and Government of the Russian Federation. 1994. “AGREEMENT BETWEEN THE GOVERNMENT OF THE UNITED STATES OF AMERICA AND THE GOVERNMENT OF THE RUSSIAN FEDERATION ON THE EXCHANGE OF TECHNICAL INFORMATION IN THE FIELD OF NUCLEAR WARHEAD SAFETY AND SECURITY.” https://nonproliferation.org/wp-content/uploads/2023/05/wssx_agreement_december_1994.pdf.
Hecker, Siegfried S. 2011. “Adventures in Scientific Nuclear Diplomacy.” Physics Today 64 (7): 31–37. https://doi.org/10.1063/pt.3.1165.
Hellman, Martin E. 1979. “DES Will Be Totally Insecure within Ten Years’.” IEEE Spectrum 16 (7): 32–40. https://doi.org/10.1109/mspec.1979.6368157.
Ho, Anson, Tamay Besiroglu, Ege Erdil, David Owen, Robi Rahman, Zifan Carl Guo, David Atkinson, Neil Thompson, and Jaime Sevilla. 2024. “Algorithmic Progress in Language Models.” arXiv [Cs.CL]. arXiv. http://arxiv.org/abs/2403.05812.
Hughes, Thomas. 1968. “Research Memorandum From the Director of the Bureau of Intelligence and Research (Hughes) to Secretary of State Rusk.” United States Department of State: Office of the Historian. March 28, 1968. https://history.state.gov/historicaldocuments/frus1964-68v34/d100.
“IDAIS-Beijing.” 2024. International Dialogues on AI Safety. September 24, 2024. https://idais.ai/dialogue/idais-beijing/.
“Intersputnik: Status and Prospects.” 1972. 80. Central Intelligence Agency: Directorate of Intelligence.
“Joint Verification Experiment.” n.d. Middlebury Institute of International Studies. Accessed May 17, 2025. https://nonproliferation.org/lab-to-lab-joint-verification-experiment/.
Kaszynski, Mary. 2000. “The Nunn-Lugar Cooperative Threat Reduction Program Securing and Safeguarding Weapons of Mass Destruction.” American Security Project.
Kroeber, Arthur R. 2024. “Unleashing ‘New Quality Productive Forces’: China’s Strategy for Technology-Led Growth.” Brookings. June 4, 2024. https://www.brookings.edu/articles/unleashing-new-quality-productive-forces-chinas-strategy-for-technology-led-growth/.
Kurtz, Ian. 2023. “Not Your Grandfather’s Nukes.” United States Airforce Safety Center. March 16, 2023. https://www.safety.af.mil/News/Article-Display/Article/3342051/not-your-grandfathers-nukes/#:~:text=ESDs.
Laird, Melvin. 1972. “378. Memorandum From Secretary of Defense Laird to the President’s Assistant for National Security Affairs.” Historical Documents - Office of the Historian. 1972. https://history.state.gov/historicaldocuments/frus1969-76v04/d378.
Leech, David P., Stacey Ferris, and John T. Scott. 2019. “The Economic Impacts of the Advanced Encryption Standard, 1996–2017.” Annals of Science and Technology Policy 3 (2): 142–257. https://doi.org/10.1561/110.00000010.
MacAskill, William, and Rose Hadshar. n.d. “Intelsat as a Model for International AGI Governance.” Forethought. Accessed May 17, 2025. https://www.forethought.org/research/intelsat-as-a-model-for-international-agi-governance.
Marks, Samuel, Johannes Treutlein, Trenton Bricken, Jack Lindsey, Jonathan Marcus, Siddharth Mishra-Sharma, Daniel Ziegler, et al. 2025. “Auditing Language Models for Hidden Objectives.” arXiv [Cs.AI]. arXiv. http://arxiv.org/abs/2503.10965.
May, Michael M., Alastair Iain Johnston, W. K. H. Panofsky, Marco Di Capua, and Lewis Franklin. 1999. “Cox Committee Report, The: An Assessment.” https://cisac.fsi.stanford.edu/publications/cox_committee_report_the_an_assessment.
Meissner, Darius. 2024. “National Labs and FFRDCs.” Emerging Technology Policy Careers. January 15, 2024. https://emergingtechpolicy.org/institutions/national-labs-and-ffrdcs/.
Menn, Joseph. 2013. “Exclusive: Secret Contract Tied NSA and Security Industry Pioneer.” Reuters, December 21, 2013. https://www.reuters.com/article/2013/12/20/us-usa-security-rsa-idUSBRE9BJ1C220131220/.
Minenor-Matheson, Graham. 2024. “Evans, C., & Lundgren, L. (2023). No Heavenly Bodies: A History of Satellite Communications Infrastructure. MIT Press, 256 Pp.” Communications 0 (0). https://doi.org/10.1515/commun-2024-0031.
Mitre, Jim, and Joel B. Predd. 2025. “Artificial General Intelligence’s Five Hard National Security Problems.” RAND. February 10, 2025. https://www.rand.org/pubs/perspectives/PEA3691-4.html.
Mueller, Milton. n.d. “Intelsat and the Separate System Policy: Toward Competitive International Telecommunications.” Accessed May 17, 2025. https://www.cato.org/sites/cato.org/files/pubs/pdf/pa150.pdf.
National Aeronautics and Space Administration. 2025. “NASA Export Control Program Operations Manual.”
Nechvatal, James, Elaine Barker, Lawrence Bassham, William Burr, Morris Dworkin, James Foti, and Edward Roback. 2000. “Report on the Development of the Advanced Encryption Standard.” National Institute of Standards and Technology, Information Technology Laboratory.
Newsweek. 1969. “Shh! Let’s Tell the Russians,” May 5, 1969.
OECD Global Strategy Group. 2024. “Futures of Global AI Governance: Co-Creating an Approach for Transforming Economies and Societies.” Organisation for Economic Cooperation and Development.
Ostovar, Michele. 2010. “Communications Satellites: Making the Global Village Possible.” NASA (blog). November 30, 2010. https://www.nasa.gov/history/communications-satellites/.
Perlroth, Nicole, Jeff Larson, and Scott Shane. 2013. “N.S.A. Able to Foil Basic Safeguards of Privacy on Web.” The New York Times, September 5, 2013. https://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html.
Pike, John. n.d. “Dnepr Launch Vehicle - Russia and Space Transportation Systems.” Accessed May 17, 2025. https://www.globalsecurity.org/space/world/russia/dnepr.htm.
Pregenzer, Arian L. 2011. “Technical Cooperation on Nuclear Security between the United States and China Review of the Past and Opportunities for the Future.” SAND2011-9267. Sandia National Laboratory. https://www.osti.gov/servlets/purl/1034870.
Prindle, Nancy. 1998. “The U.S.-China Lab-to-Lab Technical Exchange Program.” Sandia National Laboratory. https://www.nonproliferation.org/wp-content/uploads/npr/prindl53.pdf.
“Research Agenda.” n.d. AI Security Institute. Accessed May 17, 2025. https://www.aisi.gov.uk/research-agenda.
Riqiang, Wu. 2016. “How China Practices and Thinks About Nuclear Transparency.” In Understanding Chinese Nuclear Thinking, edited by Li Bin and Tong Zhao, 219–50. Washington, D.C.: Carnegie Endowment for International Peace. https://www.jstor.org/stable/resrep26903.14.
Ruzic, Neil P. 1976. “Spinoff 1976.” NASA Archives. 1976. https://spinoff.nasa.gov/back_issues_archives/1976.pdf.
Sanger, David, and William Broad. 2007. “U.S. Secretly Aids Pakistan in Guarding Nuclear Arms.” The New York Times, November 18, 2007. https://doi.org/10.1063/pt.5.021683.
Scher, Aaron. 2024. “Mechanisms to Verify International Agreements about AI Development.” November 27, 2024. https://techgov.intelligence.org/research/mechanisms-to-verify-international-agreements-about-ai-development.
Sharma, Mrinank, Meg Tong, Jesse Mu, Jerry Wei, Jorrit Kruthoff, Scott Goodfriend, Euan Ong, et al. 2025. “Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming.” arXiv [Cs.CL]. arXiv. http://arxiv.org/abs/2501.18837.
Siddiqui, Saad, Lujain Ibrahim, Kristy Loke, Stephen Clare, Marianne Lu, Aris Richardson, Conor McGlynn, and Jeffrey Ding. 2025. “Promising Topics for U.S.-China Dialogues on AI Risks and Governance.” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2505.07468.
Smid, Miles E. 2021. “Development of the Advanced Encryption Standard.” Journal of Research of the National Institute of Standards and Technology 126 (126024): 126024. https://doi.org/10.6028/jres.126.024.
Socol, Scott K. 1977. “COMSAT’s First Decade: Difficulties in Interpreting the Communications Satellite Act of 1962.” Georgia Journal of International & Comparative Law 7: 678–92. https://digitalcommons.law.uga.edu/cgi/viewcontent.cgi?article=2324&context=gjicl.
“Status of Signatures and Ratifications.” n.d. Comprehensive Test Ban Treaty Organisation. Accessed April 17, 2025. https://www.ctbto.org/our-mission/states-signatories.
Stone, Richard. 2017. “U.S.-China Mission Rushes Bomb-Grade Nuclear Fuel out of Africa.” Science. 2017. https://www.science.org/content/article/us-china-mission-rushes-bomb-grade-nuclear-fuel-out-africa.
“That’s Classified! The History and Future of NSA Type 1 Encryption.” n.d. Accessed May 17, 2025. https://www.mrcy.com/company/blogs/history-and-future-nsa-type-1-encryption.
“UNITED STATES MILITARY SPACE PROGRAMS and WEAPONS OF MASS DESTRUCTION IN OUTER SPACE.” n.d. CIA-RDP66R00638R000100160014-1. Accessed May 17, 2025. https://www.cia.gov/readingroom/docs/CIA-RDP66R00638R000100160014-1.pdf.
Warf, Barney. 2007. “Geopolitics of the Satellite Industry.” Tijdschrift Voor Economische En Sociale Geografie [Journal of Economic and Social Geography] 98 (3): 385–97. https://doi.org/10.1111/j.1467-9663.2007.00405.x.
Weidinger, Laura, Inioluwa Deborah Raji, Hanna Wallach, Margaret Mitchell, Angelina Wang, Olawale Salaudeen, Rishi Bommasani, Deep Ganguli, Sanmi Koyejo, and William Isaac. 2025. “Toward an Evaluation Science for Generative AI Systems.” arXiv [Cs.AI]. arXiv. http://arxiv.org/abs/2503.05336.
Wikipedia contributors. 2024. “List of Intelsat Satellites.” Wikipedia, The Free Encyclopedia. November 21, 2024. https://en.wikipedia.org/w/index.php?title=List_of_Intelsat_satellites&oldid=1258810692.
Zou, Andy, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. 2023. “Universal and Transferable Adversarial Attacks on Aligned Language Models.” arXiv [Cs.CL]. arXiv. http://arxiv.org/abs/2307.15043.
- ^
Bucknall, Ben, Saad Siddiqui, Lara Thurnherr, Conor McGurk, Ben Harack, Anka Reuel, Patricia Paskov, et al. 2025. “In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2504.12914.
- ^
Siddiqui, Saad, Lujain Ibrahim, Kristy Loke, Stephen Clare, Marianne Lu, Aris Richardson, Conor McGlynn, and Jeffrey Ding. 2025. “Promising Topics for U.S.-China Dialogues on AI Risks and Governance.” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2505.07468.
- ^
“Exporting Dual-Use Items.” n.d. Trade and Economic Security. Accessed May 17, 2025. https://policy.trade.ec.europa.eu/help-exporters-and-importers/exporting-dual-use-items_en.
- ^
Carlsmith, Joseph. 2022. “Is Power-Seeking AI an Existential Risk?” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2206.13353. 8.
- ^
Cheng, Deric, and Corin Katzke. n.d. “Soft Nationalization: How the US Government Will Control AI Labs.” Accessed May 17, 2025. https://www.convergenceanalysis.org/publications/soft-nationalization-how-the-us-government-will-control-ai-labs.
- ^
MacAskill, William, and Rose Hadshar. n.d. “Intelsat as a Model for International AGI Governance.” Forethought. Accessed April 17, 2025. https://www.forethought.org/research/intelsat-as-a-model-for-international-agi-governance.
- ^
MacAskill, William, and Rose Hadshar. n.d. “Intelsat as a Model for International AGI Governance.” Forethought. Accessed May 17, 2025. https://www.forethought.org/research/intelsat-as-a-model-for-international-agi-governance.
- ^
Warf, Barney. 2007. “Geopolitics of the Satellite Industry.” Tijdschrift Voor Economische En Sociale Geografie [Journal of Economic and Social Geography] 98 (3): 385–97. https://doi.org/10.1111/j.1467-9663.2007.00405.x.
- ^
Mueller, Milton. n.d. “Intelsat and the Separate System Policy: Toward Competitive International Telecommunications.” Accessed May 17, 2025. https://www.cato.org/sites/cato.org/files/pubs/pdf/pa150.pdf.
- ^
Ruzic, Neil P. 1976. “Spinoff 1976.” NASA Archives. 1976. https://spinoff.nasa.gov/back_issues_archives/1976.pdf.
- ^
Slotten, H.R. (2022) Beyond Sputnik and the Space Race: The Origins of Global Satellite Communications. JHU Press. Ch. 1.
- ^
MacAskill, William, and Rose Hadshar. n.d. “Intelsat as a Model for International AGI Governance.” Forethought. Accessed May 17, 2025. https://www.forethought.org/research/intelsat-as-a-model-for-international-agi-governance.
- ^
MacAskill, William, and Rose Hadshar. n.d. “Intelsat as a Model for International AGI Governance.” Forethought. Accessed May 17, 2025. https://www.forethought.org/research/intelsat-as-a-model-for-international-agi-governance.
- ^
Ostovar, Michele. 2010. “Communications Satellites: Making the Global Village Possible.” NASA (blog). November 30, 2010. https://www.nasa.gov/history/communications-satellites/.
- ^
Dougherty, John J. 1968. “The Communications Satellite: ‘A Faint Flutter of Wings.’” United States Naval Institute. 1968. https://www.usni.org/magazines/proceedings/1968/june/communications-satellite-faint-flutter-wings.
- ^
Minenor-Matheson, Graham. 2024. “Evans, C., & Lundgren, L. (2023). No Heavenly Bodies: A History of Satellite Communications Infrastructure. MIT Press, 256 Pp.” Communications 0 (0). https://doi.org/10.1515/commun-2024-0031. 39.
- ^
CIA Office of Economic Research. 1976. “The Soviet Statsionar Satellite Communications System: Implications for INTELSAT.” CIA Historical Review Program. https://www.cia.gov/readingroom/docs/DOC_0000283805.pdf.
- ^
“UNITED STATES MILITARY SPACE PROGRAMS and WEAPONS OF MASS DESTRUCTION IN OUTER SPACE.” n.d. CIA-RDP66R00638R000100160014-1. Accessed May 17, 2025. https://www.cia.gov/readingroom/docs/CIA-RDP66R00638R000100160014-1.pdf.
- ^
Ostovar, Michele. 2010. “Communications Satellites: Making the Global Village Possible.” NASA (blog). November 30, 2010. https://www.nasa.gov/history/communications-satellites/.
- ^
Mueller, Milton. n.d. “Intelsat and the Separate System Policy: Toward Competitive International Telecommunications.” Accessed May 17, 2025. https://www.cato.org/sites/cato.org/files/pubs/pdf/pa150.pdf.
- ^
Laird, Melvin. 1972. “378. Memorandum From Secretary of Defense Laird to the President’s Assistant for National Security Affairs.” Historical Documents - Office of the Historian. 1972. https://history.state.gov/historicaldocuments/frus1969-76v04/d378.
- ^
“Cold War in Space: Top Secret Reconnaissance Satellites Revealed.” n.d. National Museum of the United States Air Force. Accessed May 17, 2025. https://www.nationalmuseum.af.mil/Visit/Museum-Exhibits/Fact-Sheets/Display/Article/195923/cold-war-in-space-top-secret-reconnaissance-satellites-revealed/#:~:text=Americans.
- ^
Minenor-Matheson, Graham. 2024. “Evans, C., & Lundgren, L. (2023). No Heavenly Bodies: A History of Satellite Communications Infrastructure. MIT Press, 256 Pp.” Communications 0 (0). https://doi.org/10.1515/commun-2024-0031. 189.
- ^
Socol, Scott K. 1977. “COMSAT’S FIRST DECADE: DIFFICULTIES IN INTERPRETING THE COMMUNICATIONS SATELLITE ACT OF 1962.” Georgia Journal of International & Comparative Law 7: 678–92. https://digitalcommons.law.uga.edu/cgi/viewcontent.cgi?article=2324&context=gjicl.
- ^
Evans & Lundgren.
- ^
Doyle, S. 1972. “Permanent Arrangements for the Global Commercial Communication Satellite System of INTELSAT.” The International Lawyer 6 (2).
- ^
Evans & Lundgren. 84-85.
- ^
Evans & Lundgren. 84-85.
- ^
Evans & Lundgren. 84-85.
- ^
Agreement Establishing Interim Arrangements for a Global Commercial Communications Satellite System (1964), Article V (d-e).
- ^
“Agreement Relating to the International Telecommunications Satellite Organization ‘INTELSAT.’” 1971. United Nations. https://treaties.un.org/doc/Publication/UNTS/Volume%201220/volume-1220-I-19677-English.pdf.
- ^
Evans & Lundgren. 135
- ^
Evans & Lundgren. 135
- ^
Evans & Lundgren. 135
- ^
Evans & Lundgren. 135
- ^
Abramson, N. 1976. “Satellite Trends and Defense Communications.” U.S. Department of Commerce, National Technical Information Service. https://scispace.com/pdf/satellite-trends-and-defense-communications-35z091m2lv.pdf. 3-7.
- ^
Evans & Lundgren. 135
- ^
Evans & Lundgren. 136.
- ^
“Intersputnik: Status and Prospects.” 1972. 80. Central Intelligence Agency: Directorate of Intelligence.
- ^
Evans & Lundgren. 136.
- ^
Hughes, Thomas. 1968. “Research Memorandum From the Director of the Bureau of Intelligence and Research (Hughes) to Secretary of State Rusk.” United States Department of State: Office of the Historian. March 28, 1968. https://history.state.gov/historicaldocuments/frus1964-68v34/d100.
- ^
Wikipedia contributors. 2024. “List of Intelsat Satellites.” Wikipedia, The Free Encyclopedia. November 21, 2024. https://en.wikipedia.org/w/index.php?title=List_of_Intelsat_satellites&oldid=1258810692.
- ^
Computer Sciences Corporation. 1975. “NASA Compendium of Satellite Communications Programs.” National Aeronautics and Space Administration.
- ^
Ghoshal, Debalina. 2024. “The Dual-Use Nature of Space Launch Vehicles and Ballistic Missiles and the Complexities.” The SAIS Review of International Affairs - (blog). The SAIS Review of International Affairs. October 17, 2024. https://saisreview.sais.jhu.edu/the-dual-use-nature-of-space-launch-vehicles-and-ballistic-missiles-and-the-complexities/.
- ^
Pike, John. n.d. “Dnepr Launch Vehicle - Russia and Space Transportation Systems.” Accessed May 17, 2025. https://www.globalsecurity.org/space/world/russia/dnepr.htm.
- ^
Cliff, Roger, Chad J. R. Ohlandt, and David Yang. 2011. Ready for Takeoff. Santa Monica, CA: RAND. 90.
- ^
National Aeronautics and Space Administration. 2025. “NASA Export Control Program Operations Manual.”
- ^
Behr, Robert. 1971. “Memorandum From Robert M. Behr of the National Security Council Staff to the President’s Assistant for National Security Affairs.” United States Department of State: Office of the Historian. April 3, 1971. https://history.state.gov/historicaldocuments/frus1969-76ve01/d256.
- ^
OECD Global Strategy Group. 2024. “Futures of Global AI Governance: Co-Creating an Approach for Transforming Economies and Societies.” Organisation for Economic Cooperation and Development. Pp 7.
- ^
MacAskill and Hadshar, 2025.
- ^
Adan, Sumaya Nur. n.d. “The Case for Including the Global South in AI Governance Discussions.” Accessed May 17, 2025. https://www.governance.ai/analysis/the-case-for-including-the-global-south-in-ai-governance-conversations.
- ^
“Joint Verification Experiment.” n.d. Middlebury Institute of International Studies. Accessed May 17, 2025. https://nonproliferation.org/lab-to-lab-joint-verification-experiment/.
- ^
Pregenzer, Arian L. 2011. “Technical Cooperation on Nuclear Security between the United States and China Review of the Past and Opportunities for the Future.” SAND2011-9267. Sandia National Laboratory. https://www.osti.gov/servlets/purl/1034870.
- ^
Pregenzer, 2011.
- ^
Pregenzer, 2011.
- ^
Meissner, Darius. 2024. “National Labs and FFRDCs.” Emerging Technology Policy Careers. January 15, 2024. https://emergingtechpolicy.org/institutions/national-labs-and-ffrdcs/.
- ^
Prindle, Nancy. 1998. “The U.S.-China Lab-to-Lab Technical Exchange Program.” Sandia National Laboratory. https://www.nonproliferation.org/wp-content/uploads/npr/prindl53.pdf.
- ^
May, Michael M., Alastair Iain Johnston, W. K. H. Panofsky, Marco Di Capua, and Lewis Franklin. 1999. “Cox Committee Report, The: An Assessment.” https://cisac.fsi.stanford.edu/publications/cox_committee_report_the_an_assessment.
- ^
Prindle, 1998.
- ^
Ding, Jeffrey. 2024. “Keep Your Enemies Safer: Technical Cooperation and Transferring Nuclear Safety and Security Technologies.” European Journal of International Relations 30 (4): 918–45. https://doi.org/10.1177/13540661241246622. 20-21.
- ^
Einhorn, Robert. 2020. “Revitalizing Nonproliferation Cooperation with Russia and China.” Arms Control Association. November 2020. https://www.armscontrol.org/act/2020-11/features/revitalizing-nonproliferation-cooperation-russia-and-china.
- ^
Riqiang, Wu. 2016. “How China Practices and Thinks About Nuclear Transparency.” In Understanding Chinese Nuclear Thinking, edited by Li Bin and Tong Zhao, 219–50. Washington, D.C.: Carnegie Endowment for International Peace. https://www.jstor.org/stable/resrep26903.14.
- ^
Hecker, Siegfried S. 2011. “Adventures in Scientific Nuclear Diplomacy.” Physics Today 64 (7): 31–37. https://doi.org/10.1063/pt.3.1165.
- ^
Prindle, 1998.
- ^
Di Capua, Marco S. 1999. “The Cox Report and the US-China Arms Control Technical Exchange Program.” UCRL-ID-136042. Lawrence Livermore National Laboratory.
- ^
Di Capua, 1999. 6.
- ^
Cox, Christopher, Norm Dicks, Porter Goss, Doug Bereuter, James V. Hansen, John M. Spratt Jr, Curt Weldon, Lucille Roybal-Allard, and Bobby Scott. 1999. “House Report 105-851: U.S. NATIONAL SECURITY AND MILITARY/COMMERCIAL CONCERNS WITH THE PEOPLE’S REPUBLIC OF CHINA.” United States Congress. https://www.congress.gov/congressional-report/105th-congress/house-report/851.
- ^
Pregenzer, 2011. 9-10.
- ^
“Status of Signatures and Ratifications.” n.d. Comprehensive Test Ban Treaty Organisation. Accessed April 17, 2025. https://www.ctbto.org/our-mission/states-signatories.
- ^
Busch, Nathan. 2002. “China’s Fissile Material Protection, Control, and Accounting: The Case for Renewed Collaboration.” The Nonproliferation Review, 89–106. https://www.nonproliferation.org/wp-content/uploads/npr/93busch.pdf. 95.
- ^
Busch, 2002. 95.
- ^
Busch, 2002. 90.
- ^
Prindle, 1998. 4.
- ^
Prindle, 1998. 4-6.
- ^
Busch, 2002. 99.
- ^
Di Capua, 1999. 13.
- ^
Di Capua, 1999. 13-14.
- ^
Prindle, 1998. 5.
- ^
Prindle, 1998. 4-6.
- ^
Cox et al, 1999.
- ^
May et al, 1999.
- ^
May et al, 1999.
- ^
Stone, Richard. 2017. “U.S.-China Mission Rushes Bomb-Grade Nuclear Fuel out of Africa.” Science. 2017. https://www.science.org/content/article/us-china-mission-rushes-bomb-grade-nuclear-fuel-out-africa.
- ^
Ding, 2024. 24.
- ^
Ding, 2024. 24.
- ^
Stone, 2017.
- ^
Ding, 2024. 25.
- ^
Prindle, 1998. 7
- ^
Ding, Jeffrey. 2024. “ChinAI #292: The Misperception Spiral in US-China Tech Policy Competition.” ChinAI Newsletter. December 16, 2024. https://chinai.substack.com/p/chinai-292-the-misperception-spiral.
- ^
Kurtz, Ian. 2023. “Not Your Grandfather’s Nukes.” United States Airforce Safety Center. March 16, 2023. https://www.safety.af.mil/News/Article-Display/Article/3342051/not-your-grandfathers-nukes/#:~:text=ESDs.
- ^
Coll, Steve. 2001. “The Man Inside China’s Bomb Lab.” Washington Post, May 16, 2001. https://www.washingtonpost.com/archive/politics/2001/05/16/the-man-inside-chinas-bomb-labs/b517231d-b91a-4c83-94a0-23f8c4516841/.
- ^
Sanger, David, and William Broad. 2007. “U.S. Secretly Aids Pakistan in Guarding Nuclear Arms.” The New York Times, November 18, 2007. https://doi.org/10.1063/pt.5.021683.
- ^
Ding, 2024.
- ^
Newsweek. 1969. “Shh! Let’s Tell the Russians,” May 5, 1969.
- ^
Ding, 2024. 21.
- ^
Kaszynski, Mary. 2000. “The Nunn-Lugar Cooperative Threat Reduction Program Securing and Safeguarding Weapons of Mass Destruction.” American Security Project.
- ^
Government of the United States of America and Government of the Russian Federation. 1994. “AGREEMENT BETWEEN THE GOVERNMENT OF THE UNITED STATES OF AMERICA AND THE GOVERNMENT OF THE RUSSIAN FEDERATION ON THE EXCHANGE OF TECHNICAL INFORMATION IN THE FIELD OF NUCLEAR WARHEAD SAFETY AND SECURITY.” https://nonproliferation.org/wp-content/uploads/2023/05/wssx_agreement_december_1994.pdf.
- ^
Government of the United States of America and Government of the Russian Federation. 1994. “AGREEMENT BETWEEN THE GOVERNMENT OF THE UNITED STATES OF AMERICA AND THE GOVERNMENT OF THE RUSSIAN FEDERATION ON THE EXCHANGE OF TECHNICAL INFORMATION IN THE FIELD OF NUCLEAR WARHEAD SAFETY AND SECURITY.” https://nonproliferation.org/wp-content/uploads/2023/05/wssx_agreement_december_1994.pdf.
- ^
Ding, 2024. 33.
- ^
Bukharin, Oleg. 2003. “Appendix 8A. Russian and US Technology Development in Support of Nuclear Warhead and Material Transparency Initiatives.” In Transparency in Nuclear Warheads and Materials, edited by Nicholas Zarimpas, 165–80. SIPRI Monographs. London, England: Oxford University Press.
- ^
Ding, 2024. 36.
- ^
Ding, 2024. 36.
- ^
Bukharin, 2003.
- ^
Bukharin, 2003. 172.
- ^
Bukharin, 2003. 169.
- ^
Bukharin, 2003. pp. 172-173.
- ^
Bleek, Philip. 2000. “Plutonium, Early-Warning Accords Advanced at U.s.-Russian Summit.” Arms Control Daily. July 2000. https://www.armscontrol.org/act/2000-07/news/plutonium-early-warning-accords-advanced-us-russian-summit.
- ^
“Cracking DES.” 1998. Electronic Frontier Foundation. 1998. https://w2.eff.org/Privacy/Crypto/Crypto_misc/DESCracker/.
- ^
Smid, Miles E. 2021. “Development of the Advanced Encryption Standard.” Journal of Research of the National Institute of Standards and Technology 126 (126024): 126024. https://doi.org/10.6028/jres.126.024. 5.
- ^
Hellman, Martin E. 1979. “DES Will Be Totally Insecure within Ten Years.” IEEE Spectrum 16 (7): 32–40. https://doi.org/10.1109/mspec.1979.6368157.
- ^
Smid, 2021. 3.
- ^
Nechvatal, James, Elaine Barker, Lawrence Bassham, William Burr, Morris Dworkin, James Foti, and Edward Roback. 2000. “Report on the Development of the Advanced Encryption Standard.” National Institute of Standards and Technology, Information Technology Laboratory. 8.
- ^
Smid, 2021.
- ^
Smid, 2021. 5.
- ^
Daemen, Joan, and Vincent Rijmen. 2003. “Note on Naming: Rijndael.” https://csrc.nist.gov/csrc/media/projects/cryptographic-standards-and-guidelines/documents/aes-development/rijndael-ammended.pdf.
- ^
Nechvatal et al., 2000.
- ^
Nechvatal et al., 2000. 8-9.
- ^
Nechvatal et al., 2000. 11.
- ^
Nechvatal et al., 2000. 11.
- ^
Nechvatal et al., 2000. 53.
- ^
Leech, David P., Stacey Ferris, and John T. Scott. 2019. “The Economic Impacts of the Advanced Encryption Standard, 1996–2017.” Annals of Science and Technology Policy 3 (2): 142–257. https://doi.org/10.1561/110.00000010. 162.
- ^
Leech et al., 2019. 255.
- ^
Leech et al., 2019. 171.
- ^
Perlroth, Nicole, Jeff Larson, and Scott Shane. 2013. “N.S.A. Able to Foil Basic Safeguards of Privacy on Web.” The New York Times, September 5, 2013. https://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html.
- ^
Bernstein, Daniel J., Tanja Lange, and Ruben Niederhagen. 2016. “Dual EC: A Standardized Back Door.” In The New Codebreakers, 256–81. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-49301-4_17. 259.
- ^
Bernstein et al., 2016. Section 5.
- ^
Buchanan, Ben. 2022. The Hacker and the State. London, England: Harvard University Press. 67-68.
- ^
Bernstein et al., 2016. 263.
- ^
Bernstein et al., 2016. 256.
- ^
Bernstein et al., 2016. 257.
- ^
Bernstein et al., 2016. 257.
- ^
Menn, Joseph. 2013. “Exclusive: Secret Contract Tied NSA and Security Industry Pioneer.” Reuters, December 21, 2013. https://www.reuters.com/article/2013/12/20/us-usa-security-rsa-idUSBRE9BJ1C220131220/.
- ^
Bernstein et al., 2016.
- ^
Bereska, Leonard, and Efstratios Gavves. 2024. “Mechanistic Interpretability for AI Safety -- A Review.” arXiv [Cs.AI]. arXiv. http://arxiv.org/abs/2404.14082.
- ^
Ding, 2024.
- ^
“IDAIS-Beijing.” 2024. International Dialogues on AI Safety. September 24, 2024. https://idais.ai/dialogue/idais-beijing/.
- ^
Prindle, 1998. 116.
- ^
For example:
“一文读懂美国关键和新兴技术战略” (English translation: “Understand the US Strategy for Critical and Emerging Technologies in One Article”) by Yapeng Lu and Weiguo Wang (China Academy of Information and Communications Technology). On the US side: Kroeber, Arthur R. 2024. “Unleashing ‘New Quality Productive Forces’: China’s Strategy for Technology-Led Growth.” Brookings. June 4, 2024. https://www.brookings.edu/articles/unleashing-new-quality-productive-forces-chinas-strategy-for-technology-led-growth/.
- ^
Mitre, Jim, and Joel B. Predd. 2025. “Artificial General Intelligence’s Five Hard National Security Problems.” RAND. February 10, 2025. https://www.rand.org/pubs/perspectives/PEA3691-4.html.
- ^
Ho, Anson, Tamay Besiroglu, Ege Erdil, David Owen, Robi Rahman, Zifan Carl Guo, David Atkinson, Neil Thompson, and Jaime Sevilla. 2024. “Algorithmic Progress in Language Models.” arXiv [Cs.CL]. arXiv. http://arxiv.org/abs/2403.05812.
- ^
“Research Agenda.” n.d. AI Security Institute. Accessed May 17, 2025. https://www.aisi.gov.uk/research-agenda.
- ^
Deng, Jiangyi, Shengyuan Pang, Yanjiao Chen, Liangming Xia, Yijie Bai, Haiqin Weng, and Wenyuan Xu. 2024. “SOPHON: Non-Fine-Tunable Learning to Restrain Task Transferability for Pre-Trained Models.” arXiv [Cs.LG]. arXiv. http://arxiv.org/abs/2404.12699.
- ^
Sharma, Mrinank, Meg Tong, Jesse Mu, Jerry Wei, Jorrit Kruthoff, Scott Goodfriend, Euan Ong, et al. 2025. “Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming.” arXiv [Cs.CL]. arXiv. http://arxiv.org/abs/2501.18837.
- ^
Siddiqui, Saad, Lujain Ibrahim, Kristy Loke, Stephen Clare, Marianne Lu, Aris Richardson, Conor McGlynn, and Jeffrey Ding. 2025. “Promising Topics for U.S.-China Dialogues on AI Risks and Governance.” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2505.07468.
- ^
Marks, Samuel, Johannes Treutlein, Trenton Bricken, Jack Lindsey, Jonathan Marcus, Siddharth Mishra-Sharma, Daniel Ziegler, et al. 2025. “Auditing Language Models for Hidden Objectives.” arXiv [Cs.AI]. arXiv. http://arxiv.org/abs/2503.10965.
- ^
Siddiqui et al., 2025. 22.
- ^
Weidinger, Laura, Inioluwa Deborah Raji, Hanna Wallach, Margaret Mitchell, Angelina Wang, Olawale Salaudeen, Rishi Bommasani, Deep Ganguli, Sanmi Koyejo, and William Isaac. 2025. “Toward an Evaluation Science for Generative AI Systems.” arXiv [Cs.AI]. arXiv. http://arxiv.org/abs/2503.05336.
- ^
UK AI Security Institute. 2024. Inspect AI: Framework for Large Language Model Evaluations. https://github.com/UKGovernmentBEIS/inspect_ai.
- ^
Casper, Stephen, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Haupt, et al. 2024. “Black-Box Access Is Insufficient for Rigorous AI Audits.” arXiv [Cs.CY]. arXiv. http://arxiv.org/abs/2401.14446.
- ^
Zou, Andy, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. 2023. “Universal and Transferable Adversarial Attacks on Aligned Language Models.” arXiv [Cs.CL]. arXiv. http://arxiv.org/abs/2307.15043.
- ^
Scher, Aaron. 2024. “Mechanisms to Verify International Agreements about AI Development.” November 27, 2024. https://techgov.intelligence.org/research/mechanisms-to-verify-international-agreements-about-ai-development.
- ^
Scher, 2024.
- ^
“Joint Verification Experiment.” n.d. Middlebury Institute of International Studies. Accessed May 17, 2025. https://nonproliferation.org/lab-to-lab-joint-verification-experiment/.
zeshen @ 2025-07-30T14:43 (+3)
I see similarities with this paper. It seems your work focuses more on what's feasible for geopolitical rivals?