Open Philanthropy's AI governance grantmaking (so far)

By Aaron Gertler 🔸 @ 2020-12-17T12:00 (+63)

This is a linkpost to https://www.openphilanthropy.org/blog/ai-governance-grantmaking

Written by Luke Muehlhauser. To save time on formatting, I've left footnotes linked to the original source.


When the Soviet Union began to fracture in 1991, the world was forced to reckon with the first collapse of a nuclear superpower in history.1 The USSR was home to more than 27,000 nuclear weapons, more than one million citizens working at nuclear facilities, and over 600 metric tons of nuclear fissile materials.2 It seemed inevitable that some of these weapons, experts, and materials would end up in terrorist cells or hostile states,3 especially given a series of recent failed attempts at non-proliferation cooperation between the U.S. and the USSR.4

Seeing the threat, the Carnegie and MacArthur foundations funded a Prevention of Proliferation Task Force, which (among other outputs) produced the influential report “Soviet Nuclear Fission: Control of the Nuclear Arsenal in a Disintegrating Soviet Union” by Ash Carter and others.5 Shortly before the report’s publication, the authors presented their findings to Senators Sam Nunn (D-GA) and Richard Lugar (R-IN) at a meeting arranged by the president of the Carnegie foundation.6 In later remarks, Nunn described the report as having an “astounding effect” on him and other Senators.7

Later that year, Nunn and Lugar introduced legislation (co-drafted with Carter and others8) to create the Cooperative Threat Reduction Program, also known as the Nunn-Lugar Act.9 The bill provided hundreds of millions of dollars in funding and scientific expertise to help former Soviet Union states decommission their nuclear stockpiles. As of 2013,10 the Nunn-Lugar Act had achieved the dismantling or elimination of over 7,616 nuclear warheads, 926 ICBMs, and 498 ICBM sites. In addition to removing weapons, the program also attempted to ensure that remaining nuclear materials in the former USSR were appropriately secured and accounted for.11 In 2012, President Obama said that Nunn-Lugar was one of America’s “smartest and most successful national security programs,” having previously called it “one of the most important investments we could have made to protect ourselves from catastrophe.”12 President-Elect Joe Biden, a U.S. Senator at the time of Nunn-Lugar’s passage, called it “the most cost-effective national security expenditure in American history.”13

The Nunn-Lugar program is an example of how technology governance can have a very large impact, specifically by reducing global catastrophic risks from technology. Stories like this help inspire and inform our own grantmaking aimed at mitigating potential catastrophic risks from another (albeit very different) class of high-stakes technologies: some advanced artificial intelligence (AI) capabilities that will be fielded in the coming decades, and in particular what we call “transformative AI” (more below).14

We have previously described some of our grantmaking priorities related to technical work on “AI alignment” (e.g. here), but we haven’t yet said much about our grantmaking related to AI governance. In this post, I aim to clarify our priorities in AI governance, and summarize our AI governance grantmaking so far.

1. Our priorities within AI governance

By AI governance we mean local and global norms, policies, laws, processes, politics, and institutions (not just governments) that will affect social outcomes from the development and deployment of AI systems. We aim to support work related to both AI governance research (to improve our collective understanding of how to achieve beneficial and effective AI governance) and AI governance practice and influence (to improve the odds that good governance ideas are actually implemented by companies, governments, and other actors).

Within the large tent of “AI governance,” we focus on work that we think may increase the odds of eventual good outcomes from “transformative AI,” especially by reducing potential catastrophic risks from transformative AI15 — regardless of whether that work is itself motivated by transformative AI concerns (see next section). By transformative AI, I mean software that has at least as profound an impact on the world’s trajectory as the Industrial Revolution did.16 Importantly, this is a much larger scale of impact than others seem to mean when discussing “transformative technologies” or a “4th industrial revolution,” but it also doesn’t assume technological developments as radical as “artificial general intelligence” or “machine superintelligence” (see here). Nor does it assume any particular AI architecture or suite of capabilities; it remains an open empirical question which architectures and capabilities would have such extreme (positive or negative) impact on society. For example, even a small set of AI systems with narrow and limited capabilities could — in theory, in a worst-case scenario — have industrial-revolution-scale (negative) impact if they were used to automate key parts of nuclear command and control in the U.S. and Russia, and this was the primary cause of an unintended large-scale nuclear war.17 (But this is only one example scenario and, one hopes, a very unlikely one.18)

Unfortunately, it’s difficult to know which “intermediate goals” we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI. Would tighter regulation of AI technologies in the U.S. and Europe meaningfully reduce catastrophic risks, or would it increase them by (e.g.) privileging AI development in states that typically have lower safety standards and a less cooperative approach to technological development? Would broadly accelerating AI development increase the odds of good outcomes from transformative AI, e.g. because faster economic growth leads to more positive-sum political dynamics, or would it increase catastrophic risk, e.g. because it would leave less time to develop, test, and deploy the technical and governance solutions needed to successfully manage transformative AI? For those examples and many others, we are not just uncertain about whether pursuing a particular intermediate goal would turn out to be tractable — we are also uncertain about whether achieving the intermediate goal would be good or bad for society, in the long run. Such “sign uncertainty” can dramatically reduce the expected value of pursuing some particular goal,19 often enough for us to not prioritize that goal.20
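To see how much sign uncertainty can matter, consider a stylized example (the numbers here are purely illustrative, not estimates we endorse): suppose achieving some intermediate goal would produce a long-run benefit of magnitude $B$ if its effect on transformative AI outcomes turns out to be positive, and a harm of roughly the same magnitude if it turns out to be negative. If we assign probability $p$ to the positive case, the expected value of achieving the goal is

$$\mathbb{E}[V] = pB - (1-p)B = (2p-1)B.$$

At $p = 0.6$ this is only $0.2B$, a fifth of what the goal would be worth if we were confident of its sign, and at $p \le 0.5$ the expected value is zero or negative, even before accounting for the costs of pursuing the goal.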

As such, our AI governance grantmaking tends to focus on…

In a footnote, I list all the grants we’ve made so far that were, at least in part, motivated by their hoped-for impact on AI governance.23

2. Example work I’ve found helpful

Our sense is that relatively few people who work on AI governance share our focus on improving likely outcomes from transformative AI, for understandable reasons: such issues are speculative, beyond the planning horizon of most actors, may be intractable until a later time, may be impossible to forecast even in broad strokes, etc.

Nevertheless, there has been substantial AI governance work that I suspect has increased the odds of good outcomes from transformative AI,24 regardless of whether that work was itself motivated by transformative AI concerns or has any connection to Open Philanthropy funding. I list some examples below, in no particular order:

In the future, we hope to fund more work along these lines. As the examples above demonstrate, some of the work we fund will involve explicit analysis of very long-run, potentially transformative impacts of AI, but much of it will focus on more immediate, tractable issues of AI governance, so long as we are persuaded that the work has a decent chance of improving the odds of eventual good outcomes from transformative AI (and regardless of whether a given grantee has any interest in transformative AI).

Of course, we might never fund anything in AI governance as impactful as the work that led to the Nunn-Lugar Act, but per our commitment to hits-based giving, we are willing to take that risk given the scale of impact we expect from transformative AI.