12 tentative ideas for US AI policy (Luke Muehlhauser)

By Lizka @ 2023-04-19T21:05 (+117)

This is a linkpost to https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy

Luke Muehlhauser recently posted this list of ideas. See also this List of lists of government AI policy ideas and How major governments can help with the most important century.

The full text of the post[1] is below. 


About two years ago, I wrote that “it’s difficult to know which ‘intermediate goals’ [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI.” Much has changed since then, and in this post I give an update on 12 ideas for US policy goals[2] that I tentatively think would increase the odds of good outcomes from transformative AI.[3]

I think the US generally over-regulates, and that most people underrate the enormous benefits of rapid innovation. However, when 50% of the experts on a specific technology think there is a reasonable chance it will result in outcomes that are “extremely bad (e.g. human extinction),” I think ambitious and thoughtful regulation is warranted.[4]

First, some caveats:

Those caveats in hand, below are some of my current personal guesses about US policy options that would reduce existential risk from AI in expectation (in no particular order).[6]

  1. Software export controls. Control the export (to anyone) of “frontier AI models,” i.e. models with highly general capabilities over some threshold, or (more simply) models trained with a compute budget over some threshold (e.g. as much compute as $1 billion can buy today). This will help limit the proliferation of the models which probably pose the greatest risk. Also restrict API access in some ways, as API access can potentially be used to generate an optimized dataset sufficient to train a smaller model to reach performance similar to that of the larger model.
  2. Require hardware security features on cutting-edge chips. Security features on chips can be leveraged for many useful compute governance purposes, e.g. to verify compliance with export controls and domestic regulations, monitor chip activity without leaking sensitive IP, limit usage (e.g. via interconnect limits), or even intervene in an emergency (e.g. remote shutdown). These functions can be achieved via firmware updates to already-deployed chips, though some features would be more tamper-resistant if implemented on the silicon itself in future chips.
  3. Track stocks and flows of cutting-edge chips, and license big clusters. Chips over a certain capability threshold (e.g. the one used for the October 2022 export controls) should be tracked, and a license should be required to bring together large masses of them (as required to cost-effectively train frontier models). This would improve government visibility into potentially dangerous clusters of compute. And without this, other aspects of an effective compute governance regime can be rendered moot via the use of undeclared compute.
  4. Track and require a license to develop frontier AI models. This would improve government visibility into potentially dangerous AI model development, and allow more control over their proliferation. Without this, other policies like the information security requirements below are hard to implement.
  5. Information security requirements. Require that frontier AI models be subject to extra-stringent information security protections (including cyber, physical, and personnel security), including during model training, to limit unintended proliferation of dangerous models.
  6. Testing and evaluation requirements. Require that frontier AI models be subject to extra-stringent safety testing and evaluation, including some evaluation by an independent auditor meeting certain criteria.[7]
  7. Fund specific genres of alignment, interpretability, and model evaluation R&D. Note that if the genres are not specified well enough, such funding can effectively widen (rather than shrink) the gap between cutting-edge AI capabilities and available methods for alignment, interpretability, and evaluation. See e.g. here for one possible model.
  8. Fund defensive information security R&D, again to help limit unintended proliferation of dangerous models. Even the broadest funding strategy would help, but there are many ways to target this funding to the development and deployment pipeline for frontier AI models.
  9. Create a narrow antitrust safe harbor for AI safety & security collaboration. Frontier-model developers would be more likely to collaborate usefully on AI safety and security work if such collaboration were more clearly allowed under antitrust rules. Careful scoping of the policy would be needed to retain the basic goals of antitrust policy.
  10. Require certain kinds of AI incident reporting, similar to incident reporting requirements in other industries (e.g. aviation) or to data breach reporting requirements, and similar to some vulnerability disclosure regimes. Many incidents wouldn’t need to be reported publicly, but could be kept confidential within a regulatory body. The goal of this is to allow regulators and perhaps others to track certain kinds of harms and close-calls from AI systems, to keep track of where the dangers are and rapidly evolve mitigation mechanisms.
  11. Clarify the liability of AI developers for concrete AI harms, especially clear physical or financial harms, including those resulting from negligent security practices. A new framework for AI liability should in particular address the risks from frontier models carrying out actions. The goal of clear liability is to incentivize greater investment in safety, security, etc. by AI developers.
  12. Create means for rapid shutdown of large compute clusters and training runs. One kind of “off switch” that may be useful in an emergency is a non-networked power cutoff switch for large compute clusters. As far as I know, most datacenters don’t have this.[8] Remote shutdown mechanisms on chips (mentioned above) could also help, though they are vulnerable to interruption by cyberattack. Various additional options could be required for compute clusters and training runs beyond particular thresholds.

Of course, even if one agrees with some of these high-level opinions, I haven’t provided enough detail in this short post for readers to know what, exactly, to advocate for, or how to do it. If you have useful skills, networks, funding, or other resources that you might like to direct toward further developing or advocating for one or more of these policy ideas, please indicate your interest in this short Google Form. (The information you share in this form will be available to me [Luke Muehlhauser] and some other Open Philanthropy employees, but we won’t share your information beyond that without your permission.)

  1. ^

    (Copied with permission.)

  2. ^

    Many of these policy options would plausibly also be good to implement in other jurisdictions, but for most of them the US is a good place to start (the US is plausibly the most important jurisdiction anyway, given the location of leading companies, and many other countries sometimes follow the US), and I know much less about politics and policymaking in other countries.

  3. ^

    For more on intermediate goals, see Survey on intermediate goals in AI governance.

  4. ^

    This paragraph was added on April 18, 2023.

  5. ^

    Besides my day job at Open Philanthropy, I am also a Board member at Anthropic, though I have no shares in the company and am not compensated by it. Again, these opinions are my own, not Anthropic's.

  6. ^

    There are many other policy options I have purposely not mentioned here. These include:

    - Hardware export controls. The US has already implemented major export controls on semiconductor manufacturing equipment and high-end chips. These controls have both pros and cons from my perspective, though it’s worth noting that they may be a necessary complement to some of the policies I tentatively recommend in this post. For example, the controls on semiconductor manufacturing equipment help to preserve a unified supply chain to which future risk-reducing compute governance mechanisms can be applied. These hardware controls will likely need ongoing maintenance by technically sophisticated policymakers to remain effective.

    - “US boosting” interventions, such as semiconductor manufacturing subsidies or AI R&D funding. One year ago I was weakly in favor of these policies, but recent analyses have nudged me into weakly expecting these interventions are net-negative given e.g. the likelihood that they shorten AI timelines. But more analysis could flip me back. “US boosting” by increasing high-skill immigration may be an exception here because it relocates rather than creates a key AI input (talent), but I’m unsure, e.g. because skilled workers may accelerate AI faster in the US than in other jurisdictions. As with all the policy opinions in this post, it depends on the magnitude and certainty of multiple effects pushing in different directions, and those figures are difficult to estimate.

    - AI-slowing regulation that isn’t “directly” helpful beyond slowing AI progress, e.g. a law saying that the “fair use” doctrine doesn’t apply to data used to train large language models. Some things in this genre might be good to do for the purpose of buying more time to come up with needed AI alignment and governance solutions, but I haven’t prioritized looking into these options relative to the options listed in the main text, which simultaneously buy more time and are “directly” useful to mitigating the risks I’m most worried about. Moreover, I think creating the ability to slow AI progress during the most dangerous period (in the future) is more important than slowing AI progress now, and most of the policies in the main text help with slowing AI progress in the future, whereas some policies that slow AI today don’t help much with slowing AI progress in the future.

    - Launching new multilateral agreements or institutions to regulate AI globally. Global regulation is needed, but I haven’t yet seen proposals in this genre that I expect to be both feasible and effective. My guess is that the way to work toward new global regulation is similar to how the October 2022 export controls have played out: the US can move first with an effective policy on one of the topics above, and then persuade other influential countries to join it. 

    - A national research cloud. I’d guess this is unhelpful because it accelerates AI R&D broadly and creates a larger number of people who can train dangerously large models, though the implementation details matter.

  7. ^
  8. ^

    E.g. the lack of an off switch exacerbated the fire that destroyed a datacenter in Strasbourg; see section VI.2.1 – iv of this report.

  9. ^

    Full text crossposted with permission. 


MaxRa @ 2023-04-20T09:37 (+25)

Tyler Cowen commented on these proposals:

I am OK with some of these, provided they are applied liberally — for instance, new editions of the iPhone require regulatory consent, but that hasn’t thwarted progress much.  That may or may not be the case for #3 through #6, I don’t know how strict a standard is intended or who exactly is to make the call.  Perhaps I do not understand #2, but it strikes me as a proposal for a complete surveillance society, at least as far as computers are concerned — I am opposed!  And furthermore it will drive a lot of activity underground, and in the meantime the proposal itself will hurt the EA brand.  I hope the country rises up against such ideas, or perhaps more likely that they die stillborn.  (And to think they are based on fears that have never even been modeled.  And I guess I can’t bring in a computer from Mexico to use?)  I am not sure what “restrict API access” means in practice (to whom? to everyone who might be a Chinese spy? and does Luke favor banning all open source? do we really want to drive all that underground?), but probably I am opposed to it.  I am opposed to placing liability for a General Purpose Technology on the technology supplier (#11), and I hope to write more on this soon.

Finally, is Luke a closet accelerationist?  The status quo does plenty to boost AI progress, often through the military and government R&D and public universities, but there is no talk of eliminating those programs.  Why so many regulations but the government subsidies get off scot-free!?  How about, while we are at it, banning additional Canadians from coming to the United States?  (Canadians are renowned for their AI contributions.)  After all, the security of our nation and indeed the world is at stake.  Canada is a very nice country, and since 1949 it even contains Newfoundland, so this seems like less of an imposition than monitoring all our computer activity, right?  It might be easier yet to shut down all high-skilled immigration.  Any takers for that one?

AndreFerretti @ 2023-04-29T15:04 (+6)

I've set up a Manifold market for each of the 12 policy ideas discussed in the post, thanks to Michael Chen's idea (Manifold uses collective wisdom to estimate the likelihood of events). You can visit the markets here and bet on whether the US will adopt these ideas by 2028. So go ahead and place your bets, because who said politics can't be a bit of a gamble?
 

MathiasKB @ 2023-04-20T09:33 (+4)

Fantastic to get this update - was just finding myself complaining about the lack of good object-level AI policy proposals!

At the risk of letting perfect be the enemy of the good, I would love a top-level post for each of the recommendations, going into much greater detail. Getting discussions of policy proposals into the open, where they can be criticized from diverse perspectives, is crucial for arriving at policies that are robustly good.

One thing I find interesting to think about is how well-funded non-governmental actors might be able to bring these policies to life. After all, I expect most progress to come out of a few influential labs. Getting a handshake agreement from those labs would achieve results not too dissimilar from national legislation.

For rapid shutdown mechanisms, for example, the bottleneck seems to me to be just as much developing the actual protocols as getting them adopted. If a great protocol is developed that would allow OpenAI leadership to shut down, at the hardware level, a compute cluster running an experimental AI, and adopting the protocol doesn't add much overhead, I feel like there's a non-zero chance they might adopt it without any coercion. If the overhead is significant, how significant would it be? Is it within the bounds of what a wealthy actor could subsidize?

Arturo Macias @ 2023-04-20T05:54 (+4)

I find any regulation totally premature. We are not training AI for anything close to general intelligence. We are still training brain tissue, not animals. 

https://forum.effectivealtruism.org/posts/uHeeE5d96TKowTzjA/world-and-mind-in-artificial-intelligence-arguments-against

PeterSlattery @ 2023-04-20T08:31 (+18)

[Quick meta comment to try to influence forum norms] 

This comment was at -5 karma when I saw it, and hidden.

I disagree with Arturo's comment and disagree-voted to indicate this. I also upvoted his comment because I appreciated that he engaged with the post to express his views and that he posted something on the forum to explain those views. 

I'd like other people to do something similar. I think that we should upvote people for expressing good faith disagreement and make an effort to explain that disagreement. Otherwise, the forum will become a complete echo chamber where we all just agree with each other. 

I also think that we should try particularly hard to engage with new people in the community who express reasonable disagreement. Getting lots of anonymous downvotes without useful insights generally discourages engagement in most situations, and I don't think that this is what we want.

Zach Stein-Perlman @ 2023-04-20T17:19 (+6)

Of course. But that doesn't really apply to Arturo's comment, which expresses an attitude but doesn't explain that attitude at all. So Arturo's comment

  • can't be useful to others and
  • is impossible to engage with, which is why nobody has replied to Arturo on the object-level.

I want less unhelpful-unexplained-attitude-expressing on the Forum.

 

Arturo, I wish you would explain your beliefs more so we can figure out the truth together.

PeterSlattery @ 2023-04-21T05:35 (+8)

He linked to his post in the comment. I presume he believes it explains why he disagrees. I'd consider that contribution enough not to deserve downvotes, but I see where you are coming from. 

With that said, if he said, "I think we need regulation" and offered two lines of related thoughts and the same link, would people have downvoted his comment for not being useful and being impossible to engage with? Probably not, I suspect.

Anyway, I may be wrong in this case, but I still think that we probably shouldn't be so quick to downvote comments like this one (or at least ones a bit better), especially for new community members. 

I see a lot of stuff on the forum get no comments at all, which seems worse than getting a few comments with opinions. 

I often see low-effort disagreeing comments on a post get downvoted, but similarly low-effort agreeable comments (e.g., "this sounds great") get upvoted. 

I am also influenced by other factors: discussions I have had and seen in which people I know who have been involved in EA for years said that they don't like using the forum because it is too negative or because they don't get any engagement on what they write.  

The expectation that lots of lurkers on the forum don't feel comfortable sharing quick thoughts or disagreements because they could get downvotes.

My experiences writing posts that almost no one commented on, where I would have welcomed a 2-minute opinion comment made without arguments or with just a supposedly supporting link.

But of course other people might disagree with all of that or see different trade-offs etc.

Arturo Macias @ 2023-04-21T13:12 (+6)

I have written two recent posts describing my position. In the first, I argued that nuclear war plus our primitive social systems imply we live in an age of acute existential risk, and that the substitution of our flawed governance by AI-based government is our chance of survival. 

In the second, I argue that, given the kind of specialized AI we are training so far, existential risk from AI is still negligible and regulation would be premature.

You can comment on the posts themselves, or you can comment on both posts here. 

tcelferact @ 2023-04-23T18:06 (+3)

This seems aimed at regulators; I'd be more interested in a version for orgs like the CIA or NSA. 

Both those orgs seem to have a lot more flexibility than regulators to more or less do what they want when national security is an issue, and AI could plausibly become just that kind of issue. 

So 'policy ideas for the NSA/CIA' could be at once both more ambitious and more actionable.

Zach Stein-Perlman @ 2023-04-23T19:40 (+2)

Interesting. Do you know of existing sources related to 'policy ideas for the NSA/CIA'? What can I read to learn about this?

Zach Stein-Perlman @ 2023-04-20T05:14 (+3)

I am (tentatively) excited about all of these ideas.

Vasco Grilo @ 2023-04-22T08:29 (+2)

Thanks for sharing!

This post doesn’t explain much of my reasoning for tentatively favoring these policy options. All the options below have complicated mixtures of pros and cons, and many experts oppose (or support) each one. This post isn’t intended to (and shouldn’t) convince anyone.

Does Open Phil plan to share any of the reasoning?