Cruxes on US lead for some domestic AI regulation

By Zach Stein-Perlman @ 2023-09-10T18:00 (+20)

Written quickly. Suggestions welcome.

A possible risk of some US AI regulation is that it would differentially slow US AI progress, and that this would be bad. This post explores the factors that determine how much US regulation would differentially slow US AI progress and how bad that would be.

Note that the differentially-slowing-US problem only applies to regulation that slows US AI progress (toward powerful/dangerous systems), such as strong regulation of large training runs. The US can do things like facilitate incident reporting and clarify AI labs' liability for harms without slowing domestic AI progress, and some regulation (especially restricting the publication of AI research and the sharing of model weights) would differentially slow foreign AI progress!

Note that international coordination on AI safety mostly avoids this problem.

Cruxes

If I were making a model of the differentially-slowing-US problem, these would be its factors.

(Here "China" often can mean any foreign state. Actual-China seems most relevant because it's well-positioned to lead on AI in worlds where strong US regulation slows US AI progress.)

Two questions that seem particularly important are extraterritoriality and the "effectively move" question. I suspect some people have a good sense of the extent to which AI regulation would be extraterritorialized and what that depends on, and some people have a good sense of the extent to which labs can effectively hop regulatory jurisdictions and what that depends on. If you know, please let me know!
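
To make the shape of such a model concrete, here is a minimal toy sketch (my own illustration, not anything from the post) of how a few of the cruxes named above might combine. Every function name, parameter, and number is a hypothetical placeholder, not an estimate.

```python
# Toy sketch of a differentially-slowing-US model. All names and values are
# illustrative assumptions, not the author's model or anyone's estimates.

def differential_slowdown_badness(
    us_slowdown_months: float,      # how much the regulation slows US progress
    extraterritorial_reach: float,  # 0-1: fraction of the slowdown that also binds abroad
    jurisdiction_hop_ease: float,   # 0-1: how easily labs dodge the rule by relocating
    foreign_lag_months: float,      # how far behind foreign (e.g. Chinese) labs are
    relative_safety_gap: float,     # >1 means leading US labs are safer than foreign labs
) -> float:
    """Crude score for how bad differentially slowing the US is (higher = worse)."""
    # Regulation that labs dodge by relocating doesn't actually slow them
    # (though it also buys no safety); only the undodged part slows US progress.
    effective_us_slowdown = us_slowdown_months * (1 - jurisdiction_hop_ease)
    # The extraterritorialized part of the slowdown slows everyone,
    # so only the remainder is a *differential* US slowdown.
    differential_slowdown = effective_us_slowdown * (1 - extraterritorial_reach)
    # Eroding the lead matters more when the foreign lag is small...
    lead_erosion = differential_slowdown / max(foreign_lag_months, 1e-9)
    # ...and when leading US labs are much safer than their foreign counterparts.
    return lead_erosion * relative_safety_gap

# Illustrative only: a 6-month slowdown, modest extraterritoriality, a
# hard-to-dodge rule, a 24-month foreign lag, and US labs twice as "safe".
print(differential_slowdown_badness(6, 0.3, 0.2, 24, 2.0))
```

Even this crude sketch shows why extraterritoriality and jurisdiction-hopping are the pivotal questions: each one directly scales how much of a US slowdown ends up being differential at all.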


The US government should do speed-orthogonal safety stuff (e.g. facilitating safety features on hardware, clarifying liability, requiring training run reporting and incident reporting). The US government should slow foreign progress (e.g. restricting the publishing of research, restricting the sharing of research artifacts like model weights, imposing export controls, and setting security standards). My guess is that the US government should avoid slowing leading labs much; things that would change my mind include foreign labs seeming further behind than I currently believe or leading labs seeming less (relatively) safe than I currently believe.


Thanks to two people for discussing some of these ideas with me.

  1. ^

     Enforcing some best practices for safety wouldn't really hurt speed. Some important regulation would.

  2. ^

     To the extent that leading labs are already doing what a regulation would require, the regulation doesn't slow US AI progress, but it doesn't improve safety much either. (It would have the minor positive effects of requiring less cautious labs to be safer, preventing leading labs from becoming much less safe, and maybe causing future regulation to be more productive.)

  3. ^

     My impression: very unlikely.

  4. ^

     Or deny talent, but that seems less important.

  5. ^

     My impression: a lot.

  6. ^

     This seems less important than safety, but my impression is: moderately.


Lukas_Gloor @ 2023-09-11T10:21 (+4)

I suspect some people have a good sense of the extent to which AI regulation would be extraterritorialized and what that depends on, and some people have a good sense of the extent to which labs can effectively hop regulatory jurisdictions and what that depends on. If you know, please let me know!

One thing it probably depends on is regulation around (mega-sized) data centers and where they are located. Konstantin Pilz wrote a report on data centers which points out some geographical and economic constraints on where it makes sense to build them at scale. For instance, outside the US, Canada or Mexico might be good options (among other countries). (I'm not sure whether much of a lab's infrastructure needs to be right next to a mega-sized data center, or whether data can be handled remotely, but even if you're moving your lab somewhere else, you need to figure out compute availability somewhere.)

I'd imagine it's somewhat likely that Canada would follow US legislation if it seems reasonably motivated and if the US puts in an effort to make it happen; I'm more uncertain about US influence on other jurisdictions.

Yadav @ 2023-09-10T18:20 (+3)

It might be helpful to also think about China's compute access in a world where it invades Taiwan. This shouldn't be weighed highly IMO, but it still seems useful to work through.

trevor1 @ 2023-09-10T19:38 (+3)

I've heard that chip fabs/factories are extremely soft targets and would be destroyed by the losing side either way. This is definitely the right kind of way to think about this, though.

Yadav @ 2023-09-10T18:10 (+3)

I'd recommend the following reading for people interested in this: https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation

chinscratch @ 2023-09-10T21:26 (+1)

Even if it's developed in the US, you should expect China to steal it.

[FBI director Christopher Wray] said China deployed cyber espionage to "cheat and steal on a massive scale", with a hacking programme larger than that of every other major country combined.

https://www.bbc.com/news/world-asia-china-62064506

Zach Stein-Perlman @ 2023-09-10T22:18 (+2)

And largely, China doesn't even need to steal it: some labs publish their research and share their models! But yeah, this is a subquestion for the "To what extent does US AI progress boost Chinese AI progress (via e.g. publishing research or leaking insights)?" crux.