Pausing AI Developments Isn't Enough. We Need to Shut it All Down

By EliezerYudkowsky @ 2023-04-09T15:53 (+50)

RobBensinger @ 2023-04-09T15:56 (+14)

(Meta note: The TIME piece was previously discussed here. I've cross-posted the contents because the TIME version is paywalled in some countries, and is plastered with ads. This version adds some clarifying notes that Eliezer wrote on Twitter regarding the article.)

Caleb_Maresca @ 2023-04-10T13:38 (+5)

I am not aware of any international treaty that sanctions the use of force against a non-signatory nation, except in circumstances where a signatory nation is first attacked by a non-signatory (e.g. collective-defense agreements such as NATO). Your counterexample of the Israeli airstrike on the Osirak reactor is not a precedent: it was not a lawful use of force under international law and was not sanctioned by any treaty. I agree that the Israeli government made the right decision in carrying out the attack, but it is important to point out the differences between that and what you are suggesting.

Ultimately, quibbling about whether your suggestion is an "act of violence" misses the point. What you suggest would be an unprecedented sanctioning of force. I believe the introduction of such an agreement would be very incendiary and would set a bad precedent. Note that no such agreement was signed to prevent nuclear proliferation. Many experts worried that nuclear weapons would proliferate much further than they ultimately did. Force was sometimes used, but always with a lighter hand than "let's sign a treaty to bomb anyone we think has a reactor."

Kinoshita Yoshikazu (pseudonym) @ 2023-04-11T15:54 (+3)

Before going too deep into the "should we air strike data centres" issue, I wonder if anyone out there has good numbers on the current availability of hardware for LLM training.

Assuming that the US/NATO is committed to shutting down AI development, how much impact does a serious restriction on chip production/distribution have on the ability of a foreign actor to train advanced LLMs? 

I suspect there are enough old GPUs out there that could be repurposed into training centres, but how much more difficult would it be if little or no new hardware were coming in?

And for those old GPUs inside consumer machines or crypto farms, is it possible to cripple their LLM training capability through software modifications?

Assuming that Microsoft and Nvidia/AMD are on board, I think it should be possible to push a firmware modification to almost every GPU installed in a Windows machine that is connected to the internet (that... should be almost everything). If software modifications can prevent GPUs from being used effectively in LLM training runs, this would hopefully take most existing GPU stocks (and all newly manufactured GPUs) out of the equation for at least some time.