What kind of organization should be the first to develop AGI in a potential arms race?
By Eevee🔹 @ 2022-07-17T17:41 (+10)
jimrandomh @ 2022-07-17T18:08 (+19)
If there's an arms race dynamic, it's probably a disaster no matter who wins. Having room to delay for late-stage alignment experiments is the barest minimum requirement in order for humanity to have any chance of survival. So the best case is to not have an arms race at all. The next-best thing is for the organization that wins to be the sort of organization that could stop at the brink for late-stage alignment research, if its leader decided to, and for it to have a stable leader who's sane enough to make that decision. Then maximize the size of the gap to second place, to increase the probability and length of that delay.
Needing it to be possible to stop rules out all of government and academia in the US as it exists now, since those organizations have their high-level decisions made by distant committees, who typically have strong incentives to maintain whatever superficially looks like the status quo, don't typically have the prerequisites to understand alignment-related strategy, and don't have the technical expertise to recognize when they're at the brink.
I believe that, of all of the organizations that could plausibly win an AGI arms race, this uniquely identifies DeepMind. I do have some misgivings about DeepMind's strategy, and I don't have full confidence that Demis would recognize when we're at the brink or stop there, but no other organization seems even vaguely plausible.
Jérémy Perret @ 2022-07-17T19:06 (+1)
Slightly humorous answer: it should be the very most pessimistic organization out there (I had MIRI in mind, but surely if we're picking the winner in advance we can craft an organization that goes even further on that scale).
My point is the same as jimrandomh's: if there's an arms race that actually goes all the way up to AGI, safety measures will get in the way of speed, corners will be cut, and disaster will follow.
This assumes, of course, that any unaligned AGI system will cause a non-recoverable catastrophe, regardless of the good intentions of its designers.
If this assumption proves wrong, then the winner of that race will still hold the most powerful and versatile technological artifact ever designed; the kind of organization to wield that kind of influence should be... careful.
I'm not sure which governance design best achieves the carefulness that is needed in that case.