List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well... coordinating as humans with AGI coordinating to be aligned with humans

By Remmelt @ 2022-12-24T09:53

A friend in technical AI Safety shared a list of cruxes for their next career step. 

One sub-crux behind their belief that AI progress cannot be stopped was that they could not see coordination happening without a world government and mass surveillance.

I asked:

Are the next beliefs and courses of action in your list heavily reliant on this first premise/belief? That is, if you gradually found out that there are effective methods for groups around the world to govern themselves so as not to build dangerous auto-scaling/catalysing technology (such as AGI), and that those methods do not rely on centralised world governance or mass surveillance, would that change your mind about what you / the AIS community need to focus efforts on?


Below is the list I then wrote in response, copy-pasted with light edits:

Why coordinating to align as humans to not develop AGI is a lot easier than, well... coordinating as humans with AGI coordinating to be aligned with humans