List #1: Why stopping the development of AGI is hard but doable

By Remmelt @ 2022-12-24T09:52 (+24)

This is a crosspost, probably from LessWrong. Try viewing it there.

Sharmake @ 2022-12-24T18:34 (+4)

I think that this may be the case, but I would be much more cautious about trying to regulate AI development. I'd start with baby steps that mostly won't cost too much or provoke backlash, like interpretability research.

My model of the situation is:

  1. People are more or less rational; that is, we shouldn't expect large deviations from rational-agent models.

  2. People are mostly selfish, with altruism being essentially signalling, which carries little value here.

  3. AI has enough of a chance to bring vastly positive changes on par with a singularity that it dominates other considerations.

In other words, even if there were only a 1% chance of a singularity, its expected impact would be large enough that belief in high AI risk alone is insufficient to get the population on your side.

This, in a nutshell, is why I do not think the post is correct, and why I think the AI governance/digital democracy/privacy movements are greatly overestimating what costs can be imposed on AI companies (also known as alignment taxes).

I think AI governance could be surprisingly useful. But attempts to slow things down significantly are mostly unrealistic for the time being.

Remmelt @ 2022-12-25T02:35 (+2)

(copy-pasting my response from LessWrong:)

Good to read your thoughts.

I would agree that slowing further developments in AI capability generalisation by more than half over the next few years is highly improbable. We've got to work with what we have.

My mental model of the situation is different.

  1. People engage in positively reinforcing dynamics around social prestige and market profit, even if what they are doing is net bad for what they care about over the long run.

  2. People are mostly egocentric and have difficulty connecting and relating, particularly in the current individualistic, social-signalling, “divide and conquer” market environment.

  3. Scaling up the deployable capabilities of AI has enough of a chance of reaping extractive benefits for narcissistic/psychopathic tech-leader types that they will go ahead with it, while sowing the world with techno-optimistic visions that suit their strategy. They will do so even though general AI will (cannot not) lead to the wholesale destruction of everything we care about in the society and larger environment we’re part of.