Why might AI be an x-risk? Succinct explanations please

By Sanjay @ 2023-04-04T12:46 (+20)

As part of a presentation I'll be giving soon, I'll be spending a bit of time explaining why AI might be an x-risk.

Can anyone point me to existing succinct explanations for this, which are as convincing as possible, brief (I won't be spending long on this), and (of course) demonstrate good epistemics?

The audience will be actuaries interested in ESG investing.

If someone fancies entering some brief explanations as an answer, feel free, but I was expecting to see links to content which already exists, since I'm sure there's loads of it.


jackva @ 2023-04-04T14:14 (+3)

Tyler John asked the same question on Twitter and got good responses:

https://twitter.com/tyler_m_john/status/1641061269116538881

kpurens @ 2023-04-07T15:03 (+2)

Here is a brief, intuitive answer that should provide evidence that there is a risk:

In the history of life before humans, there have been 5 documented mass extinctions. Humans--the first generally intelligent agents to evolve on our planet--are now causing the 6th mass extinction.

An intelligent agent that is superior to humans clearly has the potential to be another mass extinction agent--and if it turns out humans are in conflict with that agent, the risks are real.

So it makes sense to understand that risk--and, today, we don't, even though the development of these agents is barreling forward at an incredible pace.

https://en.wikipedia.org/wiki/Holocene_extinction

https://www.cambridge.org/core/journals/oryx/article/briefly/03807C841A690A77457EECA4028A0FF9
 

Vasco Grilo @ 2023-04-05T22:03 (+2)

Hi Sanjay, there is this post.

Daniel_Eth @ 2023-04-05T10:03 (+2)

I think my explainer on the topic does a good job:

https://forum.effectivealtruism.org/posts/CghaRkCDKYTbMhorc/the-importance-of-ai-alignment-explained-in-5-points

Due to the hierarchical manner in which I wrote the piece, it's brief as long as you don't dig too deep into too many of the claims.

Erich_Grunewald @ 2023-04-04T17:19 (+2)

How about something like:

Of course this is a rough argument, and necessarily leaves out a bunch of detail and nuance.

aogara @ 2023-04-04T16:10 (+2)

Some answers here: https://forum.effectivealtruism.org/posts/p3eiBqnijXPv5pCMA/usd20k-in-prizes-ai-safety-arguments-competition#comments

niplav @ 2023-04-04T13:00 (+2)

AI Risk for Epistemic Minimalists (Alex Flint, 2021).

trevor1 @ 2023-04-04T23:38 (+1)

As far as I'm aware, the best introduction to AI safety is the AI safety chapter in The Precipice. I've tested it on two 55-year-olds and it worked. 

It's a bit long (a 20-minute read, according to LessWrong), but it's filled to the brim with winning strategies for giving people a fair chance to understand AI as an x-risk. For example, it includes a list of names of reputable people who wholeheartedly endorse AI safety.

RomanHauksson @ 2023-04-04T18:37 (+1)

I think it's important to give the audience some sort of analogy that they're already familiar with, such as evolution producing humans, humans introducing invasive species into new environments, and viruses. These are all examples of "agents in complex environments which aren't malicious or Machiavellian, but disrupt the original group of agents anyway".

I believe these analogies are not object-level enough to be arguments for AI X-risk in themselves, but I think they're a good way to help people quickly understand the danger of a superintelligent, goal-directed agent.