Attention on AI X-Risk Likely Hasn't Distracted from Current Harms from AI

By Erich_Grunewald 🔸 @ 2023-12-21T17:24 (+189)

This is a linkpost to https://www.erichgrunewald.com/posts/attention-on-existential-risk-from-ai-likely-hasnt-distracted-from-current-harms-from-ai/

RyanCarey @ 2023-12-22T16:51 (+20)

I think that disagreement about the size of the risks is part of the equation. But it's missing what is, for at least a few of the prominent critics, the main element - people like Timnit, Kate Crawford, and Meredith Whittaker are bought into leftie ideologies focused on things like "bias", "prejudice", and "disproportionate disadvantage". So they see AI primarily as an instrument of oppression. The idea of existential risk cuts against the oppression/justice narrative, in that it could kill everyone equally. So they have to oppose it.

Obviously this is not what is happening with everyone in the FATE AI or AI ethics community, but I do think it's what's driving some of the loudest voices, and that we should be clear-eyed about it.

freedomandutility @ 2023-12-22T23:40 (+22)

I disagree because I think these people would be in favour of action to mitigate x-risk from extreme climate change and nuclear war.

RyanCarey @ 2023-12-23T13:07 (+14)

Interesting point, but why do these people think that climate change is likely to cause extinction? Again, it's because their thinking is politics-first. Their side of politics is warning of a likely "climate catastrophe", so they have to make that catastrophe as bad as possible - existential.

Daniel_Friedrich @ 2023-12-24T11:23 (+4)

The idea of existential risk cuts against the oppression/justice narrative, in that it could kill everyone equally. So they have to oppose it.

That seems like an extremely unnatural thought process. Climate change is the perfect analogy - in these circles, it's salient both as a tool of oppression and an x-risk.

I think far more selection of attitudes happens through paying attention to more extreme predictions than through thinking or communicating strategically. Also, I'd guess the people who spread these messages most consciously imagine a systemic collapse rather than literal extinction. Since people don't tend to think about longtermist consequences, the distinction doesn't seem that meaningful.

AI x-risk is weirder and more terrifying, and it goes against the heuristics that "technological progress is good", "people have always feared new technologies they didn't understand", and "the powerful draw attention away from their power". Some people for whom AI x-risk is hard to accept happen to overlap with AI ethics. My guess is that the proportion is similar in the general population - it's just that some people in AI ethics feel particularly strongly & confident about these heuristics.

Btw I think climate change could pose an x-risk in the broad sense (incl. 2nd-order effects & astronomical waste), just one that we're very likely to solve (i.e. the tail risks, energy depletion, biodiversity decline, or the social effects would have to surprise us).

Ryan Greenblatt @ 2023-12-21T21:57 (+16)

[Not relevant to the main argument of this post]

They do so because they think x-risk, which (if it occurs) involves the death of everyone

I'd prefer you not fixate on literally everyone dying, because it's actually pretty unclear whether AI takeover would result in everyone dying. (The same applies to misuse risk: bioweapons misuse can be catastrophic without killing literally everyone.)

For discussion of whether AI takeover would lead to extinction, see here, here, and here.

I wish there was a short term which clearly emphasizes "catastrophe-as-bad-as-over-a-billion-people-dying-or-humanity-losing-control-of-the-future".

SiebeRozendal @ 2023-12-27T16:57 (+4)

It's called an existential catastrophe: https://www.fhi.ox.ac.uk/Existential-risk-and-existential-hope.pdf or, if you mean one step down, it could be a "global catastrophe".

or colloquially "doom" (though I don't think this term has the right serious connotations)

Oliver Sourbut @ 2023-12-30T10:56 (+5)

Yeah. I also sometimes use 'extinction-level' if I expect my interlocutor not to already have a clear notion of 'existential'.

Gabriel Mukobi @ 2023-12-22T18:06 (+1)

lasting catastrophe?

perma-cataclysm?

hypercatastrophe?

Daniel_Friedrich @ 2023-12-22T15:33 (+14)

Great to see real data on the web interest! Over the past few weeks I investigated the same topic myself, taking a psychological perspective & paying attention to the EU AI Act, and reached the same conclusion (just published here).

Minh Nguyen @ 2023-12-21T18:50 (+12)

Thank you for this! This is one less misconception to deal with.

I always get suspicious when someone treats societal issues like a zero-sum game. Yes, we can worry about more than one thing at a time, and it's often not very productive to frame caring about one thing as oppositional to caring about another.

tobytrem @ 2024-01-04T16:55 (+9)

I'm curating this post. 
I think that this is a useful intervention, which contributes to a more productive meta-debate between AI X-risk and AI ethics proponents. 
Thanks for writing!

paul_dfr @ 2024-01-11T22:37 (+4)

Thank you for writing this, I found it very interesting and helpful. I have something between a belief and a hope that the antagonistic dynamics (which I agree are likely driven by the idea that AI safety is merely speculative) will settle down in the short-ish future as more empirical results emerge on the difficulty of training models with the intended goals (e.g. avoiding sycophancy) and get more widely appreciated. I think many people on the critical side still have the idea of AI safety as grounded largely in thought experiments only loosely connected to current technology.