Towards more cooperative AI safety strategies
By richard_ngo @ 2024-07-16T04:36 (+62)
Will Aldred @ 2024-07-16T08:15 (+4)
as power struggles become larger-scale, more people who are extremely good at winning them will become involved. That makes AI safety strategies which require power-seeking more difficult to carry out successfully.
How can we mitigate this issue? Two things come to mind. Firstly, focusing more on legitimacy [...] Secondly, prioritizing competence.
A third way to potentially mitigate the issue is to simply become more skilled at winning power struggles. Such an approach would be uncooperative, and therefore undesirable in some respects, but on balance, to me, seems worth pursuing to at least some degree.
… I realize that you, OP, have debated a very similar point before (albeit in a non-AI safety thread)—I’m not sure if you have additional thoughts to add to what you said there? (Readers can find that previous debate/exchange here.)
Lukas_Gloor @ 2024-07-16T17:39 (+3)
Secondly, prioritizing competence. Ultimately, humanity is mostly in the same boat: we're the incumbents who face displacement by AGI. Right now, many people are making predictable mistakes because they don't yet take AGI very seriously. We should expect this effect to decrease over time, as AGI capabilities and risks become less speculative. This consideration makes it less important that decision-makers are currently concerned about AI risk, and more important that they're broadly competent, and capable of responding sensibly to confusing and stressful situations, which will become increasingly common as the AI revolution speeds up.
I think this is a good point.
At the same time, I think you can infer that people who don't take AI risk seriously are somewhat likely to lack important forms of competence. This inference is only probabilistic, but it's IMO pretty strong already (a lot stronger now than it was four years ago), and it'll get stronger still.
It also depends on how much a given person has been interacting with the technology: it probably applies much less to DC policy people, but more to ML scientists or people at AI labs.
richard_ngo @ 2024-07-21T05:11 (+18)
you can infer that people who don't take AI risk seriously are somewhat likely to lack important forms of competence
This seems true, but I'd also say that the people who do take AI risk seriously typically lack different important forms of competence. I don't think this is coincidental; instead I'd say that there's (usually) a tradeoff between "good at taking very abstract ideas seriously" and "good at operating in complex fast-moving environments". The former typically requires a thinking-first orientation to the world, the latter an action-first one. It's possible to cultivate both, but I'd say most people are naturally inclined to one or the other (or neither).
William the Kiwi @ 2024-07-19T14:45 (+1)
Strong upvote. I have experienced problems with this too.
Actionable responses:
1. Build trust. Be clear about why you are seeking power/access. Reveal your motivations and biases. Offer in advance to answer uncomfortable questions.
2. Provide robust information. Reference expert opinion and papers. Show pictures/videos of physical evidence.
3. The biggest barrier most people face in processing AI extinction risk is emotional: mustering the resilience to confront the possibility that they may die. Anything that lowers this barrier helps. In my experience, people grasp extinction risk better when they "discover" it from trends and examples rather than being told about it directly.