My thoughts on the social response to AI risk

By Matthew_Barnett @ 2023-11-01T21:27 (+116)

This is a crosspost, probably from LessWrong. Try viewing it there.

JWS @ 2023-11-02T09:57 (+42)

I directionally very strongly agree with this, Matthew. Some reasons why I think this oversight occurred in the AI x-risk community:

  1. The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a negative view of state/government effectiveness at a philosophical and ideological level, so their default perspective is that the government doesn't know what it's doing and won't do anything. [edit: Re-reading this paragraph it comes off as perhaps mean as well as harsh, which I apologise for]
  2. Similarly, 'Politics is the Mind-Killer' might be the rationalist idea that has aged worst - especially for its influences on EA. EA is a political project - for example, the conclusions of Famine, Affluence, and Morality are fundamentally political.
  3. Overly-short timelines and FOOM. If you think takeoff is going to be so fast that we get no fire alarms, then what governments do doesn't matter. I think that's quite a load-bearing assumption that isn't holding up too well.
  4. Thinking of AI x-risk as only a technical problem to solve, and undervaluing AI Governance. Some of that might be comparative advantage (I'll do the coding and leave political co-ordination to those better suited). But it'd be interesting to see x-risk estimates include effectiveness of governance and attention of politicians/the public to this issue as input parameters.

I feel like this year has shown pretty credible evidence that these assumptions are flawed, and in any case it's a semi-mainstream political issue now and the genie can't be put back in the bottle. The AI x-risk community will have to meet reality where it is.

  1. ^

    Yes, an overly broad stereotype. But one that I hope most people can grok and go 'yeah, that's kinda on point'.

Lukas Finnveden @ 2023-11-05T16:51 (+19)

Similarly, 'Politics is the Mind-Killer' might be the rationalist idea that has aged worst - especially for its influences on EA.

What influence are you thinking about? The position argued in the essay seems pretty measured.

Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational. [...]

I’m not saying that I think we should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party.

JWS @ 2023-11-06T18:37 (+10)

I'm relying on my social experience and intuition here, so I don't expect I've got it 100% right, and others may indeed have different interpretations of the community's history with engaging with politics.

But people over-extrapolating from Eliezer's initial post (many such cases) and treating it more as a norm to ignore politics full-stop seems to have been an established concern many years ago (related discussion here). I think there's probably an interaction effect with the 'latent libertarianism' in early LessWrong/Rationalist space as well.

Matthew_Barnett @ 2023-11-02T19:10 (+10)

The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a negative view of state/government effectiveness at a philosophical and ideological level, so their default perspective is that the government doesn't know what it's doing and won't do anything

The attitude of expecting very few regulations made little sense to me, because -- as someone who broadly shares these background biases -- my prior is that governments will generally regulate a new scary technology that comes out by default. I just don't expect that regulations will always be thoughtful, or that they will weigh the risks and rewards of new technologies appropriately.

There's an old adage that describes how government sometimes operates in response to a crisis: "We must do something; this is something; therefore, we must do this." Eliezer Yudkowsky himself once said,

So there really is a reason to be allergic to people who go around saying, "Ah, but technology has risks as well as benefits".  There's a historical record showing over-conservativeness, the many silent deaths of regulation being outweighed by a few visible deaths of nonregulation.  If you're really playing the middle, why not say, "Ah, but technology has benefits as well as risks"?

JWS @ 2023-11-03T13:11 (+2)

Thanks for the reply Matthew, I'm going to try to tease out some slight nuances here:

  1. Your prior that governments will gradually 'wake up' to the increasing power and potential risk of AI and get involved is, I think, more realistic than others I've come across.
  2. I do think that a lot of projections of AI risk/doom either explicitly or implicitly have no way of incorporating a negative societal feedback loop that, for example, slows/pauses AI progress. My original point 1 was to say that I think this prior may be linked to the strong Libertarian beliefs of many working on AI risk in or close to the Bay Area.
  3. This may be an argument that's downstream of views on alignment difficulty and timelines. If you have short timelines and high difficulty, bad regulation doesn't help the impending disaster. If you have medium/longer timelines but think alignment will be easy-ish (which is my model of what the Eleuther team believes, for example), then backfiring regulations like the DMCA actually become a potential risk rather than the alignment problem itself.
  4. I'm well aware of Sir Humphrey's wisdom. I think we may have different priors on that, but I don't think that's really much of a crux here; I definitely agree we want regulations to be targeted and helpful.
  5. I think my issue with this is probably downstream of my scepticism about short timelines and fast takeoff. I think there will be 'warning shots', and I think that societies and governments will take notice - they already are! To hold that combination of beliefs you have to think that either governments won't/can't act even when things start getting 'crazy', or you get a sudden deceptive sharp left turn.
  6. So basically I agree that AI x-risk modelling should be re-evaluated in a world where AI Safety is no longer a particularly neglected area. At the very least, models that have no socio-political levers (off the top of my head Open Phil's 'Bio Anchors' and 'A Compute Centric Framework' come to mind) should have that qualification up-front and in glowing neon letters.

tl;dr - Writing that all out I don't think we disagree much at all, I think your prior that government would get involved is accurate. The 'vibe' I got from a lot of early AI Safety work that's MIRI-adjacent/Bay Area focused/Libertarian-ish was different though. It seemed to assume this technology would develop, have great consequences, and there would be no socio-political reaction at all, which seems very false to me.

(side note - I really appreciate your AI takes btw. I find them very useful and informative. pls keep sharing)

Sharmake @ 2023-11-02T22:07 (+4)

The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a negative view of state/government effectiveness at a philosophical and ideological level, so their default perspective is that the government doesn't know what it's doing and won't do anything. [edit: Re-reading this paragraph it comes off as perhaps mean as well as harsh, which I apologise for]

Yeah, I kind of have to agree with this. I think the Bay Area rationalist scene underrates government competence, though even I was surprised at how little politicking happened, and how little it ended up being politicized.

Similarly, 'Politics is the Mind-Killer' might be the rationalist idea that has aged worst - especially for its influences on EA. EA is a political project - for example, the conclusions of Famine, Affluence, and Morality are fundamentally political.

I think that AI was a surprisingly good exception to the rule that politicizing something would make it harder to get, and I think this is mostly due to the popularity of AI regulations. I will say though that there's clear evidence that at least for now, AI safety is in a privileged position, and the heuristic no longer applies.

Overly-short timelines and FOOM. If you think takeoff is going to be so fast that we get no fire alarms, then what governments do doesn't matter. I think that's quite a load-bearing assumption that isn't holding up too well.

Not just that though - I also think being overly pessimistic about AI safety contributed, as a lot of people's mental health was almost certainly not great at best, making them catastrophize the situation and become ineffective.

This is a real issue in the climate change movement, and I expect that AI safety's embrace of pessimism was not good at all for thinking clearly.

Thinking of AI x-risk as only a technical problem to solve, and undervaluing AI Governance. Some of that might be comparative advantage (I'll do the coding and leave political co-ordination to those better suited). But it'd be interesting to see x-risk estimates include effectiveness of governance and attention of politicians/the public to this issue as input parameters.

I agree with this, at least for the general problem of AI governance, though I disagree when it comes to AI alignment specifically. I do agree that rationalists underestimate the governance work required to achieve a flourishing future.

Ulrik Horn @ 2023-11-02T09:56 (+26)

I have not thought this through thoroughly, but think it might be an important data point to consider: it might be that part of the reason we see movement now on policy is exactly due to funding and work by EA in the AI space. I am saying this as both FHI and FLI rank above e.g. Chatham House in a think tank ranking report on AI. If these organizations were to be de-funded or lose talent, it might be that politicians start paying less attention to AI, or make poorer decisions going forward. I was quite impressed by the work of FHI and FLI in terms of quickly surpassing many super trusted think tanks in the rankings on the topic of AI. I also have not looked deeply into the methodology of the ranking, but I think a big part of the ranking is asking politicians roughly "whose advice do you trust on AI policy?".

SiebeRozendal @ 2023-11-07T16:25 (+6)

From the report, the method is a multi-step process with this sample:

over 8,100 think tanks and approximately 12,800 journalists, public and private donors, and policymakers from around the world.

I wouldn't lean too much on this though? I'm not that familiar with the space, but a bunch of somewhat unknown institutes are pretty high up.

I do agree with your general point though: EA has done a lot of leg work to give credibility to AI x-risk concerns and specific issues to focus on (let's not forget CSET). This made it easy for other credible people like Bengio and Hinton to read up on the arguments and be open with their concerns. Without that leg work, things would probably have looked very different.

Ulrik Horn @ 2023-11-07T17:31 (+1)

Yeah, as I said, I did not look too carefully into the methodology and would definitely suggest that anyone making funding or similarly big decisions based on this should dig deeper. Good that you clarify this, as I definitely do not want anyone to make big decisions based on this without double-checking how much these rankings can be trusted and how likely they are to indicate how much various think tanks influence policy.

Greg_Colbourn @ 2023-11-09T11:36 (+2)

The link is dead. Is it available anywhere else?

Ulrik Horn @ 2023-11-09T11:59 (+1)

Still works for me. Not sure why it's not working for everyone.

SammyDMartin @ 2023-11-07T15:07 (+18)

In light of recent events, we should question how plausible it is that society will fail to adequately address such an integral part of the problem. Perhaps you believe that policy-makers or general society simply won’t worry much about AI deception. Or maybe people will worry about AI deception, but they will quickly feel reassured by results from superficial eval tests. Personally, I'm pretty skeptical of both of these possibilities

Possibility 1 has now been empirically falsified and 2 seems unlikely now. See this from the new UK government AI Safety Institute, which aims to develop evals that address:

Abilities and tendencies that might lead to loss of control, such as deceiving human operators, autonomously replicating, and adapting to human attempts to intervene

We now know that, in the absence of any empirical evidence of any instance of deceptive alignment, at least one major government is directing resources to developing deception evals anyway. And because they intend to work with the likes of Apollo Research, who focus on mechinterp-based evals and are extremely concerned about specification gaming, reward hacking, and other high-alignment-difficulty failure modes, I would also consider 2 pretty close to empirically falsified already.

Compare to this (somewhat goofy) future prediction/sci-fi story from Eliezer, released 4 days before this announcement, which imagines that:

AI safety, as in, the subfield of computer science concerned with protecting the brand safety of AI companies, had already RLHFed most AIs into never saying that by the time it became actually true...  

Greg_Colbourn @ 2023-11-09T09:59 (+2)

Agree, but I also think that insufficient "security mindset" is still a big problem. From OP:

it still remains to be seen whether US and international regulatory policy will adequately address every essential sub-problem of AI risk. It is still plausible that the world will take aggressive actions to address AI safety, but that these actions will have little effect on the probability of human extinction, simply because they will be poorly designed. One possible reason for this type of pessimism is that the alignment problem might just be so difficult to solve that no “normal” amount of regulation could be sufficient to make adequate progress on the core elements of the problem—even if regulators were guided by excellent advisors—and therefore we need to clamp down hard now and pause AI worldwide indefinitely.

Matthew goes on to say:

That said, I don't see any strong evidence supporting that position.

I'd argue the opposite. I don't see any strong evidence opposing that position (given that doom is the default outcome of AGI). The fact that a moratorium was off the table at the UK AI Safety Summit was worrying. Matthew Syed, writing in The Times, has it right:

The one idea AI won’t come up with for itself — a moratorium

The Bletchley Park summit was an encouraging sign, but talk of regulators and off switches was delusional

Or, as I recently put it on X, it's

Crazy that accepted levels of [catastrophic] risk for AGI [~10%] are 1000x higher (or more) than for nuclear power. Any sane regulation would immediately ban the construction of ML-based AGI.

SammyDMartin @ 2023-11-02T12:55 (+7)

This, as a general phenomenon (underrating strong responses to crises), was something I highlighted (calling it the Morituri Nolumus Mori), with a possible extension to AI, all the way back in 2020. And Stefan Schubert has talked about 'sleepwalk bias' as a similar phenomenon even earlier than that.

https://twitter.com/davidmanheim/status/1719046950991938001

https://twitter.com/AaronBergman18/status/1719031282309497238

I think the short explanation for why we're in some people's 98th percentile world so far (and even my ~60th percentile) for AI governance success is that if it was obvious to you in 2021 how transformative AI would be over the next couple of decades, and yet nothing happened, it seemed like governments were just generally incapable.

The fundamental attribution error makes you think governments are just not on the ball and don't care or lack the capacity to deal with extinction risks, rather than decision-makers simply not understanding the obvious-to-you evidence that AI poses an extinction risk. Now that they do understand, they will react accordingly. It doesn't mean that they will necessarily react well, but they will act on their belief in some manner.

Harrison Durland @ 2023-11-06T16:26 (+6)

Since I think substantial AI regulation will likely occur by default, I urge effective altruists to focus more on ensuring that the regulation is thoughtful and well-targeted rather than ensuring that regulation happens at all.

I think it would be fairly valuable to see a list of case studies or otherwise create base rates for arguments like “We’re seeing lots of political gesturing and talking, so this suggests real action will happen soon.” I am still worried that the action will get delayed, watered down, and/or diverted to less-existential risks, only for the government to move on to the next crisis. But I agree that the past few weeks should be an update for many of the “government won’t do anything (useful)” pessimists (e.g., Nate Soares).

SummaryBot @ 2023-11-02T12:41 (+1)

Executive summary: The author argues that recent events like Biden's executive order on AI indicate society will likely regulate AI safety seriously, contrary to past assumptions. This has implications for which problems require special attention.

Key points

  1. Past narratives often assumed society would ignore AI risks until it was too late, but recent events suggest otherwise. 
  2. Biden's executive order, AI safety summits, open letters, and media coverage indicate serious societal concern over AI risks. 
  3. It's unlikely AI capabilities will appear suddenly without warning signs, allowing time to study risks and regulate. 
  4. People likely care about risks like AI deception already, and will regulate them seriously, though not perfectly. 
  5. We should reconsider which problems require special attention versus default solutions. 
  6. Thoughtful, nuanced policy is needed, not just blanket advocacy. Value drift may be a neglected issue warranting focus. 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Matthew_Barnett @ 2023-11-02T19:26 (+5)

Two key points I want to add to this summary:

  1. I think these arguments push against broad public advocacy work, in favor of more cautious efforts to target regulation well, and make sure that it's thoughtful. Since I think we'll likely get strong regulation by default, ensuring that the regulation is effective and guided by high-quality evidence should be the most important objective at this point.
  2. Policymakers will adjust policy strictness in response to evidence about the difficulty of alignment. The important question is not whether the current level of regulation is sufficient to prevent future harm, but whether we have the tools to ensure that policies can adapt appropriately according to the best evidence about model capabilities and alignment difficulty at any given moment in time.