“X distracts from Y” as a thinly-disguised fight over group status / politics
By Steven Byrnes @ 2023-09-25T15:29 (+89)
1. Introduction
There’s a popular argument that says:
It’s bad to talk about whether future AI algorithms might cause human extinction, because that would be a distraction from the fact that current AI algorithms are right now causing or exacerbating societal problems (misinformation, deepfakes, political polarization, algorithmic bias, maybe job losses, etc.)
For example, Melanie Mitchell makes this argument (link & my reply here), as does Blake Richards (link & my reply here), as does Daron Acemoglu (link & a reply by Scott Alexander here & here), and many more.
In Section 2 I will argue that if we try to flesh out this argument in the most literal and straightforward way, it makes no sense, and is inconsistent with everything else these people are saying and doing. Then in Section 3 I’ll propose an alternative elaboration that I think is a better fit.
I’ll close in Section 4 with two ideas for what we can do to make this problem better.
(By “we”, I mean “people like me who are very concerned about future AI extinction risk (x-risk[1])”. That’s my main intended audience for this piece, although everyone else is welcome to listen in too. If you’re interested in why someone might believe that future AI poses an x-risk in the first place, you’re in the wrong place—try here or here.)
2. Wrong way to flesh out this argument: This is about zero-sum attention, zero-sum advocacy, zero-sum budgeting, etc.
If we take the “distraction” claim above at face value, maybe we could flesh it out as follows:
Newspapers can only have so many front-page headlines per day. Lawmakers can only pass so many laws per year. Tweens can only watch so many dozens of TikTok videos per second. In general, there is a finite supply of attention, time, and money. Therefore, if more attention, time, and money is flowing to Cause A (= future AI x-risk), then that means there’s less attention, time, and money left over for any other Cause B (= immediate AI problems).
I claim that this is not the type of claim that people are making. After all, if that’s the logic, then the following would be equally sensible:
- “It’s bad to talk about police incompetence, because it’s a distraction from talking about police corruption.”
- “It’s bad to talk about health care reform, because it’s a distraction from talking about climate change.”
Obviously, nobody makes those arguments. (Well, almost nobody—see next subsection.)
Take the first one. I think it’s common sense that concerns about police incompetence do not distract from concerns about police corruption. After all, why would they? It’s not like newspapers have decided a priori that there will be one and only one headline per month about police problems, and therefore police incompetence and police corruption need to duke it out over that one slot. If anything, it’s the opposite! If police incompetence headlines are getting clicks, we’re likely to see more headlines on police corruption, not fewer. It’s true that the total number of headlines is fixed, but it’s perfectly possible for police-related articles to collectively increase, at the expense of articles about totally unrelated topics like Ozempic or real estate.
By the same token, there is no good reason that concerns about future AI causing human extinction should be a distraction from concerns about current AI:
- At worst, they’re two different topics, akin to the silly idea above that talking about health care reform is a problematic distraction from talking about climate change.
- At best, they are complementary, and thus akin to the even sillier idea above that talking about police incompetence is a problematic distraction from talking about police corruption.
Supporting the latter perspective, immediate AI problems are not entirely separate from possible future AI x-risk. Some people think they’re extremely related—see for example Brian Christian’s book. I don’t go as far as he does, but I do see some synergies. For example, both current social media recommendation algorithm issues and future AI x-risk issues are exacerbated by the fact that huge trained ML models are very difficult to interpret and inspect. By the same token, if we work towards international tracking of large AI training runs, it might be useful for both future AI x-risk mitigation and ongoing AI issues like disinformation campaigns, copyright enforcement, AI-assisted spearphishing, etc.
2.1 Side note on Cause Prioritization
I said above that “nobody” makes arguments like “It’s bad to talk about health care reform, because it’s a distraction from talking about climate change”. That’s an exaggeration. Some weird nerds like me do say things kinda like that, in a certain context. That context is called Cause Prioritization, a field of inquiry usually associated these days with Effective Altruism. The whole shtick of Cause Prioritization is to take claims like the above seriously. If we only have so much time in our life and only so much money in our bank account, then there are in fact tradeoffs (on the margin) between spending it to fight for health care reform, versus spending it to fight for climate change mitigation, versus everything else under the sun. Cause Prioritization discourse can come across as off-putting, and even offensive, because you inevitably wind up in a position where you’re arguing against lots of causes that you actually care deeply and desperately about. So most people just reject that whole enterprise. Instead they don’t think explicitly about those kinds of tradeoffs, and insofar as they want to make the world a better place, they tend to do so in whatever way seems most salient and emotionally compelling, perhaps because they have a personal connection, etc. And that’s fine.[2] But Cause Prioritization is about facing those tradeoffs head-on, and trying to do so in a principled, other-centered way.
If you want to do Cause Prioritization properly, then you have to dive into (among other things) a horrific minefield of quantifying various awfully-hard-to-quantify things like “what’s my best-guess probability distribution for how long we have until future x-risk-capable AI may arrive?”, or “exactly how many suffering chickens are equivalently bad to one suffering human?”, or “how do we weigh better governance in Spain against preventing malaria deaths?”.
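To give a concrete (and deliberately oversimplified) sense of what that quantification exercise looks like, here is a minimal toy sketch in Python. The function name, the causes listed, and every number in it are hypothetical placeholders I made up for illustration, not estimates about any real cause; the point is only to show the shape of the comparison that Cause Prioritization forces out into the open:

```python
# Toy sketch of the kind of comparison Cause Prioritization makes explicit.
# ALL numbers below are made-up placeholders for illustration only; they are
# not estimates of the actual cost-effectiveness of any real cause.

def expected_good_per_dollar(p_success: float, good_if_success: float, cost: float) -> float:
    """Crude expected value per dollar: probability-weighted good divided by cost."""
    return p_success * good_if_success / cost

# (probability the intervention helps, "good" if it helps on some common scale, cost in $)
toy_causes = {
    "malaria prevention":     (0.9,   5_000,         10_000),
    "chicken welfare reform": (0.5,   40_000 * 0.01, 10_000),  # 0.01 = hypothetical chicken-to-human moral weight
    "AI x-risk mitigation":   (0.001, 8_000_000_000, 10_000),  # tiny probability, enormous stakes
}

for cause, (p, good, cost) in toy_causes.items():
    print(f"{cause:>24}: {expected_good_per_dollar(p, good, cost):>14,.2f} (hypothetical units of good per $)")
```

The hard part, of course, is that every one of those inputs (the probabilities, the moral weights, the “units of good”) is exactly the sort of awfully-hard-to-quantify thing listed above.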
Anyway, I would be shocked if anyone saying “we shouldn’t talk about future AI risks because it’s a distraction from current AI problems” arrived at that claim via a good-faith open-minded attempt at Cause Prioritization.
Indeed, as mentioned above, there are people out there who do try to do Cause Prioritization analyses, and “maybe future AI will cause human extinction” tends to score right at or near the top of their lists. (Example.)
2.2 Conclusion
So in conclusion, people say “concerns about future AI x-risks distract from concerns about current AI”, but if we flesh out that claim in a superficial, straightforward way, then it makes no sense.
…And that was basically where Scott Alexander left it in his post on this topic (from which I borrowed some of the above examples). But I think Scott was being insufficiently cynical. I offer this alternative model:
3. Better elaboration: This is about zero-sum group status competition
I don’t think anyone is explicitly thinking like the following, but let’s at least consider the possibility that something like this is lurking below the surface:
If we endorse actions to mitigate x-risk from future AIs, we’re implicitly saying “the people who are the leading advocates of x-risk mitigation, e.g. Eliezer Yudkowsky, were right all along.” Thus, we are granting those people status and respect. And thus everything else that those same people say and believe—especially but not exclusively on the topic of AI—implicitly gets more benefit-of-the-doubt.
Simultaneously on the other side, if we endorse actions to mitigate x-risk from future AIs, we’re implicitly saying “the people who are the leading advocates against x-risk mitigation, e.g. Timnit Gebru, were wrong all along.” Thus, we are sucking status and respect away from those people. And thus everything else that those people say and believe—especially but not exclusively on the topic of AI—gets some guilt by association.
Now, the former group of people seem much less concerned about immediate AI concerns like AI bias & misinformation than the latter group. [Steve interjection: I don’t think it’s that simple—see Section 4.2 below—but I do think some people currently believe this.] So, if we take actions to mitigate AI x-risk, we will be harming the cause of immediate AI concerns, via this mechanism of raising and lowering people’s status, and putting “the wrong people” on the nightly news, etc.
Do you see the disanalogy to the police example? The people most vocally concerned about police incompetence, versus the people most vocally concerned about police corruption, are generally the very same people. If we elevate those people as reliable authorities, and let them write op-eds, and interview them on the nightly news, etc., then we are simultaneously implicitly boosting all of the causes that these people are loudly advocating, i.e. we are advancing both the fight against police incompetence and the fight against police corruption.
As an example in the other direction, if a left-wing USA person said:
It’s bad for us to fight endless wars against drug cartels—it’s a distraction from compassionate solutions to drug addiction, like methadone clinics and poverty reduction.
…then that would sound perfectly natural to me! Uncoincidentally, in the USA, the people advocating for sending troops to fight drug cartels, and the people advocating for poverty reduction, tend to be political adversaries on almost every other topic!
4. Takeaways
4.1 Hey AI x-risk people, let’s make sure we’re not pointlessly fanning these flames
As described above, there is no good reason that taking actions to mitigate future AI x-risk should harm the cause of solving immediate AI-related problems; if anything, it should be the opposite.
So: we should absolutely, unapologetically, advocate for work on mitigating AI x-risk. But we should not advocate for work on mitigating AI x-risk instead of working on immediate AI problems. That’s just a stupid, misleading, and self-destructive way to frame what we’re hoping for. To be clear, I think this kind of weird stupid framing is already very rare on “my side of the aisle”—and far outnumbered by people who advocate for work on x-risk and then advocate for work on existing AI problems in the very next breath—but I would like it to be even rarer still.
(I wouldn’t be saying this if I didn’t see it sometimes; here’s an example of me responding to (what I perceived as) a real-world example on Twitter.)
In case the above is not self-explanatory: I am equally opposed to saying we should work on mitigating AI x-risk instead of working on the opioid crisis, and for the same reason. Likewise, I am equally opposed to saying we should fight for health care reform instead of fighting climate change.
I’m not saying that we should suppress these kinds of messages because they make us look bad (although they obviously do); I’m saying we should suppress these kinds of messages because they are misleading, for reasons in Section 2 above.
To make my request more explicit: If I’m talking about how to mitigate x-risk, and somebody changes the subject to immediate AI problems that don’t relate to x-risk, then I have no problem saying “OK sure, but afterwards let’s get back to the human extinction thing we were discussing before….” Whereas I would not say “Those problems you’re talking about are much less important than the problems I’m talking about.” Cause Prioritization is great for what it is, but it's not a conversation norm. If someone is talking about something they care about, it's fine if that thing isn't related to alleviating the maximum amount of suffering. That doesn't give you the right to change the subject. Notice that even the most ardent AI x-risk advocates seem quite happy to devote substantial time to non-cosmologically-impactful issues that they care about—NIMBY zoning laws are a typical example. And that’s fine!
Anyway, if we do a good job of making a case that literal human extinction from future AI is a real possibility on the table, then we win the argument—the Cause Prioritization will take care of itself. So that’s where we need to be focusing our communication and debate. Keep saying: “Let’s go back to the future-AI-causing-human-extinction thing. Here’s why it’s a real possibility.” Keep bringing the discussion back to that. Head-to-head comparisons of AI x-risk versus other causes tend to push discussions away from this all-important crux. Such comparisons would be a (ahem) distraction!
4.2 Shout it from the rooftops: There are people of all political stripes who think AI x-risk mitigation is important (and there are people of all political stripes who think it’s stupid)
Some people have a strong opinion about “silicon valley tech people”—maybe they love them, or maybe they hate them. Does that relate to AI x-risk discourse? Not really! Because it turns out that “silicon valley tech people” includes many of the most enthusiastic believers in AI x-risk (e.g. see the New York Times profile of Anthropic, a leading AI company in San Francisco) and it also includes many of its most enthusiastic doubters (e.g. tech billionaire Marc Andreessen: “The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world…”).
Likewise, some people have a strong opinion (one way or the other) about “the people extremely concerned about current AI problems”. Well, it turns out that this group likewise includes both enthusiastic believers in future AI x-risk (e.g. Tristan Harris) and enthusiastic doubters (e.g. Timnit Gebru).
By the same token, you can find people taking AI x-risk seriously in Jacobin magazine on the American left, or on Glenn Beck on the American right; in fact, a recent survey of the US public got supportive responses from Democrats, Republicans, and Independents—all to a quite similar extent—to questions about AI extinction risk being a global priority.[3]
I think this situation is good and healthy, and I hope it lasts, and we should try to make it widely known. I think that would help fight the “X distracts from Y” objection to AI x-risk, in a way that complements the kinds of direct, object-level counterarguments that I was giving in Section 2 above.
- ^
There are fine differences between “extinction risk” and “x-risk”, but it doesn’t matter for this post.
- ^
Sometimes I try to get people excited about the idea that they could have a very big positive impact on the world via incorporating a bit of Cause Prioritization into their thinking. (Try this great career guide!) Sometimes I even feel a bit sad or frustrated that such a tiny sliver of the population has any interest whatsoever in thinking that way. But none of that is the same as casting judgment on those who don’t—it’s supererogatory, in my book. For example, practically none of my in-person friends have heard of Cause Prioritization or related ideas, but they’re still great people who I think highly of.
- ^
Party breakdown results were not included in the results post, but I asked Jamie Elsey of Rethink Priorities and he kindly shared those results. It turns out that, for every question, the support / oppose and agree / disagree breakdowns matched across the three groups (Democrats, Independents, Republicans) to within at most 6 percentage points. If you look at the overall plots, I think you’ll agree that this counts as “quite similar”.
titotal @ 2023-09-25T17:41 (+58)
I'm not a big fan of the distraction argument, and I encourage cooperation between ethicists and x-riskers. However, I don't think you fully inhabited the mind of the x-risk skeptic here.
From their perspective, AI x-risk is absurd. They think it's all based on shoddy thinking and speculation by wacky internet people who are wrong about everything.
From your perspective, it's a matter of police corruption vs police incompetence.
From their perspective, it's a matter of police corruption vs police demonic possession.
Imagine if you're a police reformer who wakes up one day to see article after article worried that the police are being possessed by demons into doing bad things, and seeing a huge movement out there worried about the demon cops. You are then interviewed about whether you are concerned by the demonic possession in the police force.
I think the distraction argument is a natural response to this kind of situation. You want to be clear that you don't believe in demons at all, and that demonic cop possession is not a real problem, but also that police corruption is a real issue. Hence: "demonic cop possession is a distraction from police corruption". I think this is a defensible statement! It's certainly true about the interview itself, in that you want to talk about issues that are real, not ones that aren't.
Steven Byrnes @ 2023-09-25T20:46 (+37)
Thanks for the comment!
I think we should imagine two scenarios, one where I see the demonic possession people as being “on my team” and the other where I see them as being “against my team”.
To elaborate, here’s yet another example: Concerned Climate Scientist Alice responding to statements by environmentalists of the Gaia / naturalness / hippy-type tradition. Alice probably thinks that a lot of their beliefs are utterly nuts. But it’s pretty plausible that she sees them as kinda “on her side” from a vibes perspective. (Hmm, actually, also imagine this is 20 years ago; I think there’s been something of a tribal split between pro-tech environmentalists and anti-tech environmentalists since then.) So Alice would probably make somewhat diplomatic statements, emphasizing areas of agreement, etc. Maybe she would say “I think they have the right idea about deforestation and many other things, although I come at it from a more scientific perspective. I don’t think we should take the Gaia idea too literally. But anyway, everyone agrees that there’s an environmental crisis here…” or something like that.
In your demon example, imagine someone saying “I think it’s really great to see so many people questioning the narrative that the police are always perfect. I don’t think demonic possession is the problem, but y’know why so many people keep talking about demonic possession? It’s because they can see there’s a problem, and they’re angry, and they have every right to be angry because there is in fact a problem. And that problem is police corruption…”.
So finally back to the AI example, I claim there’s a strong undercurrent of “The people talking about AI x-risk, they suck, those people are not on my team.” And if there wasn’t that undercurrent, I think most of the x-risk-doesn’t-exist people would have at worst mixed feelings about the x-risk discourse. Maybe they’d be vaguely happy that there are all these new anti-AI vibes going around, and they would try to redirect those vibes in the directions that they believe to be actually productive, as in the above examples: “I think it’s really great to see people across society questioning the narrative that AI is always a force for good and tech companies are always a force for good. They’re absolutely right to question that narrative; that narrative is wrong and dangerous! Now, on this specific question, I don’t think future AI x-risk is anything to worry about, but let’s talk about AI companies stomping on copyright law…”
Very different vibe, right? Much less aggressive trashing of AI x-risk than what we actually see from some people.
To be clear, in a perfect world, people would ignore vibes and stay on-topic and at the object level, and Alice would just straightforwardly say “My opinion is that Gaia is pseudoscientific nonsense” instead of sanewashing it and immediately changing the subject, and ditto with the demon person and the other imaginary people above. I’m just saying what often happens in practice.
Back to your example, I think it’s far from obvious that the number of articles about police corruption is going to go down in absolute numbers, although it obviously goes down as a fraction of police articles. It’s also far from obvious that this situation will make it harder rather than easier to get anti-corruption laws passed, or to fundraise.
titotal @ 2023-09-26T11:19 (+24)
Great reply! In fact, I think that the speech you wrote for the police reformer is probably the best way to advance the police corruption cause in that situation, with one change: they should be very clear that they don't think that demons exist.
I think there is an aspect where the AI risk skeptics don't want to be too closely associated with ideas they think are wrong: if the AI x-riskers are proven to be wrong, they don't want to go down with the ship. I.e., if another AI winter hits, or an AGI is built that shows no sign of killing anyone, then everyone who jumped on the x-risk train might look like fools, and they don't want to look like fools (for both personal and cause-related reasons).
I think there definitely is an aspect of "AI x-risk people suck", but I worry that casting it as a team sports thing makes it seem overly irrational. When Timnit Gebru says that AI x-risk people suck, she's saying they are net negative: they do far more harm by promoting the incorrect x-risk idea and by the actions they take (for example, helping start OpenAI) than they do incidental good in raising AI ethics awareness. You might think this belief is wrong, but the resulting actions make perfect sense, given this belief.
To modify the Gaia example, it'd be like if the Gaia people were trying to block all renewable energy building because it interrupted the chakras of the earth, and also loudly announcing that an earth spirit will become visible to the whole planet in 5 years. Yes, they are objectively increasing attention to your actual cause, but debunking them is still the correct move here. They've moved from on your team to not on your team because of objective object level disagreements over what beliefs are true and what actions should be taken.
trevor1 @ 2023-09-26T01:49 (+1)
I think this comment describes a really important crux here: treating AI safety concerns as being like belief in demonic possession is one of the core tools in the toolbox of anyone trying to criticize AI safety in bad faith.
This is a routinely recurring dynamic, in public statements by celebrities like Mitchell, in the news, etc., and ignoring it because it's stupid is to ignore a key gear in any information warfare battlespace (a defensive one in this case). They might be afraid to say it outright, but it can still be there between the lines a majority of the time, as a dog whistle, etc.
Benjamin M. @ 2023-09-25T16:47 (+21)
I agree with large chunks of this post, but I'm weakly confident (75ish%) that the claim about how newspapers work is wrong. Most newspapers that I am familiar with give their reporters specified beats (topics that they focus on), to at least some extent, although I think there are also reporters that don't have specific beats. So if there's an important tech story that needs to be covered, like AI x-risk, some of that coverage is going to come as a replacement for other tech stories and some of it is going to be taken from people who write on pretty much anything. That still might mean more AI present-risk coverage, because it's hard to talk about one without talking about the other, and because there's a lot of other room in tech to take away stories from, but I don't think it's as simple as it appears.
However, I'm basing this mostly off info from newspapers that wouldn't write important AI x-risk stories. Maybe they behave differently? Some other people here probably know more than I do.
Steven Byrnes @ 2023-09-25T20:51 (+7)
That might be true in the very short term but I don’t believe it in general. For example, how many reporters were on the Ukraine beat before Russia invaded in February 2022? And how many reporters were on the Ukraine beat after Russia invaded? Probably a lot more, right?
Kirsten @ 2023-09-25T17:32 (+9)
I've seen a lot of claims that "x distracts from y" come up as a request to stay on topic. Sometimes a group of people will be having a conversation about something (let's say AI ethics) and someone else will show up and start talking about something different (let's say AI safety). If they make a habit of this, it can make it really difficult to make any progress in your conversation (about AI ethics), and you might reasonably ask people to stop changing the subject.
Kirsten @ 2023-09-25T17:33 (+4)
I see it most often in online spaces, like the comments of a forum or Facebook post, but I've seen it happen irl as well (for example, some women are discussing negative experiences related to being a woman and a man starts discussing another related topic before they've finished).
Peter Berggren @ 2023-09-25T17:09 (+8)
I think there are two meanings of "distraction" here. The first, more "serious" meaning that the media probably uses is in the more generic sense of "something which distracts people." The second one, and one that a lot of people in the "AI ethics" community like to use, is a sense in which this was deliberately thought up as a diversion by tech companies to distract the public from their own misconduct.
A problem I see is people equivocating between these two meanings, and thus inadvertently arguing against the media's weird steel-man version of the AI ethicists' core arguments, instead of the real arguments they are making.