Stress Externalities More in AI Safety Pitches

By NickGabs @ 2022-09-26T20:31 (+31)

It is important to figure out the best way(s) to convince people that AI safety is worth taking seriously because, despite being (in my opinion, and in the opinion of many in the EA community) the most important cause area, it often seems weird to people at first glance.  I think one way to make AI safety pitches more persuasive would be to frame AI safety as a problem that arises because the profit-based incentives of private-sector AI developers do not account for the externalities generated by risky AGI projects.

Many of the groups EA pitches to are relatively left-leaning; elite university students, in particular, are much more left-leaning than the general population.  As such, they are likely to be receptive to arguments for taking AI safety seriously that frame it as a problem rooted in deeper problems with capitalism.  One such problem is that capitalism fails to account for externalities: effects of economic activity that are not reflected in that activity's price.[1]  Developing AGI generates huge negative externalities.  A private-sector actor that creates aligned AGI would probably reap much of the economic gains from it (at least in the short term; it is unclear how these gains would be distributed over longer time scales), but it would pay only a small fraction of the costs of unaligned AGI, which are almost entirely borne by the rest of the world and by future generations.  Misalignment risk from AGI is thus significantly heightened by the structural failure of capitalism to account for externalities, a problem left-leaning people tend to be very mindful of.

Even beyond left-leaning students, educated people with some understanding of economics widely acknowledge that a major problem with capitalism is that it fails by default to deal with externalities.  Similarly, many people in the general public view corporations and big tech as irresponsible, greedy actors who harm the public good, even if they lack an understanding of the logic of externalities.  So in addition to being particularly persuasive to left-leaning people who understand externalities, this framing seems likely to be persuasive to people with a wider range of political orientations and levels of economic understanding.
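To make the externality claim concrete, here is a minimal toy sketch.  The symbols (p, B, c, D, s) are purely illustrative assumptions I'm introducing here, not estimates of real quantities; the point is only to show the wedge between the developer's payoff and society's payoff.

```latex
% Toy illustration (hypothetical symbols; not actual estimates):
%   p = probability the project yields aligned AGI
%   B = private benefit the developer captures if it is aligned
%   c = the developer's cost of the project
%   D = harm from an unaligned outcome
%   s = share of D borne by the developer itself, with s << 1
\begin{align*}
  \text{Developer's expected payoff:} \quad & pB - c - s(1-p)D \\
  \text{Society's expected payoff:}   \quad & pB - c - (1-p)D
\end{align*}
% The developer proceeds whenever  pB - c - s(1-p)D > 0,
% which can hold even when  pB - c - (1-p)D < 0.
% The gap, (1-s)(1-p)D, is the uninternalized externality.
```

The exact values don't matter; what matters is that the developer's decision rule ignores most of the term (1-p)D, which is exactly the structure of a standard negative externality.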

While this argument does not by itself imply that misaligned AGI constitutes an existential risk, combined with the claim that AI systems will have large impacts of some kind on the future (which many who are skeptical of AI x-risk still believe), it implies that we will by default significantly underinvest in ensuring that the AI systems which shape the future have positive effects on society.  This conclusion seems likely to make people broadly more concerned about the negative effects of AI.  Moreover, even if they do not conclude that AI development could pose an existential risk, the argument still implies that AI safety research is a public good that should receive much more funding and attention than it currently does.  Since alignment research focused on preventing existential catastrophe seems highly related to broader efforts to ensure future AI systems have positive effects on the world, having more people believe this claim seems quite good.

As a result, it seems like "AI safety is a public good which will be underinvested in by default" or (more polemically) "AI developers are gambling with the fate of humanity for the sake of profit, and we need to stop them/ensure that their efforts don't have catastrophic effects" should be more common frames for pitching the importance of AI safety.  This is an accurate and rhetorically effective framing of the problem.  Am I missing something?

  1. For a longer explanation of externalities, see https://www.imf.org/external/pubs/ft/fandd/basics/external.htm


PeterMcCluskey @ 2022-09-27T15:45 (+13)

It's risky to connect AI safety to one side of an ideological conflict.

jskatt @ 2022-09-28T06:06 (+3)

There are ways to frame AI safety as (partly) an externality problem without getting mired in a broader ideological conflict.

NickGabs @ 2022-09-28T14:29 (+2)

I think you can stress the "ideological" implications of externalities to lefty audiences while having a more neutral tone with more centrist or conservative audiences.  The idea that externalities exist and require intervention is not IMO super ideologically charged.

HaydnBelfield @ 2022-09-26T22:14 (+11)

I'm very pro framing this as an externality. It doesn't just help with left-leaning people; it can also be helpful when talking to other audiences, such as those immersed in economics or antitrust/competition law.

aogara @ 2022-09-26T21:01 (+6)

I like this framing a lot. My 60 second pitch for AI safety often includes something like this. “It’s all about making sure AI benefits humanity. We think AI could develop really quickly and shape our society, and the big corporations building it are thinking more about profits than about safety. We want to do the research they should be doing to make sure this technology helps everyone. It’s like working on online privacy in the 1990s and 2000s: Companies aren’t going to have the incentive to care, so you could make a lot of progress on a neglected problem by bringing early attention to the issue.”

rodeo_flagellum @ 2022-09-26T21:58 (+5)

Without thinking too deeply, I believe that this framing of AI risk, i.e. one in line with "AI developers are gambling with the fate of humanity for the sake of profit, and we need to stop them/ensure that their efforts don't have catastrophic effects", could serve as a conversational cushion for those who are unfamiliar with the general state of AI progress and with the existential risk poorly aligned AI poses.

Those unfamiliar with AI might disregard the extent of the risk if approached in conversation with remarks about how it is not only non-trivial that humanity might be extinguished by AI, but that many researchers believe this is likely to occur, perhaps even within the next 25 years. I imagine such scenarios are, for them, generally unbelievable.

The cushioning could, however, lead people to try to think about AI risk independently or to search for more evidence and commentary online, which might subsequently lead them to the conclusion that AI does in fact pose a significant existential risk to humanity.

When trying to introduce the idea of AI risk to someone who is unfamiliar with it, it's probably a good idea to give an example of a current issue with AI and then have them extrapolate. The example of poorly designed AI systems being used by corporations to maximize click-through, as covered in the introduction of Human Compatible, seems well suited to your framing of AI safety as a public good. Most people are familiar with the ills of algorithms designed for social media, so it is not a great leap to imagine researchers designing more powerful AI systems that are deleterious to humanity via a similar design flaw, but at a much more lethal level:

> They aren't particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user's preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they click on.

Marcel D @ 2022-09-27T03:08 (+2)

Ultimately, you should probably tailor messages to your audience, given their understanding, objections/beliefs, values, etc. If you think they understand the phrase “externalities,” I agree, but a sizable number of people in the world do not properly understand the concept.

Overall, I agree that this is probably a good thing to emphasize, but FWIW I think a lot of the pitches I’ve heard/read do emphasize this insofar as it makes sense to do so, albeit not always with the specific term “externality.”