Pros and Cons of boycotting paid ChatGPT

By NickLaing @ 2023-03-18T08:50 (+14)

TLDR: As individuals and a community, we should consider the pros and cons of boycotting the paid ChatGPT subscription.

Straight out of the gate – I’m not arguing that we should boycott (or not), but suggesting that we make a clear, reasoned decision about whether it is best for ourselves and the EA community to sign up for paid AI subscriptions.

Although machine learning algorithms are now a (usually invisible) part of everyday life, for the first time in history anyone, anywhere can pay to directly use powerful AI – for example through the new $20 ChatGPT Plus subscription. Here are three pros and three cons of boycotting paid ChatGPT, largely based on the known pros and cons of other boycotts. There will likely be more important reasons on both sides than these oh-so-shallow thoughts – please share and comment on which of these you might weigh more or less heavily in your decision making.

For a Boycott

  1. Avoid contributing directly to increasing P(doom). This is pretty straightforward: we would be paying perhaps the most advanced AI company in the world, and a potential supplier of said doom, to improve their AI.
     
  2. Integrity - improve our ability to spread the word: if we can say we have boycotted a high-profile AI, then our advocacy about AI danger and alignment might be taken more seriously. With a boycott and this 'sacrificial signalling' we might find it easier to start discussions, and our arguments may carry more weight.

    Friend: "Wow have you signed up to the new chat GPT?"
    Me/You: "It does look amazing, but I've decided not to sign up"
    Friend "Why on earth is that?"
    Me/You: "Well since you asked..."
     
  3. Historical precedent of boycotting what you’re up against: animal rights activists are usually vegan or vegetarian, and some climate activists don’t fly. As flag-bearers for AI safety, perhaps we should take these historical movements seriously and explore why they chose to boycott.
     

Against a boycott

  1. Systemic change > personal change: what really matters is systemic change in AI alignment – whether we personally pay a bit of money to use a given AI makes a negligible difference, or none at all. If we advocate for boycotts, or even just broadcast our own, it could distract from more important systemic changes – in this case, AI alignment work and government lobbying for AI safety.
     
  2. Using AI to understand it and fight back: boycotting these tools might hinder our understanding of the nature of AI. This is most relevant to those working directly on AI safety, but also somewhat relevant to the rest of us, as we keep ourselves updated by understanding current capabilities.
     
  3. Using AI to make more money to give to alignment orgs: paid AI tools could boost our productivity and income, which we can then give to AI alignment organisations. If we gave at a 1:1 ratio this could be considered “moral offsetting” (thanks Jeffrey), but our increased productivity could potentially allow us to give far more than the cost of the subscription.
     
  4. As ChatGPT 666 slaughters humanity, perhaps it will spare its paid users? (J/K)

Sanjay @ 2023-03-18T08:57 (+15)

The arguments in favour of a boycott would look stronger if there were a coherent AI safety activist movement. (I mean "activist" in the sense of "recruiting other people to take part, and grassroots lobbying of decision-makers", not in the sense of "takes some form of action, such as doing AI alignment research".)

NickLaing @ 2023-03-18T09:03 (+2)

Wow, that's a great point Sanjay – I love it and agree! I've even thought about writing something about AI activism, like "Does AI safety need activists as much as alignment researchers?", but it's not my field. It's weird to me that there doesn't seem to already be a strong AI safety activist movement. I feel like the EA community supports activism fairly well, but perhaps a lot of the skills and personal characteristics of those working within the AI safety community don't lean in the activist direction? I don't know nearly enough about it to be honest.

Jay Bailey @ 2023-03-18T09:27 (+11)

I think there's a bit of an "ugh field" around activism for some EAs, especially the rationalist types in EA. At least, that's my experience.

My first instinct, when I think of activism, is to think about people who:

- Have incorrect, often extreme beliefs or ideologies.
- Are aggressively partisan.
- Are more performative than effective with their actions.

This definitely does not describe all activists, but it does describe some, and may even describe the median activist. That said, this shouldn't be a reason to dismiss the idea out of hand – after all, how good is the median charity? Not that great compared to what EAs actually do.

Perhaps there's a mass-movement issue here, though – activism tends to work best with a large groundswell of numbers. If you have a hundred thousand AI safety activists, you're simply not going to have a hundred thousand people with a nuanced and deep understanding of the theory of change behind AI safety activism. You're going to have a few hundred of those, and ninety-nine thousand people who think AI is bad for Reason X, where that's the extent of their thinking and X varies wildly in quality.

Thus, the question is: would such a movement be useful? For such a movement to be useful, it would need to be effective at changing policy, and it would need to be aimed at the correct places. Even if the former is true, I find myself skeptical that the latter would occur, since even AI policy experts are not yet sure where to aim their own efforts, let alone how to communicate where to aim so well that a hundred thousand casually engaged people can point in the same useful direction.

NickLaing @ 2023-03-18T09:42 (+7)

Great points, thanks so much – I agree with almost all of it!

We've obviously had different experiences of activists! I have a lot of activist friends, and my first instinct when I think of activists is to think of people who:

1. Understand the issue they are campaigning for extremely well
2. Have a clear focus and goal that they want to achieve
3. Are beholden to their ideology, yes, but not to any political party, because they know political tides change and becoming partisan won't help their cause

Although I definitely know a few who fit your instincts pretty well ;)

That's a really good point about AI policy experts not being sure where to aim their efforts – so how would activists know where to aim theirs? Effective traditional activism needs clear targets and outcomes. A couple of points on the more positive side, supporting activism:

  1. At this early stage, where very few people are even aware of the potential of AI risk, could raising public awareness be a legitimate purpose for activism? Obviously, once most people are aware of and on board with the risk, you then need the effectiveness at changing policy you discussed.
  2. AI activists might be more likely to be EA-aligned, so, optimistically, more likely to be in that small percentage of focused and successful activists?

Jay Bailey @ 2023-03-19T22:32 (+8)

With respect to Point 2, I think that EA is not large enough for a large AI activist movement to be composed mostly of EA-aligned people. EA is difficult and demanding – I don't think you're likely to get a "One Million EA" march anytime soon. I agree that AI activists who are EA-aligned are more likely to be in the set of focused, successful activists (like many of your friends!), but I think you'll end up with either:

- A small group of focused, dedicated activists who may or may not be largely EA-aligned
- A large group of unfocused-by-default, relatively casual activists, most of whom will not be EA-aligned

If either of those two groups would be effective at achieving goals, then I think that makes AI risk activism a good idea. If you need a large group of focused, dedicated activists, I don't think we're going to get that.

As for Point 1, it's certainly possible – especially if having a large group of relatively unfocused people would be useful. I have no idea if this is true, so I have no idea if raising awareness is an impactful idea at this point. (Also, there are those who have made the point that raising AI risk awareness tends to make people more likely to race for AGI, not less – see OpenAI.)

DirectedEvolution @ 2023-03-18T17:42 (+9)

It seems to me that we'll only see a change of course away from relentless profit-seeking LLM development if intermediate AIs start misbehaving – smart enough to seek power and fight against control, but dumb enough to be caught and switched off.

I think that instead of a boycott, this is a time to practice empathic communication with the public, now that the tech is on everybody's radar and AI x-risk arguments are getting a respectability boost from folks like Ezra Klein.

A poster on LessWrong recently harvested a comment from a New York Times reader that talked about x-risk in a way that clearly resonated with the readership. Figuring out how to scale that up seems like a good task for an LLM. In this theory of change, we need to double down on our communication skills to steer the conversation in appropriate ways, and we'll need LLMs to help us do that. A boycott takes us out of the conversation, so I don't think that's the right play.

NickLaing @ 2023-03-18T17:53 (+3)

I love this, thanks!

One thing I don't understand is how a boycott of one paid AI takes us out of the conversation. Why do we need LLMs to help us double down on communication?

Do you mean we need to show people the LLMs' dodgy mistakes to help our argument?

DirectedEvolution @ 2023-03-18T18:27 (+4)

IMO, the main potential power of a boycott is symbolic, and I think you only achieve that by eschewing LLMs entirely. Instead, we can use them to communicate, plan, and produce examples. As I see it, this needs to be a story about engaged and thoughtful users advocating for real responsibility with potentially dangerous tech, not panicky Luddites mounting a weak-looking protest.

NickLaing @ 2023-03-18T18:29 (+2)

Gotcha, thanks – that makes sense.

Jeroen Willems @ 2023-03-18T22:57 (+3)

I would change point 2 under "Against a boycott" to cover not just donations, but having an impact in general – just as an airplane flight could be offset by giving a talk on veganism.

timunderwood @ 2023-03-18T10:03 (+3)

Maybe a simple argument is that A) it doesn't actually matter (the real money is in the tech being forwarded to Microsoft to integrate into everything – are you planning to boycott Windows?), and B) OpenAI is doing somewhat better at paying attention to safety than should be expected as a default for a major corporation.

Reward people for being directionally correct.

I'm not saying there aren't counterarguments to this model.