Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety)
By Julia_Wise🔸 @ 2022-05-05T17:59 (+265)
Crossposted from Otherwise
This is the story of how I started to care about AI risk. It’s far from an ideal decision-making process, but I wanted to try to spell out the untidy reality.
I first learned about the idea of AI risk by reading a lot of LessWrong in the summer of 2011. I didn’t like the idea of directing resources toward it. I didn’t spell out my reasons to myself at the time, but here’s what I think was going on under the surface:
- I was already really dedicated to global health as a cause area, and didn’t want competition with that.
- The concrete thing you could do about AI risk seemed to be “donate to MIRI,” and I didn’t understand what MIRI was doing or how it was going to help.
- These people all seemed to be California tech guys, and that wasn’t my culture.
My explicit thoughts were something like:
- Well yeah, I can see how misaligned AI might be the end of everything
- But maybe that wouldn’t be so bad; seems like there’s a lot of suffering in the world
- Anyway, I don’t know what we’re really going to do about it.
In 2017, a coworker/friend who had worked at an early version of MIRI talked to some of her old friends and got particularly worried about short AI timelines. And seeing her level of concern clicked with me. She wasn’t a California tech guy; she was a former civil rights lawyer from Detroit. She was a Giving What We Can member. She felt like My People.
And I started to take it seriously. I started to feel viscerally that it could be very bad for everything I cared about if we developed superhuman AI and we weren’t ready.
Once I started caring about this area a lot, I took a fresh look around at what might be done about it. In the time since I’d first encountered the idea, more people had also started taking it seriously. Now there were more projects like AI policy work that I found easier to comprehend.
Two other things that shifted over time:
- My concern about people and animals having net-negative lives has been related to what’s happening with my own depression. My concern is a lot stronger when I’m doing worse personally. [edited to add: I don't know which of these impressions is more accurate — just noting that my sense of the external world shifts depending on my internal state.]
- Once I had children, I had a gut-level feeling that it was extremely important that they have long and healthy lives.
Changing my beliefs didn’t mean there were especially good actions to take. Once I changed my view on AI safety I was more willing to donate to that area, but a lot of people had the same idea, and there wasn’t/isn't a lot of obvious work that wasn’t already funded. So I’ve continued donating to a mix of global health (which I still really value) and EA community-building. I was already doing cause-general work and didn’t think I could be more useful in direct work, but I started to encourage other people to consider work on global catastrophic risks.
Reflections now:
- What subculture you belong to doesn’t mean much about how right you are about something. Subcultures / echo chambers develop different ideas from the mainstream, some of which will be valuable and many of which will be pointless or harmful. (LessWrong was also very into cryonics at the time, and I think it’s right for that idea to get a lot less attention than AI safety.)
- One downside of a homogeneous culture is that other people may bounce off for tribalistic reasons.
- Because you don’t share the same concerns, and don’t speak to the things they care about
- Because they’re put off in some basic social or demographic way, and never seriously listen to you in the first place
- When I think about what could have alerted me that my thinking was driven by group identity more than by logic, what comes to mind is the feeling of annoyance I had about “AI people.”
Erin Braid @ 2022-05-05T19:34 (+28)
Thanks for this post Julia! I really related to some parts of it, while other parts were very different from my experience. I'll take this opportunity to share a draft I wrote sometime last year, since I think it's in a similar spirit:
I used to be pretty uncomfortable with, and even mad about, the prominence of AI safety in EA. I always saw the logic – upon reading the sequences circa 2012, I quickly agreed that creating superintelligent entities not perfectly aligned with human values could go really, really badly, so of course AI safety was important in that sense – but did it really have to be such a central part of the EA movement, which (I felt) could otherwise have much wider acceptance and thus save more children from malaria? Of course, it would be worth allowing some deaths now to prevent a misaligned AI from killing everyone, so even then I didn’t object exactly, but I was internally upset about the perception of my movement and about the dead kids.
I don’t feel this way anymore. What changed?
- [people aren’t gonna like EA anyways – I’ve gotten more cynical and no longer think that AI was necessarily their true objection]
- [AI safety more concrete now – the sequences were extremely insistent but without much in the way of actual asks, which is an unsettling combo all by itself. Move to Berkeley? Devote your life to blogging about ethics? Spend $100k on cryo? On some level those all seemed like the best available ways to prove yourself a True Believer! I was willing to lowercase-b believe, but wary of being a capital-B Believer, which in the absence of actual work to do is the only way to signal that you understand the Most Important Thing In The World]
- [practice thinking about the general case, longtermism]
Unfortunately I no longer remember exactly what I was thinking with #3, though I could guess. #1 and #2 still make sense to me and I could try to expand on them if they're not clear to others.
Thinking about it now, I might add something like:
4. [better internalization of the fact that EA isn't the only way to do good lol – people who care about global health and wouldn't care about AI are doing good work in global health as we speak]
Julia_Wise @ 2022-05-06T01:27 (+9)
Yes, the drive to prove you Belong is another one of those under-the-surface things that's surprisingly powerful!
Denise_Melchin @ 2022-05-06T09:41 (+25)
Thank you for sharing!
My concern about people and animals having net-negative lives has been related to what’s happening with my own depression. My concern is a lot stronger when I’m doing worse personally.
I share the experience that my concern is stronger when I am in a worse mood but I am not sure I share your conclusion.
My concern comes from an intuitive judgement when I am in a bad mood. When I am in a good mood it requires cognitive effort to remember how badly off many other people and animals are.
I don't want to deprioritise the worst off in favour of creating many happy lives in the future just because I have a very privileged life and "forget" how badly off others are.
Julia_Wise @ 2022-05-06T13:06 (+24)
Oh, I don't think either conclusion is clearly right. I do worry that me being happy makes it too easy for me to neglect important worries about what things are like for others.
But I think I was sloppy in rounding to "maybe AI ending everything wouldn't be that bad," partly because the world could well get better than it currently is, and partly because unaligned AI could make things worse.
Denise_Melchin @ 2022-05-06T14:20 (+2)
That makes sense, thank you!
Spencer Becker-Kahn @ 2024-01-24T11:45 (+7)
I think there's something quite interesting here... I feel like one of the main things I see in the post is sort of the opposite of the intended message.
(I realise this is an old post now but I've only just read it and - full disclosure - I've ended up reading it now because I think my skepticism about AI risk arguments is higher than it's been for a long time and so I'm definitely coming at it from that point of view).
If I may paraphrase a bit flippantly, I think that one of the messages is sort of supposed to be: 'just because the early AI risk crowd were very different from me and kind of irritating(!), it doesn't mean that they were wrong' and so 'sometimes you need to pay attention to messages coming from outside of your subculture'.
But actually what happens in the narrative is that you only start caring about AI risk when an old friend who 'felt like one of your own' - and who was "worried" - manages to make you "feel viscerally" about it. So it wasn't that, without direct intervention from either 'tribe', you actually sat down with the arguments/data and understood things logically. Nor was it that you, say, found a set of AI/technology/risk experts to defer to. It was that someone with whom you had more of an affinity made you feel like we should care more and take it seriously. This sounds sort of like the opposite of the intended message, does it not? I.e. it sounds like more attention was paid to an emotional appeal from an old friend than to whatever arguments were available at the time.
Julia_Wise @ 2024-01-24T19:47 (+2)
Yep, that's all true. I think what I'm pointing to is that de facto people do decide what to pay attention to and what arguments to dig into based on arbitrary factors and tribalism. Ideally I'd have had some less arbitrary way to decide where to focus my attention, but here we are.
WilliamKiely @ 2022-05-05T22:43 (+5)
Thanks for sharing, Julia. I think this sort of post is valuable for helping individuals make better cause prioritization decisions. A related post is Claire Zabel's How we can make it easier to change your mind about cause areas.
Providing these insights can also help us understand why others might not be receptive to working on EA causes, which can be relevant for outreach work.
(Erin commented "people aren’t gonna like EA anyways – I’ve gotten more cynical", but I'm optimistic that an EA community that better understands stories like yours could do things differently to make people more receptive to caring about certain causes on the margin.)
Aaron_Scher @ 2022-05-08T20:29 (+3)
Thanks for linking Claire's post, a great read!
PaulCousens @ 2022-05-10T17:59 (+1)
I have never heard of the ideological Turing Tests that Claire referenced in their post. Those seem interesting. I have felt skeptical about Turing Tests generally; the nature of my skepticism is that they seem to tell us more about ourselves than they do about AI.
I think the question of what intelligence is, and how to define it, will be an important piece of AI. That definition still seems vague and not yet agreed upon. Sometimes I have thought that we haven't delved enough into what our own intelligence is and what makes it tick to start conferring intelligence on other entities. So shifting the focus of Turing Tests from AIs to ourselves seems like a good idea to me. I can foresee ideological Turing Tests enhancing our empathy for others and revealing biases we hold about them.
Julia_Wise @ 2022-05-11T13:05 (+6)
I think the idea is from Bryan Caplan originally: https://www.econlib.org/archives/2011/06/the_ideological.html
aaronmayer @ 2022-05-06T07:52 (+3)
This is a great post and my feelings have been almost identical over the years.
Thank you for sharing! I also appreciate you being so candid about your depression - I've found that many EAs are reticent to bring up personal/health issues given that the movement mostly impacts us on a quasi-professional basis.