How to explain AI risk/EA concepts to family and friends?

By CBiddulph @ 2021-07-12T07:36 (+9)

Background: I'm an undergraduate CS major. Recently, I've mentioned to my mom that I've been getting involved in the "effective altruism" community, and I've been expressing an increased interest in getting a PhD. The other day, my mom asked me why exactly I wanted a PhD.

Me: Well, I want to help others as much as possible.

Mom: Okay, how are you going to help people with a PhD?

Me: Well, I don't know... maybe try to reduce existential risks...

Mom: Whoa, existential risks?

Me: Uh, I don't know, I mean, maybe it wouldn't be that bad, but it seems likely that AI will be very important in the future. And if AIs have good goals that match up with the goals of humans, they could solve lots of the world's problems, so I really want to increase the odds of that happening.

Mom: So what's going to happen if AIs don't have good goals?

Me: Well, I guess... they could kill off humanity?

Mom: Whoa!

Fortunately, we moved on in the conversation at this point, but I don't think I gave her the best first impression of these ideas. Does anyone know of any good articles or videos for a popular audience that present the AI alignment problem in moderate depth, without too much sensationalism? I'm sure there are people who would do a much better job than me at explaining these concepts to my mom. Similarly, content on EA concepts in general would be helpful.

It's most important to me to convince my mom that what I'm doing is worthwhile, but I also want to be able to talk about my career plans with non-EAs without them thinking I've joined a Doomsday cult. For people working in existential risk and other "weird" areas: how do you usually talk about your work when it comes up in conversation?


RyanCarey @ 2021-07-12T13:33 (+18)

Explaining AI x-risk directly will excite about 20% of people and freak out the other 80%. That's fine if you want to be a public intellectual or chat with people within EA, but not fine for interacting with most family and friends, moving about in academia, etc. The standard approach for the latter is to say you're working on researching safe and fair AI, with shorter-term risks and longer-term catastrophes as particular examples.

Linch @ 2021-07-12T09:57 (+16)

This is not exactly the answer you're looking for, and I'm not confident about this, but I think it may be good to first refine your reasons for working on AI risk and get clear on what you mean. Once you have a good sense of it (at least enough to convince a much more skeptical version of yourself), a more easily explainable version of the arguments may come to you naturally.

(Take everything I say here with a huge lump of salt...FWIW I don't know how to explain EA or longtermism or forecasting stuff to my own parents, partially due to the language barrier). 

technicalities @ 2021-07-12T09:36 (+11)

Brian Christian is incredibly good at tying the short-term concerns everyone already knows about to the long-term concerns. He's done tons of talks and podcasts; not sure which is best, but if three hours of heavy content isn't a problem, the 80k one is good.

There's already a completely mainstream x-risk: nuclear weapons (and, popularly, climate change). It could be good to compare AI to these accepted handles. The second species argument can be made pretty intuitive too.

Bonus: here's what I told my mum.

AIs are getting better quite fast, and we will probably eventually get a really powerful one, much faster and better at solving problems than people. It seems really important to make sure that they share our values; otherwise, they might do crazy things that we won't be able to fix. We don't know exactly how hard it is to give them our actual values and to ensure that they got them right, but it seems very hard. So it's important to start now, even though we don't know when it will happen or how dangerous it will be.

wuschel @ 2021-07-12T08:19 (+7)

Relatable situation. For a short AI risk introduction for moms, I think I would suggest Robert Miles' YouTube channel.

Sanjay @ 2021-07-12T12:12 (+3)

Not sure how good the Robert Miles channel is for mums (mine might not be particularly interested in it!), but for communicating about AI risk, Robert Miles is generally good, and I second this recommendation.

D0TheMath @ 2021-07-12T15:21 (+4)

Perhaps try explaining by analogy, or providing examples of ways we’re already messing up.

Like the YouTube algorithm. It only maximizes the amount of time people spend on the platform, because (charitably) Google thought that would be a useful metric for the quality of the content it provides. But instead, it ended up figuring out that if it showed people videos which convinced them of extreme political ideologies, it would be easier to find videos that provoked anger, happiness, sadness, or other addictive emotions that kept them on the platform.

This particular problem has since been fixed, but it took quite a while to figure out what was going on, and more time still to figure out how to fix it. You could also use the analogy of a genie that, if you specify your wish imperfectly, will find some way to technically satisfy it while screwing you over in the process.
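If your family is comfortable with a little code, the same point can be made with a toy sketch. This is purely illustrative, with made-up numbers: a recommender told to maximize only a proxy metric (watch time) will pick whatever scores highest on that proxy, no matter what the designers actually cared about.

```python
# Toy illustration of specification gaming in a recommender.
# All numbers are made up. "watch_time" is the proxy metric the
# system is told to maximize; "quality" is what the designers
# actually cared about but never wrote into the objective.

videos = [
    {"title": "Calm, accurate explainer",    "watch_time": 4.0,  "quality": 9},
    {"title": "Cooking tutorial",             "watch_time": 5.0,  "quality": 8},
    {"title": "Outrage-bait conspiracy rant", "watch_time": 11.0, "quality": 1},
]

def recommend(videos):
    # The objective only mentions watch time, so that is all that counts.
    return max(videos, key=lambda v: v["watch_time"])

best = recommend(videos)
print(f"Recommended: {best['title']} "
      f"(watch_time={best['watch_time']}, quality={best['quality']})")
# The proxy is maximized, but the thing we actually wanted (quality) is not.
```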

One thing that stops me from explaining things well to my parents is the fear of looking weird, which usually doesn't stop me (to a fault) when talking with anyone else, but somehow does with them. You can avert this via ye-olde Appeal to Authority: tell them the idea was popularized, in part, by Professor Stuart Russell, author of the world's foremost textbook on artificial intelligence, in his book Human Compatible, and that he currently runs CHAI (the Center for Human-Compatible AI) at Berkeley to tackle this very problem.

edit: Also, be sure to note it's not just CHAI that's working on this problem. There are also MIRI, DeepMind, Anthropic, and other organizations.