How do you talk about AI safety?

By Eevee🔹 @ 2020-04-19T16:15 (+10)

My impression is that it's easy to contribute to "un-nuanced and inaccurate" discourse or hype about artificial intelligence while talking about AI safety. Personally, I'm interested in doing AI safety research, so I need to be able to explain the motivation for my work to people who may be unfamiliar with the field. How do you explain AI safety accurately and without hyping it up too much?


D0TheMath @ 2020-04-19T22:54 (+5)

While I haven't read the book, Slate Star Codex has a great review of Human Compatible. Scott says it discusses AI safety, especially with regard to the long-term future, in a very professional-sounding and not-weird way. So I suggest reading that book, or that review.


You could also list several smaller-scale AI-misalignment problems, such as the problems surrounding Zuckerberg and Facebook. You could say something like "You know how Facebook's AI is programmed to keep you on the site as long as possible, so it often shows you controversial content to rile you up and get everyone yelling at everyone else, so you never leave the platform? Yeah, I make sure that won't happen with smarter, more influential AIs." If all you're going for is an elevator speech, or explaining to family what it is you do, I'd stop here. Otherwise, follow the first part with something like "By my estimation, this seems fairly important: incentives are aligned for companies and countries to use the best AI possible, and better AI means more influential AI, so if you have a really good but slightly sociopathic AI, it's likely it'll still be used anyway. And if, in a few decades, we get to the point where we have a smarter-than-human but still sociopathic AI, it's possible we've just made an immortal Hitler-Einstein combination. Which, needless to say, would be very bad, possibly even extinction-level bad. So if the job is very hard, and the result if the job doesn't get done is very bad, then the job is very, very important (that's very)."

I've never tried using these statements, but they seem like they'd work.

rohinmshah @ 2020-04-19T23:21 (+3)
> While I haven't read the book, Slate Star Codex has a great review of Human Compatible. Scott says it discusses AI safety, especially with regard to the long-term future, in a very professional-sounding and not-weird way. So I suggest reading that book, or that review.

Was going to recommend this as well (and I have read the book).

irving @ 2020-04-19T23:08 (+3)

This isn't a complete answer, but I think it's useful to have a list of prosaic alignment failures to make the basic issue more concrete. Examples include fairness (bad data leading to inferences that reflect bad values), recommendation systems going awry, etc. I think Catherine Olsson has a long list of these, but I don't know where it is. We should generically expect some sort of amplification of these problems as AI strength increases; it's conceivable the amplification is in the good direction, but at a minimum we shouldn't be confident of that.

If someone is skeptical about AIs getting smart enough for this to matter, you can point to the various examples of existing superhuman systems (game-playing programs, classifiers that distinguish dog breeds better than experts, medical imaging systems that beat teams of experts, etc.). Narrow superintelligence should already be enough to worry about, depending on how such systems are deployed.

evelynciara @ 2020-04-19T22:17 (+2)

note: your link is broken

irving @ 2020-04-20T10:48 (+2)

Fixed, thanks!