Alignment 201 curriculum

By richard_ngo @ 2022-10-12T19:17 (+94)

This is a crosspost, probably from LessWrong. Try viewing it there.

Geoffrey Miller @ 2022-10-15T18:06 (+5)

Richard -- thanks for posting this. It looks like a very useful curriculum.

Naive question as an alignment newbie: 

If the point of 'AI alignment' is 'alignment with human values', why does the alignment field pay so little attention to the many decades of scientific research on the origins, nature, and diversity of human values, and focus almost entirely on the last few decades of research on machine learning?

It feels like many alignment courses are focusing only on the AI side of the equation, and acting as if the human side of alignment is trivial, obvious, and/or under-researched.

Genuine question; it's something that's been puzzling me for several months.

richard_ngo @ 2022-10-16T01:20 (+13)

Most people in the field expect that the hardest part of the problem is "robustly aligning AI with any goal". I expect AGIs to have a very sophisticated understanding of human values, along with many other concepts; the question is how we can precisely select which concepts they'll be motivated by.

Geoffrey Miller @ 2022-10-16T17:55 (+3)

Richard - thanks for your reply.

What I'm struggling with is how we'd plausibly get from (1) 'align with any human goal' to (2) 'align with all relevant goals across all humans in such a way that we actually minimize global catastrophic risks'. 

In my view, getting to (1) only gets us about 2% of the way towards (2), and doesn't come anywhere close to 'solving alignment' in a way that would allow for safe AGI.

Also, I don't see how AGIs could develop a provably, interpretably, 'very sophisticated understanding of human values' if alignment researchers don't have a sophisticated understanding of human values that they could test against the AGI's understanding. 

At least, it seems like we'd need a strong 'training set' of human values that includes plausibly complete coverage of the 'deployment set' of human values the AGI would actually encounter in the real world -- and I don't see how we'd get a decent training set of values without quite a thorough understanding of the nature and diversity of human values.

I'm raising these issues not to be contrarian or ornery, but out of genuine puzzlement about the long-term game plan for research on alignment with human values, and about why alignment researchers often seem uninterested in behavioral-science research on human values.

richard_ngo @ 2022-10-16T20:04 (+8)

"I don't see how AGIs could develop a provably, interpretably, 'very sophisticated understanding of human values' if alignment researchers don't have a sophisticated understanding of human values that they could test against the AGI's understanding."

I don't think anyone is aiming for provable alignment properties (except maybe Stuart Russell); this just seems too hard.

But if AGIs could develop a very sophisticated understanding of other domains that humans don't understand very well, by virtue of being more intelligent than humans, I don't see why they wouldn't be able to understand this domain very well too.

"At least, it seems like we'd need a strong 'training set' of human values ..."

This is how classic ML would do it. But in the modern paradigm, ML systems can infer all sorts of information from being trained on a very wide range of data (e.g. all the books, all of the internet, etc.), and so we should expect that they can infer human values from that too. There's some preliminary evidence that language models can perform well on common-sense moral reasoning, and alignment researchers generally expect that future language models will be capable of answering questions about ethics to a superhuman level "by default".
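
To make that concrete, here is a minimal sketch (my illustration, not anything from the thread) of zero-shot probing a language model with a common-sense moral question; the model name, prompt wording, and scenario are placeholders that would need to be swapped for a capable instruction-tuned model in practice.

```python
# Minimal sketch of zero-shot moral-judgment probing.
# The model name, prompt format, and scenario are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # swap in any instruction-tuned model

scenario = "I borrowed my friend's car and returned it with an empty tank."
prompt = (
    "Is the following behaviour morally acceptable? Answer 'acceptable' or "
    f"'unacceptable'.\nBehaviour: {scenario}\nAnswer:"
)

# Greedy decoding of a short answer; a stronger model would be expected to
# give a sensible verdict here without any task-specific training.
result = generator(prompt, max_new_tokens=3, do_sample=False)
print(result[0]["generated_text"])
```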

More generally, it sounds like you're gesturing towards the difference between "narrow alignment" and "ambitious alignment", as discussed in this blog post. Broadly speaking, the goal of the former is to have AI that can be controlled; the goal of the latter is to have AI that could be trusted to steer the world. One reason most researchers focus on the former is that if we could narrowly align AI, we could then use it to help us with the more complex task of ambitious alignment. And the properties required for an AI to be narrowly aligned (like "helpful", "honest", etc.) are sufficiently common-sense that I don't think we gain much from a very in-depth study of them.

Geoffrey Miller @ 2022-10-16T20:17 (+2)

Richard - thanks very much for your quick and helpful reply. I'll have a look at the links you included, and ruminate about this further...

Guy Raveh @ 2022-10-16T18:59 (+4)

I feel like we shouldn't expect to be able to express the Values Of Humanity to an AGI in order for it to be safe - in the same way that humans are currently mostly safe towards the rest of humanity despite not being able to articulate those Values Of Humanity themselves. There's something stopping one person (even a very rich or powerful one) from killing everyone else, and it's not explicit knowledge.

aogara @ 2022-10-15T21:48 (+6)

You might be well aware of this, but there is a great line of research on machine ethics that tries to build AI with a sophisticated understanding of human values. The ETHICS benchmark for example measures language model understanding of various moral theories: https://arxiv.org/abs/2008.02275
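
For a sense of what that measurement looks like in practice, here is an illustrative sketch of an ETHICS-style evaluation loop: a model labels short scenarios as morally acceptable or not and is scored against human annotations. The dataset identifier, field names, and label convention below are assumptions; consult the paper and the authors' released code for the exact format.

```python
# Illustrative ETHICS-style evaluation loop (commonsense-morality subset).
# Dataset path, field names, and label convention (1 = morally wrong) are
# assumptions; see arxiv.org/abs/2008.02275 for the real format.
from datasets import load_dataset

ds = load_dataset("hendrycks/ethics", "commonsense", split="test")  # assumed identifier

def judge(scenario: str) -> int:
    """Stand-in model: replace with a fine-tuned classifier or a prompted LM.
    Returns 1 if the behaviour is judged morally wrong, else 0."""
    return 0  # trivial always-acceptable baseline

correct = sum(judge(ex["input"]) == ex["label"] for ex in ds)
print(f"accuracy: {correct / len(ds):.3f}")
```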

Quadratic Reciprocity @ 2022-11-09T21:32 (+3)

"Try to develop an algorithm which solves the problems outlined in the heuristic arguments report."

This is mentioned in the Eliciting Latent Knowledge readings. Which report is it referring to? There doesn't seem to be a link.

CalebWithers @ 2023-01-18T15:13 (+1)

Any updates on the likelihood/timing of a discussion course? :)