I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him?

By Robert_Wiblin @ 2022-09-29T00:00 (+45)

Next week I'm interviewing Richard Ngo, currently an AI (Safety) Governance Researcher at OpenAI and formerly a Research Engineer at DeepMind.

Before that he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?"

He is focused on making the development and deployment of AGI more likely to go well and less likely to go badly.

Richard is also a highly prolific contributor to online discussion of AI safety across a range of venues.

What should I ask him?


anea @ 2022-09-29T09:18 (+14)
  1. What made him choose to work full time on governance rather than technical AI alignment?
  2. What does he think about working on improving the value of the future conditional on survival versus reducing AI x-risk?
  3. What's the OpenAI Futures team's theory of change?
  4. What policy areas or proposals in AI policy seem either promising or underexplored?
  5. Thoughts on various AI governance proposals (live monitoring of hardware use, chip shutdown mechanisms, legal regulation of large training runs, international agreements on semiconductor trade, restricting semiconductor exports to certain countries, the windfall clause, spreading good norms at top labs, etc.)?

HaydnBelfield @ 2022-09-29T11:05 (+12)

I think he actually quit his PhD. So you could ask him why, and what factors people should consider when deciding whether to do a PhD, or whether to leave one partway through.

> Before that he did a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?"

Ben @ 2022-09-29T17:24 (+10)

Sounds interesting! I'd be interested in:

  1. Could Richard summarize his conversation with Eliezer, and explain on which points he agrees and disagrees with him?
  2. (Perhaps this has been covered somewhere else.) Could Richard give a broad overview of the different approaches to AI alignment and say which ones he thinks are most promising?

Thanks!

nmulani @ 2022-09-29T15:57 (+6)

I'd be particularly curious to hear Richard's thoughts on non-governmental approaches to governance: How robust does he consider the corporate governance approaches within labs like OpenAI to be? Does he believe any corporate governance ideas are particularly promising? Additionally, does he see potential in private-sector collaboration or consortia on self-governance, or in non-profit / NGO attempts at monitoring and risk mitigation?

Greg_Colbourn @ 2022-10-05T13:10 (+5)

Does he agree with FTX Future Fund's worldview on AI? If his probabilities (e.g. for "P(misalignment x-risk|AGI)" or "P(AGI by 2043)") are significantly different, will he be entering their competition?

Ben_West @ 2022-10-03T15:01 (+4)

What does he think about rowing versus steering in AI safety? I.e., does he think we are basically going in the right direction and just need to do more of the same, or do we need to think more carefully about the direction in which we are heading?

Guy Raveh @ 2022-09-29T01:05 (+4)
  1. How does he view the relationship between AI safety researchers and AI capabilities developers? Can they work in synergy while sometimes having opposing goals?

  2. What does he think the field of AI safety is missing? What kinds of people does it need? What kinds of platforms?

ofer @ 2022-09-29T15:38 (+3)

What are the upsides and downsides of doing AI governance research at an AI company, relative to doing it at a non-profit EA organization?

Geoffrey Miller @ 2022-09-30T02:30 (+2)

Some AI applications may involve AI systems that need to be aligned with the interests, values, and preferences of non-human animals (e.g. pets, livestock, zoo animals, lab animals, endangered wild animals) -- in addition to being aligned with the humans involved in their caretaking.

Are AI alignment researchers considering how this kind of alignment could happen? 

Which existing alignment strategies might work best for aligning with non-human animals?

Quadratic Reciprocity @ 2022-09-30T00:31 (+2)

Besides (or after) taking his AGI Safety Fundamentals program (and a potential future part 2 / advanced version of the curriculum), what does he recommend that university students interested in AI safety do?

JoshYou @ 2022-09-29T16:22 (+1)

What makes someone good at AI safety work? How does he get feedback on whether his work is useful, makes sense, etc.?