Podcast with Yoshua Bengio on Why AI Labs are “Playing Dice with Humanity’s Future”

By Garrison @ 2024-05-10T17:23 (+29)

This is a linkpost to https://garrisonlovely.substack.com/p/35-yoshua-bengio-on-why-ai-labs-are

This is exactly what I'm afraid of. That some human will build machines that are going to be - not just superior to us - but not attached to what we want, but what they want. And I think it's playing dice with humanity's future. I personally think this should be criminalized, like we criminalize cloning of humans. 

- Yoshua Bengio

My next guest is about as responsible as anybody for the state of AI capabilities today. But he's recently begun to wonder whether the field he spent his life helping build might lead to the end of the world. 

Following in the tradition of the Manhattan Project physicists who later opposed the hydrogen bomb, Dr. Yoshua Bengio started warning last year that advanced AI systems could drive humanity extinct. 

Dr. Bengio is the second-most cited living scientist and one of the so-called “Godfathers of deep learning.” He and the other “Godfathers,” Geoffrey Hinton and Yann LeCun, shared the 2018 Turing Award, computing’s Nobel prize.

In November, Dr. Bengio was commissioned to lead production of the first “State of the Science” report on the “capabilities and risks of frontier AI” — the first significant attempt to create something like the Intergovernmental Panel on Climate Change (IPCC) for AI.

I spoke with him last fall while reporting my cover story for Jacobin’s winter issue, “Can Humanity Survive AI?” Dr. Bengio made waves last May when he and Geoffrey Hinton first went public with those warnings.

You can find The Most Interesting People I Know wherever you find podcasts and a full transcript here. If you'd like to support the show, sharing it with friends and reviewing it on Apple Podcasts is the most helpful! You can also subscribe to my Substack for updates on all my work. 

Mayowa Osibodu @ 2024-06-19T20:27 (+3)

Interesting podcast - I read the transcript.

My main takeaway was that building AI systems to have self-interest is dangerous, because that has the potential to explicitly conflict with humanity's own interests, posing a major existential risk from super-intelligent AIs.

I wonder if there's any advantage of self-interest in AI though. Is there any way self-interest could possibly make an AI more effective at accomplishing its goals? In biological entities, self-interest obviously helps with e.g. avoiding threats, seeking more favourable living conditions, etc. I wonder if this applies in a similar manner to AIs, or if self-interest in an AI is inconsequential at best.


I'm curious, what exactly is the worry with AGI development in e.g. Russia and China? Is the concern that they are somehow less invested in building safe AGI (which seems to strongly conflict with their own self-interest)?

Or is the concern that they could somehow build AGI which selectively harms people/countries of their choosing? In this latter case it seems to me that the problem is exclusively a human one, and isn't ethically different from concerns about super-lethal computer viruses or bio/nuclear weapons. It's not clear how this precise risk is specific to AI/AGI.

Garrison @ 2024-07-08T16:47 (+2)

I think building AI systems with some level of autonomy/agency would make them much more useful, provided they are still aligned with the interests of their users/creators. There's already evidence that companies are moving in this direction based on the business case: https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=Further%2C%20academics%20and,is%20pretty%20good.%E2%80%9D

This isn't exactly the same as self-interest, though. I think a better analogy for this might be human domestication of animals for agriculture. It's not in the self-interest of a factory-farmed chicken to be on a factory farm, but humans have power over which animals exist, so we'll make sure there are lots of animals who serve our interests. AI systems will be selected for to the extent that they serve the interests of the people making and buying them.

RE international development: competition between states undercuts arguments for domestic safety regulations/practices. These pressures are exacerbated by beliefs that international rivals will behave less safely/responsibly, but you don't actually need to believe that to justify cutting corners domestically. If China or Russia built an AGI that was totally safe in the sense that it is aligned with its creators' interests, that would still be seen as a big threat by the US govt.

If you think that building AGI is extremely dangerous no matter who does it, then having more well-resourced players in the space increases the overall risk. 

Vasco Grilo @ 2024-05-16T07:35 (+2)

Thanks for sharing, Garrison. I have read Yoshua's How Rogue AIs may Arise and FAQ on Catastrophic AI Risks, but I am still thinking annual extinction risk over the next 10 years is less than 10^-6. Do you know Yoshua's thoughts on the possibility of AI risk being quite low due to the continuity of potential harms? If deaths in an AI catastrophe follow a Pareto distribution (power law), which is a common assumption for tail risk, there is less than 10 % chance of such a catastrophe becoming 10 times as deadly, and this severely limits the probability of extreme outcomes. I also believe the tail distribution would decay faster than that of a Pareto distribution for very severe catastrophes, which makes my point stronger.