Talking publicly about AI risk

By Jan_Kulveit @ 2023-04-24T09:19 (+152)

This is a crosspost, probably from LessWrong.

freedomandutility @ 2023-04-24T23:07 (+11)

I think generally talking about EA topics in less widely spoken languages is a really good way to test messaging!

Ardenlk @ 2023-04-25T09:43 (+9)

Thanks for this post! I'm curious - can you explain this more?

the AGI doom memeplex has, to some extent, a symbiotic relationship with the race toward AGI memeplex

titotal @ 2023-04-26T03:28 (+10)

My interpretation would be that they both tend to buy into the same premises that AGI will occur soon and that it will be godlike in power. Depending on how hard you believe alignment is, this would lead you to believe that we should build AGI as fast as possible (so that someone else doesn't build it first), or that we should shut it all down entirely. 

By spreading and arguing for their shared premises, both the doomers and the AGI racers get boosted by the publicity given to the other, leading to growth for them both. 

As someone who does not accept these premises, this is somewhat frustrating to watch. 

sphor @ 2023-04-26T07:40 (+4)

Maybe something like this: https://www.lesswrong.com/posts/KYzHzqtfnTKmJXNXg/the-toxoplasma-of-agi-doom-and-capabilities 

Linch @ 2023-04-26T21:21 (+3)

Thanks, I was thinking about linking the same thing.

David Johnston @ 2023-04-26T21:25 (+3)

AFAIK the official MIRI solution to AI risk is to win the race to AGI but do it aligned.

Part of the MIRI theory is that winning the AGI race will give you the power to stop anyone else from building AGI. If you believe that, then it’s easy to believe that there is a race, and that you sure don’t want to lose.

Jan_Kulveit @ 2023-05-22T15:36 (+2)

Sorry for the delay in response.

Here I look at it from a purely memetic perspective - you can imagine thinking of it as a self-interested memeplex. Note I'm not claiming this is the most useful perspective, or that it should be the main perspective to take.

Basically, from this perspective:

* The more people think about the AI race, the easier it is to imagine AI doom. Also, the specific artifacts produced by the AI race make people more worried - ChatGPT and GPT-4 likely did more to normalize and spread worries about AI doom than all previous AI safety outreach combined.

The more the AI race is a clear reality that people agree on, the more attention and brainpower you will get.

* But also from the opposite direction: one of the central claims of the doom memeplex is that AI systems will be incredibly powerful in our lifetimes - powerful enough to commit omnicide, take over the world, etc. - and that their construction is highly convergent. If you buy into this, and you are a certain type of person, you are pulled toward "being in this game". Subjectively, it's much better if you - the risk-aware, pro-humanity player - are at the front. Elon Musk's safety concerns leading to the founding of OpenAI likely did more to advance AGI than all the advocacy of Kurzweil-type accelerationists up to that point...

Empirically, the more people buy into the idea that single powerful AI systems are incredibly dangerous, the more attention goes toward work on such systems.

Both memeplexes share a decent number of maps, which tend to work as blueprints or self-fulfilling prophecies for what to aim for.


Darren McKee @ 2023-04-25T15:38 (+5)

Thank you for a great post and the outreach you are doing.  We need more posts and discussions about optimal framing. 

Oliver Sourbut @ 2023-04-28T14:56 (+4)

Thank you for sharing this! Especially the points about relevant maps and Meta/FAIR/LeCun.

I was recently approached by the UK FCDO as a technical expert in AI with perspective on x-risk. We had what I think were very productive conversations, with an interesting convergence of my framings and the ones you've shared here - that's encouraging! If I find time I'm hoping to write up some of my insights soon.

Oliver Sourbut @ 2023-05-15T13:54 (+1)

I wrote a little here about unpluggability (and crossposted on LessWrong/AF)

Corentin Biteau @ 2023-04-25T16:05 (+2)

Thanks - advice on "how to message complex things" is really useful - I'm always surprised by how neglected this is.

By the way, if at some point you were to redirect people toward a link explaining the problem with AI (article, website, video), as a resource they can use to understand the problem, what would you provide? I'm looking for a link in English - so far it's not clear what to point to.

For instance, the FLI open letter makes a clear case that many knowledgeable people care about this, but it's not very good at explaining what the risk really is.

utilistrutil @ 2023-05-09T05:19 (+1)

I would endorse all of this based on experience leading EA fellowships for college students! These are good principles not just for public media discussions, but also for talking to peers.

Pato @ 2023-05-04T08:11 (+1)

I doubt this:

the AGI doom memeplex has, to some extent, a symbiotic relationship with the race toward AGI memeplex

I mean, if you say it could increase the number of people working on capabilities at first, I would agree, but it probably increases the number of people working on safety and wanting to slow or stop capabilities research a lot more, which could lead to legislation and, at the end of the day, increase the time until AGI.

Regarding the other cons of the doom memeplex, I kinda agree to a certain extent, but I don't see them coming even close to the pros of potentially having lots of people actually taking the problem very seriously.