What EAG sessions would you like on AI?

By Nathan Young @ 2022-03-20T17:05 (+7)

What is the topic of the talk?
Who would you like to give the talk?
What is the format of the talk?
Why is it important?


Sandy @ 2022-03-20T17:46 (+6)

AI risk for beginners/dummies. I know almost nothing about it, and my guess is I'm not alone. 

Nathan Young @ 2022-03-20T17:51 (+2)

Does anyone know who would be good for this talk? I don't.

Jon P @ 2022-03-20T21:48 (+7)

I think Rob Miles' YouTube channel is a good resource for beginners. He has a lot of nice videos there, and he's a good speaker.

Nathan Young @ 2022-03-20T23:41 (+2)

Hey Sandy, could you edit your answer and put Rob as a suggested speaker?

Chris Leong @ 2022-03-20T17:57 (+4)

I would like to see workshops targeted at people at all stages of the pipeline (although my expectation is that everyone at EAG would at least know the super basics of what AI risk is and why we might care about it).

So, for example, you could design a program that looks like the following:

Obviously, you could replace these with different events, but the point is to cover all bases.

Nathan Young @ 2022-03-20T18:34 (+2)

I'd prefer it if these were three separate comments so I could upvote them separately.

Chris Leong @ 2022-03-21T03:50 (+2)

It's one unified idea, though, and without the examples it would be unclear.

Nathan Young @ 2022-03-24T23:45 (+2)

Here are all the questions in this series:
https://forum.effectivealtruism.org/posts/WQTEuxkXyCy9QFJCb/what-eag-sessions-would-you-like-to-see-on-meta-ea

https://forum.effectivealtruism.org/posts/Nq3bbFjKjs5jPZnou/what-eag-sessions-would-you-like-to-see-on-global-priorities

https://forum.effectivealtruism.org/posts/iQWfeoXFebrEBh4qq/what-eag-sessions-would-you-like-to-see-on-horizon-scanning

https://forum.effectivealtruism.org/posts/AKBong8tuK65MWGjd/what-eag-sessions-would-you-like-on-global-catastrophic

https://forum.effectivealtruism.org/posts/AfmoMv8ixtFGhemnH/what-eag-sessions-would-you-like-to-attend-on-biorisk

https://forum.effectivealtruism.org/posts/6Qw3JvEDkAzmaqpTK/what-eag-sessions-would-you-like-on-ai

https://forum.effectivealtruism.org/posts/rpNwa94ep3jEyFSDB/what-eag-sessions-would-you-like-on-epistemics

https://forum.effectivealtruism.org/posts/wAJ4tLbTuhaoYN7Py/what-eag-sessions-would-you-like-on-animal-welfare

Question Mark @ 2022-03-22T06:21 (+1)

What is the topic of the talk?

Suffering risks, also known as S-risks.

Who would you like to give the talk?

Possible speakers could be Brian Tomasik, Tobias Baumann, Magnus Vinding, Daniel Kokotajlo, or Jesse Clifton, among others.

What is the format of the talk?

The speaker would discuss some of the different scenarios in which suffering on an astronomical scale could emerge, such as risks from malevolent actors, a near-miss in AI alignment, and suffering-spreading space colonization. They would then discuss possible strategies for reducing S-risks, along with some of the open questions about S-risks and how to prevent them.

Why is it important?

So that worse-than-death scenarios can be avoided if possible.

Nathan Young @ 2022-03-20T17:11 (+1)

Explain AI risk - Rob Bensinger / Andrew Ngo / Neel Nanda - Workshop

Split people into pairs and get them to explain AI risk to one another, then have the other person explain it back. Then give tips on how the explanation could be simpler. Use Slido to take comments on what most people found difficult, and have the speakers answer those. Then try again with a new pair. How do you feel about this?