Stampy's AI Safety Info - New Distillations #2 [April 2023]
By markov @ 2023-05-09T13:34 (+13)
This is a linkpost to https://aisafety.info/?state=8QZH_8222_7626_8EL6_7749_8XBK_8XV7_8PYW_89ZU_7729_6412_6920_8G1G_7580_8H0O_7632_7772_6350_8C7T_8HIA_8IZE_7612_8EL9_89ZQ_8PYV_
Hey! This is another update from the distillers at the AI Safety Info website (and its more playful clone Stampy).
Here are some of the answers that we wrote up over the last month. As always, let us know if there are any questions you would like to see answered.
Each item in the list below links to an individual answer, while the collective URL above renders all of the answers on one page at once.
- What is a subagent?
- How could a superintelligent AI use the internet to take over the physical world?
- Are there any AI alignment projects which governments could usefully put a very large amount of resources into?
- What is deceptive alignment?
- How does Redwood Research do adversarial training?
- How does DeepMind do adversarial training?
- What is outer alignment?
- What is inner alignment?
- What are ‘true names’ in the context of AI alignment?
- Will there be a discontinuity in AI capabilities?
- Isn't the real concern technological unemployment?
- What can we expect the motivations of a superintelligent machine to be?
- What is shard theory?
- What are "pivotal acts"?
- What is perverse instantiation?
- Wouldn't a superintelligence be slowed down by the need to do experiments in the physical world?
- Is the UN concerned about existential risk from AI?
- What are some of the leading AI capabilities organizations?
- What is "whole brain emulation"?
- What are the "no free lunch" theorems?
- What is feature visualization?
- What are some AI governance exercises and projects I can try?
- What is "metaphilosophy" and how does it relate to AI safety?
- What is AI alignment?
- Are there any detailed example stories of what unaligned AGI would look like?
- What is a shoggoth?
Crossposted from LessWrong: https://www.lesswrong.com/posts/EELddDmBknLyjwgbu/stampy-s-ai-safety-info-new-distillations-2
Chris Leong @ 2023-05-10T12:41 (+2)
I would love to see you turn this into a newsletter. I think it would be a great resource for beginners to learn more about AI safety.