Mitigating extreme AI risks amid rapid progress [Linkpost]
By Akash @ 2024-05-21T20:04 (+36)
SummaryBot @ 2024-05-22T13:46 (+1)
Executive summary: The concept of "AI alignment" conflates distinct problems and obscures important questions about the interaction between AI systems and human institutions, potentially limiting productive discourse and research on AI safety.
Key points:
- The term "AI alignment" is used to refer to several related but distinct problems (P1-P6), leading to miscommunication and disputes over terminology.
- The "Berkeley Model of Alignment" reduces these problems to the challenge of teaching AIs human values (P5), but this reduction relies on questionable assumptions.
- The assumption of "content indifference" ignores the possibility that different AI architectures may be better suited for learning different types of values or goals.
- The "value-learning bottleneck" assumption overlooks the potential for beneficial AI behavior without exhaustive value learning, and the need to consider composite AI systems.
- The "context independence" assumption neglects the role of social and economic forces in shaping AI development and deployment.
- A sociotechnical perspective suggests that AI safety requires both technical solutions and the design of institutions that govern AI, with the "capabilities approach" providing a possible framework.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.