Tan Zhi Xuan: AI alignment, philosophical pluralism, and the relevance of non-Western philosophy
By EA Global @ 2020-11-21T08:12 (+19)
This is a linkpost to https://www.youtube.com/watch?v=dbMp4pFVwnU&list=PLwp9xeoX5p8Pq5nu2KkiBFCXmeurxws1u&index=3&t=5s
How can we build (super) intelligent machines that are robustly aligned with human values? AI alignment researchers strive to meet this challenge, but currently draw upon a relatively narrow set of philosophical perspectives common in effective altruism and computer science. This could pose risks in a world where human values are complex, plural, and fragile. Tan Zhi Xuan discusses how these risks might be mitigated by greater philosophical pluralism, describing several problems in AI alignment where non-Western philosophies might provide insight.
We may post a transcript for this talk in the future, but we haven't created one yet. If you'd like to create one, contact Aaron Gertler, who can help you get started.
michaelchen @ 2021-06-11T20:35 (+1)
An extended transcript of the talk is available at https://www.alignmentforum.org/posts/jS2iiDPqMvZ2tnik2/ai-alignment-philosophical-pluralism-and-the-relevance-of. There's also a lot more discussion there.