Lukas_Gloor
Posts
AI alignment researchers may have a comparative advantage in reducing s-risks
by Lukas_Gloor @ 2023-02-15 | +79 | 0 comments
The Life-Goals Framework: How I Reason About Morality as an Anti-Realist
by Lukas_Gloor @ 2022-02-03 | +48 | 0 comments
Cause prioritization for downside-focused value systems
by Lukas_Gloor @ 2018-01-31 | +75 | 0 comments
Room for Other Things: How to adjust if EA seems overwhelming
by Lukas_Gloor @ 2015-03-26 | +49 | 0 comments