Video and transcript of presentation on Otherness and control in the age of AGI

By Joe_Carlsmith @ 2024-10-08T22:30 (+18)

This is a crosspost, probably from LessWrong.

SummaryBot @ 2024-10-09T12:47 (+1)

Executive summary: The "Otherness and control in the age of AGI" essay series explores how deep atheism and moral anti-realism in AI risk discourse can fuel a problematic "yang" impulse toward control, and proposes incorporating more balanced "yin" and "green" perspectives while still acknowledging key truths about AI risk.

Key points:

  1. Deep atheism and moral anti-realism in AI risk discourse can promote an impulse for extreme control ("yang") over the future.
  2. This yang impulse has concerning failure modes, like violating ethical boundaries and tyrannically shaping others' values.
  3. We should incorporate more cooperative, liberal norms and "green" perspectives of humility and attunement.
  4. However, we must balance this with acknowledging real risks from potentially alien AI systems.
  5. A nuanced "humanism" is proposed that allows for improving the world while respecting ethical limits.
  6. Our choices shape reality, so we have responsibility to choose wisely in steering the future.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.