Otherness and control in the age of AGI

By Joe_Carlsmith @ 2024-01-02T18:15 (+37)

This is a crosspost, probably from LessWrong. Try viewing it there.
SummaryBot @ 2024-01-03T14:56 (+1)

Executive summary: The essay series examines questions about how to relate to other agents and share power with them, especially as advanced AI systems emerge.

Key points:

  1. Discusses being "gentle" toward non-human others such as animals, aliens, and AIs, while noting the risk of "getting eaten" in the attempt.
  2. Touches on technical and empirical issues relevant to AI risk, but focuses more on underlying philosophical assumptions.
  3. Interrogates the influential philosophical views of Eliezer Yudkowsky regarding AI risk, but notes that caring about AI safety does not require endorsing his views.
  4. The questions arise from trying to ensure that the future goes well in general, not just from ensuring that AIs don't kill everyone.
  5. Aims to examine an abstract existential narrative that conversations about advanced AI often express, rather than Yudkowsky's views in particular.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.