Being nicer than Clippy

By Joe_Carlsmith @ 2024-01-16T19:44 (+25)

This is a crosspost, probably from LessWrong. Try viewing it there.

Vasco Grilo @ 2024-01-18T19:18 (+4)

Thanks for the post, Joe. Relatedly, readers may want to check Brian Tomasik's posts on cooperation and peace.

anormative @ 2024-04-08T03:35 (+1)

"But the AIs-with-different-values – even: the cooperative, nice, liberal-norm-abiding ones – might not even be sentient! Rather, they might be mere empty machines. Should you still tolerate/respect/etc. them, then?"

The flavor of discussion on AI sentience that follows what I've quoted above always reminds me of, and I think is remarkably similar to, this scene from the Star Trek: The Next Generation episode "The Measure of a Man." It's a courtroom-drama-style scene in which Data, an android, is threatened by a scientist who wants to make copies of him and argues that he is property of the Federation. Patrick Stewart, playing Jean-Luc Picard, defends Data with an argument similar to Joe's.

You see, he's met two of your three criteria for sentience, so what if he meets the third. Consciousness in even the smallest degree. What is he then? I don't know. Do you? (to Riker) Do you? (to Phillipa) Do you? Well, that's the question you have to answer. Your Honour, the courtroom is a crucible. In it we burn away irrelevancies until we are left with a pure product, the truth for all time. Now, sooner or later, this man or others like him will succeed in replicating Commander Data. And the decision you reach here today will determine how we will regard this creation of our genius. It will reveal the kind of a people we are, what he is destined to be. It will reach far beyond this courtroom and this one android. It could significantly redefine the boundaries of personal liberty and freedom, expanding them for some, savagely curtailing them for others. Are you prepared to condemn him and all who come after him to servitude and slavery?

SummaryBot @ 2024-01-17T14:10 (+1)

Executive summary: The post argues that human values related to "niceness," boundaries, and liberalism are importantly different from the values of a "paperclip maximizer" AI, and suggests incorporating those human values into how we think about relating to AIs with different values.

Key points:

  1. Unlike a paperclip maximizer AI that just cares about making paperclips, human values incorporate notions of "niceness" like respecting others' autonomy and not violently overthrowing them even if they have different values.
  2. Concepts from political liberalism around tolerance, diversity, and respecting individual rights and boundaries are also relevant to how humans should ideally interact with AIs with different values.
  3. These human values likely also have practical benefits, such as fostering cooperation and making society more attractive to others, that are worth preserving in our relations with AIs.
  4. However, some minimal versions of liberalism may not guarantee a flourishing future, so we still need to empower agents who care about human values like love and beauty.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.