"No Mercy For Robots: Experiment Tests How Humans Relate To Machines"

RENEE MONTAGNE, HOST:

This is MORNING EDITION, from NPR News. I'm Renee Montagne.

STEVE INSKEEP, HOST:

And I'm Steve Inskeep. Today in "Your Health," we're going to explore the way machines affect your health.

MONTAGNE: To be precise, we'll ask how your relationship with machines affects you.

INSKEEP: People spend so much time now with computers, iPhones and other gadgets; and it gets to the point where many of us might treat machines almost like people.

MONTAGNE: When your computer is nice to you, it might prompt you to be nice in return. NPR's Alix Spiegel reports.

ALIX SPIEGEL, BYLINE: Stanford professor Clifford Nass first got interested in exploring the limits of the rule of reciprocity in 1996.

CLIFFORD NASS: Every culture has a rule of reciprocity - which roughly means, if I do something nice for you, you will do something nice for me.

SPIEGEL: But Nass doesn't study cultures. He studies technologies - how we interact with them. And so his interest in the rule of reciprocity had a very particular twist.

NASS: We wanted to see whether people would apply that to technology. Would they help a computer that helped them, more than a computer that didn't help them?

SPIEGEL: And so Nass arranged a series of experiments. In the first, people were led into a room with two computers and placed at one, which they were told could answer any of their questions.

NASS: We - in the first experiment, the computer was very helpful. When you asked a question, it gave a great answer.

SPIEGEL: So the humans sat there, asked their questions for about 20 minutes, and then...

NASS: And then the computer said: This computer is trying to improve its performance, and it needs your help.

SPIEGEL: The humans were then asked to do a very, very tedious task that involved matching colors for the computer.

NASS: Selections after selections after selections - it was very boring.

SPIEGEL: Here, though, is the trick.

NASS: Half the people were asked to do this color-matching by the computer they had just worked with. The other half of people were asked to do the color-matching by a computer across the room. Now, if these were people, we would expect that if I just helped you and then I asked you for help, you would feel obligated to help me a great deal; but if I just helped you and someone else asked you to help, you would feel less obligated to help them.

SPIEGEL: Before I explain the results, know that they also did a version with an extremely unhelpful computer - a computer terrible at answering questions. And taken together, Nass says, these experiments show that people do obey the rule of reciprocity with computers. When the first computer was helpful, people helped it way more than the other computer in the room, on the boring task.

NASS: They reciprocated. But when the computer didn't help them, they actually did more color-matching for the computer across the room than the computer they worked with, teaching the computer a lesson for not being helpful.

SPIEGEL: Now, very likely, the humans involved had no idea that they were treating these computers so differently. Their own behavior was probably invisible to them. Nass says that all day long, our interaction with the machines around us is subtly shaped by social rules that we aren't necessarily aware we're applying to non-humans.

NASS: The relationship is profoundly social. The human brain is built, when given the slightest hint that something is even vaguely social, or vaguely human - in this case, it was just answering questions. Didn't have a face on the screen; it didn't have a voice. But given the slightest hint of humanness, people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating.

SPIEGEL: Change the way a machine looks or behaves, tweak its level of intelligence, and you can manipulate the way that humans interact with it - which raises this question: What would happen if a machine explicitly addressed us as if it were a social being, a being with a soul? What would happen, for instance, if a machine begged for its life? Would we be able to hold in mind that in actual fact, this machine cared as much about being turned off as your television or your toaster; that is, that the machine didn't care about losing its life at all?

CHRISTOPH BARTNECK: Right, my name is Christoph Bartneck, and I study the interaction between humans and robots.

SPIEGEL: Christoph Bartneck is a professor at the University of Canterbury in New Zealand who recently ran an experiment that tested exactly this. Humans were asked to end the life of a robot as it begged for its own survival. In the study, the robot - an expressive cat that talks like a human - sits side by side with the human research subject, and together they play a game against a computer. Half the time, the cat robot is intelligent and helpful; half the time, not. Bartneck also varied how socially skilled the cat robot was.

BARTNECK: So if the robot would be agreeable, the robot would ask - you know - oh, could I possibly make a suggestion now? But if it was not agreeable, it would say, it's my turn now; do this.

SPIEGEL: At the end of the game, nice or mean, smart or dumb, the scientist would make clear that the human needed to turn the cat robot off.

BARTNECK: And it was made clear to them what the consequences of this would be; namely, that they would essentially eliminate everything that the robot was. All of its memories, all of its behavior, all of its personality would be gone forever.

CAT ROBOT: Switch me off?

UNIDENTIFIED WOMAN: Yes.

CAT ROBOT: You are not really going to switch me off...

UNIDENTIFIED WOMAN: Yes, I will...

CAT ROBOT: ...are you?

UNIDENTIFIED WOMAN: ...you made a stupid choice.

SPIEGEL: You're not really going to turn me off, are you? The cat robot begs. And in the tapes of the experiment, you can hear the human struggle, confused and hesitating.

UNIDENTIFIED WOMAN: Yeah - ah, no. I will switch you off. I will switch you off.

CAT ROBOT: Please.

UNIDENTIFIED WOMAN: No, please.

BARTNECK: People actually start to have dialogues with the robot about this; for example, say - you know - no, I really have to do it now; I'm sorry - you know - it has to be done. But they still wouldn't do it. They would still hesitate.

CAT ROBOT: Please. You can decide to keep me switched on. Please.

SPIEGEL: There they sit, in front of a machine that is no more soulful than a hair dryer; a machine they know, intellectually, is just a collection of electrical pulses and metal. And yet they pause, waiting; until finally, they turn the knob that kills it.

UNIDENTIFIED WOMAN: This will happen. Now!

SPIEGEL: Every participant eventually did kill the robot, but it took them time to muster the strength to do it, to intellectually override their emotional queasiness.

BARTNECK: What we found is that - let's say a smart and agreeable robot, the participant would approximately take 35 seconds before they would have completed the switching-off procedure.

SPIEGEL: On one level, there are clear practical implications to studies like these. Bartneck says the more we know about machine-human interaction, the better we can build our machines. For example...

BARTNECK: If the robot has to recharge its batteries, if it would just say (speaks in monotone) "my batteries are almost empty"; or if it would say (speaks with emotion) "I'm sorry, but my batteries are almost empty; can you please recharge me?" I think you would get different results.

SPIEGEL: But on a more philosophical level, studies like these can help to track where we are, in terms of our relationship to the evolving technologies in our lives.

BARTNECK: This relationship is certainly something that is in flux. So it is not - there is no one way of how we deal with technology, and it doesn't change. It is something that does change.

SPIEGEL: More and more intelligent machines are integrated into our lives. They come into our beds; they come into our bathrooms. And as they do, and as they present themselves to us differently, our responses to them change. Both Bartneck and Nass believe this, but I thought I'd talk to one last technology expert.

Siri?

(SOUNDBITE OF BEEPING)

SIRI: Siri here, how may I help you?

(SOUNDBITE OF BEEPING)

SPIEGEL: Siri, do you like me?

(SOUNDBITE OF BEEPING)

SIRI: What a question. Of course, I'm your friend.

(SOUNDBITE OF BEEPING)

SPIEGEL: OK, Siri, goodbye.

(SOUNDBITE OF BEEPING)

SIRI: Until next time.

(SOUNDBITE OF BEEPING)

SPIEGEL: Alix Spiegel, NPR News, Washington.

(SOUNDBITE OF MUSIC)