Though it is technically impossible to feel empathy ("Identification with and understanding of another's situation, feelings, and motives.") for a thing with no feelings of its own, it seems that robots are becoming so lifelike that some people can't resist anthropomorphizing them.
The most effective way to find and destroy a land mine is to step on it.
This has bad results, of course, if you're a human. But not so much if you're a robot and have as many legs as a centipede sticking out from your body. That's why Mark Tilden, a robotics physicist at the Los Alamos National Laboratory, built something like that. At the Yuma Test Grounds in Arizona, the autonomous robot, 5 feet long and modeled on a stick-insect, strutted out for a live-fire test and worked beautifully, he says. Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield.
Finally it was down to one leg. Still, it pulled itself forward. Tilden was ecstatic. The machine was working splendidly.
The human in command of the exercise, however -- an Army colonel -- blew a fuse.
The colonel ordered the test stopped.
"Why?" asked Tilden. "What's wrong?"
The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg.
This test, he charged, was inhumane.
Setting aside the absurd word choice (even killing an animal can't be "inhumane"), the colonel here is acting foolishly. Rational people would be wise to nip this tendency in the bud quickly, lest groups like the ACLU abandon real civil rights (like gun rights) altogether and descend into complete nonsense.
More significant than autonomy, thinks Rodney Brooks, may be the way humans have evolved to recognize instantly when an entity behaves like it's alive -- "animate" is the word he uses. Brooks is director of the MIT Computer Science and Artificial Intelligence Laboratory, co-founder and chief technology officer of the pioneering firm iRobot and author of "Flesh and Machines: How Robots Will Change Us." ...
Humans respond so readily to Kismet, created by Cynthia Breazeal, that graduate students working in the lab at night have been known to put up a curtain between themselves and the bot, Brooks reports. They couldn't stand the way it seemed to gaze around and stare at them. It broke their concentration. These humans are as sophisticated about robots as anyone on Earth. Yet even they are freaked by Kismet's lifelike behavior. "We're programmed biologically to respond to certain sorts of things," Brooks explains.
It's not about how the machine works. It's about how humans are wired.
I don't doubt that there's a biological basis for these irrational responses... and I am "fond" of my computer in a certain sense, but that fondness is really for what the computer can do for me and how it performs as a tool, not because I believe my computer has any inherent moral value.
Is the soldiers' emotional connection to their robots different from the age-old relationships between mariners and their ships, or even knights and their horses? Some officers seem to think it's good for morale and encourage such behavior.
"I've been a proponent for a long time of painting a mouth and eyes on the Global Hawk," the Learjet-size surveillance bot, says retired Col. Tom Ehrhard, a former chief of the Air Force's "Skunk Works" -- its strategy, concepts and doctrine division.
"It looks like a blind mole. Give it some character. Make it easier for humans to deal with -- more animate. Humans are social animals. Make that other thing part of your family, your social structure. Try to animate and make either fearsome or lovable your implements of war."
I'm not a soldier, so it's entirely possible this would be good for the troops. I'm all for painting scary pictures on our weapons for the sake of art, spirit, and morale, but attributing lifelike qualities to machines would seem to undermine the robots' purpose: doing jobs that are too dangerous for humans.