Popular Science has a fantastic article about robot ethics, with a focus on robotic cars. The whole thing is worth reading, but here's a taste.

It happens quickly--more quickly than you, being human, can fully process.

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there's too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn't be simpler.

In my opinion, your robotic car should have customizable ethics options that let you, the owner, choose its priorities. If you want it to protect your family above all else, you should be able to select that setting and bear the legal consequences.
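To make that concrete, here's a minimal sketch of what an owner-selectable ethics setting might look like. Everything in it is invented for illustration -- the names, the risk estimates, and the scoring rules are my assumptions, not anyone's actual collision-response code.

```python
# Hypothetical sketch of owner-selectable collision ethics. Every name here
# (Priority, Maneuver, choose_maneuver) and every number is invented for
# illustration -- no real autonomous-vehicle API works this way.

import random
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    PROTECT_OCCUPANTS = "protect_occupants"      # owner accepts the liability
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"  # utilitarian default
    RANDOM = "random"                            # refuse to rank lives at all

@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # estimated chance of serious harm to the occupant
    external_risk: float   # estimated chance of serious harm per outsider
    people_outside: int    # people in the other vehicle or in the path

def choose_maneuver(options: list[Maneuver], priority: Priority) -> Maneuver:
    """Pick the maneuver that scores best under the owner's setting."""
    if priority is Priority.PROTECT_OCCUPANTS:
        return min(options, key=lambda m: m.occupant_risk)
    if priority is Priority.MINIMIZE_TOTAL_HARM:
        # one occupant, weighed against everyone the maneuver endangers
        return min(options, key=lambda m: m.occupant_risk
                                          + m.external_risk * m.people_outside)
    return random.choice(options)

# The cliff scenario from the excerpt, restated in these made-up terms:
options = [
    Maneuver("swerve right, over the cliff", 0.95, 0.0, 0),
    Maneuver("head-on with the compact", 0.60, 0.60, 2),
]
print(choose_maneuver(options, Priority.PROTECT_OCCUPANTS).name)    # head-on
print(choose_maneuver(options, Priority.MINIMIZE_TOTAL_HARM).name)  # the cliff
```

Under the "minimize total harm" setting the sketch sends you over the cliff, just like the SUV in the excerpt; under "protect occupants" it takes the head-on collision instead. The hard part isn't the code, it's deciding which of those settings the law should let you pick.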

"Buy our car," jokes Michael Cahill, a law professor and vice dean at Brooklyn Law School, "but be aware that it might drive over a cliff rather than hit a car with two people."

Okay, so that was Cahill's tossed-out hypothetical, not mine. But as difficult as it would be to convince automakers to throw their own customers under the proverbial bus, or to force their hand with regulations, it might be the only option that shields them from widespread litigation. Because whatever they choose to do--kill the couple, or the driver, or randomly pick a target--these are ethical decisions being made ahead of time. As such, they could be far more vulnerable to lawsuits, says Cahill, as victims and their family members dissect and indict decisions that weren't made in the spur of the moment, "but far in advance, in the comfort of corporate offices."

In the absence of a universal standard for built-in, pre-collision ethics, superhuman cars could start to resemble supervillains, aiming for the elderly driver rather than the younger investment banker--the latter's family could potentially sue for considerably more lost wages. Or, less ghoulishly, the vehicle's designers could pick targets based solely on the make and model of car. "Don't steer towards the Lexus," says Cahill. "If you have to hit something, you could program it to hit a cheaper car, since the driver is more likely to have less money."

These questions seem futuristic, but our robots will soon be making plenty of split-second decisions for us, all based on rules we set up in advance. We need to think about what those rules should be.
