It looks like Sam Waterston might have been prescient in pitching Old Glory Robot Insurance back in 1995, considering the rapid encroachment of robots into the human ecosystem. Still, I think that Economist article is way overblown, and the first two paragraphs alone do much to undermine its thesis.
IN 1981 Kenji Urada, a 37-year-old Japanese factory worker, climbed over a safety fence at a Kawasaki plant to carry out some maintenance work on a robot. In his haste, he failed to switch the robot off properly. **Unable to sense him**, the robot's powerful hydraulic arm kept on working and accidentally pushed the engineer into a grinding machine. His death made Urada the first recorded victim to die at the hands of a robot.
This gruesome industrial accident would not have happened in a world in which robot behaviour was governed by the Three Laws of Robotics drawn up by Isaac Asimov, a science-fiction writer. The laws appeared in “I, Robot”, a book of short stories published in 1950 that inspired a recent Hollywood film. But decades later the laws, designed to prevent robots from harming people either through action or inaction (see table), remain in the realm of fiction.
As the bolded text above emphasizes, the problem in this case was that the robot couldn't sense Kenji Urada; his death could no more have been prevented by Asimov's Three Laws of Robotics than an accidental plane crash could be.
The rest of the article is equally silly, since most people aren't going to want industrial grinders or welders in their homes. A robot designed to do housework, run errands, or even -- as the article suggests -- serve as a sex slave won't need to be strong or heavy enough to hurt a human, and it certainly won't need any weaponry.
Anyway, as a roboticist quoted later in the piece points out, people manage to kill themselves with all sorts of appliances, and robots likely won't be any different.
In any case, says Dr Inoue, the laws really just encapsulate commonsense principles that are already applied to the design of most modern appliances, both domestic and industrial. Every toaster, lawn mower and mobile phone is designed to minimise the risk of causing injury—yet people still manage to electrocute themselves, lose fingers or fall out of windows in an effort to get a better signal. At the very least, robots must meet the rigorous safety standards that cover existing products.
Robots are just machines. Until and unless they have human-like intelligence, thinking about "laws" to embed in their "brains" is science fiction. A completely non-sentient robot might well appear intelligent to a layperson, but in my opinion a genuinely intelligent robot is unlikely to ever exist. (I hope I'm wrong!)