New Laws of Robotics Proposed for US Kill-Bots
jakosc writes "The Register has a short commentary about a proposed new set of laws of robotics for war robots by John S Canning of the Naval Surface Warfare Center. Unlike Asimov's three laws of robotics, Canning proposes (pdf) that we should 'Let machines target other machines and let men target men.' Although this sounds OK in principle, 'a robot could decide, under Mr Canning's rules, to target a weapon system such as an AK47 for destruction on its own initiative, requiring no permission from a human. If the person holding it was thereby killed, that would be collateral damage and the killer droid would be in the clear.'"
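Canning's proposed rule boils down to a permission check on the target's class. A minimal sketch of that logic (the target classes and the permission function are illustrative assumptions, not Canning's actual specification):

```python
# Illustrative sketch of the proposed engagement rule:
# "let machines target other machines and let men target men".
# TargetType and may_engage_autonomously are hypothetical names.
from enum import Enum, auto

class TargetType(Enum):
    MACHINE = auto()  # e.g. a weapon system
    HUMAN = auto()

def may_engage_autonomously(target: TargetType) -> bool:
    """A robot may engage a machine on its own initiative;
    engaging a human requires human authorization."""
    return target is TargetType.MACHINE

# The loophole the commentary points out: an AK47 classifies as
# a MACHINE, so a robot could engage it without permission even
# while a person is holding it.
```

The sketch makes the loophole concrete: nothing in the rule itself accounts for the human attached to the weapon.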
Re:Robot laws (Score:5, Informative)
If someone is controlling it, at best it is a telerobot (semi-autonomous) or, at worst, a telemanipulator.
A robot, by definition, is autonomous and does not have or require human control.
http://en.wikipedia.org/wiki/Telerobotics [wikipedia.org]
Re:Three rules... (Score:1, Informative)
Nice in theory, but in reality China, India and Vietnam [cnn.com] are getting the oil before the US does. It's almost as if... as if... the invasion wasn't about oil after all!
it's ASIMOV (Score:2, Informative)
One of the greatest sci-fi writers ever.
What are you?
A Microsoft Word spellchecker user?
Killing by proxy, "collateral damage" (Score:3, Informative)
'a robot could decide, under Mr Canning's rules, to target a weapon system such as an AK47 for destruction on its own initiative, requiring no permission from a human. If the person holding it was thereby killed, that would be collateral damage and the killer droid would be in the clear.'
The Geneva Convention frowns upon collateral damage [spj.org], though someone is not a civilian if they're holding a weapon (see the "spontaneously takes up arms" bit). That's not a good enough excuse. A person holding a gun is not necessarily a soldier. They could be a homeowner defending their property from looters, for example. That's why you are supposed to give a chance of surrender. Will a robot do that, reliably? Will a robot properly identify and treat hors de combat people?
Here's a bigger, related question: a robot is a) not a person and b) maybe more durable. A human soldier is allowed to fire in defense. Picture a homeowner in wartime, guarding his house. A robot trundles by, x-rays the house, sees the weapon, and charges in. The homeowner sees it heading for him, freaks out, and fires at it. How can the robot possibly be justified in killing him? Even if he represents a threat, he's only threatening a machine!
Second point: this is really just "killing by proxy." Regardless of whether you pull a trigger on a machine gun, or flip a switch on the General Dynamics Deathmachine 2000: if you knew your actions would cause injury or death, you're culpable. It's not the robot that is responsible when a civilian or hors de combat soldier is killed: it's the operators. Robots don't kill people: people build, program, and operate robots that kill people.
Re:Robot laws (Score:5, Informative)
It knows which is which because all of the friendly aircraft have IFF systems that identify themselves.
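IFF is essentially a challenge-response protocol: the interrogator sends a challenge, and only a transponder holding the right key can answer correctly. A toy sketch of that idea (real military IFF, e.g. Mode 5, uses dedicated cryptographic transponders; the HMAC keying and names here are illustrative stand-ins):

```python
# Toy challenge-response sketch of IFF (identification friend or foe).
# SHARED_KEY, respond, and is_friendly are hypothetical names; real
# IFF hardware and crypto differ substantially.
import hmac
import hashlib
import os

SHARED_KEY = b"friendly-forces-key"  # hypothetical pre-shared key

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """A friendly transponder answers the interrogator's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def is_friendly(challenge: bytes, response: bytes,
                key: bytes = SHARED_KEY) -> bool:
    """The interrogator checks the answer; a wrong or missing
    response means the contact is not identified as friendly."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
assert is_friendly(challenge, respond(challenge))
```

Note that IFF can only confirm friends; a contact that fails the challenge is merely unidentified, not automatically hostile, which is exactly why autonomous engagement rules are contentious.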
Little Lost Robot, 1947 (Score:2, Informative)
This was all covered in Isaac Asimov's excellent short story "Little Lost Robot", which appeared in 1947.
Re:Three rules... (Score:4, Informative)