
New Laws of Robotics Proposed for US Kill-Bots

jakosc writes "The Register has a short commentary about a proposed new set of laws of robotics for war robots by John S Canning of the Naval Surface Warfare Center. Unlike Asimov's Three Laws of Robotics, Canning proposes (PDF) that we should 'Let machines target other machines and let men target men.' Although this sounds OK in principle, 'a robot could decide, under Mr Canning's rules, to target a weapon system such as an AK47 for destruction on its own initiative, requiring no permission from a human. If the person holding it was thereby killed, that would be collateral damage and the killer droid would be in the clear.'"
  • Robot laws (Score:5, Insightful)

    by nurb432 ( 527695 ) on Saturday April 14, 2007 @03:45PM (#18734657) Homepage Journal
Are for books and movies. In the real world the only law is to win. You can't come in 2nd in a war.
    • unless ... (Score:2, Funny)

      by Anonymous Coward
      you're French.

      ... ducks ...

    • Re:Robot laws (Score:5, Insightful)

      by jim_v2000 ( 818799 ) on Saturday April 14, 2007 @03:51PM (#18734737)
      Plus robots are controlled by someone at a terminal...they don't control themselves. I think this whole discussion is pointless until we have AI.
      • Re: (Score:2, Interesting)

        by n__0 ( 605442 )
        It's important to have the laws before the AI, otherwise the AI won't care so much for the laws. Although whether anything truly intelligent would strictly obey laws is debatable.
        • Re: (Score:2, Insightful)

          by cgenman ( 325138 )
We do have autonomous weapon AI, right now. They're just not particularly bright. And so far, for the most part, we've adhered to the "human must give the green light to fire" principle, except in automated defence systems.

          How's this set of principles:

          Automated systems can fire on other automated systems willy-nilly, assuming no people are likely to be in the fire zone.

Automated systems can fire on people, but only if that person is pointing a weapon at the machine or at the people the machine is there to protect.
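A minimal Python sketch of how those two principles might be encoded, assuming hypothetical classifier outputs (is_automated, is_person, aiming_at_protected) that no fielded sensor suite can reliably provide today:

    from dataclasses import dataclass

    @dataclass
    class Track:
        is_automated: bool         # hypothetical: classified as an unmanned system
        is_person: bool            # hypothetical: classified as a human
        aiming_at_protected: bool  # hypothetical: weapon pointed at the robot or its charges

    def may_engage(track: Track, people_in_fire_zone: bool) -> bool:
        """Sketch of the two proposed principles above; not doctrine, not deployed code."""
        # Principle 1: automated systems may fire on other automated systems,
        # assuming no people are likely to be in the fire zone.
        if track.is_automated and not people_in_fire_zone:
            return True
        # Principle 2: fire on a person only if that person is pointing a weapon
        # at the machine or at the people the machine is there to protect.
        if track.is_person and track.aiming_at_protected:
            return True
        return False

Note that everything difficult lives in the three boolean inputs, not in the rules themselves.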
          • Re: (Score:3, Interesting)

            by cyphercell ( 843398 )

            The weapons system must use two forms of verification to identify friend or foe.

            How does it work with un-uniformed combatants?

            • Re: (Score:3, Insightful)

              by cgenman ( 325138 )
              How does it work with un-uniformed combatants?

              Poorly?
            • Re:Robot laws (Score:5, Informative)

              by NitsujTPU ( 19263 ) on Saturday April 14, 2007 @07:05PM (#18736401)
I don't know which system he's talking about, but the Phalanx system on battleships is a fully autonomous system that can shoot down enemy aircraft and even knock missiles out of the sky.

It knows which is which because all the friendly aircraft have IFF systems that identify them.
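IFF in this sense is a cryptographic challenge-response exchange between an interrogator and a transponder. A toy Python sketch of the idea, with invented names and a hardcoded key purely for illustration (real military IFF modes are far more involved):

    import hashlib
    import hmac
    import os

    SHARED_KEY = b"demo-only-key"  # real IFF keys are distributed and rotated, never hardcoded

    def challenge() -> bytes:
        # Interrogator sends a fresh random nonce.
        return os.urandom(16)

    def respond(key: bytes, nonce: bytes) -> bytes:
        # A friendly transponder proves key possession by MACing the nonce.
        return hmac.new(key, nonce, hashlib.sha256).digest()

    def is_friend(nonce: bytes, reply: bytes) -> bool:
        expected = hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, reply)

    nonce = challenge()
    print(is_friend(nonce, respond(SHARED_KEY, nonce)))  # True: friendly
    print(is_friend(nonce, os.urandom(32)))              # False: unknown

Note the asymmetry: a failed or absent reply only makes a contact unknown, not hostile, which is one reason IFF alone can't authorize firing.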
              • Re:Robot laws (Score:5, Interesting)

                by Original Replica ( 908688 ) on Saturday April 14, 2007 @08:26PM (#18737095) Journal
It also works because the parameters that it uses to determine a threat are difficult for civilians to replicate, i.e. flying at a Navy ship at 1000+ mph. Handing out "don't shoot me" tags to civilians isn't gonna work so well in urban warfare. I hate to say it, but seeing as "terrorist" style tactics are the only realistic way to take on a more powerful military force, they are now a permanent part of war. As such, the idea of trying to treat the local civilians as "not the enemy" will not last another decade. The current US handling of Iraq will look as over-civilized as Napoleonic "march in a straight line" warfare looks to us today. The robots will kill anyone outside after curfew.
                • Re: (Score:3, Interesting)

                  by TapeCutter ( 624760 )
                  "they are now a permenant part of war"

                  Get over the idea that "terrorists" are new, they have been with us since we started forming tribes and throwing rocks at each other.

As for machines that autonomously kill humans, landmines and other such traps have been around for a long time. At one time landowners were allowed to set mantraps to catch poachers; now British troops have standing orders not to shoot at a fleeing enemy. On the whole I think we are becoming more civilized out of necessity since it h
      • Re:Robot laws (Score:5, Informative)

        by TubeSteak ( 669689 ) on Saturday April 14, 2007 @04:07PM (#18734885) Journal

        Plus robots are controlled by someone at a terminal...they don't control themselves.
        Uhhh... no.
        If someone is controlling it, at best it is a telerobot (semi-autonomous) or at worst, a telemanipulator.

A robot, by definition, is autonomous and does not have or require human control.

        http://en.wikipedia.org/wiki/Telerobotics [wikipedia.org]
      • There are autonomous robots. One could easily design one that seeks out people and kills them specifically. Face tracking software + fully actuated firearm = robot that kills people without human-in-the-loop control.

I.e., we are fully capable of building robots that control themselves in order to carry out this task.
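The face-tracking half of that equation is genuinely off-the-shelf. A sketch using OpenCV's stock Haar-cascade face detector, computing only the pan/tilt error signal such a mount would servo on (no hardware, and thankfully no firearm, is assumed):

    import cv2

    # Stock frontal-face model that ships with OpenCV; no training required.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # any webcam; Ctrl-C to stop
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
            # Offset of the face from frame center: exactly the error signal
            # a pan/tilt actuator would servo on.
            dx = (x + w // 2) - frame.shape[1] // 2
            dy = (y + h // 2) - frame.shape[0] // 2
            print(f"face at offset ({dx:+d}, {dy:+d})")
    cap.release()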
    • Re:Robot laws (Score:4, Insightful)

      by noidentity ( 188756 ) on Saturday April 14, 2007 @04:12PM (#18734915)
      Robots that are smart enough to understand said laws are also only in books and movies.
    • Re: (Score:2, Insightful)

      by JordanL ( 886154 )
      The US military takes the same approach to the Geneva Conventions regarding the use of 50 cal bullets on humans. Technically, you can only use 50 cal guns for equipment, but the US military maintains that clothing, guns, ammunition, flashlights, and other things the enemy may be carrying constitute targetable equipment.
      • Re:Robot laws (Score:5, Insightful)

        by dave420 ( 699308 ) on Saturday April 14, 2007 @04:36PM (#18735153)
        And that's why people all over the world don't take kindly to US forces being near them, regardless of their expressed intent. Collateral damage might only be paperwork to the US forces, but to those directly affected, it's just another reason to fight back. Each death makes a whole family your enemy.
      • Re:Robot laws (Score:4, Interesting)

        by Terminal Saint ( 668751 ) on Saturday April 14, 2007 @05:08PM (#18735455)
        The old "no using .50s on personnel, they're for equipment only" fallacy gets thrown around a lot. In fact, my best friend even had it told to him when he was in basic. According to a DOD legal briefing: nothing in the Geneva or Hague Conventions prohibited the use of .50 cal weapons on enemy personel, the Hague Conventions only apply to signatory nations' UNIFORMED military personel, and US military personel always have the right to defend themselves and other personel with deadly force, using whatever weapon(s) are available; including fists, rocks, pointy sticks, knives, shotguns, cannon, etc.
        • Re: (Score:3, Interesting)

          by repvik ( 96666 )
The keyword here is "defend". The Norwegian army's standard issue is the Heckler & Koch G3 (slightly modified, renamed to AG3, and produced under licence in Norway). It uses 7.62mm rounds. Norwegian "Special Forces" are equipped with H&K MP5s. The reasoning behind this is that we are allowed to *defend* our country with the AG3, but we cannot use the same weapon in an *attack*, thus we have to equip our "attack forces" with MP5s. The same applies to .50 cal (12.7mm), no matter how much the U.S. tries to twist i
The keyword here is "defend". The Norwegian army's standard issue is the Heckler & Koch G3 (slightly modified, renamed to AG3, and produced under licence in Norway). It uses 7.62mm rounds. Norwegian "Special Forces" are equipped with H&K MP5s. The reasoning behind this is that we are allowed to *defend* our country with the AG3, but we cannot use the same weapon in an *attack*, thus we have to equip our "attack forces" with MP5s. The same applies to .50 cal (12.7mm), no matter how much the U.S. tries to twist i

            • by repvik ( 96666 )

That doesn't make the slightest lick of sense. The infantry units use the AG3 because it's an assault rifle. It's long, it's heavy, it has long range and a lot of penetrating power. Special forces around the world use the MP5 because it's a submachine gun. It's light, it's easy to handle, and it's accurate at short ranges. Infantry usually fight in the open, often from positions. SFs usually fight in tight, enclosed spaces, often while moving. That is what defines the choice of weapon. It has absolutely

              • Re:Robot laws (Score:5, Informative)

                by John Newman ( 444192 ) on Saturday April 14, 2007 @09:49PM (#18737711)

                I'll assume you're a fucking moron, because you are. If Norway sent its infantry abroad, it would not equip them with AG3s.
Are all Norwegians this polite, gentle, and peace-loving? In any event, reality must have an anti-Norwegian bias, because Norway has sent its soldiers to Bosnia, Kosovo and Afghanistan, and it sent them armed with AG3s (along with even bigger guns) [nato.int]. In the latter two nations they are even operating under the aegis of NATO, rather than the UN. Fortunately the Norwegian government has ensured they are properly armed, but (sadly) this hasn't stopped them from killing civilian demonstrators [aftenposten.no] or getting killed themselves [bbc.co.uk].
                • Re: (Score:3, Interesting)

                  by repvik ( 96666 )

Are all Norwegians this polite, gentle, and peace-loving? In any event, reality must have an anti-Norwegian bias, because Norway has sent its soldiers to Bosnia, Kosovo and Afghanistan, and it sent them armed with AG3s (along with even bigger guns). In the latter two nations they are even operating under the aegis of NATO, rather than the UN. Fortunately the Norwegian government has ensured they are properly armed, but (sadly) this hasn't stopped them from killing civilian demonstrators or getting killed t

          • Re: (Score:3, Interesting)

            by Bishop ( 4500 )
You must be mistaken. The 7.62mm NATO round is well within the Geneva Conventions for use against personnel. Many Geneva signatory nations have used and continue to use the 7.62mm. The reason it is no longer popular is that the 5.56mm NATO is lighter and equally effective on the battlefield.

If what you state were true, the Norwegian special forces would not use the MP5. The MP5 fires a 9mm or similar round, which causes more trauma than the steel-jacketed 7.62mm or 5.56mm rounds.

            If the Norwegian soldiers are
US military personnel always have the right to defend themselves and other personnel with deadly force, using whatever weapon(s) are available; including fists, rocks, pointy sticks


          "This is my pointy stick, there are many like it but this one is MINE. Without it I am useless, without me it is useless"........

Oh, and I can field-strip my pointy stick faster than you!
      • Re: (Score:2, Insightful)

        by CestusGW ( 814880 )
        Have you ever actually found a reference which will back up that claim? I used to believe that statement as well, until I actually went looking for it myself. And failed to find it, or anything like it in the Geneva conventions. I think you might be taken in by the same floating bit of misinformation that I was.
Watched Future Weapons lately? They have a new lower-caliber gun (.408) that stays supersonic for over 2,200 feet and has more punch than a .50 cal due to its kinetic energy. I guess that's one way around the problem.
      • Re: (Score:3, Insightful)

        by Firethorn ( 177587 )
        Sheesh...

The Geneva Conventions take NO stance on 12.7mm/.50 cal ammunition and its use on humans. For that matter, shotgun slugs are larger in diameter. The whole "aim at equipment, such as a belt buckle" thing is most likely the result of somebody classifying the M2 as an "anti-equipment" weapon, and the resulting stupidity of logic needed to make the system useful against charging troops again.

        Now, .50 caliber IS, by US law, considered the difference between rifle cartridges and artillery type
Actually, the Three Laws aren't about winning; they're about not losing to the robots. It may be hard to imagine a Roomba being a threat, but in the 1800s no one could have predicted the last 100 years. In another hundred years we may be dealing with large numbers of autonomous robots. Do you want the Bush administration's protocol of "kill all the enemy" or Asimov's Three Laws? There's short-sighted, and then there's visionary.
      • by nurb432 ( 527695 )
What I'd prefer is that this could have remained a halfway intelligent discussion, instead of degenerating into a totally off-topic Bush bash.

        Geesh, get over it already.

          In regards to your question: in war the goal is to win. Eliminating the enemy is an effective way of doing this.
    • by rts008 ( 812749 )
As long as they don't let Bender program the laws into the killbots. You know: Bender, the "kill all humans" robot.

But blackjack and hookers are okay.
    • Re: (Score:3, Insightful)

      by dave420 ( 699308 )
And if by winning you flush the morals of your country down the drain? That's cool? So by your logic the Germans were damned-right in killing 6,000,000 Jews, the Americans were spot-on destroying countless villages in Vietnam, the British were fine having concentration camps in the Boer War, and Mao was cool killing 60,000,000? 'Cos they had to win, and nothing else mattered, so it's all good. Brilliant logic.
    • Oh yeah? (Score:4, Interesting)

      by jd ( 1658 ) <imipak@[ ]oo.com ['yah' in gap]> on Saturday April 14, 2007 @04:39PM (#18735197) Homepage Journal
      I can name plenty of nations that have come "second" in a war - and yet outlasted those who "beat" them. The Scots were crushed by the Romans (the Antonine Wall is practically on the northern beaches), mauled by the Vikings and butchered by the Tudors. Guess who outlasted them all? I'll give you a clue - they also got their Stone back.

They're not the only ones. The Afghans - even with legally-dubious US support - never defeated the Russians, they merely lasted longer than the Russian bank accounts. The Celts were amongst the worst European fighters who ever lived, getting totally massacred by everyone and their cousin Bob, but Carthage stands in ruins, the Angles and Saxons only survive in tiny isolated communities in England and America (Oppenheimer's "The Origins of the British" shows that W.A.S.P.s exist only in their own minds; they have no historical reality), but the Celtic nations are actually doing OK for themselves at the moment.

      Arguably, Serbia won the Balkans conflict, having conquered most of the lands belonging to their neighbors and slaughtered anyone who might claim them back. Uh, they're not doing so well for having won, are they? Kicked out of EU merger talks, Montenegro calling them a bunch of losers, Kosovo giving them the finger...

      Hell, even the United States won in Iraq, as far as the actual war went.

      Winning is the easy part. Anyone can win. Look how much of the world the British conquered. The British won far more than most nations could ever dream of. Yet contemporary accounts (I own several) describe the Great Exhibition as a PR stunt to create a delusion of grandeur that never existed. The Duke of Wellington, that master of winning, was described as a senile buffoon who was dribbling down his shirt and had to be propped up by others to stay on his horse. What's left of the Commonwealth shows you all too well that those descriptions of delusion were the reality, not the winning and not the gloating.

      History dictates that who comes second in a war usually outlasts those who come first.

      • No (Score:5, Insightful)

        by KKlaus ( 1012919 ) on Saturday April 14, 2007 @06:43PM (#18736201)
This is total nonsense. First off, the Afghans _did_ beat the Russians, as the Russians pulled out and stopped attacking. They didn't beat them in a strategic sense with tanks and planes and whatnot, but they still clearly won. Secondly, your anecdotes don't make sense. If the Celts that are around today are the same ones that were around to get the crap beaten out of them a thousand years ago, then guess what: the Romans are fine, we just call them Italians now. Winning isn't bad; witness the USSR, the Third Reich, the Persian Empire, on and on, for whom losing didn't work out well.

        You're confusing governments with peoples. Yes the Irish are still around. So are the Italians, so, in fact, are the Germans, Japanese, and Brits. Winning or losing wars rarely affects that, with notable exceptions like the Native Americans, for whom I think it's pretty obvious losing was a bad thing. What aren't still around are governments. And while winning might not make one last forever, I think Hitler and Hirohito would tell you losing is much worse.

Seriously, the only way winning would not be a virtue is if it led to complacency, arrogance, and ultimately weakness. But even then, you would have to _lose_ a war for it to matter. And really, with the exception of the Native Americans, most peoples have survived, and there's really no one to outlast. You are thinking of governments, and trust me, just because you can't think of the names of the governments that disappeared (fair, because winners write history), they did.
    • Re: (Score:3, Insightful)

      by NMerriam ( 15122 )

Are for books and movies. In the real world the only law is to win. You can't come in 2nd in a war.

      On the contrary, winning at any cost is often far worse than losing. A Pyrrhic Victory [wikipedia.org] often invites an even greater disaster in the future, but simply losing a fight means you can continue fighting in other ways, or fight again later when you've marshalled your strength and more carefully evaluated the enemy's weaknesses.

      I'd draw parallels to current world events, but anyone willing to shred the Constitutio

Asimov's Three Laws of Robotics were a literary device he used to demonstrate the fallacy of attempting to control a robot by restricting its behaviours. If you read the stories, it's always about how poorly they work.

      Most people don't know that even now we have a pretty hefty problem with Neural Networks. It is impossible to train a behaviour into a neural network without inserting the inverse behaviour. There is also no way to be 100% sure that the neural net won't ever access the region that contains the i
Robot laws ... Are for books and movies. In the real world the only law is to win. You can't come in 2nd in a war.

      Actually, there are weapons that civilized countries agree not to use. Landmines. Chemical and biological weapons. Some suggest Atomic Weapons. SciFi writers have been recommending that various nanotechnologies and automated robots should join the ranks...

      And it's easy to assume that weapons will be used by their developers against the people they worry about defending themselves against

  • Three rules... (Score:5, Insightful)

    by pla ( 258480 ) on Saturday April 14, 2007 @03:49PM (#18734705) Journal
    1) Spell "Asimov" correctly when submitting an article to Slashdot.

2) The military will program their toys to kill everyone and everything, and to hell with Asimov (right up until they turn on us)

    3) Humans already count as collateral damage in warfare. Damn the men, spare the oilfields!
  • What are we going to do when our robots autonomously decide to kill us???

    I will be losing a lot of sleep over this in about 300 years.
    • Worthwhile pursuit (Score:4, Insightful)

      by Mateo_LeFou ( 859634 ) on Saturday April 14, 2007 @04:09PM (#18734895) Homepage
      There aren't any immediately-practical uses for robotics laws, but if it gets people thinking about ethics & technology I'm all for 'em.
    • by SeaFox ( 739806 )

      What are we going to do when our robots autonomously decide to kill us???

      We'll send wave after wave of our own men after them and once the robots have reached their pre-determined kill limit they'll shut down, and we'll return victorious. I see medals in our future.
  • huh (Score:4, Insightful)

    by gravesb ( 967413 ) on Saturday April 14, 2007 @03:51PM (#18734727) Homepage
    This assumes a level of optical recognition that is missing in current robots. Also, once you let these things go, there is a ton of reliance on the programming and the technology. In my opinion, there should be no autonomous robots on the battlefield. Drones are one thing, with the pilot safe elsewhere, but completely automated robots are another.
    • Re: (Score:3, Insightful)

      This assumes a level of optical recognition that is missing in current robots
      Once the Borg assimilate everyone then the lines will become rather fuzzy. We've already taken the first few steps by RFID'ing everything, chipping our pets (I hear it's mandatory in California), and some companies have even chipped their employees.

      How is a robot supposed to know the difference?
How is a human killed by a robot any worse than a human killed by another human? I think this would be more "feel good" legislation than anything. Rationalizing a kill doesn't make it any better. Countries and their stupid war games. The future (and to some extent the present) of war is exactly this: unmanned drones and bombers, robotic infantry, intercontinental ballistic missiles operated by automatic systems, thousands of killings based on button pushing. Rationalizing it will not make
  • ED-209 [unclerummy.com]: PLEASE PUT DOWN YOUR WEAPON. YOU HAVE 20 SECONDS TO COMPLY.
    Dick Jones: I think you'd better do what he says, Mr. Kinney.
    [Alarmed, Kinney quickly tosses the gun away. ED-209 growls menacingly.]
    ED-209: YOU NOW HAVE 15 SECONDS TO COMPLY.
  • by RyanFenton ( 230700 ) on Saturday April 14, 2007 @04:02PM (#18734833)
It's like RoboCop: You shall not harm any employee of your owners. But you have the authority to find a way to get them fired, and THEN kill them. And no one found any problem with this until their boss was dead in front of them, and they realized they could be next.

Honestly, though, I see value in a policy that no human life should be put at risk by automatic death systems, including land mines and other traps. These loopholes make that policy as useless as some RoboCop parody, though.

    Ryan Fenton
    • Remember the Spacer worlds, who defined "human" to mean people like themselves? More than one book covered robots who killed - or tried to kill - humans because the definition had been made selective enough.

This reflects how real wars are fought, too. Name me one war in all of recorded history that did NOT involve first dehumanizing the enemy, by all sides involved. We see that even today, with German soldiers posing for pictures with the skulls of defeated enemies, or American soldiers posing by naked and shac

  • by interiot ( 50685 ) on Saturday April 14, 2007 @04:09PM (#18734897) Homepage

The article summary doesn't give the right impression... the proposed policy would allow machines to target military machines (see pp. 15-16 of the PDF). Page 23 is the most interesting: it says that anti-personnel landmines are looked down upon in the international community because they linger after a war and kill civilians, whereas anti-tank mines aren't looked down upon so much, because they can only misfire during an armed conflict. So the policy is mostly intended to address international political responses to war, not to prevent sentient machines from taking over the human race.

Though, it would limit somewhat the extent to which machines could enslave the human race... if humans never took up arms, machines could never take lethal action against humans. That doesn't mean machines couldn't control humans politically/economically/socially (e.g. deny food, deny housing), but it does mean they couldn't take up a policy of overt extermination of all humans, unless humans decided to fight to the last.

  • by rolfwind ( 528248 ) on Saturday April 14, 2007 @04:10PM (#18734901)
Until a robot can think, in such a way that it resembles how a human thinks, I think coming up with "laws" such as these is next to useless unless you want a philosophical discussion or a what-if scenario. We have a hard enough time trying to get robots to recognize images for what they are (AFAIK some high-end surveillance systems for the government can do this on a primitive level, i.e. they can't learn to recognize much beyond their programming). How would you program such arbitrary, human concepts? Do we wave our hands and make it so?
    • Re: (Score:3, Insightful)

      by drmerope ( 771119 )
More importantly: weren't all of the Robot novels really about how the rules didn't work out the way we expect? The charm of the books was that they revealed logical puzzles: how behavior that was fully in accordance with the rules could be absolutely disastrous and unexpected on a naive reading of them.

      Note: this point was entirely lost in the movie remakes of these books.

      So isn't it a little scary that we're actually comparing some policy to the three laws of robotics? To repeat myself: The whole poi
Episode 20 of the show My Life as a Teenage Robot [wikipedia.org] depicts one possible scenario where an AI-based robot is given complete, unhindered control over the destruction of weaponry it finds. While it does initially achieve its intended goal of ending all war on earth, the robot's AI system eventually falls into a state of unrest during peacetime and starts indiscriminately attacking anything that could conceivably be used as a weapon.

    What's interesting about this concept, is what would prevent an AI system with t
  • by Shihar ( 153932 ) on Saturday April 14, 2007 @04:15PM (#18734941)
During the Vietnam War a unit armed with anti-aircraft autocannons was surrounded by Vietcong. Technically, they were not allowed to open fire on anything other than equipment with such weapons. Not really being a fan of dying, the leader of this unit ordered his men to open fire and slaughtered the VC. During his court-martial hearing he was asked if he understood the rules of engagement. He said that he did. He was then asked if he had violated the rules of engagement. He responded that he had not. He was asked how opening fire with his weapons upon half-naked VC did not violate his rules of engagement. His answer? He did not order his men to fire at the VC. He told his men to shoot at the VC's guns and canteens; hence he was shooting at their equipment.
From what I was told about .50 cals: you can't shoot at people, only equipment. So aim for their helmet.
  • A friend of mine in the first gulf war said that illegally modified weapons and anti-vehicular weapons for use against other soldiers were tolerated with a wink and a nod.
    The understanding was that you couldn't shoot it at any person.
    But you could shoot at at their helmets, boots, canteens, etc.

He also said that before they left, the C.O. established an amnesty box where items could go, no questions asked, as long as they weren't brought on the plane. Following the detailing of the contraband, souvenirs that
  • it's ASIMOV (Score:2, Informative)

    by vruz ( 468626 )
    ASIMOV, ASIMOV, ASIMOV
    One of the greatest sci-fi writers ever.

What are you?

A Microsoft Word spellchecker user?
  • by tripler6 ( 878443 ) on Saturday April 14, 2007 @04:21PM (#18734999)
    You see, killbots have a preset kill limit. Knowing their weakness, I sent wave after wave of my own men at them until they reached their limit and shut down.
  • by SuperBanana ( 662181 ) on Saturday April 14, 2007 @04:23PM (#18735017)

'a robot could decide, under Mr Canning's rules, to target a weapon system such as an AK47 for destruction on its own initiative, requiring no permission from a human. If the person holding it was thereby killed, that would be collateral damage and the killer droid would be in the clear.'

The Geneva Conventions frown upon collateral damage [spj.org], though someone is not a civilian if they're holding a weapon (see the "spontaneously takes up arms" bit). That's not a good enough excuse. A person holding a gun is not necessarily a soldier. They could be a homeowner, defending their property from looters, for example. That's why you are supposed to give a chance of surrender. Will a robot do that, reliably? Will a robot properly identify and treat hors de combat people?

Here's a bigger, related question: a robot is a) not a person and b) maybe more durable. A human soldier is allowed to fire in defense. Picture a homeowner in wartime, guarding his house. A robot trundles by, x-rays the house, sees the weapon, charges in. He sees it heading for him, freaks out, fires at it. How can the robot possibly be justified in killing him? Even if he represents a threat, it's only a threat to a machine!

    Second point: this is really just "killing by proxy." Regardless of whether you pull a trigger on a machine gun, or flip a switch on the General Dynamics Deathmachine 2000: if you knew your actions would cause injury or death, you're culpable. It's not the robot that is responsible when a civilian or hors de combat soldier is killed: it's the operators. Robots don't kill people: people build, program, and operate robots that kill people.

The Geneva Conventions frown upon collateral damage [spj.org], though someone is not a civilian if they're holding a weapon (see the "spontaneously takes up arms" bit). That's not a good enough excuse. A person holding a gun is not necessarily a soldier. They could be a homeowner, defending their property from looters, for example. That's why you are supposed to give a chance of surrender. Will a robot do that, reliably? Will a robot properly identify and treat hors de combat people?

      Whooooops. The first s

  • Let machines target other machines and let men target men

    ...and let women target women, and let small furry creatures from Alpha Centauri target small furry creatures from Alpha Centauri.

    How binding would this dichotomy be on human soldiers? Would we see a war-crimes tribunal in which a human soldier is charged with targeting an autonomous machine?

    And you know, as long as we're going to base protocols of engagement on superficial semantics, why not be more specific and only let generals target gener

  • Premature (Score:5, Insightful)

    by Spazmania ( 174582 ) on Saturday April 14, 2007 @04:34PM (#18735115) Homepage
    Is it just me or is a discussion of ethics laws for robots premature given the state of the art in artificial intelligence? If you want to teach a machine not to harm humans, it helps to first teach the machine the difference between a human and every other object it encounters.
    • by ardor ( 673957 )
Well, this is possible already. In fact, automated sentry turrets are used on the Korean border.
    • Is it just me or is a discussion of ethics laws for robots premature given the state of the art in artificial intelligence?


      These aren't ethics laws for self-willed, scifi-like robots. These are ethics laws for users of fairly stupid (by scifi standards) autonomous armed combat systems. And since such systems are currently in active development, it is not at all premature to discuss what, if any, limits ought to be placed on their application.
    • Is it just me or is a discussion of ethics laws for robots premature given the state of the art in artificial intelligence? If you want to teach a machine not to harm humans, it helps to first teach the machine the difference between a human and every other object it encounters.

The discussion is not premature. Land mines already kill humans and machines indiscriminately, and therefore would be in violation of this treaty. The interesting thing is that guided cluster bombs could violate the law if they were
indiscriminate killing machines would lead to [a] return to a no-man's-land style of warfare

        Which would be horrible. But you miss my point: We may as well debate how much lead alchemists are permitted to transmute into gold in order to avoid destabilizing the world economy. The argument is moot: there is no reliable, cost-effective way to transmute lead to gold, nor are we on the verge of creating one.

        Discussing whether or not to allow ED-209 to shoot at humans is silly. ED-209 can't tell the difference in t
  • The Robot Laws themselves were originally taken from safety design principles used in actual robots.

In the way most people think of them, they cannot be practically implemented within an artificial intelligence of human-level ability. This is simply because such an artificial brain would be massively parallel by design, and would require something just as complex to detect the breaking of the laws: a second artificial brain. Such an artificial brain would itself be subject to the same problem. There are ways t
  • Let your robots target only my robots... I promise, enemy mine, that my robots will also obey the rules... heheheh.

Basic principle of warfare: Apply strength to the WEAKNESS. Humans are weak. Robots should actually target humans; they are far more effective that way. If there are no more humans, who is going to tell the enemy robots what to do?

    John S Canning fails for not understanding the nature of war. Go ahead and keep building battleships, and ignore those aircraft...
  • A small packbot with a rotating head. In the head are six devices:

    1. Riot control microwave pain gun.
    2. Taser
    3. Machine gun
    4. Small missile launcher
    5. Anti-tank Grenade thrower
    6. Mine dropper
    1 is for a crowd of humans. 2 is for a single human. 3 is for a small weapons outpost or another robot. 4 is for a flying vehicle or robot. 5 is for an armored vehicle. 6 is to help defend an area or when the robot is being chased.

    The idea is of course, robots are extremely good at being given an ar
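The payload-selection scheme in the comment above reduces to a table lookup. A hypothetical Python sketch (the target-class labels are invented, and producing them reliably from sensors is the actual hard part):

    from enum import Enum
    from typing import Optional

    class Device(Enum):
        PAIN_RAY = 1     # riot-control microwave
        TASER = 2
        MACHINE_GUN = 3
        MISSILE = 4
        AT_GRENADE = 5
        MINE = 6

    # Direct encoding of the mapping proposed above.
    RESPONSE = {
        "crowd": Device.PAIN_RAY,
        "single_person": Device.TASER,
        "outpost_or_robot": Device.MACHINE_GUN,
        "flying_vehicle": Device.MISSILE,
        "armored_vehicle": Device.AT_GRENADE,
        "pursued": Device.MINE,
    }

    def select_device(target_class: str) -> Optional[Device]:
        # The lookup is trivial; classifying the target correctly is not.
        return RESPONSE.get(target_class)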
  • Who the fuck is Azimov? I grew up reading Asimov, and reading books that used his robot laws as an effective cornerstone for his award winning science fiction. Do you have any idea how insulting mis-spelling a man's name like that can be to his memory and his body of work?
Is it just me, or was the entire point of all Asimov's (appropriate) works not that, even given His 3 Laws, robots would Find A Way?

Forgive my dramatic capitalisation, but Asimov's entire point seemed to be that these 3 laws, despite being pretty obvious, were deeply flawed and not at all thought through. Even in the movie (spoiler follows, even though the movie spoiled itself well enough) the whole point was that the computer interpreted enslavement as being better than the possibility of self-harm, from a s
  • War is Hell (Score:4, Insightful)

    by tacocat ( 527354 ) <(tallison1) (at) (twmi.rr.com)> on Sunday April 15, 2007 @07:12AM (#18740413)

    "War is Hell"

Ever read All Quiet on the Western Front? Ever talked to someone who was there, or to a civilian in WWII-era Europe?

War sucks. It's supposed to suck. Without the pain and suffering that war can bring to all sides of the battle, winners and losers alike, there is nothing to push people toward peace. Perhaps the generals should go watch Star Trek episode 23, "A Taste of Armageddon", circa 1967.

    That society has done such a nice job making war "clean" that they have decided to continue fighting a war for 500 years rather than just figure out how to make peace.

    In most societies, people are taught that violence against others is fundamentally bad. This becomes a moral element that entwines all the people within that society. It also motivates the same people to find ways around doing violence.

If you study anything about the Nazi camps in WWII, there was a growing behaviour where the soldiers in the concentration camps knew what they were doing but absolved themselves of any responsibility by hiding behind the statement "I was just following orders", thereby removing themselves morally from the activities. After WWII this was considered to be a war crime, a view backed by hundreds of trials across the world.

Fast forward 60 years and we are at a point where the soldiers operating the computer screens that run these killer robots can absolve themselves of any moral responsibility, because the laws will simply allow them to say "I was just operating a computer program." And while this is going on, there is no one left to come back from the battlefield to serve as a reminder of just how bad war really is and how important it is to avoid it.

At the same time, if we are going to commit to a war, we had better be willing to see it through even when it gets ugly. I'm pretty pissed at the news for giving us daily body counts of 4 and 10 soldiers in a 5-year battle. In contrast, WWII was hundreds to thousands a day and everyone was sticking to the plan. Everyone was committed to the plan and everyone knew why they were fighting. Vietnam wasn't so clear-cut. It was rather vague as to why we were there, and even on day one not everyone was convinced we needed that war. And now we are in the Middle East without a convincing and clear-cut plan as to what we are doing, why we are there, what we hope to accomplish, and not enough people in the States give a shit. Perhaps in New York City, but nowhere else.

They'll get their killer robots and their legal loopholes to kill anything they want, and no one will really do much, because it's clean and doesn't interfere with "Dancing with the Stars", and the sheep continue to bleat.

  • by DarkOx ( 621550 ) on Sunday April 15, 2007 @04:04PM (#18744065) Journal
I don't understand this, nor do I think it's at all fair. We should not be imposing rules on the poor kill droids that are contrary to their nature. A kill droid should be free to romp and do what it does best: kill anything and everything.

Honestly, this strikes me like those people who adopt dogs that are prone to barking and then put collars on them to shock them when they do it. It's unfair, I say, and wrong to force something to act contrary to its nature. If you don't want a dog that barks much, you should adopt a breed which is not given to barking, or maybe just get a cat.

The same is true for robots. If you don't want a robot that runs around killing everything it detects (especially you), then you should forgo adopting or building a kill droid. Maybe go get yourself one of those friendlier industrial breeds that enjoys welding steel panels onto cars, or, if space is a concern, some types of robots are very small and even enjoy roaming around your home vacuuming the floor as they go. There is an appropriate type of robot for almost every situation. Please be responsible and only adopt a kill droid if you have adequate supplies of victims for it to kill.
