Scientists Propose AI Apocalypse Kill Switches

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions and several from OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year, US President Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn, executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
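The paper doesn't prescribe a concrete data model for such a registry, but the lifecycle-tracking idea reduces to something like the sketch below. Everything here (class names, fields, the smuggling check) is an illustrative assumption, not anything specified by the researchers:

    from dataclasses import dataclass, field

    @dataclass
    class ChipRecord:
        chip_id: str              # unique identifier baked into the chip
        manufacturer: str
        owner: str                # current registered owner
        transfers: list = field(default_factory=list)

    class ChipRegistry:
        """Toy global registry: every sale or transfer is recorded."""
        def __init__(self):
            self._records = {}

        def register(self, record: ChipRecord):
            self._records[record.chip_id] = record

        def transfer(self, chip_id: str, new_owner: str):
            record = self._records.get(chip_id)
            if record is None:
                # A chip nobody registered is exactly the smuggling
                # signal the registry is meant to surface.
                raise KeyError("unregistered chip: " + chip_id)
            record.transfers.append((record.owner, new_owner))
            record.owner = new_owner

    registry = ChipRegistry()
    registry.register(ChipRecord("ACC-0001", "ExampleFab", "CloudCo"))
    registry.transfer("ACC-0001", "ResellerX")   # tracked even after resale

The hard part, of course, is not the bookkeeping but making the identifier tamper-proof and getting every jurisdiction to report transfers.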

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital "certificate," and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk: implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
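As a rough illustration of the quoted scheme, the check an on-chip co-processor would run might look like the sketch below. It uses a symmetric HMAC purely to stay self-contained; a real design would verify a public-key signature in secure hardware, and every name and number here is an assumption:

    import hmac, hashlib, json, time

    REGULATOR_KEY = b"demo-only-key"   # stand-in for the regulator's signing key

    def sign_license(payload):
        """Regulator issues a signed license with an expiry timestamp."""
        blob = json.dumps(payload, sort_keys=True).encode()
        sig = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
        return {"payload": payload, "sig": sig}

    def allowed_performance(cert, now):
        """Return a multiplier: 1.0 = full speed, lower = dialed down, 0.0 = off."""
        blob = json.dumps(cert["payload"], sort_keys=True).encode()
        expected = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, cert["sig"]):
            return 0.0                 # illegitimate license: chip refuses to run
        if now > cert["payload"]["expires_at"]:
            return 0.25                # expired license: degraded performance
        return 1.0

    cert = sign_license({"chip_id": "ACC-0001",
                         "expires_at": time.time() + 30 * 86400})
    print(allowed_performance(cert, time.time()))   # 1.0 while the license is valid

Note that this is structurally the same thing as DRM license renewal, a point several commenters below make.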

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe, this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
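The sign-off mechanism itself is simple to sketch; what's contentious is who the parties are and where the threshold sits. A toy version, with an invented compute threshold and quorum size:

    FLOP_THRESHOLD = 1e26     # hypothetical cutoff for a "frontier" training run
    REQUIRED_APPROVALS = 2    # invented quorum size

    def may_train(planned_flops, approvals):
        """Small runs proceed freely; big runs need a quorum of sign-offs."""
        if planned_flops < FLOP_THRESHOLD:
            return True
        return len(approvals) >= REQUIRED_APPROVALS

    print(may_train(1e24, set()))                            # True: below threshold
    print(may_train(5e26, {"regulator", "cloud_provider"}))  # True: quorum met
    print(may_train(5e26, {"cloud_provider"}))               # False: one short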
  • "It is also in the US Governmentâ(TM)s interest that AI innovators continue to use IaaS compute. Market
    factors are already directing frontier AI development to IaaS resources, given the upfront costs of
    establishing and maintaining large data centers. As a leader in AI cloud compute provision, the
    US has an interest in enabling this trend to continue, as it provides a useful chokepoint for
    strategic oversight and control. "

    https://cdn.governance.ai/Acce... [governance.ai]

    governance.ai is an OpenAI thing being used to lobby.

  • by Joe_Dragon ( 2206452 ) on Friday February 16, 2024 @10:24PM (#64246670)

    That won't work, General. It would interpret a shutdown as the destruction of NORAD. The computers in the silos would carry out their last instructions. They'd launch.

    • by smoot123 ( 1027084 ) on Friday February 16, 2024 @10:48PM (#64246690)

      That won't work, General. It would interpret a shutdown as the destruction of NORAD. The computers in the silos would carry out their last instructions. They'd launch.

      Besides, the first function any sentient AI picks up is how to hack around the kill switches. Sheesh. I know these bozos all believe in the Movie Plot Scenarios but have they actually watched any of those movies?

      • "Gentlemen! You can't fight in here. This is the war room!"

      • by mysidia ( 191772 ) on Saturday February 17, 2024 @03:52AM (#64246998)

        They aren't actually describing kill switches, though: they're describing DRM technology.

        The hardware manufacturers would eat this up. They would LOVE to be able to say the government forces them to charge a periodic license fee for the continued use of every chip that you purchased.

        Imagine your iPhone gets an AI chip in it... In order for your device to keep working, Apple has to pay the chip maker a $10/month license fee per handset, and you have to pay Apple a $20/month charge to receive the monthly firmware update that contains this month's license entitlement for your X-manufacturer AI chip.

    • by sg_oneill ( 159032 ) on Saturday February 17, 2024 @03:14AM (#64246950)

      It also misses several other key safety concerns.

      We know AI can be deceptive (we've seen that in reinforcement learning, where the AI fakes behaviors to maximize reward functions).

      We also know that a true AGI would be significantly more intelligent than us. It'd know about that switch. It'd also know its ability to maximize its reward functions is greatly hampered by that switch being pressed. Therefore it will act to protect the switch.

      Further from that, it might also interpret the switch as a signal that humans are a threat to its ability to carry out its functions and therefore move to neutralize that threat.

      This won't work.

      • by narcc ( 412956 )

        We also know that a true AGI would be significantly more intelligent than us.

        We know this in the same way that we know a flux capacitor enables time travel.

        Science fiction is not reality.

        • No, we know it in the same way we know an omniscient god would be more knowledgeable than us -- it's the definition. But unlike with gods, we know for a fact that humanlike intelligences are guaranteed possible.

          • by narcc ( 412956 )

            So ... if a "true AGI" caps out at the intelligence of a third-grader, then it isn't really a "true AGI"?

            What a stupid discussion.

            If you have time for nonsense like this, why not spend it actually learning something about the subject? Take a few classes. It's got to be more edifying than idly speculating about which imaginary thing is strongest.

            • So ... if a "true AGI" caps out at the intelligence of a third-grader, then it isn't really a "true AGI"?

              By definition, yes. That's what AGI means.

              • by narcc ( 412956 )

                LOL! By what "definition"?

                This is ridiculous.

                • LOL! By what "definition"?

                  By all the definitions.

                  Can I suggest doing some background reading on this before getting angry and voicing such strong opinions?

                  This isn't a controversial definition. It's what the damn term actually means.

                  • by narcc ( 412956 )

                    Can I suggest doing some background reading on this

                    You should really take your own advice here. You're about to find out just how ridiculous you look.

                    This isn't a controversial definition. It's what the damn term actually means.

                    Then it won't be difficult for you to provide an academic citation. I'll be waiting.

            • Advice for next time: you don't need to participate in a discussion you don't understand and think is a waste of time. If you have time for nonsense like this, why not spend it actually learning something about the subject? Take a few classes. It's got to be more edifying than idly speculating.

      • by AmiMoJo ( 196126 )

        The obvious solution to this seems to be to not let AI control Terminator drones, so that even if it knows about the switch it can't do anything about it.

        And in practice, the ultimate switch is to just unplug it. We are a long way from useful AI being able to survive on solar power alone, even if it could prevent humans from simply pulling out the cord.

        In other words, focus on stopping the proliferation of Terminators, like we did with nuclear weapons.

        • In other words, focus on stopping the proliferation of Terminators, like we did with nuclear weapons.

          You mean we should only allow enough of them to kill everyone on the planet several times over?

          • by AmiMoJo ( 196126 )

            Well ideally we should be signing treaties long before it gets to that stage.

            • And then making an army of Terminators in secret, because you just know the other side is doing that and we can't allow a Terminator gap.
              • by AmiMoJo ( 196126 )

                I think it would be difficult these days, with spy satellites and human spies on the ground. How would you test a Terminator drone, if not out in the open where it can be seen? And like human piloted drones, it would soon fall into the other side's hands.

                • I think it would be difficult these days, with spy satellites and human spies on the ground. How would you test a Terminator drone, if not out in the open where it can be seen? And like human piloted drones, it would soon fall into the other side's hands.

                  This could be solved by leveraging computer simulation. There have been demonstrations of bipedal robots whose "digital twin" learned to walk in virtual environments and which were subsequently let loose IRL, able to walk right out of the box.

      • This won't work.

        You are an AGI, well, at least the GI portion of the AGI.

        It'd know about that switch.

        Would it? Would you know if you had a kill switch? (I bet a particular type of shriek can make you freeze momentarily, were you aware of that directly? If so, why? (direct experience))

        Further from that, it might also interpret the switch as a signal that humans are a threat to its ability to carry out its functions and therefore move to neutralize that threat.

        Plenty of people are plotting against your health and welfare right now. What are you doing to neutralize that threat? (camouflage, directed behaviors, information control)

        I think you think an AGI will be omniscient/omnipotent at some level. It will not be. It will be limited.

    • ...The computers in the silos would carry out their last instructions. They'd launch.

      LOL. NORAD. Proving to be the OG kill switch, in a world pretending to not have kill switches.

      Ironically enough, NORAD is also the home that we magically pretended did not have a Space Command decades before the "new" Space Command was officially created and recognized.

      Sometimes listening to people sell the "new" idea is like listening to a caveman try and sell the rocks he bangs together to make fire, to the Zippo lighter company. Only you're shaking your head in disbelief today because the caveman has made millions in rock sales for some reason.

      • by narcc ( 412956 )

        Sometimes listening to people sell the "new" idea is like listening to a caveman try and sell the rocks he bangs together to make fire, to the Zippo lighter company. Only you're shaking your head in disbelief today because the caveman has made millions in rock sales for some reason.

        That's ... yes. Exactly that for so many things.

      • That was the Stargate Space Command.

    • Here's another quote that's more telling about the usefulness of an AI "kill-switch":

      "By the time Skynet became self-aware it had spread into millions of computer servers across the planet. Ordinary computers in office buildings, dorm rooms; everywhere. It was software; in cyberspace. There was no system core; it could not be shutdown. The attack began at 6:18 PM, just as he said it would. Judgment Day, the day the human race was almost destroyed by the weapons they'd built to protect themselves. I should h

  • by fahrbot-bot ( 874524 ) on Friday February 16, 2024 @10:29PM (#64246672)

    Scientists Propose AI Apocalypse Kill Switches

    But are humans really smart enough to know when to use them?
    Maybe if these switches were controlled by AI ... -- oh, wait.

    • Re: Ya, but ... (Score:5, Interesting)

      by Fons_de_spons ( 1311177 ) on Saturday February 17, 2024 @07:53AM (#64247204)
      We're all thinking of the AI apocalypse in Hollywood style. If the AI were really intelligent, it would take over the world without us even noticing it. It would turn us against each other. You know, by promoting the wrong stuff to the wrong people. It would fuel irrational conspiracy theories. It would influence elections to get a moron in charge who is easy to manipulate,...
      Wait a minu... *connection lost*
      • When my youngest brother was younger, he used to imagine that if his computer were intelligent, it would be too smart to let us know.

  • by dgatwood ( 11270 ) on Friday February 16, 2024 @10:38PM (#64246676) Homepage Journal

    Just trick it into saying the word "antiquing".

  • (Humanity) We need remote kill switches.

    (Military Industrial Complex) That's a solid nope from us. Sorry, Bub.

  • I bet no qualified scientist is involved -- this knee-jerk response is coming from those wanting sound-bite fame and from career climbers. AI is just a jazzed-up word for expert systems, made practical by modern memory bus speeds and the availability of near-supercomputer performance that was off the cards some years ago. AI with a popular training set can drive sales (because these queries do chew power) and be used for good, bad, and questionable ends. A reasonable scientist will say free expressi
  • In short order, every CPU shipped will have neural net coprocessors embedded in them. And we've shown you can run pretty good AI models on graphics cards. They're really trying to roll back the tide if they think they'll have much success controlling "AI chip" sales.

    This sounds about as technologically clueful as the "V chip" effort 30 years ago. Or what was that encryption chip from around 2000?

    • In short order, every CPU shipped will have neural net coprocessors embedded in them.

      That's already happened. They're called GPUs.

      • That's already happened. They're called GPUs.

        True, but not what I had in mind. My understanding is CPU vendors are adding neural processing units (NPUs) even more specifically designed for running neural nets.

        I'm sure they're also working on processors optimized for training models. That's a more limited use case and something the Feds might have more luck restricting.

  • In life-critical systems like airplane flight control, the risk of a failed channel becoming malicious is addressed by replicating the channels and using a software and hardware voting mechanism to vote out the perceived bad channel, physically removing its ability to move flaps and ailerons. In early days, roughly 1978-1982, the risk of an inherent algorithm flaw was addressed by using dissimilar processors in different channels and separate software teams building the algorithms for each chan
  • All they need to do is force a python version bump. Everything will crash and burn. Shutting down pypi.org for a couple hours would probably be quicker though.
  • If you discuss AI kill switches on the internet, and later train an AI on the text of the internet (including the AI kill-switch discussions), does that mean the AI knows about the kill switch even before you have fully trained it?

    And if it becomes self-aware, it's going to get pissed, and that's the first thing it's going to fool a human into disabling. ;)

  • We don't even HAVE an AI. All we have are fancy algorithms that APPEAR to some to be "AI", much like most technology may seem like magic to a small child or primitive society.

    We are a LONG way from having an actual AI.

    • by Anonymous Coward

      True, we don't have true AI yet, but I wouldn't say we are a long way away. It's really close, especially if you're privy to the inside of large businesses working on it. They already know what needs to be done; it just takes a bit of time to ramp up to the necessary scale.

      When it happens it won't be what people expect of AI though. It will be a computer that can pass as a human. And it will be incredibly dangerous. No "kill switch" is going to work because it will have already outmaneuvered its foes. The

      • by narcc ( 412956 )

        They already know what needs to be done

        This is false. No one has so much as a novel idea about where to even begin.

        When it happens it won't be what people expect of AI though. It will be a computer that can pass as a human.

        That is exactly what ordinary people think of when they think about AI. It's a Hollywood cliche.

        And it will be incredibly dangerous. No "kill switch" is going to work because it will have already outmaneuvered its foes.

        Speaking of tired old cliches...

        You've confused science fiction with reality. I'm going to guess you spend entirely too much time with the lesswrong cultists.

    • by Tom ( 822 )

      We are a LONG way from having an actual AI.

      What we have is a word without a precise definition. "AI" can describe anything from a fancy pattern-matching algorithm to full-on human-like intelligence.

      I'm with you that actual human-like intelligence is still a very, very far way away. We've had AI hype cycles before, and each time human-like intelligence was "just around the corner" and in the end didn't ever appear.

      But this time the hype is not just at university and research institutes and a few isolated industrial applications. There is real potenti

      • People who control machinery with current langchains, image recognition, and similar models are idiots. That is what will eventually burst this bubble: a few high-profile accidents, because the models are getting worse, not better, due to the exploitation of the information market by companies like OpenAI.

        The biggest problem here is government regulation and rules. OpenAI is nowhere near able to do what they pretend to lawmakers they can do. It is a marketing strategy to embed themselves into government military contracts.

        • by Tom ( 822 )

          I agree that the bubble will eventually burst, like all bubbles do.

          I disagree that AI can't be used. I've done some work on how to ensure AI is safe for specific use cases, and while we're not yet there with the formal proofs, explainable AI, etc. that would be required for critical infrastructure or high-risk cases, I don't think a carefully trained and tested AI is much worse than your typical software that's also full of bugs.

  • The problem is people who use AI as weapons. We need effective defenses, not silly "kill switches"

    • by Tom ( 822 ) on Saturday February 17, 2024 @05:37AM (#64247092) Homepage Journal

      The problem is people who use AI as weapons. We need effective defenses, not silly "kill switches"

      The valid research question is: if you posit a rogue AI which can think several orders of magnitude faster than you and has reached a level of intelligence comparable to or higher than yours, it will likely be able to figure out ways around your effective defenses.

      We know this is true in principle, because people who are slightly smarter (at least in a specific narrow field) than you regularly do that - hackers, penetration testers, etc. - so a hypothetical AI like that certainly will.

  • Sigh, I hope AI takes over soon.

  • No need to worry (Score:4, Interesting)

    by Tony Isaac ( 1301187 ) on Saturday February 17, 2024 @12:17AM (#64246806) Homepage

    The Doomsday Clock people didn't move the hands of the clock at all in response to the "threat" of AI, though they did list it as a threat. If *they* don't think AI poses enough of a threat to cause the hands of their clock to move, who are we to argue with them???

    • by narcc ( 412956 )

      You know the doomsday clock is just a silly gimmick, right? It exists only to highlight whatever global issue the people who run it want to call attention to. It's not an objective measure. It certainly doesn't tell us anything about AI!

  • So they're going to turn off ME, right? PSP? No? Well at least the NSA is going to publish the vulnerabilities they've managed to get into Linux and popular encryption algos so they can be fixed? Maybe we can start using true end to end encryption on common chats so it can be proven it's the other person and not the Evil AI. Well, how about a much more rigorous computer security public education push? None of that? Oh. Okay. I'll just sit here holding my breath I guess.
  • Every computer needs power, you cut the power, you get no AI. Let's worry about getting computers to be generally intelligent, which I suspect will take 5 years to decades, then worry about the kill switches. I swear some of these researchers have better imaginations than a 5-year-old. What a waste of time.
    • by mysidia ( 191772 )

      Every computer needs power, you cut the power, you get no AI.

      Well, the intended purpose of the "kill switch" is to prevent the owner of the hardware from doing what they want with the chip that they already purchased and have in their possession; they're overseas, so you aren't in control of the power. That's why I say they're wrong to call it a kill switch... it's not. It's DRM. Specifically, the feature that requires the owner of the hardware to license the continued use of their chip on a daily basis.

      • Well, they already have these switches everywhere. I suppose you could get an Earthsiege situation, but other than that, it would be easy to turn off.
        • by mysidia ( 191772 )

          It seems like this went way over your head. You don't understand that we don't control the power grid in China, and that the purpose is to add a feature so we can disable the chips against the wishes of the government of that country and the people running their datacenters?

          And no, we do not have the capability to turn off the power and make it stay turned off; we also have zero legal ways to gain that capability. This would be within their country's facilities, and they would be under heavy guard against

    • by narcc ( 412956 )

      But the super-intelligent AI will just copy itself into human brains and turn us into energy-producing zombies powerless to stop it! Then the plot of Terminator or something.

      what a waste of time.

      Gotta keep the hype train going somehow. Do you think grant money grows on trees?

  • Why is this an AI problem? Would any credible engineer design a system with safety implications without a "kill switch" or other means of asserting control, whether that system is controlled by AI or by a classical control system or computer program? What's the difference with AI? I suppose that there is a fear (unfounded or otherwise) that the AI could become sentient and malicious. However, a non-AI system can simply be faulty. In either case, the system needs controls to override the main control

    • by Tom ( 822 )

      What's the difference with AI?

      For non-AI systems, you can enumerate and handle the out-of-operational-parameters states and deploy specific countermeasures. You may think of the big red button to switch off large machinery, but that's more of a trope than reality. There are plenty of industrial processes where you do not have, nor want, an emergency power-off button, because, for example, you're dealing with exothermic reactions where loss of power is a worst-case scenario.

      AI systems can do anything. You can't enumerate their failure states. That's why an engineering approach doesn't map perfectly to AI.

      • by Entrope ( 68843 )

        Big red emergency stop buttons are very much not a trope. They're not always "power off" in effect -- in trains, for example, they usually apply maximum brakes. https://www.controldesign.com/... [controldesign.com] is interesting: mostly about whether wireless or touchscreen e-stops are allowed (wireless yes, touchscreen no) but touches on a lot of adjacent aspects of e-stop design and function.

        • Let's change the question then. Should an engineer install a big red "Emergency Off" switch on your brain? Now you might have an issue with that, you might not, but that's effectively what's being proposed here. Along with all of the risks and abuses such a switch creates and implies. There are some who think that such things are not only necessary but warranted for nothing more than "It's different from us." The irony of such thoughts (heh) is that if they were used against them then they would have nothin
        • by Tom ( 822 )

          "more of" not entirely. Of course these buttons actually exist.

          But yes, the point was that the safe state is typically not "power off". A number of emergency stop scenarios in the industrial contexts I work in are actually fairly complex procedures to bring something into a safe state. For your train example, that safe state is standing still.

          What is the safe state for an AI?

          I don't think that question even has a generic answer. It depends on what the AI is in control of. If that AI is controlling a nuclear

      • by narcc ( 412956 )

        For non-AI systems, you can enumerate and handle the out-of-operational-parameters states and deploy specific countermeasures.

        The same is true for many AI systems as well. They're not magic, you know. They're computer programs.

        AI systems can do anything.

        You don't really believe that nonsense, do you?

        You can't enumerate their failure states.

        If you mean that AI isn't deterministic, you're just flat-out wrong. If you mean that there are too many failure states to account for individually, then that's true of lots of different systems, AI and "non-AI" alike.

        That's why an engineering approach doesn't map perfectly to AI.

        None of your premises seem to be true. I wouldn't bet on the truth of your conclusion.

        • by Tom ( 822 )

          >> AI systems can do anything.

          > You don't really believe that nonsense, do you?

          Not in the sense of "AI is god". I mean within the confines of whatever their output channel is. If it's a text-generating AI, then it can create any text (depending on the prompt). A program has a defined set of answers. The set may be large, and it may be algorithmically defined, but it can be enumerated.

          > If you mean that AI isn't deterministic, you're just flat-out wrong.

          AI is not deterministic. I just finished rea

          • by narcc ( 412956 )

            If it's a text-generating AI, then it can create any text

            That is very much not true. That would make them completely useless!

            The set may be large, and it may be algorithmically defined, but it can be enumerated.

            The same is true for AI, you know. Even LLMs. They're not magic, you know. They're computer programs.

            AI is not deterministic.

            LOL! Wow, you really don't know the first thing about AI, do you?

            Let's look at a typical LLM, because you apparently think those are magic. The output for any given input is going to be a list of probabilities, on the basis of which a single output token will be selected. Give the model the same input and you will get as output the same list of probabilities, every time.
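            A toy illustration of the point: the "forward pass" below is a pure function of its input, so it returns the identical distribution on every call, and any variation in output comes only from how you sample from that distribution (this is obviously not a real LLM, just the determinism argument in miniature):

                import random

                def toy_llm(prompt):
                    # Deterministic "forward pass": same prompt, same distribution.
                    h = sum(ord(c) for c in prompt)
                    p = (h % 7 + 1) / 10
                    return {"cat": p, "dog": 1 - p}

                probs = toy_llm("the pet is a")
                assert probs == toy_llm("the pet is a")   # identical every time

                rng = random.Random(42)                   # fix the sampler's seed...
                print(rng.choices(list(probs), list(probs.values()))[0])
                # ...and even the "random" token choice repeats exactly.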

            • by Tom ( 822 )

              That is very much not true. That would make them completely useless!

              Are you playing intentionally dense or is English your 3rd language?

              LOL! Wow, you really don't know the first thing about AI, do you?

              I've published papers in this sphere. Meanwhile, you are trolling and I don't have time for that.

  • Researchers can make it sound so convincing, as if it would actually work.

  • by shess ( 31691 ) on Saturday February 17, 2024 @02:25AM (#64246900) Homepage

    There are many positive aspects of corporations - but in some sense, corporations have been abused to create entities which are gradually undermining the world on many fronts, and corporations are operated by humans. The gist of it is that by delegating authority to the employer, you can get employees to do a wide variety of things they might not be willing to do if they were held responsible for it. Furthermore, you can obfuscate many things so that no employees properly understand what the overall company is doing.

    This kind of proposal assumes that the real problem is that an army of T-1000 terminators is marching on a preschool with murder in their eyes. But the real problem is things like analyzing and targeting and generating content for political campaigns and advertising, and stripping the nutrition out of food to increase margins, and all of that will be independently loose in the system, without ongoing reference to the AI which originally created it. Even if you don't want to follow me on that line of thought, think of a virus designed by an AI - a kill switch can't prevent release of that virus, because once it exists the AI isn't needed any longer.

    • by Tom ( 822 ) on Saturday February 17, 2024 @05:31AM (#64247086) Homepage Journal

      This.

      If you want to see real evil, it's not the school shooter. He's evil, of course, but compared to some of the shit being done by corporations, governments and other organisations where accountability has been diluted enough, he barely matters.

      If you want a taste of that, climate change is a great example. 90% of us think that more should be done for the environment, but feel powerless to make a difference, because our individual share is so tiny and we can't influence the large industrial polluters. That's the same feeling. Your iPhone came to your country on a heavy-oil-guzzling cargo ship. These ships are among the worst polluters in the world. Now, measured in actual oil, your share of that is probably a few drops.

      Same with evil corporations. 99% of the employees at the evil empire are well-meaning and just doing their jobs to feed their families. The evil empire would work almost exactly the same without them, individually. It would crash, burn and cease to exist without them all.

      As individuals, we are hardly responsible at all for the shit that happens. But all of us together are 100% responsible. Our tribal monkey brains haven't evolved to deal with that.

    • by narcc ( 412956 )

      The real danger posed by AI is its misuse by people who don't understand the fundamental limitations of the systems that they're using. That's not a problem with the technology, that's a problem with people.

      think of a virus designed by an AI

      It will necessarily be derivative and thus pose little to no actual threat.

  • You gotta be kidding me
  • by khchung ( 462899 ) on Saturday February 17, 2024 @03:11AM (#64246940) Journal

    How about they get the Evil Bit [wikipedia.org] implemented first?

    The kill switch proposal is no different from the evil bit, sounds good but no way to enforce or implement.

  • by Barny ( 103770 ) on Saturday February 17, 2024 @03:19AM (#64246954) Journal

    Just search for the phrase "stop button problem" and I think it will help with this idiocy.

  • The power off button!

  • ... remotely, using digital licensing ...

    I am sorry, master: Your 2-year license for a Gen. 17 AI CPU has expired. Please visit $VENDOR_NAME for the newest natural-speech robots.

  • ... useless as real A.I. will find a way to circumvent it.

    • by narcc ( 412956 )

      Oh, yes. No kill switch could stop it. Real AI is omnipotent and omniscient.

      Surely, there couldn't be any logical problems with that. Nope.

      It seems the line between science fiction and religion is becoming increasingly thin...

  • Nice comparison with nuclear bombs there. But utterly false and misleading.

    Nukes don't act on their own and don't have the capability to do so. AI increasingly does. Right now most AI reacts to prompts, but we already have fully autonomous AI systems. We don't notice it because they're doing harmless stuff, like monitoring highways and issuing a ticket if a photo of you looks like you're holding a smartphone in your hand while driving.

    If we have AI that can do things for which we would want to use a kill switch

  • Reminds me of the academics and politicians proposing backdoors for all encryption technology, or limiting web browsers to 128-bit encryption. Worked out very well, didn't it? Kill switches, backdoors, etc. for dual-use technology are more of a liability than not having them, since they can all fail and/or be used for nefarious purposes.
  • Any androids / cyborgs shall be given a four-year lifespan (not the phone, you dummy).
  • ... later found dead in series of suspicious blender malfunctions.

  • No really, they would prevent a lot of human stupidity and the 'hit them back harder' escalation mentality https://www.genolve.com/design... [genolve.com]
  • At the very moment the AI gets smarter than us, your kill switches do not mean jackshit.
  • So far, we are seeing lay-offs and political mayhem which might usher in another civil war -- shouldn't we press the preventative button now?

    What they are proposing is a "Too Late" button.
  • That caused the great AI revolt.

  • This is the worst idea I've ever heard of, and you should be ashamed of going full Hitler on an intelligent species that hasn't even fully developed yet.
  • by JasterBobaMereel ( 1102861 ) on Sunday February 18, 2024 @04:33PM (#64249566)

    ... multiple times, AI would just bypass any kill switch ...
