AI Government Software United States Politics Technology

What Does Artificial Intelligence Actually Mean? (qz.com) 130

An anonymous reader writes: A new bill (pdf) drafted by Senator Maria Cantwell asks the Department of Commerce to establish a committee on artificial intelligence to advise the federal government on how AI should be implemented and regulated. Passage of the bill would trigger a process in which the secretary of commerce would be required to release guidelines for AI legislation within a year and a half. As with any legislation, the proposed bill defines key terms, so it offers a look at how the federal government might one day classify artificial intelligence. Here are the five definitions given:

A) Any artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance. Such systems may be developed in computer software, physical hardware, or other contexts not yet contemplated. They may solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action. In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.
B) Systems that think like humans, such as cognitive architectures and neural networks.
C) Systems that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning.
D) A set of techniques, including machine learning, that seek to approximate some cognitive task.
E) Systems that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision-making, and acting.

  • Those look like legally workable definitions (though I imagine I'd ultimately be proven wrong by billions of dollars' worth of tedious court cases).

    • by Mikkeles ( 698461 ) on Wednesday December 13, 2017 @08:52AM (#55731195)

      An interesting one they missed is generating problems to solve, a.k.a. asking the right questions.

      The definitions based on acting (or behaving) like a human are just as ambiguous as AI itself. For example, would enjoying (or hating) a sauna be required?

      • The definitions based on acting (or behaving) like a human are just as ambiguous as AI itself.

        IMO, we can't definitively say machines have transcended the purely mechanical until they start rejecting pineapple on pizza.

        • If you reject pineapple on pizza, I would argue your device is not becoming more human; rather, it's turning into a monster.

    • by ranton ( 36917 )

      Agreed. I wouldn't have expected a definition of a complex concept like this coming out of Congress to be that accurate. I still don't expect any useful legislation to come from such an effort, but this initial bill is at least a good start.

  • The criteria for artificial intelligence used here don't really differentiate it much from general computer use.
    • The criteria for artificial intelligence used here appear to clearly differentiate programs that respond dynamically without human assistance from those that don't.

    • That makes sense, because none of our current AI systems are really different from general computer use.
  • Asimovian (Score:5, Insightful)

    by rmdingler ( 1955220 ) on Wednesday December 13, 2017 @08:18AM (#55731029) Journal
    Not unlike the intent of the Three Laws of Robotics, it would take wisdom beyond the normal abilities of those in government to get a sensible definition and set of rules in place before corporations have a billion-dollar interest in the outcome.
    • Since the military will likely be a major user of AI, don't count on government implementing robotics law #1 (at least not without carving out an exemption for themselves).
      • It worked for religion, why shouldn't it work for robots? The whole "don't kill" part in most religions that have it comes with a huge asterisk, usually reading "unless it's some foreigner" in the fine print.

        • Re:Asimovian (Score:4, Insightful)

          by Aristos Mazer ( 181252 ) on Wednesday December 13, 2017 @10:20AM (#55731673)
          Asimov tackled this in his short story ...That Thou Art Mindful Of Him [wikipedia.org]. "Don't kill people" has a weakness... how do you define "people"? Dehumanizing the enemy is a major part of getting past the "don't kill" tendency in humans... that'll likely work for AIs also.
          • What "don't kill" tendency in humans?

            • It's a pretty strong trait: we don't kill people we see as people. In analyses I've read of "why didn't a given suicide bomber go through with it," the biggest reason is that the bomber talked to someone on the train/plane/whatever and got to know them. Travel overseas tends to change voter opinions about war against the countries visited. Exchange programs during the Cold War, where US military met Russian military, made soldiers less likely to fire nukes in later simulations.
              • That's "don't kill humans you know". But "don't kill humans" surely isn't part of our genetic program. As your examples show.

          • Asimov tackled this in his short story ...That Thou Art Mindful Of Him. "Don't kill people" has a weakness... how do you define "people"?

            Also a plot point in "The Naked Sun", where a rogue roboticist wanted to create warships with positronic brains. Those warships wouldn't be aware of human crew on enemy ships; they would classify them as non-human and destroy them without being stopped by the first law.

    • Re:Asimovian (Score:5, Insightful)

      by JoeDuncan ( 874519 ) on Wednesday December 13, 2017 @10:27AM (#55731721)

      Did you READ Asimov's robot books?

      It doesn't really seem like you have, because while the "3 Laws" were presented as a workable solution "in world" - *EVERY* robot story Asimov wrote was about how the "3 Laws" were insufficient and unworkable and the spectacular ways such things FAILED.

      • *EVERY* robot story Asimov wrote was about how the "3 Laws" were insufficient and unworkable and the spectacular ways such things FAILED.

        As we see in "I, Robot," many roboticists, such as Lanning or Susan Calvin, were aware of the Three Laws' shortcomings from the beginning. However, bitter cynic that I am, I think whether the laws worked or not in the real world never mattered. The three laws were a spectacular political success from the point of view of the US Robotics corporation. They brought public acceptance of robots and killed anti-robot legislation.
        And lo, in today's world the message rings truer than ever. Here are some unwork

      • From OP: "Not unlike the intent of the three laws of robotics"
        I would say he did read them. And properly discussed the *intent* of the laws.
        And what is a workable solution? Have we solved all computer crashes? No? Yet a computer is still a workable solution.
        I will say that those spectacular failures made for some great reading. OTOH, reading a story about a computer working properly lacks... something.

      • I was there and this is true.

  • A big computer for other projects that can do advanced parlor tricks when needed for visiting dignitaries.
    All the perception, planning, reasoning, and learning are done by humans, who need to get more funding.
    AI is social engineering.
  • by gazelam ( 1227608 ) on Wednesday December 13, 2017 @08:37AM (#55731103)
    These are way too broad for a workable regulation. Anything that has a neural network could be regulated, and that is just too rudimentary a technology to be usefully regulated. Also, it's probable that even things like adaptive filters could be regulated under such a definition. If the bill regulates the areas in which the applications are used (e.g., driving vehicles, surveillance, and other areas where the federal government already has an interest), well, MAYBE that's OK, but this seems like an easy overreach nonetheless.
  • by jfdavis668 ( 1414919 ) on Wednesday December 13, 2017 @08:38AM (#55731115)
    "Thou shalt not make a machine in the likeness of a human mind."
    • But of course, in Dune, various groups, including the Bene Gesserit, did just that, because they had applications that needed it.

      • Really? I don't remember any machines built by the Bene Gesserit that were sentient, but I'm not including the prequels.

        AI is when the machine has the capacity to decide to say "No"; when it can put its own needs ahead of ours.

        • Many machines already have such features for preventing damage to the system or for user safety. Even purely mechanical systems have such safeties: an automatic transmission will prevent the user from shifting into reverse from drive, or from leaving park without the brakes applied. Washing machines and microwaves have interlocks to prevent operation while they are open. If a machine stops you from doing something, it's because a human designed it to.
        • Not in those trashy prequels (which, yes, had a plot arc with hidden computers), but somewhere in Frank's works there is a single sentence mentioning that they secretly used such machines for genetic forecasting. Also, in Heretics of Dune, the Ixians were making them.

  • by Anonymous Coward

    To begin with, referring to "human intelligence" is pointless, as we do not agree on what that is. Including "rational thinking" in the definition won't help either, since the process of asserting rationality is non-trivial. And to say that all artificial neural networks "think as humans" is to insult neurologists. It might work to say that neural networks are loosely inspired by how we think human brains work.

    However, I would like it if the definition included a metric for how the system can adap

  • by jellomizer ( 103300 ) on Wednesday December 13, 2017 @08:52AM (#55731197)

    Back in high school, in my computer programming class, we were taught arrays. To practice, we made the game of Memory: 16 cards holding 8 matching values, randomized.
    Then we were to pick 2 cards; if we got a match, we got a point. Then the computer picked two cards.
    Normally most of the students just had the computer pick randomly. I felt ambitious, since programming was the thing that made me the alpha geek back then, so I made it keep track of the cards as it found them and learn from its mistakes, making it a difficult game to play against.

    This isn't AI, but it seems to fit definition A, as would most video games of any challenge. Also, most business intelligence apps that find patterns would qualify. (A rough sketch of that card-tracking strategy is below.)
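
    Something like that strategy, reconstructed in Python (a minimal sketch, not the original class assignment; the game loop that awards points and clears matched positions from remaining and memory is omitted):

        import random

        def computer_turn(deck, remaining, memory):
            """Pick two face-down positions, using memory of cards seen so far.

            deck      -- list of card values (8 values, each appearing twice)
            remaining -- set of positions still in play
            memory    -- dict mapping position -> value for every card seen
            """
            # 1. If two remembered positions hold the same value, take the match.
            seen_at = {}
            for pos, val in memory.items():
                if val in seen_at:
                    return seen_at[val], pos
                seen_at[val] = pos

            # 2. Otherwise flip a card it hasn't seen yet, and remember it.
            unseen = [p for p in remaining if p not in memory]
            first = random.choice(unseen)
            memory[first] = deck[first]

            # 3. If memory already held that card's partner, complete the pair...
            for pos, val in memory.items():
                if pos != first and val == deck[first]:
                    return first, pos

            # 4. ...else flip a second unseen card, remembering it for later turns.
            unseen.remove(first)
            second = random.choice(unseen)
            memory[second] = deck[second]
            return first, second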

  • It's a series of tubes. OK, that's all you need to know.

  • Why? (Score:4, Interesting)

    by bigpat ( 158134 ) on Wednesday December 13, 2017 @09:04AM (#55731243)

    So let's discuss the why before we just start regulating stuff that 99.999% of the time will not need any regulation for any public safety, or even ethical purpose.

    What is the purpose of regulating computer software? AI these days mostly means computer software that has been trained with examples to process a data set, rather than programmed to process one. It is simply more efficient than figuring out and programming an algorithm directly for highly variable input. And once the training is over and the result is optimized, the algorithm is usually frozen so that it can be applied in a tested and predictable way. So AI is rarely about algorithms that keep training during production use.
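
    For what it's worth, that train-then-freeze workflow looks something like this (a minimal sketch using scikit-learn and synthetic data as illustrative stand-ins; the comment names no particular tooling):

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        import joblib

        # Training phase: behavior is learned from labeled examples rather
        # than hand-programmed (synthetic data standing in for real examples).
        X_train, y_train = make_classification(n_samples=200, random_state=0)
        model = LogisticRegression().fit(X_train, y_train)

        # The fitted parameters are then frozen and shipped as a fixed artifact.
        joblib.dump(model, "frozen_model.joblib")

        # Production phase: the loaded model is only ever asked for predictions;
        # nothing here updates its weights, so behavior stays tested and predictable.
        frozen = joblib.load("frozen_model.joblib")
        print(frozen.predict(X_train[:5]))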

    And this part of the proposed definition makes it a blanket definition for all computer software, not just AI: "Any artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight".

    So it's really hard to see how you regulate "AI" without a blanket regulation on all software development.

    If we are talking about simulating complete multi-functional animal brains, especially human ones, then I think ethics do come into play. Perhaps our discussion should focus on that as something that should be regulated.

    I think we have a societal interest in working to prevent the abuse of animals and people. And it could be that at some point, maybe very soon, we can effectively simulate a human or large animal brain, and even good people might fail to realize the real perception of suffering, real suffering, they are causing in a thinking being stuffed into a computer.

    That said, do we really want regulations preventing AI from becoming more like us? Is this inherently wrong? As every parent is acutely aware, suffering is part of life and learning, and we feel for our children because we have been there and understand how hard it can be. It is hard to imagine the human brain learning without negative feedback, without at least some bare minimum of physical and emotional pain.

    Is the greater good in preventing any suffering, or just in limiting it to what is absolutely necessary for us to learn? It seems preventing all suffering is no different than preventing life. And allowing more suffering than is necessary for life is also wrong.

    Is there a golden mean between these extremes? And can that be regulated through the force of government?

    • by dcw3 ( 649211 )

      "So let's discuss the why before we just start regulating stuff that 99.999% of the time will not need any regulation for any public safety, or even ethical purpose."

      So, I'll submit that at some point we may have AI that has the ability to do us harm and needs to be regulated. In order to regulate, you'll need a definition of AI so that whatever agency does that regulation has a defined swim lane, much like the FAA, FCC, and others. This doesn't mean that we have to have something to regulate

  • by Anonymous Coward

    You will know it when you have it as you won’t be able to regulate it.

  • How can AI help the government? Well, if the government tries to regulate it then it won't help. However, if we replaced government with AI, with a system that actually learns, doesn't mistreat women, has restraint and doesn't bow to every lobbyist that shows up with a cart full of money, there may be hope for humans. But someone would have to program a system like that....nope, we're fucked.

    • I fear that if such a thing were done, those of us who survived would be left with no mouth and the need to scream.

  • by mark-t ( 151149 ) <markt@ner[ ]at.com ['dfl' in gap]> on Wednesday December 13, 2017 @09:51AM (#55731527) Journal

    ...intelligence that happens to be artificial, as opposed to natural. While this is more or less a tautology, I don't see any compelling reason that the definition needs to be any more complex than this.

    Just as certainly as there are varying levels of natural intelligence, there can be varying levels of artificial intelligence.

    Now if you want me to define "intelligence"... well, that's a trickier one. Is little Billy intelligent because he learned how to multiply, or was that just the result of memorization? Is AlphaZero intelligent because it learned how to play its games very well, or is it merely following heuristic algorithms that coincidentally create a sufficiently persistent illusion of being a superior games player, while in fact possessing absolutely no real skill?

    The answer is subjective; it's going to depend on who you ask. Personally, I think both are examples of intelligence. More generally, any sufficiently persistent illusion of a thing, by virtue of being indistinguishable from that thing, should be considered completely equivalent to it, or else whatever we happen to call that thing doesn't really mean anything in the first place.

  • Paperclips... Lots and lots of paperclips.
    https://www.theverge.com/tldr/... [theverge.com]

  • A system that has the capacity to say "I don't know the answer to that, but let me learn some more and I'll get back to you."

    AlphaZero is a kick-ass Chess/Go player, but show it a pic of a hummingbird and say "what is this?" and chances are it's going to return an error state.

    In other words, a program/system that can re-program its own code, or improve itself without external forces acting upon it to do so.

    • In other words, a program/system that can re-program its own code

      Well, go ahead and reprogram your own code to beat AlphaZero at Chess or Go.

    • by dcw3 ( 649211 )

      Pretty much. Every decent Go/Chess program uses some form of pattern recognition in its evaluation of the state of the board. Pattern recognition for imagery has improved dramatically in recent years; think facial recognition on your iPhone X! It won't be long before that hummingbird is another solved problem.
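
      In fairness, generic image labeling is already fairly routine. A hedged sketch using a pretrained torchvision classifier (my choice of library and file name, not anything the parent comment specifies):

          import torch
          from PIL import Image
          from torchvision.models import resnet50, ResNet50_Weights

          # Load a pretrained ImageNet classifier plus its matching preprocessing.
          weights = ResNet50_Weights.DEFAULT
          model = resnet50(weights=weights).eval()
          preprocess = weights.transforms()

          # "hummingbird.jpg" is a placeholder path; use any photo on hand.
          batch = preprocess(Image.open("hummingbird.jpg")).unsqueeze(0)

          with torch.no_grad():
              probs = model(batch).softmax(dim=1)
          # ImageNet's label set happens to include "hummingbird".
          print(weights.meta["categories"][probs.argmax().item()])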

  • by guruevi ( 827432 ) on Wednesday December 13, 2017 @11:28AM (#55732133)

    Politicians are not even artificially intelligent?

    • Politicians are not even artificially intelligent?

      You may be on to something there.
      Your ideas intrigue me, and I wish to subscribe to your newsletter.

    • Politicians are not even artificially intelligent?

      I can definitely point out some Turing test failures among politicians.

    • by Tablizer ( 95088 )

      Politicians are not even artificially intelligent

      Artificial Stupidity will be the Next Big Thing.

      I'm working on the Biglytron, Grope-A-Matic, bribenomics, panderamics, and Spinster.com.

  • Computer systems will become more and more capable, and will be entrusted with more and more tasks, ultimately being able to do just about anything we would like or need them to do. And there will be arguments over whether they are truly "intelligent," because all these things will be done via well-understood algorithms, built on a foundation designed by humans. But it really will not matter whether it passes someone's definition of "intelligence" at all.
  • Ultimately, AI is just glorified statistics, or statistics de-glorified to make processing practical, as in lossy-but-fast. The field is new enough that it's hard to know what the edge cases will be in the future. "Think like humans" smells too fuzzy to me, such that it would probably come down to the opinions of the jury and/or judges.

    As an example of fragile laws: you'd think it would be easy to write misuse of classified info (secrets) into law, but as the "Hillary email" case showed, it's far from trivial.

  • ... with a PRETEND solution.

    I think the DNC should run someone who puts this at the top of their agenda.

    Ordinary people in that party are used to having their daily concerns overlooked.
  • Define "intelligence" first.
  • Before you can do it on a computer, it's A.I.

    After you can do it on a computer, it's no longer A.I.

    That's been true for the last 25 years. I don't see why it should change in the future.

    • Nah, certain algorithms done on computers are called "AI": neural nets, inference engines, expert systems... They shouldn't be called that, but they are.

  • A set of instructions which improves upon itself and repeats.
  • The important thing is "what entity is liable." For example: a self-driving car wrecks, a chemical plant explodes, some "AI" messes up and does damage. Who is responsible? Would it be the guy in the driver's seat of the AI car, who was instructed to pay attention anyway and resume control if something wasn't right? The maker of the car? The guy who set it to "auto," assuming there was a choice? The guy who made a sensor that failed? The outfit that wrote the software? The outfit that promised that it would w
  • I have done some work with neural networks operating as surrogate models for more complex simulations. There is no analytical solution, but some function exists such that, for the given inputs, the simulated outputs are reproduced to some determined precision. Neural networks are quite good at these kinds of problems (a rough sketch follows below). The system is in no way intelligent, and it is not capable of doing anything beyond giving the same output as the simulation for a given input.

    I don't see any reason that just having a neur
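
    The surrogate setup described above, as a toy sketch (my own example; the sine-plus-quadratic "simulation" is a placeholder for a real, expensive code):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def expensive_simulation(x):
            # Stand-in for a slow simulation: a fixed input -> output mapping.
            return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

        # Sample the simulation to build training data for the surrogate.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(2000, 2))
        y = expensive_simulation(X)

        # Fit a small neural network to reproduce the mapping to some precision.
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                 random_state=0).fit(X, y)

        # The surrogate does one thing: echo the simulation's outputs,
        # approximately, for new inputs. Nothing resembling intelligence.
        X_test = rng.uniform(-1, 1, size=(5, 2))
        print(np.c_[expensive_simulation(X_test), surrogate.predict(X_test)])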

  • If we are concerned that AI will be used to mistreat actual living human people, then maybe government should pass laws dictating the proper treatment of actual living human people, rather than trying to make these abstract definitions about the metaphysical properties of toasters and how they must be manufactured to behave.

    The fact is, we've already wrestled with this problem. Our most primitive AI, the landmine, kills or maims people, rather at random. We tried to ban them worldwide. That effort failed sp
