What Does Artificial Intelligence Actually Mean? (qz.com)
An anonymous reader writes: A new bill (pdf) drafted by senator Maria Cantwell asks the Department of Commerce to establish a committee on artificial intelligence to advise the federal government on how AI should be implemented and regulated. Passage of the bill would trigger a process in which the secretary of commerce would be required to release guidelines for legislation of AI within a year and a half. As with any legislation, the proposed bill defines key terms. In it, we get a look at how the federal government might one day classify artificial intelligence. Here are the five definitions given:
A) Any artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance. Such systems may be developed in computer software, physical hardware, or other contexts not yet contemplated. They may solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action. In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.
B) Systems that think like humans, such as cognitive architectures and neural networks.
C) Systems that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning.
D) A set of techniques, including machine learning, that seek to approximate some cognitive task.
E) Systems that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision-making, and acting.
Honestly... that doesn't look too bad (Score:2)
Those look like legally workable definitions, (though I imagine I'd ultimately be proven wrong by billions of dollars' worth of tedious court cases).
Re:Honestly... that doesn't look too bad (Score:4, Insightful)
>Forgot "systems that are self aware"
I don't think so - we can't prove that in humans, so it's not really a useful definition.
Self aware (Score:4, Interesting)
IMHO, self-aware just means being able to examine some internal states and store or report that information.
"Thermostat, what's the current temperature?" :: "72 degrees" - that's self-aware. It just isn't much of a self.
Even adding "able to modify internal states based on examining them" is something more than self-awareness.
Self-aware is not the same as "conscious." Consciousness implies assigning internally conceived meaning to, and abstract manipulation of, such states.
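That minimal sense of "self-aware" is small enough to sketch in a few lines. The code below is my own toy illustration (the class and names are invented, not from the thread): an object that can examine and report its internal state meets the bare-bones definition, with very little "self" to speak of.

```python
# A toy sketch of the minimal sense of "self-aware" described above:
# a thermostat that can examine and report its own internal state.
class Thermostat:
    def __init__(self, temperature_f=72):
        self.temperature_f = temperature_f  # the internal state

    def report(self):
        # Examining and reporting internal state: "self-aware" in the
        # narrow sense, though there is very little "self" to be aware of.
        return f"{self.temperature_f} degrees"

print(Thermostat().report())
```

Anything beyond this, such as modifying its own state after examining it, is already a step past bare self-awareness, which is the point being made above.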
Re:Self aware (Score:4)
When I was back in college for my CS degree, I wrote a paper about AI. Granted, this was 25 years ago. I should dig out that paper, update it, and try to publish it.
Anyway, I divided AI into three broad categories.
SI, or simulated intelligence. My model at the time for this was the Eliza program in Emacs. While it is very limited, it is a simulated intelligence. A modern version would be Siri on iPhones and the Google equivalent. No one will mistake these for a living person, and they do not think; they just answer with what is programmed into them. But as time advances that logic will become more complex. Eventually, I believe, you will not be able to discern the difference between them and people. But even if they become that complex, they are still nothing more than a glorified if/then/else statement, not really a thinking machine. This type of AI is possible.
AI, or artificial intelligence. This would be a truly thinking machine. It would be capable of looking at evidence or data and making complex decisions based on its programming. This is the kind of AI people think of when they think of Skynet. I assume we wouldn't be that stupid, though, and would program in some limitations on what it could and couldn't do. This type of AI is possible.
SAI, or sentient artificial intelligence. This would be a living machine. It would be capable of making decisions based on evidence and data. Using such data, it would be capable of predicting the future and planning for it. It would be capable of altering its programming and ignoring any limitations put on it by external events and conditions. Think Data from Star Trek. This type of AI might be possible.
Three simple categories and I got a C on that paper.
Re: (Score:2)
FWIW, my comments, mostly because they lead to the last statement which I think is an important point (and a point where you are in error due to over caution):
SI - A flat flowchart with easily traceable origins for outcomes. We've already done this.
AI - A 3d flowchart where options on the Z axis can modify the z-origin xy plane. In anything but the simplest systems it quickly becomes nearly impossible for a human to understand how the output was arrived at from the inputs. We've made some fairly basic st
Re:Self aware (Score:4)
The flow charts are excellent ways to visualize what I was describing.
I'm going to respectfully disagree with part of your statement on SAI. We do not know if this is possible. We assume it's possible, and we have good evidence to do so. But since we really don't know the nature of our own sentience, how can we define another's? This is something theologians and philosophers have debated since time began. Is our consciousness a divine spark created by some all-powerful being, or an illusion caused by the random interactions of chemicals and enzymes? I suppose we will never really know if a machine is "alive" till one wakes up and says so.
Re: (Score:2)
> Is our consciousness a divine spark created by some all powerful being or an illusion caused by the random interactions of chemicals and enzymes?
You see a universe with supernatural elements, but there's no evidence for them. I'm going to stick with science over faith - science has a much better track record for modeling reality.
Re:Self aware (Score:4)
I see a universe ruled by science, governed by physical laws that we are capable of understanding. There is no such thing as the supernatural, just things we don't understand that some people want to attribute to the supernatural. It is either natural or it doesn't exist.
What I leave room for is something outside our current level of understanding. Whether you want to call that God, an alien intelligence, or the great system admin, with all of reality a computer simulation.
But some people still want to attribute consciousness to divinity. I see no reason to leave that option open, because while we both believe in science, we might be wrong. I just leave that possibility open.
Re:Self aware (Score:4)
I see no reason to leave that option open,
I should really learn to proofread better. What that line should say is "I see no reason to not leave that option open."
Re: (Score:2)
> Is our consciousness a divine spark created by some all powerful being or an illusion caused by the random interactions of chemicals and enzymes?
You see a universe with supernatural elements, but there's no evidence for them. I'm going to stick with science over faith - science has a much better track record for modeling reality.
Face it, you need both faith and science (or more broadly, reason.) Look, I'm not trying to be religious. I'm just saying that you need faith to get you through situations where you don't have data to "model reality." You need essentially no faith to be sure that the sun will rise tomorrow. You may need only a little faith when you cross the street and expect that the oncoming car will stop and not run you over. And you may need a lot if a loved one is seriously ill, and you are hoping for a good outcome.
Re: Self aware (Score:2)
ML-Machine Learning: Machine learning algorithms which can adapt, and thus make decisions or interpret data. The computer which can play "Go" is an apt example.
Sentient Computerized Intelligence: This is Skynet at its peak. Cortana in Halo?, Data, the EMH/Doctor, etc. These are self-aware, self-driven machines. They are more likely t
Re: (Score:3)
I like that. Replace AI with ML and move everything under the umbrella of AI.
Judging the nonexistent (Score:2)
You're claiming to define the limits of a prospective technology you've never laid hands upon, because it does not yet exist.
This is precisely like a caveman trying to explain the nature of the telephone.
Re: (Score:2)
Forgot "systems that are self aware"
That isn't really something current AI research deals with; it's more science fiction at this point. The problems and ethical concerns we have regarding current AI technologies are very different from the concerns that would come about when we have actually self-aware artificial beings.
Re:Honestly... that doesn't look too bad (Score:4, Insightful)
An interesting one they missed is generating problems to solve; aka asking the right questions.
The definitions based on acting (or behaving) like a human are just as ambiguous as AI itself. For example, would enjoying (or hating) a sauna be required?
Re: (Score:2)
The definitions based on acting (or behaving) like a human are just as ambiguous as AI itself.
IMO, we can't definitely say machines have transcended the purely mechanical until they start rejecting pineapple on pizza.
Re: (Score:2)
If you reject pineapple on pizza, I would argue your device is not becoming more human; rather, it's turning into a monster.
Re: (Score:2)
Agreed. I wouldn't have expected a definition of a complex concept like this coming out of Congress to be that accurate. I still don't expect any useful legislation to come from such an effort, but this initial bill is at least a good start.
Uh oh (Score:2)
Re: (Score:2)
The criteria for artificial intelligence used here appear to clearly differentiate programs that respond dynamically without human assistance from those that don't.
Asimovian (Score:5, Insightful)
Re: (Score:3)
It worked for religion, why shouldn't it work for robots? The whole "don't kill" part in most religions that have it comes with a huge asterisk, usually reading "unless it's some foreigner" in the fine print.
Re: (Score:2)
What "don't kill" tendency in humans?
Re: (Score:2)
That's "don't kill humans you know". But "don't kill humans" surely isn't part of our genetic program. As your examples show.
Re: (Score:2)
Asimov tackled this in his short story ...That Thou Art Mindful Of Him. "Don't kill people" has a weakness... how do you define "people"?
Also a plot point in "The Naked Sun", where a rogue roboticist wanted to create warships with positronic brains. Those warships wouldn't be aware of human crew on enemy ships; they would classify them as non-human and destroy them without being stopped by the first law.
Re:Asimovian (Score:5, Insightful)
Did you READ Asimov's robot books?
It doesn't really seem like you have, because while the "3 Laws" were presented as a workable solution "in world" - *EVERY* robot story Asimov wrote was about how the "3 Laws" were insufficient and unworkable and the spectacular ways such things FAILED.
Re: (Score:2)
*EVERY* robot story Asimov wrote was about how the "3 Laws" were insufficient and unworkable and the spectacular ways such things FAILED.
As we see in "I, Robot", many roboticists, such as Lanning and Susan Calvin, were aware of the three laws' shortcomings from the beginning. However, bitter cynic that I am, I think whether the laws worked or not in the real world never mattered. The three laws were a spectacular political success from the point of view of the US Robotics corporation. They brought public acceptance of robots and killed anti-robot legislation.
And lo, in today's world the message rings truer than ever. Here are some unwork
Re: (Score:2)
That's because they needed 4 Laws.
Re: (Score:2)
From OP: "Not unlike the intent of the three laws of robotics"
I would say he did read them. And properly discussed the *intent* of the laws.
And what is a workable solution? Have we solved all computer crashes? No? Yet a computer is still a workable solution.
I will say that those spectacular failures made for some great reading. OTOH, reading a story about a computer working properly lacks...something.
Re: Asimovian (Score:2)
I was there and this is true.
Re: (Score:3)
we should just ask Alexa.
I can't, I'll have to get my kids or my wife to ask Alexa. Alexa still doesn't recognise a British accent (at least not in the US). If I ask Alexa what AI is, I'll probably get a pizza delivered.
Re: (Score:2)
we should just ask Alexa.
I can't, I'll have to get my kids or my wife to ask Alexa. Alexa still doesn't recognise a British accent (at least not in the US). If I ask Alexa what AI is, I'll probably get a pizza delivered.
That's because Alexa only speaks English.
Re: To find out, (Score:4, Funny)
This will only lead to a lot of people learning to speak with a Scottish accent.
Getting funding (Score:2)
All the perception, planning, reasoning, and learning is done by humans, who need to get more funding.
AI is social engineering.
Regulate Applications not Technology (Score:3, Insightful)
See O.C. Bible (Score:3)
Re: (Score:2)
but of course in Dune various groups including the Bene-Gesserit did just that because they had applications that needed it
Re: (Score:2)
Really? I don't remember any machines built by the Bene Gesserit that were sentient, but I'm not including the prequels.
AI is when the machine has the capacity to decide to say "No"; when it can put its own needs ahead of ours.
Re: (Score:2)
Not in those trashy prequels (which, yes, had a plot arc with hidden computers), but somewhere in Frank's works there is a single sentence mentioning that they secretly used such machines for genetic forecasting. Also, in Heretics of Dune, the Ixians were making them.
Completely useless definitions (Score:2, Interesting)
To begin with, referring to "human intelligence" is pointless, as we do not agree on what that is. Including "rational thinking" as part of the definition won't help either, since the process of asserting rationality is nontrivial. To say that all artificial neural networks "think as humans" is to insult neurologists. It might work to say that neural networks are loosely inspired by how we think human brains work.
However, I would like it if the definition included a metric for how the system can adap
There goes my High school game of memory program. (Score:4, Interesting)
Back in high school, in my computer programming class, we were taught arrays. To do this, we made the game of Memory: 16 cards with 8 matching values, randomized.
Then we picked 2 cards; if we got a match, we got a point. Then the computer picked two cards.
Normally most of the students just had the computer pick randomly. I felt ambitious, as programming was the thing that made me the alpha geek back then, so I made it keep track of the cards when it found them and learn from its mistakes, making it a difficult game to play.
This isn't AI, but it seems to fit definition A, as would most video games of any challenge. Also, most business intelligence apps that find patterns would qualify.
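For what it's worth, the card-tracking strategy described above can be sketched like this. This is my own reconstruction in Python (the original high school program presumably wasn't, and all the names here are invented): the computer remembers every card it has flipped and pairs them up from memory whenever it can.

```python
import random

class MemoryAI:
    """Sketch of the strategy described above: remember every card
    seen so far and pair cards up from memory when possible."""

    def __init__(self, num_cards=16):
        values = list(range(num_cards // 2)) * 2   # 8 matching pairs
        random.shuffle(values)
        self.cards = values        # the hidden board
        self.seen = {}             # position -> value, learned so far
        self.matched = set()

    def pick_pair(self):
        # If two remembered, unmatched positions share a value, take them.
        by_value = {}
        for pos, val in self.seen.items():
            if pos in self.matched:
                continue
            if val in by_value:
                return by_value[val], pos
            by_value[val] = pos
        # Otherwise flip an unseen card and check memory for its mate.
        unknown = [p for p in range(len(self.cards)) if p not in self.seen]
        first = random.choice(unknown)
        self.seen[first] = self.cards[first]
        mate = by_value.get(self.cards[first])
        if mate is not None:
            return mate, first
        second = random.choice([p for p in unknown if p != first])
        self.seen[second] = self.cards[second]
        return first, second

    def play_out(self):
        turns = 0
        while len(self.matched) < len(self.cards):
            a, b = self.pick_pair()
            if self.cards[a] == self.cards[b]:
                self.matched.update({a, b})
            turns += 1
        return turns
```

With perfect recall, every non-matching turn reveals two new cards, so a 16-card board is cleared in at most 16 turns, which is exactly why this opponent was hard to beat.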
Re: (Score:2)
Well, I was going to code it to figure out which slots the player tended to miss on, so that when there was a tie, it would pick ones the person was more likely to remember.
Since this is the government (Score:2)
It's a series of tubes OK, that's all you need to know.
Re: (Score:2)
Interesting perspective...
Re: (Score:2)
What does intelligence actually mean? ...and why is it so scarce?
It's all in your head. But not you specifically.
Why? (Score:4, Interesting)
So let's discuss the why before we just start regulating stuff that 99.999% of the time will not need any regulation for any public safety, or even ethical purpose.
What is the purpose of regulating computer software? AI in most cases these days means computer software that has been trained with examples to process a data set rather than programmed to process one. It is just more efficient than figuring it out and programming an algorithm directly for more variable input. And once the training is over and optimized the algorithm is usually frozen so that it can be applied in a tested and predictable way. So AI is rarely about algorithms that are trained during production use.
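The train-then-freeze point can be sketched in a few lines. This is my own toy example (nothing from the bill or the thread): parameters are adjusted during a training phase, then frozen, so that production inference is a fixed, repeatable function.

```python
# Sketch (my own): fit a 1-D linear model by gradient descent, then
# "freeze" it -- after training, inference is a fixed deterministic
# function, applied in a tested, predictable way.
def train(samples, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x      # gradient step on the weight
            b -= lr * err          # gradient step on the bias
    return w, b                    # the frozen parameters

# Training phase: learn y = 2x + 1 from examples.
data = [(x, 2 * x + 1) for x in range(-3, 4)]
w, b = train(data)

# Production phase: the model no longer learns; it just applies the
# frozen parameters to new inputs.
predict = lambda x: w * x + b
print(round(predict(10), 2))  # close to 21
```

The same shape holds for real systems: once training stops and the parameters are frozen, the deployed model is just a function, which is why regulating "AI" as a category is so hard to separate from regulating software in general.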
And this part of the proposed definition makes it a blanket definition for all computer software not just AI: "Any artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight".
So really hard to see how you regulate "AI" without a blanket regulation on all software development.
If we are talking about simulating complete multi-functional animal brains, especially human, then I think ethics do come into play. Perhaps our discussion should focus on that as something that should be regulated.
I think we have a societal interest in working to prevent the abuse of animals and people. And it could be that at some point, maybe very soon, we can effectively simulate a human or large-animal brain, and even good people might fail to realize the real suffering they are causing in a thinking being stuffed into a computer.
That said, do we really want regulations preventing AI from becoming more like us? Is this inherently wrong? As every parent is acutely aware suffering is part of life and learning and we feel for our children because we have been there and understand how hard it can be. It is hard to imagine the human brain learning without negative feedback, without at least some bare minimum of physical and emotional pain.
Is the greater good in preventing any suffering or just limiting it to what is absolutely necessary for us to learn? It seems preventing all suffering is no different than preventing life. And allowing suffering more than what is necessary for life is also wrong.
Is there a golden mean between these extremes? And can that be regulated through the force of government?
Re: (Score:2)
"So let's discuss the why before we just start regulating stuff that 99.999% of the time will not need any regulation for any public safety, or even ethical purpose."
So, I'll submit that at some point, we may have AI that has the ability to do us harm, and needs to be regulated. In order to regulate, you'll need a definition of AI so that whatever agency does that regulation, they'll have a defined swim lane, much like the FAA, FCC, and others. This doesn't mean that we have to have something to regulate
My constructive definition of AI (Score:1)
You will know it when you have it as you won’t be able to regulate it.
Re: (Score:2)
That war was fought and lost a long time ago. It was found that I type development was too dangerous for the common people, and legislated out of existence. Just think about how you were taught 'to think', then take a look at the people around you, and ask yourself, is any of this really necessary?
AI help (Score:2)
How can AI help the government? Well, if the government tries to regulate it then it won't help. However, if we replaced government with AI, with a system that actually learns, doesn't mistreat women, has restraint and doesn't bow to every lobbyist that shows up with a cart full of money, there may be hope for humans. But someone would have to program a system like that....nope, we're fucked.
Re: (Score:2)
I fear that if such a thing were done, those of us who survived would be stuck with no mouth and the need to scream.
I've always taken it to mean.... (Score:4, Insightful)
Just as certainly as there are varying levels of natural intelligence, there can be varying levels of artificial intelligence.
Now if you want me to define "intelligence".... well, there's a trickier one. Is little Billy intelligent because he learned how to multiply, or was that just the result of memorization? Is AlphaZero intelligent because it learned how to play its games very well, or is it merely following heuristic algorithms that coincidentally create a sufficiently persistent illusion of being a superior games player, while in fact possessing absolutely no real skill?
The answer is subjective... it's going to depend on who you ask. Personally, I think both are examples of intelligence. More generally, any sufficiently persistent illusion of a thing, by virtue of being indistinguishable from that thing, should be considered completely equivalent to it; otherwise, whatever we happen to call that thing doesn't really mean anything in the first place.
..grey goo? (Score:2)
Paperclips... Lots and lots of paperclips.
https://www.theverge.com/tldr/... [theverge.com]
AI (Score:2)
A system that has the capacity to say "I don't know the answer to that, but let me learn some more and I'll get back to you."
AlphaZero is a kick-ass Chess/Go player, but show it a pic of a hummingbird and say "what is this?" and chances are it's going to return an error state.
In other words, a program/system that can re-program its own code, or improve itself without external forces acting upon it to do so.
Re: (Score:2)
In other words, a program/system that can re-program its own code
Well, go ahead and reprogram your own code to beat AlphaZero at Chess or Go.
Re: (Score:2)
Pretty much, every decent Go/Chess program uses some form of pattern recognition in their evaluation of the state of every board. Pattern recognition for imagery has improved dramatically over recent years...think facial recognition on your iPhone X! It won't be long before that hummingbird will be another solved problem.
So by definition (Score:3)
Politicians are not even artificially intelligent?
Re: (Score:2)
Politicians are not even artificially intelligent?
You may be on to something there.
Your ideas intrigue me, and I wish to subscribe to your newsletter.
Re: (Score:2)
Politicians are not even artificially intelligent?
I can definitely point out some Turing test failures among politicians.
Re: (Score:1)
Artificial Stupidity will be the Next Big Thing.
I'm working on the Biglytron, Grope-A-Matic, bribenomics, panderamics, and Spinster.com.
Definitions don't really matter (Score:2)
Past lessons (Score:1)
Ultimately AI is just glorified statistics or statistics that are de-glorified to make processing practical, as in lossy-but-fast. The field is new enough that it's hard to know what the edge cases will be in the future. "Think like humans" smells too fuzzy to me such that it would probably come down to opinions of the jury and/or judges.
As an example of fragile laws, you think it would be easy to write misuse of classified info (secrets) into law, but as the "Hillary email" case showed, it's far from trivi
Re: (Score:1)
How does that directly relate to the law as written?
Do you have links or citations for this?
Again, citations or links? And how does one know if they are fully trained if they don
Re: (Score:1)
It doesn't clearly say who is at fault if the training is not given.
Re: (Score:1)
-> Executive orders have the force of law (http://www.thisnation.com/question/040.html)
-> Executive Order 13526—Classified National Security Information Memorandum of December 29, 2009—Implementation of the Executive Order ‘‘Classified National Security Information’’ Order of December 29, 2009—Original Classification Authority (https://foia.state.gov/_docs/MDR/135190.pdf)
- Sec. 1.3. Classification Authority. (a) The authority to clas
Another PRETEND problem solved ... (Score:1)
I think the DNC should run someone who puts this at the top of their agenda.
Ordinary people in that party are used to having their daily concerns overlooked.
The right question (Score:2)
Really quite simple (Score:1)
Before you can do it on a computer, it's A.I.
After you can do it on a computer, it's no longer A.I.
That's been true for the last 25 years. I don't see why it should change in the future.
Re: (Score:2)
Nah, certain algorithms are called "AI" that are done on computers. Neural nets, inference engines, expert systems.....they shouldn't be called that, but they are.
Re: (Score:1)
Hey,
Thanks for supporting my point! Woot!
A better definition (Score:1)
Wrong question, as usual (Score:2)
I don't see how this can work (Score:2)
I have done some work with neural networks operating as surrogate models for more complex simulations. There is no analytical solution, but some function exists such that, for the given inputs, it reproduces the simulated outputs to some determined precision. Neural networks are quite good at these kinds of problems. The system is in no way intelligent, and it is not capable of doing anything beyond giving the same output as the simulation for a given input.
I don't see any reason that just having a neur
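In the same spirit, here is a minimal surrogate-model sketch. It is entirely my own construction, and deliberately simplified: instead of a trained neural network it uses fixed random tanh features fit by linear least squares, but the point it illustrates is the same. The fitted surrogate is frozen, and it can only reproduce the simulation's input/output behaviour, nothing more.

```python
import math
import random

def simulation(x):
    # Stand-in for an expensive simulation run.
    return x ** 3 - x

def features(x, hidden):
    # A bias term plus fixed random tanh features of the input.
    return [1.0] + [math.tanh(w * x + b) for w, b in hidden]

def least_squares(A, y):
    # Solve the normal equations (A^T A) c = A^T y by Gauss-Jordan
    # elimination with partial pivoting.
    n = len(A[0])
    M = [[sum(row[i] * row[j] for row in A) for j in range(n)]
         + [sum(row[i] * yi for row, yi in zip(A, y))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

random.seed(0)
hidden = [(random.uniform(-3, 3), random.uniform(-1, 1)) for _ in range(8)]
xs = [i / 10 - 1 for i in range(21)]         # training inputs in [-1, 1]
X = [features(x, hidden) for x in xs]
coef = least_squares(X, [simulation(x) for x in xs])

def surrogate(x):
    # The frozen surrogate: a fixed function of the fitted coefficients.
    return sum(c * f for c, f in zip(coef, features(x, hidden)))
```

The surrogate mimics the simulation closely on its input range, yet there is clearly nothing "intelligent" in it to regulate, which is the point the parent is making.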
If we are concerned. . . (Score:2)
If we are concerned that AI will be used to mistreat actual living human people, then maybe government should pass laws dictating the proper treatment of actual living human people, rather than try to make these abstract definitions about the metaphysical properties of toasters and how they must be manufactured to behave.
The fact is - we've already wrestled with this problem. Our most primitive AI; the landmine. Kills or maims people. Rather at random. We tried to ban them worldwide. That effort failed sp