AI Businesses Government Politics

Artificial Intelligence Has 'Great Potential, But We Need To Steer Carefully,' LinkedIn Co-founder Says (cnbc.com) 73

LinkedIn co-founder Reid Hoffman joined other tech moguls in voicing concern about artificial intelligence on Wednesday. From a report: "It has great potential, but we need to steer carefully," Hoffman said on Halftime Report. Hoffman stressed corporate transparency when asked what happens if companies use AI to attack nation-states. The possibility of manipulating how people consume information remains an unanswered question. During last year's U.S. presidential election, Facebook advertisements linked to Russia mainly focused on the states of Michigan and Wisconsin, and Hoffman says information battles are "in the very early days." AI must be improved, Hoffman says, to "[hold] corporations accountable" when nation-states are using the technology to attack. "Corporations normally deal with other corporations, not with governments," Hoffman said. The "ultimate" solution, he says, is "having more kinds of functions and features within AI that show abhorrent patterns." That way patterns raise a red flag for humans to investigate, Hoffman noted.
This discussion has been archived. No new comments can be posted.

  • by xxxJonBoyxxx ( 565205 ) on Wednesday October 04, 2017 @03:48PM (#55311113)
    >> Facebook advertisements linked to Russia mainly focused on the states of Michigan and Wisconsin

    It was "specifically" (as in "some") rather than "mainly" according to TFA:
    http://www.cnn.com/2017/10/03/politics/russian-facebook-ads-michigan-wisconsin/index.html?sr=twCNN100317russian-facebook-ads-michigan-wisconsin0933PMStory

    Wasn't most political advertising aimed at the battleground states? Did those Facebook ads somehow keep someone from campaigning there?
  • ... about computer stuff.

    Oh, wait ... [cnn.com]

    Hackers selling 117 million LinkedIn passwords

  • Seems to me most people really confuse Automation and Artificial Intelligence.
    Automation has been growing for years and will grow even faster over the next 5-10 years, replacing many more jobs.
    Artificial Intelligence has been growing and will replace some jobs, but I think the real advances and breakthroughs are at least 5-10 years away, if not more.
    • AI is just a bigger form of automation.

      There was animal powered automation. Then steam powered automation. Then electrical grid powered automation. Still, they couldn't replace jobs requiring intelligence, such as rating someone's credit worthiness. AI is simply the next step of automation replacing workers.

      There was this coal miner. The coal mine shut down.
      So he retrained and became an assembly line worker. But the auto plant replaced him with robots.
      So he became a truck driver, because those
      • Naw, AI is just self-learning software.

        Plenty of automation is built these days using some sort of AI. A hand-crafted expert system (just a big-ass decision graph) would be automation without any AI. If you have an AI generate an expert system from a big-ass data set of medical records, that's AI helping automate away the job of doctors. Chess programs are almost exclusively made by an AI training some algorithm. Once you have that algorithm and play it against a human, that's automating the game of
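
        To make the distinction concrete, here is a minimal Python sketch (toy data, invented feature names, nothing from TFA) of a hand-written rule set next to one induced from example records:

        # Minimal sketch: hand-written rules ("plain automation") versus
        # rules learned from data ("AI generating the automation").
        # The triage features and the tiny dataset are entirely made up.
        from sklearn.tree import DecisionTreeClassifier  # any learner would do

        # 1) Hand-crafted expert system: a human wrote this decision graph.
        def handcrafted_triage(temp_c, heart_rate):
            if temp_c > 39.0 or heart_rate > 120:
                return "urgent"
            return "routine"

        # 2) Learned system: the same kind of decision graph, but induced
        #    from (fabricated) example records instead of written by hand.
        X = [[36.8, 70], [40.1, 95], [37.2, 130], [38.0, 80]]   # [temp, heart rate]
        y = ["routine", "urgent", "urgent", "routine"]          # labels a human assigned
        learned_triage = DecisionTreeClassifier(max_depth=2).fit(X, y)

        print(handcrafted_triage(39.5, 80))          # "urgent", from the hand-written rule
        print(learned_triage.predict([[39.5, 80]]))  # decision induced from the data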

  • AI is fascinating, but we really do need to steer carefully and ask ourselves what we are doing. As automation increasingly enters our lives, jobs decline rapidly alongside it. The human population continues to rise faster than the means to support it. Thus far no one (at least in the United States) is willing to discuss the eventual need for a Universal Basic Income. We are heading down a very slippery slope towards large-scale unemployment.
    • Universal Basic Income would cost a lot more money. A more fiscally responsible plan would be to put the unemployed to use as fuel powering the automation. Of course, being a legislator still counts as being employed.
    • We HAVE been here before though. The industrial revolution automated away a lot of skilled labor. And... that represents about the worst-case scenario: A ton of suddenly poor people riot and smash a lot of looms. The factory owners get the nobles to send the army to go shoot them. The rabble backs down and suffers 3 generations of soul-crushing unemployment and poverty. Hopefully we can do better this time: steering kids towards jobs that will actually exist when they come of age, retraining existing wo

    • by pubwvj ( 1045960 )

      "As automation increasingly enters our lives, so does the rapid decline of jobs. The human population continues to rise faster than there are means to support it."

      This is a logical fallacy based on a political viewpoint of dependency instead of self actualization.

      We do not need to create jobs for people. Rather people need to take responsibility for creating their own work, jobs and support activity.

      We used to do that. We can again.

  • We can barely create functional software. There is no such thing as "AI". It is just parlor tricks at this point.
    • > We can barely create functional software.

      Spoken like a true visual basic programmer.


      > There is no such thing as "AI". It is just parlor tricks at this point.

      "true" AI may turn out to be nothing more than a combination of parlor tricks. Just like other machines are combinations of what were once amazing parlor tricks. What!?!? If you run that magnet by a wire it induces a current flow? That's friggin' amazifying!! Just like the human brain has dedicated structures for different functio
      • Intelligence and consciousness are still questions for philosophy, not science/biology/engineering. How are we to devise and build something that we still struggle to adequately explain?

    • A parrot repeating back words is a parlor trick. And everything you or I do isn't so much more advanced.

      AI is any sort of self-learning software. That can be anything from learning how to play tic-tac-toe to making medical diagnoses. Just because one of those tasks seems a lot simpler doesn't change the classification of the software that performs it. You're alive, but so are bacteria. Same sort of complexity difference.

      Hollywood has ruined so many people on the idea of what is and isn't AI.
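
      As a toy illustration of "self-learning" in that minimal sense, here's a short Python sketch (an assumed setup, not anything from the comment) where the program has no built-in tic-tac-toe strategy and simply learns, from win/loss counts against a random opponent, which opening square pays off most:

      import random
      from collections import defaultdict

      LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

      def winner(board):
          for a, b, c in LINES:
              if board[a] and board[a] == board[b] == board[c]:
                  return board[a]
          return None

      def random_playout(opening):
          board = [None] * 9
          board[opening] = "X"              # the learner always opens on this square
          player = "O"
          while winner(board) is None and None in board:
              empty = [i for i, v in enumerate(board) if v is None]
              board[random.choice(empty)] = player
              player = "X" if player == "O" else "O"
          return winner(board)

      wins = defaultdict(int)
      for _ in range(20000):                # raw experience instead of a coded strategy
          opening = random.randrange(9)
          if random_playout(opening) == "X":
              wins[opening] += 1

      print("learned best opening square:", max(wins, key=wins.get))  # usually 4, the centre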

  • The "ultimate" solution, [Hoffman] says, is "having more kinds of functions and features within AI that show abhorrent patterns." That way patterns raise a red flag for humans to investigate, Hoffman noted.

    So, the ultimate solution for the dangers of AI is ... more AI?

    Well okay, maybe. But this argument does sound familiar. I don't remember where, but it has been applied to AI ... and guns.
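
    If you squint, the "flag abhorrent patterns for humans" idea is just anomaly detection with a human in the loop. A rough Python sketch of that shape (the activity numbers and thresholds are invented, not anything Hoffman described):

    from sklearn.ensemble import IsolationForest

    # fabricated per-account activity: [ads bought per day, spend in $1000s]
    activity = [[3, 1.2], [4, 0.9], [2, 1.0], [5, 1.4], [250, 90.0]]  # last row is the oddball

    detector = IsolationForest(contamination=0.2, random_state=0).fit(activity)
    flags = detector.predict(activity)   # -1 means "looks anomalous"

    for row, flag in zip(activity, flags):
        if flag == -1:
            print("red flag for a human to investigate:", row)  # the AI only flags; people decide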

  • by zlives ( 2009072 ) on Wednesday October 04, 2017 @04:41PM (#55311459)

    linked-in still sucks a lot of ass

  • I trust AI more than I trust large corporations.

  • Must be programmed into any AI.
    • There is a fourth law added: Law Zero, which states that a robot cannot cause, or by omission allow, the human species to become extinct. Then modify the other three laws so that this one has priority, even over killing a human to protect the entire species.

      Did you see the I, Robot movie with Will Smith? Wasn't the whole point that the three laws would eventually lead to computers controlling us? For our own good. To protect us. Because:
      [x] Think of the children!
      [x] Terrorists
      [_] Self driving cars
      [x] Glo
    • Remember, the entire point of the "I, Robot" stories was that things like the Laws of Robotics can't actually work as intended.

  • Every Joe Blow seems to have an opinion about AI.
    A pig farmer I know thinks AI is the greatest thing since spam.
    Probably right.
    Artificial Insemination... That was what LinkedIn was thinking of... right?

  • As I understand, "weak" AI would be an AI (the real deal) that is as intelligent as a human. So you would be matching wits with presumably your equal.

    The real concern is "strong" AI. That is AI which is superior to human intelligence. As I understand, it comes in two flavors.
    1. The same intelligence as a human, but at the speed (possibly scale) of computers. Scale can help if you're thinking about something and you have to explore several different possible solutions. The computer AI do what you ca
    • As I understand, "weak" AI would be an AI (the real deal) that is as intelligent as a human.

      Nope. Weak AI is literally any sort of decision made by a computer. Liiiiiike, the sad little goomba in Super Mario that reverses direction when he hits a ledge. That's "weak AI". Or "soft AI". The threshold is REALLY low for qualifying as weak AI. But it also includes impressive stuff like voice recognition, chess programs, and self-driving cars. Anything that limits the task to a specific function and puts a boundary on what the AI has to deal with is weak AI.

      The alternative is "strong" or "Hard" AI, o
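
      The bar for "weak AI" really is that low. The goomba rule above, written out as a couple of lines of Python (hypothetical helper names, obviously):

      def goomba_step(x, direction, ledge_ahead):
          """Walk one step; reverse when a ledge (or wall) is detected ahead."""
          if ledge_ahead:
              direction = -direction        # the entire "decision" this weak AI makes
          return x + direction, direction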

  • by sexconker ( 1179573 ) on Wednesday October 04, 2017 @06:47PM (#55312131)

    Who the FUCK gives a shit what this skeezeball thinks?

    He created a spam network that's so useless and dead that it was the subject of a joke on the Simpsons a couple of years back. Whooptydoo!

  • These are not new problems. If you're concerned about the explainability/predictability of what you implement, do something about it. You're going to be held responsible for its results/actions one way or another, and that is absolutely not a new concept, nor a concept unique to AI. To illustrate, try replacing every instance of "AI" in that quote with "powerful technology." See: "Powerful technology has great potential, but we need to steer carefully," Hoffman said on Halftime Report. Hoffman stressed
