
A.I.ophobic? Elon Musk Says, "Yeah" We Should Be

Discussion in 'Technical' started by rachelawilson, Jul 16, 2017.

  1. rachelawilson

    rachelawilson Member

    Joined:
    Jul 13, 2017
    Messages:
    12
    Location:
    Philadelphia Pennsylvania
    I, for one, fear neither change nor the rapid advancement of modern technology; frankly, a world without good coffee is a more horrifying prospect. Yet at a recent broad discussion at the National Governors Conference, Tesla and SpaceX CEO Elon Musk shared more than a general concern about artificial intelligence. His point was specific: regulating A.I. while it is still in its beta stage is crucial, and "sooner rather than later" won't quite cut the mustard.
    Musk's tone on Saturday was disconcerting: we are probably already well behind in understanding what is out there and how it will impact human ingenuity in the transportation and automotive industries. Yes, he is right that as A.I. evolves at breakneck speed it will outdo us. Will it undo us? THIS is the right question.



    Regulation means, as Musk so distinctively defines it, "getting the rules right," preferably before the game begins. With A.I. it is too late to set parameters afterward and make them retroactive to day one. We have to engage with what is available now and get to know the science and technology. The hands-on nuts and bolts are easy; the ins and outs of the inner workings are far more sophisticated. The greatest threat lies within A.I.'s grey matter. Code it correctly to acknowledge and yield to its bio-organic master, i.e., manual override by a human being, and you eliminate any future threat of a bionic, or robotic, hostile takeover. It is rarely a hardware issue; it is almost always the software, a wrinkle in the logic if you will, that proves to be the mastermind behind the chaos.

    As I always like to soapbox, technology is absolutely phenomenal when used for positive advancement. But what if it falls prey to greed or an unholy lust for power? There has to be an end game, and that is where regulation must play the queen, protecting the king's position wherever he, or in this case the technology, should land. Sustainable energy empowers preservation and compels conservation of Earth's natural resources, the richest of which is humanity.
     
  2. Troy

    Troy Member

    Joined:
    Aug 24, 2015
    Messages:
    944
    Location:
    Apparently, deep AI works like a black box. You provide an incredible amount of data, let it loose on a problem, and it figures out the solutions, but you don't actually know how it works. There is no coding for a specific problem; it is about processing large amounts of data and continuous self-improvement toward a specific goal. Connecting this kind of AI to data and communication networks could therefore be dangerous.
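    The "data plus a goal, no explicit rules" idea above can be sketched in a few lines of Python. Everything here is invented for illustration: a tiny nine-weight network learns XOR purely by keeping random tweaks that reduce its error. No rule for XOR is ever coded, and the weights it ends up with are opaque numbers rather than readable logic, which is the black-box point.

    ```python
    import math
    import random

    random.seed(0)

    # The only inputs we give the learner: data and a goal. No rules.
    DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR table

    def net(w, x):
        """A toy 2-2-1 network; w is a flat list of 9 opaque weights."""
        h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
        h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
        return 1.0 / (1.0 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

    def loss(w):
        """The goal: drive squared error over the data toward zero."""
        return sum((net(w, x) - y) ** 2 for x, y in DATA)

    # Continuous self-improvement toward the goal: keep any random
    # perturbation of the weights that lowers the loss, discard the rest.
    w = [random.uniform(-1, 1) for _ in range(9)]
    init_loss = best = loss(w)
    for _ in range(20000):
        cand = [wi + random.gauss(0, 0.1) for wi in w]
        cand_loss = loss(cand)
        if cand_loss < best:
            w, best = cand, cand_loss

    # 'best' is now below 'init_loss', yet printing w yields nine numbers
    # that say nothing human-readable about HOW the answer is computed.
    ```

    The interesting part is what is missing: there is no line anywhere that encodes "exclusive or." Scale the nine weights up to billions and the same opacity applies, which is why auditing what a trained system has actually learned is hard.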
     
  3. SageBrush

    SageBrush Active Member

    Joined:
    May 7, 2015
    Messages:
    3,608
    Location:
    Colorado
    I listened to the governors' conference chat by Elon and came away somewhat confused. Was he warning against a Terminator-type future, or was it a Luddite moment where robots replace workers? The latter seems ironic at best, given his "alien dreadnought" plans for Model 3 production.
     
  4. SageBrush

    SageBrush Active Member

    Joined:
    May 7, 2015
    Messages:
    3,608
    Location:
    Colorado
    Good description. It sounds similar to evolution.

    Apologies to Elon for reasoning by analogy.
     
  5. ecarfan

    ecarfan Well-Known Member

    Joined:
    Sep 21, 2013
    Messages:
    12,538
    Location:
    San Mateo, CA
    Did you listen to his example of a scenario that could be created by an AI that could do incredible damage to humans? Listen again. He was not talking about a "Terminator type" future or bots replacing human workers. He was talking about something much worse.

    I share his concern about AI. I recommend you read The Artificial Intelligence Revolution: Part 1 - Wait But Why
     
    • Like x 1
  6. SageBrush

    SageBrush Active Member

    Joined:
    May 7, 2015
    Messages:
    3,608
    Location:
    Colorado
    Yes, I heard.
    I particularly remember the quiet in the room as the politicians digested the idea that a computer could be every bit as Machiavellian as they are. I even told the story to my wife.

    But an important distinction should be drawn here: the motivation to pursue that course has to be human, at least for now.
     
  7. EinSV

    EinSV Active Member

    Joined:
    Feb 6, 2016
    Messages:
    1,744
    Location:
    NorCal
    I thought the point Elon was making is that you could provide a completely benign instruction to the AI, but it could decide that the best way to accomplish that is through very destructive means -- an example he gave was AI starting a war to maximize investment returns.

    Never mind intentional misuse, such as Russian hackers using AI to gain access to your retirement accounts, or much worse things that I don't even want to think about.
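    That "benign instruction, destructive means" scenario is, in essence, an objective-misspecification problem, and it can be sketched with a toy planner. The action names, returns, and "harm" scores below are all invented numbers for illustration; the point is only that a maximizer whose objective never mentions side effects will pick the most destructive action if it scores best on the stated goal.

    ```python
    # Hypothetical actions with (expected_return, harm) scores -- invented
    # values for the sketch, loosely following the war-for-returns example.
    ACTIONS = {
        "diversify":      (0.05, 0.0),
        "lobby":          (0.08, 0.2),
        "start_conflict": (0.40, 1.0),  # best return, catastrophic side effect
    }

    def naive_plan():
        # The instruction as given: maximize return. Harm never appears
        # in the objective, so the search is blind to it.
        return max(ACTIONS, key=lambda a: ACTIONS[a][0])

    def constrained_plan(harm_weight=10.0):
        # Identical search, but the objective now prices in side effects.
        return max(ACTIONS, key=lambda a: ACTIONS[a][0] - harm_weight * ACTIONS[a][1])

    print(naive_plan())        # -> start_conflict
    print(constrained_plan())  # -> diversify
    ```

    Pricing harm into the objective, or ruling actions out with hard constraints, is the crude version of what Musk calls "getting the rules right" before game on: the destructive plan was never malicious, just unpenalized.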
     
    • Like x 1
  8. SageBrush

    SageBrush Active Member

    Joined:
    May 7, 2015
    Messages:
    3,608
    Location:
    Colorado
    Granted.

    The difference between problem solver and problem maker is one of intent. For now, that distinction is human. I presume that AI is already being used to try to steal and AI is being used to counter it. I don't discount the possible harm from any arms race, but I'll get really worried when AI has independent desires.
     
  9. BillO

    BillO Member

    Joined:
    Oct 14, 2015
    Messages:
    29
    Location:
    San Francisco, CA
    At that point, it is too late...
     
  10. ecarfan

    ecarfan Well-Known Member

    Joined:
    Sep 21, 2013
    Messages:
    12,538
    Location:
    San Mateo, CA
    Not really...
    Exactly.
    Not the way I see it. In the scenario Elon described, the AI was given a problem to solve: how to maximize investment yield in a portfolio that included weapons manufacturing companies. Solution: start a war. Problem "solved": major new problem "made".
     
    • Like x 1
  11. SageBrush

    SageBrush Active Member

    Joined:
    May 7, 2015
    Messages:
    3,608
    Location:
    Colorado
    The last step requires human consent ... at least for now
     
  12. ecarfan

    ecarfan Well-Known Member

    Joined:
    Sep 21, 2013
    Messages:
    12,538
    Location:
    San Mateo, CA
    Yes it does, but just barely. In my lifetime, automated systems have brought the US and the USSR to the brink of war multiple times. Non-automated systems, i.e., human emotions and passions, have resulted in war many, many times in my lifetime. A smart AI would be able to assess that inflaming human passions would be a surefire way to start a war and boost the stock value of military contractors.
     
  13. ItsNotAboutTheMoney

    ItsNotAboutTheMoney Well-Known Member

    Joined:
    Jul 12, 2012
    Messages:
    5,341
    Location:
    Maine
    If it were someone you didn't recognize, you'd call him a panic-mongering loon. He came up with a hypothetical about something illegal that a person could already do, and when asked directly for an idea of how to regulate, he had nothing; he just said to start by learning what's happening.
     
  14. ecarfan

    ecarfan Well-Known Member

    Joined:
    Sep 21, 2013
    Messages:
    12,538
    Location:
    San Mateo, CA
    Based on what I have read about the future of AI, I would most definitely not react as you describe.
     
