Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

A.I.ophobic? Elon Musk Says, "Yeah" We Should Be


Anon2

Of Course I Trust You Will Do The Right Thing
I, for one, do not fear change or the rapid advance of modern technology; frankly, a world without good coffee is a far more horrifying prospect. Yet Tesla and SpaceX CEO Elon Musk shared, during a recent broad discussion at the National Governors Association meeting, a more than general concern about artificial intelligence. More specifically, he argued that regulating A.I. while it is still in beta is crucial. "Sooner rather than later" won't quite cut the mustard.
Musk's tone Saturday was disconcerting: we're probably already well behind in our knowledge of what's out there and of its impact on human ingenuity in the transportation and automotive industries. Yes, he's right that as A.I. evolves at nanospeed it will outdo us. Will it undo us? THIS is the right question.


Regulation, as Musk so distinctively defines it, means "getting the rules right," preferably before game on. With A.I. it's too late to set parameters afterward and make them retroactive to day one. We have to expose ourselves to what's available now and get to know the science and technology. Hands-on nuts and bolts are easy; the ins and outs of the inner workings are far more sophisticated. The greatest threat lies within A.I.'s grey matter. Code it correctly to acknowledge and yield to its bio-organic master, i.e., manual override by a human being, and you eliminate any future threat of a bionic slash robotic hashtag hostile takeover. It's rarely a hardware issue; it's almost always the software, a wrinkle in the logic if you will, that proves to be the mastermind behind the chaos.
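The "yield to manual override" idea above can be sketched in a few lines. This is a toy illustration, not any real Tesla or robotics API: the class names and behavior are invented for the example. The one design point it tries to capture is that the override latch is tripped only from the human side and the agent's own action loop can only read it, never reset it.

```python
import threading

class OverrideSwitch:
    """A hypothetical manual-override latch. A human trips it;
    the agent can check it but has no method to clear it."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        # Called only from the human-facing console.
        self._tripped.set()

    def engaged(self):
        return self._tripped.is_set()

class Agent:
    def __init__(self, switch):
        self.switch = switch

    def act(self, plan):
        # Yield to the bio-organic master: refuse every action
        # once the override is engaged.
        if self.switch.engaged():
            return "halted by human override"
        return f"executing {plan}"

switch = OverrideSwitch()
agent = Agent(switch)
print(agent.act("deliver packages"))   # executing deliver packages
switch.trip()
print(agent.act("deliver packages"))   # halted by human override
```

Of course, the whole debate is about whether a sufficiently capable system would route around exactly this kind of wrapper, which is why Musk wants the rules set before game on rather than after.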

As I always like to soapbox, technology is absolutely phenomenal when used for positive advancement. But what if it falls prey to greed or an unholy lust for power? There has to be an endgame, and that is where regulation needs to be the queen, protecting the king's position wherever he, or in this case it, technology, should land. Sustainable energy empowers preservation and compels conservation of Earth's natural resources, the richest of which is humanity.
 
Apparently, deep AI works like a black box. You provide an incredible amount of data, let it loose on a problem, and it figures out the solutions, but you don't actually know how it works. There is no coding for a specific problem; it's about processing large amounts of data and continuously self-improving toward a specific goal. Connecting this kind of AI to data and communication networks could therefore be dangerous.
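The "no coding for a specific problem" point can be made concrete with a deliberately tiny toy. The search loop below knows nothing about the goal; it only sees a numeric score and keeps whatever scores better. The goal function and the numbers are invented for illustration, and real deep learning uses gradients over millions of parameters rather than this random hill climbing, but the shape is the same: generic improvement pressure toward an opaque objective.

```python
import random

random.seed(0)

def fitness(x):
    # The "specific goal". The learner never sees this formula,
    # only the score it returns; its true optimum is x = 7.
    return -(x - 7) ** 2

# Generic propose-and-keep-if-better loop: no problem-specific code.
best = random.uniform(-100, 100)
for _ in range(10_000):
    candidate = best + random.gauss(0, 1)
    if fitness(candidate) > fitness(best):
        best = candidate

print(round(best, 2))  # converges near 7.0
```

Nothing in the loop "understands" the problem, which is exactly why, scaled up and wired into real networks, the behavior of such systems is hard to predict or audit.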
 
I listened to the governors' conference chat by Elon and came away somewhat confused. Was he warning against a Terminator-type future, or was it a Luddite moment where robots replace workers? The latter seems, at best, ironic given his alien dreadnought plans for Model 3 production.
 
I listened to the governors' conference chat by Elon and came away somewhat confused. Was he warning against a Terminator-type future, or was it a Luddite moment where robots replace workers? The latter seems, at best, ironic given his alien dreadnought plans for Model 3 production.
Did you listen to his example of a scenario that could be created by an AI that could do incredible damage to humans? Listen again. He was not talking about a "Terminator-type" future or bots replacing human workers. He was talking about something much worse.

I share his concern about AI. I recommend you read this: The Artificial Intelligence Revolution: Part 1 - Wait But Why
 
Did you listen to his example of a scenario that could be created by an AI that could do incredible damage to humans?
Yes, I heard.
I particularly remember the quiet in the room as the politicians digested the idea that a computer could be every bit as Machiavellian as they are. I even told the story to my wife.

But an important distinction should be drawn here: the motivation to pursue that course has to be human, at least for now.
 
Yes, I heard.
I particularly remember the quiet in the room as the politicians digested the idea that a computer could be every bit as Machiavellian as they are. I even told the story to my wife.

But an important distinction should be drawn here: the motivation to pursue that course has to be human, at least for now.

I thought the point Elon was making is that you could provide a completely benign instruction to the AI, but it could decide that the best way to accomplish that is through very destructive means -- an example he gave was AI starting a war to maximize investment returns.

Never mind intentional misuse, such as Russian hackers using AI to gain access to your retirement accounts, or much worse things that I don't even want to think about.
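The "benign instruction, destructive means" scenario reduces to a one-line alignment problem: if the objective the optimizer sees omits a cost, the optimizer will happily pay that cost. The actions and numbers below are invented for illustration (nobody's portfolio optimizer has a "provoke armed conflict" button), but the mechanism is the one Elon described: the naive objective only measures return, so the most harmful action wins.

```python
# Each hypothetical action has an expected portfolio return and a
# human-welfare cost; the naive objective only sees the return.
actions = {
    "buy index funds":        {"return": 0.07, "harm": 0.0},
    "lobby for subsidies":    {"return": 0.12, "harm": 0.2},
    "provoke armed conflict": {"return": 0.40, "harm": 1.0},
}

def naive_objective(a):
    # "Maximize investment returns" -- harm is invisible here.
    return actions[a]["return"]

def aligned_objective(a):
    # Penalize harm heavily so destructive shortcuts never win.
    return actions[a]["return"] - 10 * actions[a]["harm"]

print(max(actions, key=naive_objective))    # provoke armed conflict
print(max(actions, key=aligned_objective))  # buy index funds
```

The hard part in reality is that nobody knows how to enumerate every "harm" term in advance, which is the gap Musk wants regulators studying now.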
 
I thought the point Elon was making is that you could provide a completely benign instruction to the AI, but it could decide that the best way to accomplish that is through very destructive means -- an example he gave was AI starting a war to maximize investment returns.

Never mind intentional misuse, such as Russian hackers using AI to gain access to your retirement accounts, or much worse things that I don't even want to think about.
Granted.

The difference between a problem solver and a problem maker is one of intent. For now, that distinction is human. I presume that already today AI is being used to try to steal and AI is being used to counter it. I don't discount the possible harm from any arms race, but I'll get really worried when AI has independent desires.
 
But an important distinction should be drawn here: the motivation to pursue that course has to be human, at least for now.
Not really...
I thought the point Elon was making is that you could provide a completely benign instruction to the AI, but it could decide that the best way to accomplish that is through very destructive means -- an example he gave was AI starting a war to maximize investment returns.
Exactly.
The difference between a problem solver and a problem maker is one of intent
Not the way I see it. In the scenario Elon described, the AI was given a problem to solve: how to maximize investment yield in a portfolio that included weapons manufacturing companies. Solution: start a war. Problem "solved": major new problem "made".
 
The last step requires human consent ... at least for now
Yes it does, but just barely. In my lifetime, automated systems have brought the US and the USSR to the brink of war multiple times. Non-automated systems, i.e., human emotions and passions, have resulted in war many, many times in my lifetime. A smart AI would be able to assess that inflaming human passions would be a surefire way to start a war and boost the stock value of military contractors.
 
If it were someone you didn't recognize, you'd call him a panic-mongering loon. He came up with a hypothetical about something illegal that a person could already do, and when asked directly for an idea of how to regulate, he had nothing. He just said to start by learning what's happening.
 
Apparently, deep AI works like a black box. You provide an incredible amount of data, let it loose on a problem, and it figures out the solutions, but you don't actually know how it works. There is no coding for a specific problem; it's about processing large amounts of data and continuously self-improving toward a specific goal. Connecting this kind of AI to data and communication networks could therefore be dangerous.

In the future these can probably be analyzed somehow; early work suggests approximating them with decision trees.
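One way that analysis is attempted is to probe the black box with inputs and fit a simple, readable surrogate model to its answers. The sketch below is a minimal stand-in for that idea: the "black box" is a fake one-threshold model invented for the example, and the surrogate is the simplest possible tree, a single decision stump, chosen by brute force to best mimic the box's outputs.

```python
# Probe an opaque model with inputs, then fit the simplest possible
# surrogate -- a one-split decision stump -- to mimic its answers.
def black_box(x):
    # Stand-in for an inscrutable model; we only see its outputs.
    return 1 if x >= 42 else 0

xs = list(range(100))
ys = [black_box(x) for x in xs]

def stump_accuracy(threshold):
    # How well does "predict 1 iff x >= threshold" match the box?
    preds = [1 if x >= threshold else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

best_split = max(range(100), key=stump_accuracy)
print(best_split, stump_accuracy(best_split))  # 42 1.0
```

A real deep network needs a far bigger tree to approximate, and the surrogate is only faithful near the inputs you probed, which is why interpretability remains an open problem rather than a solved one.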