Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Perspectives on Elon's Perspectives

I honestly think he has a way of thinking that I envy: he sees things for what they are instead of letting himself be swayed by preconceived notions, by the status quo, by what others might think of him or his views. He listens to others and respects them, but still makes up his own mind. This approach may seem easy, but it's actually very hard to do. I'm trying hard to learn from him.

Did you see his recent interview? He was asked about AI and gave his answer. Then another question followed. He asked her to repeat the question because he wasn't listening the first time. He admitted this and excused himself for not having listened, saying "Sorry, I was still figuring out the AI thing". On stage, in the spotlight, he kept his mind focused on the issue he had just talked about because he was finishing what may have been an important train of thought. He didn't care at that moment why he was sitting there taking questions; he was just deeply focused on a thought process. To me it was impressive. He seems to combine, in some sense, the very pure logical thinking of a savant or a highly intelligent person with Asperger's while at the same time having all the sides that make for a complete, functioning individual: empathy, morals, social considerations (he said hiring people with good hearts was even more important than hiring talented people). Believe me, his persona is a rare mix that makes for extraordinary results, as his track record proves. Therefore, when he expresses so much concern about AGI I take heed, listen and reflect. Like Bonnie said, he makes you stop and think twice and consider things in another light.
 
Musk is absolutely not losing it. He's just one of relatively few famous technical people who has the balls to say out loud what they're thinking about AI in the long term. This is a really interesting subject, both from a philosophical and a technological perspective. If you want to read more about the idea of AI as an existential threat, a researcher named Eliezer Yudkowsky has written a lot of interesting stuff.

In short: Advanced artificial intelligence is a Pandora's box. We are not certain that advanced artificial intelligence can in fact be created, but there is also no reason to believe it is physically impossible. And it would be a massive risk to human life (and other life) if it were created without great care. Human intelligence is the result of evolution, and all our experience with intelligence is from other humans and intelligent animal life. But there is no reason to believe that humans are the most sophisticated intelligence that can exist in the physical universe, and we have done little to try to understand how more advanced intelligences would behave. (I am using the word "intelligence" to mean any advanced process that tries to achieve some goal.) If intelligences more advanced than humans can exist, we would be defenseless and completely at their mercy.

Basically, to reason by analogy, imagine how a chimpanzee would defend itself if humans moved into its environment. There is simply no contest. Humans don't hate chimpanzees or require them to die, but we have no (or few) moral qualms about removing them if they are an inconvenience in our everyday life. Even more so with rodents, bacteria or grass.

One idea of dangerous AI would be if we were to create an artificial system that is as capable as (or more capable than) humans, and is able to improve its own capability through research, adding more computational power, building machines to affect the physical world, and other things. Such a system could lead to an "intelligence explosion" where we would be left with an immensely capable system, outside of human control, that acts according to whatever motivations it ends up with. A fictional, somewhat satirical and stylized example of such a system is the "paperclip maximizer", which turns Earth into a giant paperclip factory in order to achieve its goals.
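To make the misalignment point concrete, here's a toy Python sketch (entirely invented for illustration; the names and numbers mean nothing): an optimizer that single-mindedly pursues its encoded objective while a designer preference that was never written into the objective gets silently ignored.

```python
# Toy sketch (illustrative only): a greedy maximizer will happily consume a
# shared resource, because the designers' wish to keep a reserve is nowhere
# in the objective it was given.

def greedy_maximizer(resources, yield_per_unit=1, reserve_wanted=5):
    """Convert every available unit of resource into 'paperclips'.

    `reserve_wanted` is what the designers hoped would be left untouched,
    but nothing in the loop ever checks it, so it has no effect.
    """
    paperclips = 0
    while resources > 0:          # no term for the reserve anywhere here
        resources -= 1
        paperclips += yield_per_unit
    return paperclips, resources

clips, left = greedy_maximizer(resources=10)
print(clips, left)  # 10 0  -- all resources consumed, reserve ignored
```

The point of the joke (and the worry) is in that unused `reserve_wanted` parameter: values that aren't part of the objective simply don't exist as far as the optimizer is concerned.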

The problem in thinking about this stuff is that humanity has never encountered anything that is more intelligent than other humans. Our brains exist in the physical world, so it must certainly be possible that other, more capable intelligences can exist. Perhaps we can create them. More sophisticated intelligences could act in ways that are completely foreign to humans. Perhaps there is no language, perhaps there is no notion of morals, emotions or a balanced lifestyle, etc. But if such a system is able to outwit humans, it would be incredibly dangerous if our motivations happened to be at odds with each other.

Pandora's box. Really fascinating subject, and I'm very happy to see that serious scientists and engineers are taking it seriously. In principle, the first super-human AI we create could be the last. It's really important to be ahead of the curve on this question. It is possible to reason about the likely motivations of most intelligent systems, different classes of intelligences, etc. Some resources should be spent on this, if for no other reason than the same ones Musk cites for exploring the solar system and establishing colonies on other worlds. It's an insurance policy.
 
Elon Musk is a Martian. His kind went extinct due to both climate change and AI taking over on Mars. He was the last Martian sent to Earth, kinda like Superman being sent to Earth before his planet died. As he grew up, he remembered his origins and what caused the other Martians to go extinct. Now he wants to go back to Mars and rebuild the colony, while at the same time warning others of the dangers of climate change and AI. :tongue:
+1 I think we should tell him that we figured it out and that he can admit it, as well as being Iron Man. :biggrin:
 
There are many SF stories (in books and movies) that mull over how AIs might evolve and interact with humans. I've been reading Iain M. Banks' Culture series with great pleasure. Banks creates a world where AIs have vastly surpassed humans by any metric, but who tolerate and protect humans nonetheless. That's a utopian outcome, where AI and robotics free humans from all needs and cares, leaving us to create and enjoy art, socializing, and the like. There's obviously another direction that could have gone...
 
Did you see his recent interview? [...] He admitted this, excused himself for not having listened, but said "Sorry, I was still figuring out the AI thing". [...]


When I admit that I stopped listening to my wife because my thoughts drifted elsewhere, I receive nowhere near the credit that Elon does. :(
 
Hell yes he's losing it. You'd have to be crazy to take on big auto, big oil, power companies, dealers associations, big aerospace.... hmm did I forget anything?

Oh I forgot big auto parts, big muffler and big tune-up. :)

But seriously, the AI thing can become an economic disruption too. As more and more service-level or even knowledge jobs are automated, the average worker could be made worse off.
 
Elon is not losing it; the media is being sloppy with its quotes. I believe Elon is tacitly referencing this book: 'Daemon' by Daniel Suarez. It's basically a book about AI taking over the world. It's a fascinating read, which I highly recommend. Elon didn't clarify, so the media is spelling out "demons" rather than "daemons" (I believe they are actually pronounced the same way).

BTW - I'm not a programmer, so when I first heard of the book, I had to figure out what the difference was!
 
Elon Musk is a Martian. His kind went extinct due to both climate change and AI taking over on Mars. He was the last Martian sent to Earth, kinda like Superman being sent to Earth before the planet went extinct. [...]

'Stranger in a Strange Land'
 
I honestly think he has a way of thinking that I envy [...] Therefore, when he expresses so much concern about AGI I take heed, listen and reflect. [...]
Agree. And, as a shrink, Elon is definitely not "losing it". Can we change the title to perhaps "No, Elon is definitely not losing it"?
 
[...] And the tech industry is not used to proceeding with caution. Imagine applying the "you're always in beta" philosophy to a big AI system. What if there are bugs in the code for Asimov's robot laws?
[...]

The thing about computer programs and AI is that they don't work the way Asimov imagined. They don't know what a person is. So you can't instruct your system, "don't hurt a person, or through inaction allow a person to be hurt". These are computer programs that only do what you tell them, and they don't work on high-level concepts like "people" or "injury".

You might tell your robot to stop moving when someone presses the stop button, but if you walk up to a robot arm that is moving, it will just keep moving. Perhaps the next step: don't move the arm around if there is something nearby that is warm, say between 95 and 105 °F. But if you have that, then you can't walk down the factory aisle between your robots. Or whatever it is working on might warm into that range while it's being assembled. So you see the problem: it's hard to have a sensor that determines when a "person" is standing there in such a way that it doesn't stop the robot during normal operation. And you can't easily define harm, especially in the sense of "don't allow it".
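The failure mode above can be sketched in a few lines of Python. This is a made-up toy rule, not any real robot controller; the temperature band and function names are invented for illustration:

```python
# Hypothetical naive safety rule: halt the arm whenever a nearby object
# falls in a "person-temperature" band. The thresholds are rough guesses
# at human skin/surface temperature.

PERSON_TEMP_F = (95.0, 105.0)

def should_halt(nearby_temps_f):
    """Halt if any nearby reading falls inside the 'person' band."""
    lo, hi = PERSON_TEMP_F
    return any(lo <= t <= hi for t in nearby_temps_f)

# A worker walking past trips the rule -- the intended behavior:
print(should_halt([70.0, 98.6]))   # True

# But a freshly welded part cooling through the same band also trips it,
# halting normal operation -- the sensor can't tell person from part:
print(should_halt([101.0]))        # True
```

Both calls return `True`, which is exactly the problem: the rule fires on warm parts as often as on people, because "warm object nearby" is not the same concept as "person".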

There's little practical danger right now of an AI hurting us, but if it did, that could be the end. There's little practical danger of a nuclear war tonight, but it's not impossible. The US, Russia, and now China have subs with nuclear missiles (boomers) hiding around the ocean, ready to destroy the other guys, just in case. It's not impossible that an accident happens there either.
 
The thing about computer programs and AI, they don't work like Asimov imagined. They don't know what a person is. [...]
Let me preface this with the fact that I know absolutely nothing about the current state of AI development. I am an experienced computer programmer, so I do know how today's computers work, just not in the AI field. My guess would be that no one is going to create anything remotely similar to sci-fi AI using today's programming practices. It would never scale well enough and, as you point out, the programs only do what the programmer instructed them to do.

For AI to truly work the programming needs to be based upon a set of rules that can be modified or extended by the program as the program learns. With this model, rules could then be created to "define" what it means to be human. On Solaria (Asimov's books), the definition was modified to include only people that spoke Solarian. This is where AI starts to become dangerous if the learning goes well beyond what the designers anticipated. Even in Asimov's world, the Roboticists didn't fully understand how their robot brains truly worked -- the technology had gone well beyond their abilities and everything depended upon the "three laws" to keep them in check.
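The Solaria example can be sketched as a tiny rule-modification toy in Python. Everything here is invented for illustration (trait names, the "learning" step); it just shows how a system that can extend its own rules can quietly narrow its definition of "human":

```python
# Illustrative sketch: a rule-based definition of "human" that the system
# itself can extend. On Asimov's Solaria, the definition was narrowed to
# people who spoke Solarian -- excluding everyone else.

def is_human(entity, required_traits):
    """An entity counts as human only if it has every required trait."""
    return required_traits.issubset(entity["traits"])

required_traits = {"biological"}                        # designers' rule
solarian = {"traits": {"biological", "speaks_solarian"}}
settler  = {"traits": {"biological"}}

print(is_human(settler, required_traits))    # True under the original rule

# A "learning" step extends the rule set -- and silently excludes people:
required_traits = required_traits | {"speaks_solarian"}
print(is_human(settler, required_traits))    # False: settlers no longer count
print(is_human(solarian, required_traits))   # True
```

Nothing in the code flags the second result as wrong; the "three laws" still apply faithfully, just to a smaller set of people than the designers intended. That's the check-and-balance problem in miniature.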

I think this is where Elon is concerned: are the people developing AI building in the proper checks and balances to keep the AI from growing beyond our control and turning on us? I have no idea if this concern is valid.
 
I think of AI as in the IBM machines that beat people at chess and Jeopardy. It just learns from input and understands things organically at the speed of the internet. You just cannot underestimate anything that learns like that.