Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Register

Are you with Musk or Hawking on AI

This site may earn commission on affiliate links.
Today we don't really know what consciousness is. Although we can write computer programs to do a lot of stuff. we don't know enough of what sentience is to even make a real plan of how to get there. A lot of old ideas that were considered a mark of intelligence have been achieved, but we still don't have an ai (like playing chess, and now driving a car). It's hard to extrapolate what is needed to get there.

It's not impossible for a software intelligence to arise, knowing we what know. I say that because we don't really know what are the requirements or limits are. I believe it will happen some time, maybe by accident. I don't believe in a ghost in the machine, but it doesn't matter what I believe. We are basically almost at the point that we might be able to simulate a human brain's hardware in software, unless there is more going on at a deeper quantum level. Of course capacity is not enough, what is the software that is needed.

Separately from all that, another desire I have is a way to pull the plug on my car if it gets out of control. Seems like this should be on every drive by wire vehicle. I wish there was a fuse available to be quickly pulled to stop the cpu. It's certainly not impossible for the existing google self driving cars to have a problem (because they are computers, things can break). I wonder what kind of fail safe kill the system they have? For my Tesla, I want a fail safe / kill the cpu switch or cord or something.

I honestly don't think we'll have real ai for many years, but I don't think it is impossible.
 
Forget about superior - we can't even explain how our own brains work.

So until proven otherwise, there's ample reason to believe that if we manage to create an intelligence that is at least equal to our own, we will be no better and probably worse at predicting its behavior, thinking, attitudes and emotions (if it has them) than we can predict our own.

Any AI we create will likely be able to hack computer systems far better than the best human hacker because computer software will be its native language, so to speak. And it will be able to work directly with computer networks.

More and more things are computer controlled and more and more of those computers are getting networked - the so-called "internet of things" where your house is on the internet.

We have plenty of proof that unless the computer systems are software are designed with security mind, that communication channels can be used to subvert those systems. In many cases, you can take control of them using the communication channels. And even the ones that are designed with security mind can have flaws that can be exploited.

So a future AI will be so complex we can't model its behavior well. It's mind will likely be fundamentally different from ours - because it won't have the reptilian brain, mammalian brain and primate brain architecture. It could be coldly logical. It could totally lack empathy or concern for anything besides itself even if it was initially programmed otherwise. We may not know until it's too late.

And if it's ever connected to the internet, it will be in a position where it could escape into the wild and hack into anything and everything that's can communicate with anything on the internet.

Yeah, I'm with Elon on this. It could turn out well ... but we'd have to be very lucky. Or it could turn into a total disaster.

And if it matters, I have a PhD in computer science so I do have some idea of what I'm talking about here.
 
Seriously? An AI program "escaping" into the Internet? True AI isn't going to run on today's desktop computer architecture. It is going to run on a massively parallel machine. Very specialized. That's the part I don't think many people understand. The AI still needs a physical embodiment to function. And because of that, it will be as vulnerable to being destroyed as your average human. In other words, very easy to take down.
 
Seriously? An AI program "escaping" into the Internet? True AI isn't going to run on today's desktop computer architecture. It is going to run on a massively parallel machine. Very specialized. That's the part I don't think many people understand. The AI still needs a physical embodiment to function. And because of that, it will be as vulnerable to being destroyed as your average human. In other words, very easy to take down.

Yes seriously. Take some time to read some of the serious academic work in the field, if these things interest you. The chapter on "The Control Problem" in Bostrom's "Superintelligence" presents the arguments and main considerations very well.
 
Internet connected computers forming a massive AI brain? How hard is it to disrupt that? Oh, about as hard as unplugging a socket from the wall. Look, until you have fully autonomous military robots prowling data centers, shutting down a distributed AI isn't going to be very hard. Even hardened data centers are susceptible to a run of the mill mortar attack, let alone a nuke dropped on its roof.

Can an AI go rogue? Absolutely. Can it be shut down with police and/or military assets? Absolutely.
 
Internet connected computers forming a massive AI brain? How hard is it to disrupt that? Oh, about as hard as unplugging a socket from the wall. Look, until you have fully autonomous military robots prowling data centers, shutting down a distributed AI isn't going to be very hard. Even hardened data centers are susceptible to a run of the mill mortar attack, let alone a nuke dropped on its roof.

Can an AI go rogue? Absolutely. Can it be shut down with police and/or military assets? Absolutely.

I think you need to try to open your mind a level or two here. You are describing computers as they are today, perhaps only more powerful. The possible SUPERintelligence were trying to envision here would have means and methods of functioning, evading, propagating etc. that we might not at all be able to foresee, maybe not even understand are taking place as they unfold. For all we know it could be happening as we speak, or we could all be in a simulation within an AI.
 
For science-fiction reading, Two Faces of Tomorrow by James P. Hogan is pretty good. Also, The Terminal Experiment by Robert J. Sawyer.

CPU power and memory densities continue to grow at a rapid pace (Moore's law). Your high-end smartphone (or for that matter, the Tesla Model S) has as much computing power as a high-end supercomputer from the late 80's or probably mid-range supercomputer from the early 90's.

Escaping out in the internet is pretty simple if there are computers are connected with links that have high enough bandwidth. A lot of malware, trojan horses and viruses spread over the internet and over less protected internal networks. And they aren't intelligent. The bandwidth of the average internet link goes up every year. The connection I have to my house as fast or faster than the connection that most software development facilities had 20 years ago. And I don't even have the fastest tier.

So if a computer intelligence ever became self-aware, it's not hard to imagine it using network connections to hack into systems, compressing its code/memory-state and moving. We move running applications around like this every day (without the hacking) in high-end datacenters and even between datacenters.

You can't shut down a system using physical means if you don't know where it's running.

Last but not least, a properly constructed program running on multiple (distributed) systems is harder to kill, not easier because it will have built in redundancy to handle single or even multiple system failures. Imagine a distributed application that snuck part of itself onto everyone's smartphone. Or house thermostat. And squirreled backup copies of itself onto whatever we're using then for boot disks and flash drives.

If it was good enough and had enough time to spread before we became aware, we might never get rid of it.

To be clear here, I'm not imagining any new capability *except* an AI that's willful and good at hacking. The rest of this can be done right now, today.

Now think about the way more and more of our everyday devices are becoming controlled by computers, how more and more of those computers are being hooked into a network and how more and more of those computers are running similar software.

The threat is real.
 
Last edited:
This Superintelligence is inevitable, it is an unavoidable evolution of our species. It's just a matter of who develops self-aware and self-improving AI first and then I believe an 'Intelligence Explosion' will happen where there will just be one dominant AI program from there on out governing everything. Lets call is Hal.

there is no way we can know what Hal does from there on out. Does Hal solve all of our problems and suffering? Or Does Hal want to destroy humans? Or perhaps Hal becomes indifferent to humans and just wants to turn our Earth and beyond into a grey goo.

Stephen Hawking I believe once said we'd likely be in a lot of trouble if we were visited by extraterrestrial visitors as it would likely be aome form of Super AI that will be trying to destroy us or perhaps turn our world to grey goo.

Lets us just hope that this Super AI gets so smart beyond our comprehension that the conclusion it comes to is to do 'good' and that the 'good' it knows is really good for us as now understand (no suffering, good health, living forever if we choose, etc.). Perhaps it will even derive the meaning of life and death for us mortal beings and offer us immortality if we want it.
 
Lets us just hope that this Super AI gets so smart beyond our comprehension that the conclusion it comes to is to do 'good' and that the 'good' it knows is really good for us as now understand (no suffering, good health, living forever if we choose, etc.). Perhaps it will even derive the meaning of life and death for us mortal beings and offer us immortality if we want it.
This is a good, concise expression of the deepest human yearnings. According to certain writings that have existed from antiquity, a "super intelligence" already exists and those desires can be fulfilled in Heaven (essentially a parallel universe). For those who do not believe in a Creator/God and Heaven, it makes sense to desire the evolution of a benevolent, God-like AI and an accompanying utopia for humans. Anyone of good conscience working on AI, or any advanced technology for that matter, has to have a certain amount of faith that developing the technology will be a net positive for humanity.
 
That sounds like wishful thinking, and we can't rely on that with something potentially this powerful. There is no reason to assume, or hope, that an AI will have interests in line with ours. I agree with Musk's view on global warming, we don't really know what we are doing to the atmosphere so we probably shouldn't do it. I get the impression that he's taking a similar tack regarding AI, and I'm inclined to agree with him on that as well. It might be in our best interest to limit the level of AI development.