Are you with Musk or Hawking on AI?

... there is no "danger" that your self-driving car suddenly develops a conscience and becomes self-aware.
I'm not aware of any evidence that any amount of computing power can, by itself, translate to self-awareness. Ultimately, no matter how "smart" a machine becomes or how closely it can mimic a human being, it is still a machine. The existence of self awareness remains a great mystery to science, though the concept of the "soul" has been around for millennia.
 
I'm not aware of any evidence that any amount of computing power can, by itself, translate to self-awareness. Ultimately, no matter how "smart" a machine becomes or how closely it can mimic a human being, it is still a machine. The existence of self awareness remains a great mystery to science, though the concept of the "soul" has been around for millennia.

This, to me, is a quasi-statement, not supported by any evidence or science. Surely we can agree that lower organisms such as bacteria are unlikely to be self-aware, while we humans surely are. Primates probably are; all mammals? What about a frog? I don't know where to draw the line, and it's probably a grey area with different levels of self-awareness (I suppose you equate this with consciousness?). Anyway, just adding infinite amounts of computing power to today's systems of course won't result in self-awareness, intelligence at human levels, or conscious thinking - but increasing computing power together with advancements in software very likely will at some point. If a machine mimics a human being so well that it is completely indistinguishable from a human being, then how is it really different? By the way, this is the famous Turing test proposed by one of the first great computer scientists: Turing test - Wikipedia, the free encyclopedia
 
Anyway, just adding infinite amounts of computing power to today's systems of course won't result in self-awareness, intelligence at human levels, or conscious thinking - but increasing computing power together with advancements in software very likely will at some point.
By the same token, what evidence or science is there to support this statement? Intelligence at human levels or beyond seems achievable, eventually. But how can we quantify self awareness? All I know is that I am aware of my own existence, though there's no way I can prove this to anyone. If self awareness can't be quantified or detected, i.e., if it's not testable, then it's beyond the domain of science, at least for now.
 
I have a different take. I think we're centuries from real AI. I don't think we even begin to understand just how complex and subtle our brains are.

I could be wrong, of course, but to date I've seen nothing to dispel that opinion.

The problem for me is that I don't think we have a good definition of "real AI". We might not recognize it if we saw it. By way of a small example, in the last week I was stunned by the fact that Google search seemed to know more than I did about what it was I was actually searching for.
 
By the same token, what evidence or science is there to support this statement? Intelligence at human levels or beyond seems achievable, eventually. But how can we quantify self awareness? All I know is that I am aware of my own existence, though there's no way I can prove this to anyone. If self awareness can't be quantified or detected, i.e., if it's not testable, then it's beyond the domain of science, at least for now.

Of course it can be tested. There are multiple elegant experiments from the fields of biology/animal studies and child psychology that test whether or not a living organism is self-aware, and at what level.
Mirror test - Wikipedia, the free encyclopedia
Mirror stage - Wikipedia, the free encyclopedia


Of course we can quite easily design such tests adapted for computer/AI systems.
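
For illustration, here is a toy sketch in Python of how a mirror-test-style check might be adapted for a software agent (the Agent class, the feed functions and the pass criterion are all invented stand-ins of mine, not an established protocol): the agent injects a marker into the world, then checks whether that marker shows up in what it observes, i.e., whether it recognises the "reflection" as itself.

```python
import random


class Agent:
    """Toy agent that can emit a marker and observe an external feed."""

    def __init__(self):
        self.last_marker = None

    def emit_marker(self):
        # Analogous to the dye mark in the animal mirror test.
        self.last_marker = random.random()
        return self.last_marker

    def recognises_self(self, observed_feed):
        # The agent "recognises itself" if its own marker appears in what it observes.
        return self.last_marker is not None and self.last_marker in observed_feed


def mirror_test(agent, build_feed, trials=100):
    """Fraction of trials in which the agent spots its own marker in the feed."""
    passes = 0
    for _ in range(trials):
        marker = agent.emit_marker()
        if agent.recognises_self(build_feed(marker)):
            passes += 1
    return passes / trials


# A "mirror" feed reflects the agent's marker back; a control feed does not.
mirror_feed = lambda marker: {marker, random.random()}
control_feed = lambda marker: {random.random(), random.random()}

agent = Agent()
print(mirror_test(agent, mirror_feed))   # ~1.0: the agent spots itself
print(mirror_test(agent, control_feed))  # ~0.0: nothing to recognise
```

Passing such a check only demonstrates behavioural self-recognition, of course; it says nothing about subjective experience.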

- - - Updated - - -

The problem for me is that I don't think we have a good definition of "real AI". We might not recognize it if we saw it. By way of a small example, in the last week I was stunned by the fact that Google search seemed to know more than I did about what it was I was actually searching for.

Very much agree with this. As much as I am intrigued by the concept of a "point of Singularity", I think it is a gradual process where today we take for granted what was a dream just a while back, and hence over the coming years and decades we will not recognize AI as a separate entity; rather, it will fuse seamlessly with our own intellects and minds and expand our consciousness. We are on the path. Today we take for granted that at any time we can google anything with our smartphones, from anywhere, and in fact have much of the knowledge of the human race at our disposal at any moment. If this is not expanding your consciousness, I don't know what is.
 
There are multiple elegant experiments from the fields of biology/animal studies and child psychology that test whether or not a living organism is self-aware, and at what level.
Those experiments, i.e. an infant, toddler, or chimp looking in the mirror, are testing for a "sense of self" and that is apparently equated with "self awareness". I agree that if that is all that is meant by self awareness, then this definitely could be achieved by machines.
 
AI improvements don't have to achieve parity with humans to cause great mischief. My prediction is that in only a few decades, robotic intelligence will be able to design and build other robots to do virtually all necessary work. A small number of human families (Musks, Kochs, Waltons, etc.) will have the capital to own ALL of these robots. There will be no need for vast swaths of worker humans! How might the wealth created by these robots be distributed? What incentive would the oligarchs have to keep other humans alive? Don't laugh; this is the probable outcome of two trend lines: robotic AI and wealth concentration. I find it ominous. Your thoughts?
 
Careful, don't draw linear conclusions in a world where computational power per dollar is growing exponentially. Do you have any idea how vastly powerful our computing resources will be in just two or three decades?
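
To put rough numbers on that (purely illustrative, and assuming price-performance keeps doubling roughly every two years, which is of course not guaranteed):

```python
# Compound growth in compute per dollar, assuming one doubling every two years.
# The doubling period is an illustrative assumption, not a prediction.
DOUBLING_PERIOD_YEARS = 2

for horizon in (20, 30):
    factor = 2 ** (horizon / DOUBLING_PERIOD_YEARS)
    print(f"{horizon} years -> roughly {factor:,.0f}x the compute per dollar")

# 20 years -> roughly 1,024x the compute per dollar
# 30 years -> roughly 32,768x the compute per dollar
```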

Having gotten started with computers in the '70s, I'm fully aware. My first home-built computer started with an awesome 256 bytes of RAM, and eventually reached the maximum of 64k. All built by hand with wire-wrap.

Simply making the computers faster doesn't make them smarter. They still need to be programmed. And we don't know how the brain works, so we don't know how natural intelligence works. When a lot more progress is made in that area, we might have a chance of building AI.
 
AI improvements don't have to achieve parity with humans to cause great mischief. My prediction is that in only a few decades, robotic intelligence will be able to design and build other robots to do virtually all necessary work. A small number of human families (Musks, Kochs, Waltons, etc.) will have the capital to own ALL of these robots. There will be no need for vast swaths of worker humans! How might the wealth created by these robots be distributed? What incentive would the oligarchs have to keep other humans alive? Don't laugh; this is the probable outcome of two trend lines: robotic AI and wealth concentration. I find it ominous. Your thoughts?

That is a very good point and I've seen it elsewhere. If the few super-rich own the robots that create all the goods, those goods do not turn into wealth unless they can be sold to the masses. Which means there has to be a way for us (the 99.9%) to buy the stuff. Which would require some major changes to the social system.
 
To answer the original question posed in this thread, one has to keep in mind what purpose the AI serves, how capable it is of moving in the physical world, what it is packaged in, and what power it has to execute actions without human control. These parameters determine the risk of harm and the adequate measures to control that risk.

When I think of AI, I think of robots, as I work around them and on them. Each robot in operation requires physical safety barriers around its working area to protect individuals nearby. Robots may fail as sensors get fogged, mechanical parts wear out, or programming bugs and damaged wiring cause havoc. Perhaps future robots will be better able to distinguish between a dirty sensor and an engaged sensor, and will be able to self-clean. Until they do, there is a risk of injury or crash, and that risk requires adequate control measures. We do not abandon these beautiful, powerful and useful machines because someone might get hurt; we control the risk to the best of our ability and we derive the benefits. I also must say that the majority of robot failures are preceded by, or caused by, human failure or error.

Another area of application might be simple computing power and data analysis. In this case the AI is neatly packaged in a computer that does not move and cannot physically harm anyone. The only check required might be some built-in system redundancy to cross-check the performance and results.
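
As a sketch of what that built-in redundancy could look like, a classic approach is majority voting across independent replicas of the same computation (this is my own toy example, not any particular industrial system):

```python
from collections import Counter


def majority_vote(results):
    """Return the value a strict majority of redundant units agrees on,
    or None if there is no majority (treat that as a fault to flag)."""
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) / 2 else None


# Three redundant computations of the same quantity; one unit has drifted.
print(majority_vote([42.0, 42.0, 37.5]))  # 42.0 -> the faulty unit is outvoted
print(majority_vote([1.0, 2.0, 3.0]))     # None -> no agreement, raise a fault
```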

I can imagine the usefulness of AI in space exploration and perhaps in some surgical instruments, where AI is likely to perform better than humans.

There are numerous other potential applications, too many to mention. My concern with any AI that is packaged in a physical form is the risk of failure and the consequences of that failure. I have the same concern with humans (including myself).

The question is whether we can build machines that outperform humans. The answer is glaring. We see the overwhelming evidence of that all around us every day. Any computer outperforms humans in simple computational speed and data processing. Machines outperform humans on the factory floor by many orders of magnitude. I can see great benefits to all of us from further development of AI on all fronts, but with careful consideration for safety in case of failure.

On one side of the argument, we have a proven track record of benefits to humanity on many levels, provided by progress in the area of intelligent machines. The other side of the argument is fear of the unknown. Fear is a very valid feeling, helping our self-preservation, and it needs to be addressed. Once it is adequately addressed, it would be a shame to stop pursuing further development because of uncertainty about the outcome. Why stop now? Perhaps the fearful ones among us can step back and let the less fearful take the lead. My salute goes to courage.

In some new world, in which machines can do most of the stuff that we do now, I will not be an engineer any more, I will be a poet.:cool:
 
Simply making the computers faster doesn't make them smarter. They still need to be programmed. And we don't know how the brain works, so we don't know how natural intelligence works. When a lot more progress is made in that area, we might have a chance of building AI.

Agreed, but brain scanning technology is also improving exponentially. There is also plenty of evidence that suggests that software is becoming exponentially more sophisticated. As a software engineer myself, the reasons for this are pretty obvious and I'd be happy to dive into that if anyone is interested. It isn't going to require centuries for either of those to reach the required threshold.
 
Agreed, but brain scanning technology is also improving exponentially. There is also plenty of evidence that suggests that software is becoming exponentially more sophisticated. As a software engineer myself, the reasons for this are pretty obvious and I'd be happy to dive into that if anyone is interested. It isn't going to require centuries for either of those to reach the required threshold.

Please do tell. Is it because software has evolved to a point where it is able to, within certain limits, improve itself? I'm thinking thus far by mimicking biological evolution, i.e., trial and error?
 
It is because we use the previous generation of software to build the next generation. So as an example (loosely speaking) someone writes an application in Dart, which compiles to JavaScript, which sends requests to a Java app server, which runs in a JVM written in C++, which interacts with an OS written in C, which uses assembly to interact with the processor, which really just cares about binary.

Theoretically speaking, could I have written straight binary to do the same thing? Sure. But that'd be insane. The kind of applications we build today are just impractical with yesterday's tools. But, yesterday's tools were sufficient to build today's tools.

There are plenty of other examples, but the point is that software development is all about building reusable functions/components so that nobody ever needs to solve the same problem again. I can write a file manager without ever getting into the nitty gritty of how to communicate with the hard drive. I can write an app that sends text messages without understanding the protocol used to communicate with a cell network. We encapsulate these solutions and then use those blocks to build bigger, more complex systems.
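
To make that concrete with a deliberately trivial example: the one-line version below leans on decades of accumulated layers, each of which was once somebody's hard problem.

```python
# High level: one line, with no knowledge of file systems, device drivers or disk protocols.
with open("notes.txt", "w") as f:
    f.write("hello")

# What that one line stands on, roughly: the Python runtime hands the call to C code,
# which issues operating-system calls, which drive the file system, which talks to the
# block-device driver, which finally moves bits on the physical storage. Each layer was
# solved once, encapsulated, and has been reused by everything built above it ever since.
```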

So, I'm not worried about software being able to hold up its end. As long as brain scanning can get us a clear enough picture so that we can see what is going on, we will be able to model it in software. Now, there is an open question about whether or not modeling the physics/biology of the system is sufficient; but, that's more of a philosophical question. One that I think can only really be answered by trying it.

An aside: some here were arguing about whether or not we'd be able to test if a system was truly intelligent/self-aware. I think this is a pointless argument. I can't prove that you are self-aware. For all I know, I'm crazy and you are a figment of my imagination. Or I'm in the Matrix and you are just a software program. I agree with the school of thought that says that any system that can consistently trick us into thinking it is self-aware is as good as self-aware from anyone's perspective other than its own. There is no test we can perform that will really tell us for sure what the entity's subjective experience is, so it is a philosophical question, not a scientific one.
 
It is because we use the previous generation of software to build the next generation. So as an example (loosely speaking) someone writes an application in Dart, which compiles to JavaScript, which sends requests to a Java app server, which runs in a JVM written in C++, which interacts with an OS written in C, which uses assembly to interact with the processor, which really just cares about binary.

Theoretically speaking, could I have written straight binary to do the same thing? Sure. But that'd be insane. The kind of applications we build today are just impractical with yesterday's tools. But, yesterday's tools were sufficient to build today's tools.

There are plenty of other examples, but the point is that software development is all about building reusable functions/components so that nobody ever needs to solve the same problem again. I can write a file manager without ever getting into the nitty gritty of how to communicate with the hard drive. I can write an app that sends text messages without understanding the protocol used to communicate with a cell network. We encapsulate these solutions and then use those blocks to build bigger, more complex systems.

So, I'm not worried about software being able to hold up its end. As long as brain scanning can get us a clear enough picture so that we can see what is going on, we will be able to model it in software. Now, there is an open question about whether or not modeling the physics/biology of the system is sufficient; but, that's more of a philosophical question. One that I think can only really be answered by trying it.

An aside: some here were arguing about whether or not we'd be able to test if a system was truly intelligent/self-aware. I think this is a pointless argument. I can't prove that you are self-aware. For all I know, I'm crazy and you are a figment of my imagination. Or I'm in the Matrix and you are just a software program. I agree with the school of thought that says that any system that can consistently trick us into thinking it is self-aware is as good as self-aware from anyone's perspective other than its own. There is no test we can perform that will really tell us for sure what the entity's subjective experience is, so it is a philosophical question, not a scientific one.

(animated GIF: Citizen Kane clapping)
 
Please do tell. Is it because software has evolved to a point where it is able to, within certain limits, improve itself? I'm thinking thus far by mimicking biological evolution, i.e., trial and error?

Self-improving software certainly exists; there are many different automatic learning systems, including neural networks, support vector machines, dynamic decision trees and more. There are also methods to automatically generate code that solves problems according to a high-level specification. For example, my CS master's thesis (over 20 years ago) was about the development of an AI system that could write mathematically proven correct code to solve problems described in a high-level mathematical language. So if you described a sequence of numbers as the input, and specified that the output has to contain the same numbers in increasing order, then the system wrote the code for a sorting algorithm. Of course it had lots of limitations on what kind of specifications it could handle, but it was a successful proof of concept that this can be done.
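
To give a feel for what coding against a specification means, here is a toy Python checker of my own (not the thesis system, and random testing is of course far weaker than the mathematical proofs that system produced): the sorting specification itself is executable, and any candidate program, hand-written or machine-generated, can be checked against it.

```python
import random


def meets_sorting_spec(candidate, xs):
    """Specification: the output contains exactly the same numbers as the input,
    arranged in non-decreasing order."""
    ys = candidate(list(xs))
    same_numbers = sorted(xs) == sorted(ys)
    in_order = all(a <= b for a, b in zip(ys, ys[1:]))
    return same_numbers and in_order


# A candidate implementation (imagine it was produced by a synthesis system).
def candidate_sort(xs):
    return sorted(xs)


# Exercise the candidate against the specification on random inputs.
trials = [[random.randint(0, 100) for _ in range(random.randint(0, 10))]
          for _ in range(1000)]
print(all(meets_sorting_spec(candidate_sort, t) for t in trials))  # True
```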
 
An aside: some here were arguing about whether or not we'd be able to test if a system was truly intelligent/self-aware. I think this is a pointless argument. I can't prove that you are self-aware. For all I know, I'm crazy and you are a figment of my imagination. Or I'm in the Matrix and you are just a software program. I agree with the school of thought that says that any system that can consistently trick us into thinking it is self-aware is as good as self-aware from anyone's perspective other than its own. There is no test we can perform that will really tell us for sure what the entity's subjective experience is, so it is a philosophical question, not a scientific one.
For now, the question is merely academic, as we haven't yet created machines that can appear to be self-aware like humans. I also agree that it's philosophical and not scientific in nature. However, it could become relevant if/when machines possess human-like intelligence, as some will argue that such machines should be accorded "human" rights.
 
For now, the question is merely academic, as we haven't yet created machines that can appear to be self-aware like humans. I also agree that it's philosophical and not scientific in nature. However, it could become relevant if/when machines possess human-like intelligence, as some will argue that such machines should be accorded "human" rights.

Don't overthink this. A considerate, loving AI will show you compassion and you will want to grant it rights, believe me.