Welcome to Tesla Motors Club

Are you with Musk or Hawking on AI

... we're not that far from understanding the nitty-gritty details of how neurons interact. Once that is modeled with high enough accuracy we will, to borrow your own words, "know how the brain works". This is not to say that we will be able to do real-time whole-brain emulation, just that we will (in fact, in most senses of the word we know today) know the basics of how the brain works. This takes away the magic or mystical aspect of consciousness and reduces it to an emergent phenomenon of a sufficiently complex system.
Do you believe that all human subjective experience arises in the brain and nervous system (if you believe something other than that, it's difficult to follow any line of logical argumentation since one would be mixing in magical thought)?
This question is key. I believe your assumption, that what we understand as "consciousness" is simply an emergent phenomenon, necessarily follows from the belief that each human being's consciousness derives solely from the computer that is the brain. But therein lies the mystery. I subjectively "know" that I exist, though I cannot prove this to anyone. I seem to possess some "spark of life", yet I cannot prove outside myself that I am anything but a complex machine. Out of all of the lives in this world, how is it that I am experiencing this particular life? It seems so improbable that I would even exist at all. A state of nothingness would seem to be so much more reasonable, and likely.

Faced with questions like this, a great many of us turn to the ideas that seem to have the greatest explanatory power. The idea that the Big Bang was meticulously planned by an eternal, transcendent, conscious entity who then bestows a spark of life upon each brain in the created universe is logical even if not necessarily provable. While it strongly appears that in our day to day lives here on Earth, our memories and perceptions do derive entirely, or almost entirely, from measurable activity in our brains, there seems to be something that makes us "alive". In any case, there are enough mysteries in our understanding of reality, including in the realm of physics, that I feel humility is warranted. At this point, as to the emergence of what we seem to be calling consciousness, all we really have is theories.

At any rate, if I cannot prove that I am anything more than a supercomputer, how could anyone prove that an AI is conscious in the same manner as a human? For some time, the question as to whether AIs should have "human rights" may be impossible for everyone to agree on.
 

We had this discussion before in this thread :)

See your post nr 39, my reply post 40 and the posts following it.

Are you with Musk or Hawking on AI - Page 4
 
Yes, certainly. Some of the key questions haven't really changed. Perhaps the best we can do is continue to try to understand one another's viewpoints more clearly. So I hope that my most recent post (above) was helpful at some level.
 
Faced with questions like this, a great many of us turn to the ideas that seem to have the greatest explanatory power. The idea that the Big Bang was meticulously planned by an eternal, transcendent, conscious entity who then bestows a spark of life upon each brain in the created universe is logical even if not necessarily provable.

That doesn't sound even remotely logical.
 
Your post was very good for further discussion. I just disagree with some of the core concepts, especially the magical thinking concerning human consciousness and the argument that artificial intelligences will not evoke our empathy and sympathy, and that we will not want to respect them and grant them rights.
 
This question is key. I believe your assumption, that what we understand as "consciousness" is simply an emergent phenomenon, necessarily follows from the belief that each human being's consciousness derives solely from the computer that is the brain. But therein lies the mystery.

This summarizes my view on this subject:

"Consciousness is in fact so weird, and so poorly understood, that we may permit ourselves the sort of wild speculation that would be risible in other fields. We can ask, for instance, if our increasingly puzzling failure to detect intelligent alien life might have any bearing on the matter. We can speculate that it is consciousness that gives rise to the physical world rather than the other way round. The 20th-century British physicist James Hopwood Jeans speculated that the universe might be ‘more like a great thought than like a great machine.’ Idealist notions keep creeping into modern physics, linking the idea that the mind of the observer is somehow fundamental in quantum measurements and the strange, seemingly subjective nature of time itself, as pondered by the British physicist Julian Barbour. Once you have accepted that feelings and experiences can be quite independent of time and space (those causally connected but delocalised cogwheels), you might take a look at your assumptions about what, where and when you are with a little reeling disquiet.

I don’t know. No one does. And I think it is possible that, compared with the hard problem, the rest of science is a sideshow. Until we get a grip on our own minds, our grip on anything else could be suspect. It’s hard, but we shouldn’t stop trying. The head of that bird on the rooftop contains more mystery than will be uncovered by our biggest telescopes or atom smashers. The hard problem is still the toughest kid on the block."

Will we ever get our heads round consciousness? — Ae...
 
Thank you for passing that along. Clearly we all have our own worldviews and biases, but the key takeaway here, I think, is that solving the hard problem of consciousness should not be presumed to be as simple as improving our understanding of the brain's circuitry.

I just disagree with some of the core concepts, especially the magical thinking concerning human consciousness and the argument that artificial intelligences will not evoke our empathy, sympathy and that we will not want to respect them and give them rights.
As to "magical thinking", it seems to me that reality itself is quite magical. I mean, why should anything even exist at all? It seems logical to me that *something* exists eternally, that is, not bound by time or with beginning or end. Our universe does have a beginning, if you can call emerging from something resembling a singularity a beginning. Today, of course, there is great speculation that we are actually in a multiverse, but that doesn't seem to remove the need for something that is eternal. Some speculate that the multiverse itself, if it exists, is eternal. But it's really a bunch of guesses, and in your words, "magical thinking". So my belief that the universe arose from an eternal, conscious entity, which remains deeply connected with the universe, and from which our consciousness springs, is really not that crazy. While I cannot prove this is true, and I respect other viewpoints, I have for a couple of decades found this belief to be consistent with my life experience and learning. And it is significant here, in this discussion, because from this belief it would follow that no AI is likely to ever achieve what we are calling consciousness. That isn't to say that an AI couldn't appear to be conscious when viewed from the outside.

So, given that an AI could potentially appear to be conscious, an AI could certainly evoke our empathy and sympathy. This isn't much of a stretch, as fictional characters in novels evoke empathy and sympathy today. It seems to me, however, that granting "human rights" to AIs would be quite problematic. If many copies of a given AI are made, for instance, shouldn't we humans be free to delete some or all of those copies at will if a better version comes along or not all are needed? I'd hate to see this equated with murder. On the other hand, if one subscribes to the view that consciousness is merely an emergent property of a sufficiently complex computer or software program, then it would follow, I think, that such systems should not be created and destroyed at will, much as we consider it immoral to commit infanticide against humans; we'd more or less be able to turn them on but not off.
 
Thanks for your thoughts, abasile. Unfortunately, it sounds too much to me like Deepak Chopra interpreting quantum dynamics and infusing it with mysticism.

As to the point of the appearance of consciousness vs. "true consciousness": if you think about it a little more, I'm sure you'll see how absurd that notion is.

If an entity acts and behaves as a conscious being, connects with us humans on an intellectual and emotional level in the same way we do with other humans, and if, when interacting with it, there is absolutely no way for us to tell whether it is a computer or a biological living organism, then of course it is conscious.

One more thing: basing a core part of your argument on personal experience and personal incredulity is a (big) logical fallacy.
 
Thank you for the conversation!

As to the point of the appearance of consciousness vs. "true consciousness": if you think about it a little more, I'm sure you'll see how absurd that notion is.

If an entity acts and behaves as a conscious being, connects with us humans on an intellectual and emotional level in the same way we do with other humans, and if, when interacting with it, there is absolutely no way for us to tell whether it is a computer or a biological living organism, then of course it is conscious.
To be clear, while I would not rule out the possibility of such a machine being created, neither would I regard such a development as inevitable. To address your point, "true consciousness" (in your words) would be defined as possessing an inner awareness of one's own existence. (Previously, in this thread, I referred to this as "self awareness", but I fear that terminology doesn't quite capture my intended meaning; we are talking about something deeper than an entity having the ability to recognize itself in a mirror.) Anyway, my position is that I really have no way of knowing with absolute certainty whether any other human is "truly conscious" as I subjectively know myself to be. I can deduce that this is true, as all other humans came into the world in substantially the same manner as I did; it seems quite reasonable for me to assume that if I am not a "zombie", neither are other humans.

One more thing: basing a core part of your argument on personal experience and personal incredulity is a (big) logical fallacy.
The problem is that "personal experience" is central to what we are discussing, i.e., what does it mean to exist and to be human? My goal here isn't to provide logical proof of my beliefs about our existence and what I see as a fundamental difference between humans and AIs. If I can merely demonstrate the reasonableness and logical consistency of these beliefs relative to other viewpoints, then I'm happy.

In interpreting reality, we have to make key assumptions. You appear to be assuming that we as conscious beings, and the universe in general, ultimately arose from inanimate matter, energy, or substance of some form. Thus, it is logical for you to believe that consciousness is simply an emergent property of sufficiently complex matter. When we, on the other hand, assume that the universe arose from a conscious entity, it follows that consciousness is a fundamental yet not quite understood part of reality. Even in this case, consciousness may be an emergent property of matter, but that is not a foregone conclusion. As to the reasonableness of assuming that the universe arose from a conscious entity, there is an extremely high degree of apparent fine tuning in the physical constants of the universe, in its composition, in our particular galactic and stellar neighborhoods, in our solar system and earth system, in the timing of our arrival on earth, etc., etc. It seems logical to assume that either we were planned by an eternal, conscious entity, or our universe is but one of a virtually infinite number of universes, and that ours just happened to be fit for advanced life. In the latter, multiverse scenario, I suppose one could argue that a "universe generating machine" could be eternal in nature, creating new universes without beginning or end. Either way, it seems that you would refer to this as "magic".
 
Thank you for the conversation!

To be clear, while I would not rule out the possibility of such a machine being created, neither would I regard such a development as inevitable. To address your point, "true consciousness" (in your words) would be defined as possessing an inner awareness of one's own existence. (Previously, in this thread, I referred to this as "self awareness", but I fear that terminology doesn't quite capture my intended meaning; we are talking about something deeper than an entity having the ability to recognize itself in a mirror.) Anyway, my position is that I really have no way of knowing with absolute certainty whether any other human is "truly conscious" as I subjectively know myself to be. I can deduce that this is true, as all other humans came into the world in substantially the same manner as I did; it seems quite reasonable for me to assume that if I am not a "zombie", neither are other humans.

I agree with this loose definition of [actual] consciousness and I agree that it is reasonable to assume other humans are not philosophical zombies.

The problem is that "personal experience" is central to what we are discussing, i.e., what does it mean to exist and to be human? My goal here isn't to provide logical proof of my beliefs about our existence and what I see as a fundamental difference between humans and AIs. If I can merely demonstrate the reasonableness and logical consistency of these beliefs relative to other viewpoints, then I'm happy.

In interpreting reality, we have to make key assumptions. You appear to be assuming that we as conscious beings, and the universe in general, ultimately arose from inanimate matter, energy, or substance of some form. Thus, it is logical for you to believe that consciousness is simply an emergent property of sufficiently complex matter.

You've captured one of the essential points of my argument very well here.

When we, on the other hand, assume that the universe arose from a conscious entity, it follows that consciousness is a fundamental yet not quite understood part of reality.

I don't understand why one would make such an assumption. I don't see any particular good reason to assume this.

Even in this case, consciousness may be an emergent property of matter, but that is not a foregone conclusion. As to the reasonableness of assuming that the universe arose from a conscious entity, there is an extremely high degree of apparent fine tuning in the physical constants of the universe, in its composition, in our particular galactic and stellar neighborhoods, in our solar system and earth system, in the timing of our arrival on earth, etc., etc. It seems logical to assume that either we were planned by an eternal, conscious entity, or our universe is but one of a virtually infinite number of universes, and that ours just happened to be fit for advanced life. In the latter, multiverse scenario, I suppose one could argue that a "universe generating machine" could be eternal in nature, creating new universes without beginning or end. Either way, it seems that you would refer to this as "magic".

Aha, the fine-tuning argument to support the assumptions you're making. You really should read about the concept of what's called the Anthropic principle. In an attempt at a very short summary, consider this: Any intelligent and conscious life form with the ability to observe itself and reflect upon its own existence will inevitably find itself in a universe with the properties required for intelligent life to arise, on a planet with the right conditions, and at the end of an evolutionary chain leading up to intelligent life. However, this fact says nothing at all about the underlying probabilities for this apparent fine tuning. The probability of life, and afterwards intelligent life, arising in the universe, and of the universal constants having the values they have allowing this to happen, may well be 1 in 10^100. We would still find ourselves in just such a universe, at such a time, and on such a planet.

My personal view is that most likely life arose without intent or meaning, and that there isn't any particular reason, will or meaning behind anything existing in the first place. The phenomenon of consciousness has just as probably (or improbably, however you want to view it) arisen as a consequence of evolution, which is itself a stochastic process that doesn't have any other purpose or meaning than to keep evolving life further by an iterative process that is at its foundation driven by randomness but at the same time guided by natural selection (a search for an optimized fitness function). Apparently consciousness is a beneficial evolutionary trait, seeing as the species with the most of it has taken the reins of this planet in a very short time span since evolving it.

Still, there really is no reason to invoke magic thinking.
 
It seems logical to me that *something* exists eternally, that is, not bound by time or with beginning or end. Our universe does have a beginning, if you can call emerging from something resembling a singularity a beginning. Today, of course, there is great speculation that we are actually in a multiverse, but that doesn't seem to remove the need for something that is eternal. Some speculate that the multiverse itself, if it exists, is eternal. But it's really a bunch of guesses, and in your words, "magical thinking". So my belief that the universe arose from an eternal, conscious entity, which remains deeply connected with the universe, and from which our consciousness springs, is really not that crazy.

Why are you comfortable with an eternal conscious entity (outside of the universe and reality), but not an eternal universe? That seems to be simply moving the goalposts, or, more appropriately, the starting line.
Obviously our universe did not begin with the big bang; something preceded it, and a multiverse is not the only explanation of whence it came. It could simply be that the current form of the universe traces back to the big bang (currently expanding from there), but the big bang could have resulted from a big collapse, which may at some point happen again.
 
Aha, the fine-tuning argument to support the assumptions you're making. You really should read about the concept of what's called the Anthropic principle. In an attempt at a very short summary, consider this: Any intelligent and conscious life form with the ability to observe itself and reflect upon its own existence will inevitably find itself in a universe with the properties required for intelligent life to arise, on a planet with the right conditions, and at the end of an evolutionary chain leading up to intelligent life. However, this fact says nothing at all about the underlying probabilities for this apparent fine tuning. The probability of life, and afterwards intelligent life, arising in the universe, and of the universal constants having the values they have allowing this to happen, may well be 1 in 10^100. We would still find ourselves in just such a universe, at such a time, and on such a planet.
I am familiar with the Anthropic Principle, and my understanding is that the probability of a universe with advanced life may be substantially lower than 1 in 10^100, though we cannot know for sure, and in any event the exact probability doesn't really change the arguments. It appears that we can agree that the key question here is: are we merely the lucky beneficiaries of a series of extremely improbable events, or was there intentionality?

Why are you comfortable with an eternal conscious entity (outside of the universe and reality), but not an eternal universe? That seems to be simply moving the goalposts, or, more appropriately, the starting line.
Obviously our universe did not begin with the big bang; something preceded it, and a multiverse is not the only explanation of whence it came. It could simply be that the current form of the universe traces back to the big bang (currently expanding from there), but the big bang could have resulted from a big collapse, which may at some point happen again.
Based on current science, I think an eternal universe is less likely than either a multiverse or an eternal conscious entity. It appears that our universe is destined to continue expanding forever, becoming a darker and colder place as the eons pass. The alternative you propose would seem to involve an infinite oscillation between Big Bang and "Big Crunch" events. We don't seem to be undergoing such a cycle. I'm not saying the probability is zero, though.

Finally, I'd be lying if I were to claim to be unbiased here. I certainly prefer believing that consciousness is more than simply an emergent property and that human beings are special. At the same time, I want to understand reality for what it is, and I feel that I should be open to following hard evidence wherever it leads.
 
Based on current science, I think an eternal universe is less likely than either a multiverse or an eternal conscious entity.

Yet there is no current science, or any evidence, of an eternal conscious entity.

It appears that our universe is destined to continue expanding forever, becoming a darker and colder place as the eons pass.

Yes, at this point in time that's how it appears. That does not mean that's how it will actually work. Think of a balloon that keeps being pumped up: it expands continually until it is stretched too thin, rips apart, and collapses on itself.

The alternative you propose would seem to involve an infinite oscillation between Big Bang and "Big Crunch" events. We don't seem to be undergoing such a cycle.

Or, we could be in the early stages of such a cycle.

Finally, I'd be lying if I were to claim to be unbiased here. I certainly prefer believing that consciousness is more than simply an emergent property and that human beings are special.

I guess I don't care either way, I'm just interested in learning the truth. As for humans being special, if every one of us disappeared tomorrow I doubt the universe would even notice.
 
Getting back to the topic of "artificial" intelligence - one of my thoughts is that we won't know for certain when consciousness arises in machines. That is, if they don't have a way to express themselves outside of the program they're executing, it could be a form of imprisonment, even torture. For all we know, this is already the case. That said, I'm with Andrew Ng, as I may have mentioned before, in saying that my concerns about AGI are on par with my concerns about overpopulation on Mars. It could certainly become an issue, but nothing I can do today will make a huge difference.

Getting back to what was said upthread about how AI is a moving target - it's definitely true. The most recent results on ImageNet are 95% proper classification. This sounds impressive, and it is, but maybe it's more impressive when you realize the human results are 94%. Remember, too, that this isn't a program humans wrote to classify the images. It's a program we wrote that teaches a computer how to classify images. That's a pretty important distinction. Teach a man to fish, and all that.
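That distinction — we write the learning procedure, and the classification rule itself comes out of the data — can be shown with a toy sketch. This is a hypothetical two-class perceptron on made-up 2-D points, nothing remotely like a real ImageNet model; the data, labels, and thresholds are all invented for illustration.

```python
# Toy illustration: we write the *training* procedure; the decision
# rule (the weights) is learned from labeled data, not hand-coded.

def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron update rule over (point, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred            # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# We supplied the labels, but never wrote an explicit "if x + y > 1" rule.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.1), 0),
        ((0.8, 0.9), 1), ((0.3, 0.3), 0), ((0.7, 0.7), 1)]
w, b = train_perceptron(data)
print(classify(w, b, (0.95, 0.9)))  # 1
print(classify(w, b, (0.05, 0.1)))  # 0
```

The same division of labor holds at ImageNet scale: humans author the architecture and the optimization loop, while the millions of parameters that actually do the classifying are fit from data.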

Headed to NIPS in a week and I'm really eager to take in the latest advancements. My opinion is that this is a field which has the greatest current opportunity to advance humanity. The concerns about AGI are real, but they're very, very far in the future. Once you see "under the hood", you realize we are a ways off from the real thing, no matter how well things are going.
 
Yet there is no current science, or any evidence, of an eternal conscious entity.
As modern science is generally conducted with materialist assumptions, this viewpoint is sort of expected. Atheists and theists have access to the same data, literature, etc., but they choose to interpret it differently. However, I'll leave it at that out of respect for the desire to keep further posts focused more narrowly on AI and consciousness.

Getting back to the topic of "artificial" intelligence - one of my thoughts is that we won't know for certain when consciousness arises in machines. That is, if they don't have a way to express themselves outside of the program they're executing, it could be a form of imprisonment, even torture. For all we know, this is already the case.
It's hard to escape questions about the nature of consciousness, and by extension reality, isn't it? If we believe AI can be conscious, that really opens up a moral can of worms, I think.
 
A very good and thought-provoking article countering Musk's (and others') fear of AGI:
Superintelligence: fears, promises, and potentials | KurzweilAI

Some quotes to highlight relevance to this thread:
In fact, it seems that Bostrom’s book may have been part of what raised the hackles of Gates, Musk, and Hawking regarding the potential near-to-medium term risks of advanced AI R&D.

Bostrom presents the “Scary Idea” more rigorously and soberly than anyone involved directly with SIAI/MIRI has so far, but ultimately his arguments for it aren’t any stronger than theirs. The basic gist remains the same:
1. There is a wide variety of possible minds, many of which would be much more powerful than humans and also indifferent to humans.
2. It is [possible/highly probable] that the highly powerful AGIs we humans create, and their descendants, will be indifferent to humans.
3. Advanced AGIs that are indifferent to humans and human values will likely do bad things to humans and do bad things from a human-values perspective.
Point 1 seems hard to argue with. Yes, human-like or human-friendly minds would seem to constitute a small fraction of the kinds of minds that are physically possible.
Point 3 is hard to know about. Maybe AGIs that are sufficiently more advanced than humans will find some alternative playground that we humans can’t detect, and go there and leave us alone. We just can’t know, any more than ants can predict the odds that a human civilization, when moving onto a new continent, will destroy the ant colonies present there.
Point 2 is the core. What can be rationally argued is that the outcome described is possible. It’s hard to rule out, since we’re talking about radically new technologies and unprecedented situations. However, what prompted my “Scary Idea” essay was the frequency and vehemence with which various SIAI staff and associated people argued that this outcome is not only possible but highly probable, or even almost certain. These SIAI folks liked to talk about how a “random mind,” plucked from mind-space, almost surely wouldn’t care about humans. Quite possibly not. But an artificial mind engineered and taught by humans is not random, and its descendants are not random either.

...
I read these passages and thought — Whoa!! What a huge win for the Yudkowsky/SIAI folks, to have their ideas written up and marketed in a way that managed to appeal to Bill Gates, Elon Musk, Stephen Hawking and so forth! And indeed the win was real financially as well as media-wise: Bostrom’s book helped entice Elon Musk to donate $10M to the Future of Life Institute for research into how to make AI beneficial rather than destructive, and the two biggest chunks of the FLI funding went to Bostrom’s Future of Humanity Institute (FHI) and Yudkowsky’s MIRI (in that order).


Yudkowsky and Bostrom are Bayesian rationalists and they both understand probability well. They know they have no rational basis to estimate the odds that AGI will prove massively destructive; otherwise they would give numerical estimates of destructive AGI-triggered events. Instead they use terms like “probably” and “almost surely” in such contexts, to indicate that what they are expressing as near-certainties are actually just their intuitions.
 
Kurzweil is putting words in the mouths of both Bostrom and Yudkowsky. Their cautious approach stems from their ideas on existential risk. This is Bostrom's reason for venturing into the field of AI; if you read his work from back in the '90s, he was concerned with existential risks to humanity. I believe it was through this research that he ended up concluding that future AI was one of the most potentially risky technologies.

Even if it's very hard to put a numerical risk on how probable dangerous AI would be, the risk is such a large one that we should consider it nonetheless. This isn't very different from, say, the way we handle nuclear weapons: it's very improbable that a nuke will go off accidentally, but the ramifications would be so catastrophic that we keep numerous safeguards in place to prevent it.
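The nuclear-safeguard analogy is essentially an expected-loss argument: a tiny probability multiplied by an enormous cost can still dominate the decision. A minimal sketch, with entirely made-up numbers chosen only to show the shape of the reasoning:

```python
# Hypothetical, illustrative numbers only - none of these are real estimates.
p_accident = 1e-6          # tiny probability of catastrophe
cost_catastrophe = 1e12    # enormous cost if it happens
cost_safeguards = 1e5      # comparatively cheap prevention

# Expected loss if we do nothing: probability times cost.
expected_loss_without = p_accident * cost_catastrophe

# Safeguards are rational whenever they cost less than the expected loss
# they avert - even though the bad event itself is very unlikely.
print(cost_safeguards < expected_loss_without)  # True
```

The point is not the specific numbers but that "improbable" and "ignorable" are different claims once the downside is large enough.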

Bostrom writes in one of his papers, or maybe in the Superintelligence book, that one could look at technological progress as humans pulling technology after technology out of a bag. Just because almost all of the technologies we've pulled out so far have been beneficial doesn't necessarily mean that everything we pull out of the bag is inherently good. There may in the future be technologies that we wish we hadn't discovered. The thing is, once we've pulled something out of the bag, it is not possible to put it back in. We can't undiscover what we discovered. Bostrom's main argument is that we should think properly about, if nothing else, the desired order in which we discover future technologies. For example, it could be a good thing to try to delay the advent of full-scale general AI until after we have better global coordination.