Welcome to Tesla Motors Club

Are you with Musk or Hawking on AI

Thank you Johan for your well reasoned post and thank you ZsoZso for reminding us that as our machines become more and more capable, we humans regularly change the rules on what constitutes "machine thinking" because we want to continue to believe that only humans can really "think".
As my friend Edsger Dijkstra once said, "The question of whether machines can think is about as relevant as the question of whether submarines can swim."
I am not sure if Edsger is making the point that machines can really "think" (analyze their environment and take action based on that analysis) and submarines can effectively "swim" (move through the water) -- or if only humans can think because by definition that is a strictly human capability and only living organisms can swim and submarines propel themselves through the water using human-designed mechanical power.
I suspect he means the former, and not the latter. In which case I would agree with him.
 
Interesting thread...FYI, here is a short text I wrote a while ago (I don't have anything against AI research of any kind, though, I find it very interesting):

As you can see, you are not a computer:
A computer knows everything as logical conclusions, basing them on the assumption that its input data is correct. It knows everything abstractly, via symbols and numbers. It cannot really distinguish reality from simulation.

Humans also have this kind of knowledge, yet additionally we know the subjective experience of feelings and colors as a fact. Although the objects we see might be illusions or dreams, we can realize that the conscious experience of color, feeling and meaning is a fact. Not abstractly, symbolically or simulated, but actual.

This is a kind of knowledge a computer (at least the kind programmed in languages like C++ or Lisp) cannot have. A computer only has symbolic logic and assumptions, not fact [in that sense].
 
Norbert, if you rely on your instincts telling you that because "you are you" and "you are a human" you somehow have access to a more "actual" or factual understanding of reality than some other entity, being or agent, then I suggest you haven't thought hard or deeply enough about the issues of subjective experience and what is known as the Hard Problem of Consciousness.

I suggest this read:
http://philsci-archive.pitt.edu/4888/1/Two_Conceptions_of_Subjective_Experience.pdf

And I'll lure you with the leading paragraph:

ABSTRACT: Do philosophers and ordinary people conceive of subjective experience in the same way? In this article, we argue that they do not and that the philosophical concept of phenomenal consciousness does not coincide with the folk conception. We first offer experimental support for the hypothesis that philosophers and ordinary people conceive of subjective experience in markedly different ways. We then explore experimentally the folk conception, proposing that for the folk, subjective experience is closely linked to valence. We conclude by considering the implications of our findings for a central issue in the philosophy of mind, the hard problem of consciousness.
 
Hi Johan,

That's definitely an interesting article. I will read it completely later on and report back. For now I'll just mention that I'm familiar with the writings of Daniel Dennett, David Chalmers, John Searle and Colin McGinn.

It seems both the article and your response raise the question of what the terms 'conscious experience' and 'subjective experience' refer to. So let me already clarify that I am, for example, referring to the simple fact that we see colors. Of course the 'blue' of the sky is not really out there in the sky. It seems too obvious a fact for us to even think about. Yet we do know as a fact that we see colors. Don't you? And we also dream in colors (or in black and white, which are colors too).
 
So let me already clarify that I am, for example, referring to the simple fact that we see colors.

What about the fact that we "see" only some colors? There are nuances of color that most of us cannot see, and people with various levels of colorblindness see fewer colors than most of us. Seeing colors only means we've evolved receptors for those wavelengths.
 
What about the fact that we "see" only some colors? There are nuances of color that most of us cannot see, and people with various levels of colorblindness see fewer colors than most of us. Seeing colors only means we've evolved receptors for those wavelengths.

Makes one wonder what a computer "sees" when it looks through radar and sonar... more colors than us? ;-)
 
Makes one wonder what a computer "sees" when it looks through radar and sonar... more colors than us? ;-)

What Is it Like to Be a Bat? - Wikipedia, the free encyclopedia

Nagel is getting at the core of the issue with his notion that having conscious experience means there is something it is "like" to be that thing. But there are methods by which one can imagine or place oneself in the position of a bat (or a computer using radar or sonar to scan its environment); so to speak, we can try to translate how such a sensory input might be experienced into a modality we are familiar with. IR cameras do just that, for example: they take the infrared spectrum that their sensors "see" and translate it into the visible light spectrum that we can perceive with our own sensors (eyes).
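To make the IR-camera point concrete, here is a tiny, purely illustrative sketch (hypothetical values, no real camera API) of false-color mapping: taking sensor readings from outside the visible spectrum and rendering them as visible colors:

```python
# Illustrative false-color mapping: normalized "IR" intensities are
# rendered on a simple cold-to-hot gradient (blue -> red). The sensor
# readings below are made up, not from any real device.

def ir_to_rgb(intensity, lo=0.0, hi=1.0):
    """Map a normalized IR intensity to an RGB triple."""
    t = max(0.0, min(1.0, (intensity - lo) / (hi - lo)))
    return (int(255 * t), 0, int(255 * (1 - t)))  # hot = red, cold = blue

# One row of "IR sensor" readings translated into visible colors:
readings = [0.1, 0.5, 0.9]
pixels = [ir_to_rgb(r) for r in readings]
print(pixels)  # the cold pixel is mostly blue, the hot pixel mostly red
```

The translation step is the whole trick: nothing in the infrared data is "colored" until we choose a mapping into our own perceptual modality.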
 
I believe the question "can a computer have consciousness" is the same as the problem of philosophical zombies.

https://en.wikipedia.org/wiki/Philosophical_zombie

I.e. we can't even be sure whether other people have consciousness :wink:

However since we know how a computer functions, we can say that *within its programming*, it will not know anything as a fact. So it would also not know that it would be conscious (as a fact), even if it were.
 
However since we know how a computer functions, we can say that *within its programming*, it will not know anything as a fact. So it would also not know that it would be conscious (as a fact), even if it were.

This view is based on the assumption that consciousness is a phenomenon that can only arise from something "too complex to understand how it functions" (i.e. the human brain). The "problem" is that we're not that far from understanding the nitty-gritty details of how neurons interact. Once that is modeled with high enough accuracy we will, to borrow your own words, "know how the brain works". This is not to say that we will be able to do real-time whole-brain emulation, just that we will (in most senses of the word we know today) know the basics of how the brain works. This takes away the magic or mystical aspect of consciousness and reduces it to an emergent phenomenon of a sufficiently complex system.

So your assumption (that since a computer functions by fundamentally different mechanisms than a human brain, and we understand these mechanisms in minute detail, a computer will never be truly conscious) is perfectly fine, but it's only an assumption - in no way a fact. And I, and many prominent philosophers and neuroscientists, disagree with this assumption.
 
This view is based on the assumption that consciousness is a phenomenon that can only arise from something "too complex to understand how it functions" (i.e. the human brain). The "problem" is that we're not that far from understanding the nitty-gritty details of how neurons interact. Once that is modeled with high enough accuracy we will, to borrow your own words, "know how the brain works". This is not to say that we will be able to do real-time whole-brain emulation, just that we will (in most senses of the word we know today) know the basics of how the brain works. This takes away the magic or mystical aspect of consciousness and reduces it to an emergent phenomenon of a sufficiently complex system.

So your assumption (that since a computer functions by fundamentally different mechanisms than a human brain, and we understand these mechanisms in minute detail, a computer will never be truly conscious) is perfectly fine, but it's only an assumption - in no way a fact. And I, and many prominent philosophers and neuroscientists, disagree with this assumption.

I'm not making an assumption about how consciousness arises, or if it is there in the first place. I am simply acknowledging what we know about computers (they don't know anything as a fact), and what we know about ourselves (we know as a fact that we see colors).

I don't think you have addressed that point, regardless of what you think philosophers are saying about their assumption that consciousness arises from the brain.
 
I'm not making an assumption about how consciousness arises, or if it is there in the first place. I am simply acknowledging what we know about computers (they don't know anything as a fact), and what we know about ourselves (we know as a fact that we see colors).

I don't think you have addressed that point, regardless of what you think philosophers are saying.

Fair enough. But it still seems you make assumptions that don't stand on any particularly firm or logical ground, i.e. that computers "don't know anything as a fact" while we humans "know xxx as fact". I don't see how this follows from any logical conclusion. Without being confrontational, but still: how would you even begin to "know" what a computer knows or doesn't know as a fact?
 
Fair enough. But it still seems you make assumptions that don't stand on any particularly firm or logical ground, i.e. that computers "don't know anything as a fact" while we humans "know xxx as fact". I don't see how this follows from any logical conclusion. Without being confrontational, but still: how would you even begin to "know" what a computer knows or doesn't know as a fact?

Well, based on working with computers, and also understanding their underlying logical concepts, I can say a computer has two ways of "knowing" something (and I wouldn't expect anyone to disagree who understands in detail the kind of computers we have today, and their logical extension into the future):

Either it has received it as input (from sensors, databases, and the internet, for example, and from those things implicit in its program code). Which for a human is like saying: I know it because someone told me so, and/or because it was in the news. Or the computer has derived it from making logical conclusions. Which for a human is like saying: I know it because it must be so.

However, when we see a color, directly in our consciousness, it is not something someone told us. It's not something we know because of something else. It is not an interpretation of our perception. It is the fact itself. And it is not a logical conclusion. It is not that we would think "I must be seeing a color, it is the only logical thing". Instead we know "I actually am seeing a color".
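To illustrate the two ways of "knowing" described above - received input versus logical conclusion - here is a minimal forward-chaining toy (not any real system; the fact and the rule are made up):

```python
# Toy knowledge base: everything the "computer" knows is either
# asserted input ("someone told me so") or derived by inference
# ("it must be so"). Both the fact and the rule are invented examples.

facts = {"sky_is_blue"}  # received as input (sensor/database/program)
rules = [({"sky_is_blue"}, "sky_scatters_short_wavelengths")]  # inference rule

# Forward-chain: keep applying rules until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # every entry traces back to input or to a logical step
```

Every item in the final set is reachable from the inputs via the rules - which is exactly the two-source picture of machine "knowledge" sketched in the post.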
 
Well, based on working with computers, and also understanding their underlying logical concepts, I can say a computer has two ways of "knowing" something (and I wouldn't expect anyone to disagree who understands in detail the kind of computers we have today, and their logical extension into the future):

Either it has received it as input (from sensors, databases, and the internet, for example, and from those things implicit in its program code). Which for a human is like saying: I know it because someone told me so, and/or because it was in the news. Or the computer has derived it from making logical conclusions. Which for a human is like saying: I know it because it must be so.

However, when we see a color, directly in our consciousness, it is not something someone told us. It's not something we know because of something else. It is not an interpretation of our perception. It is the fact itself. And it is not a logical conclusion. It is not that we would think "I must be seeing a color, it is the only logical thing". Instead we know "I actually am seeing a color".

I disagree. When you see a color, that's EXACTLY equivalent to a sensor input to a computer. A certain light wave reaches the receptors in your eyes, triggering an electric impulse in the appropriate nerves in your brain, which you interpret as seeing that color. Furthermore, your "fact" of seeing that color can very easily be fooled by triggering the same electric impulse in the neuron via an electrode attached to your brain -- in which case you have a wrong fact by misinterpreting the input, just the same as a computer can be fed input that is not factual.

There are two more problems with seeing a color as fact:
1. There are a lot of color-blind people who can't distinguish between certain colors. So how is it a fact that I see those colors as different? For them it is not a fact; for them the fact is that those are the same color -- so we see the same "input" as contradicting "facts".
2. If you raise a child with the definitions of red and green reversed (call the greens red and vice versa), then for that person green will be what I call red, simply because we learned to interpret the same input signal differently. In fact, you can't really prove that the sensation you get when looking at a red color is the same as the one I get. Maybe we see something totally different, but we both call it red, because that's what we learned to call it. What I am trying to express here is that it is simply a learned convention that we associate names with the sensory input of a color -- no more of a fact than how the computer interprets its sensory inputs.
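A quick hypothetical sketch of point 2: the name attached to a wavelength is just a learned lookup, so two agents trained with swapped labels report different "facts" about the same physical input (the wavelength ranges are the usual approximate values):

```python
# The same physical stimulus, two learned labelings. Observer B was
# "raised" with the red/green labels reversed - a hypothetical setup.

WAVELENGTH_NM = 650  # one physical stimulus, seen by both observers

labels_a = {(620, 750): "red", (495, 570): "green"}   # conventional labels
labels_b = {(620, 750): "green", (495, 570): "red"}   # swapped labels

def name_color(wavelength, labels):
    """Return the learned name for a wavelength, if any range matches."""
    for (lo, hi), name in labels.items():
        if lo <= wavelength <= hi:
            return name
    return "unknown"

print(name_color(WAVELENGTH_NM, labels_a))  # "red"
print(name_color(WAVELENGTH_NM, labels_b))  # "green"
```

Identical input, contradictory reports - the disagreement lives entirely in the learned mapping, not in the stimulus.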
 
I disagree. When you see a color, that's EXACTLY equivalent to a sensor input to a computer. A certain light wave reaches the receptors in your eyes, triggering an electric impulse in the appropriate nerves in your brain, which you interpret as seeing that color. Furthermore, your "fact" of seeing that color can very easily be fooled by triggering the same electric impulse in the neuron via an electrode attached to your brain -- in which case you have a wrong fact by misinterpreting the input, just the same as a computer can be fed input that is not factual.

We are talking about two completely different things. The process you describe is that of processing information. But that is not what defines a color: we also see colors when dreaming (or at least many do).

I am talking about the event of actually seeing a color, regardless of the reason, and regardless of which name you give to the color. The fact of seeing colors at all.
 
We are talking about two completely different things. The process you describe is that of processing information. But that is not what defines a color: we also see colors when dreaming (or at least many do).

I am talking about the event of actually seeing a color, regardless of the reason, and regardless of which name you give to the color. The fact of seeing colors at all.

Do you believe that all human subjective experience arises in the brain and nervous system (if you believe something other than that, it's difficult to follow any line of logical argumentation, since one would be mixing in magical thinking)? And if so, what would be so different about the human brain as an information-processing device compared to a computer? Point being: why wouldn't a sufficiently advanced computer have subjective experience ("know things for a fact", or for example have the notion of color), just like we humans do, dogs do, maybe mice, maybe flies, less likely nematodes?
 
Do you believe that all human subjective experience arises in the brain and nervous system (if you believe something other than that, it's difficult to follow any line of logical argumentation, since one would be mixing in magical thinking)? And if so, what would be so different about the human brain as an information-processing device compared to a computer? Point being: why wouldn't a sufficiently advanced computer have subjective experience ("know things for a fact", or for example have the notion of color), just like we humans do, dogs do, maybe mice, maybe flies, less likely nematodes?

What do you mean with "advanced enough"? Would it do something else than collecting numerical, abstract information, and performing logical conclusions?
 
What do you mean with "advanced enough"? Would it do something else than collecting numerical, abstract information, and performing logical conclusions?

I meant complex enough.

What does the brain do differently? The workings of individual neurons are just like in a computer, the big differences being 1) parallel rather than serial architecture and 2) analog rather than digital processing.
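As a rough sketch of how simple an individual unit's computation is, here is a minimal artificial neuron (example weights are arbitrary): a weighted sum of inputs passed through a smooth, analog-like activation.

```python
import math

# A single artificial neuron: weighted sum + sigmoid activation.
# The sigmoid gives a continuous (analog-like) output rather than a
# hard 0/1 switch. Inputs and weights below are arbitrary examples.

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid, output in (0, 1)

# Many such units evaluated side by side is the "parallel" part; each
# unit on its own is a small, well-understood computation.
out = neuron([0.5, -1.0], [2.0, 0.5], bias=0.1)
print(round(out, 3))
```

The point being illustrated: once the unit-level operation is this transparent, whatever is special about the brain has to come from the arrangement and scale, not from any mystery inside a single neuron.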
 
I meant complex enough.

What does the brain do differently? The workings of individual neurons are just like in a computer, the big differences being 1) parallel rather than serial architecture and 2) analog rather than digital processing.

Reminds me of someone saying that quantum physics means everything is, in a sense, digital. It sounds like we are now asking vaguely similar questions. I have looked for anyone who might know enough to help answer such questions. I have found people who had interesting things to say, but none who had an answer. ;)

Colors are funny things. We point to the beautiful colors of a sunset as if they were outside of us. Do they have a specific location, like a hologram in our brain? Do they exist in a sense that we could call physical? It seems to me they can't be an epiphenomenon, since then we wouldn't know that they are "there".