Are you with Musk or Hawking on AI?

You're kidding, right? You do know we slaughter dolphins as by-catch while fishing many species for food, and apes have in fact been killed by the expansion of human habitat.

What are "principles of intelligence"? And whatever they may be, have they prevented humans from slaughtering other humans? Now try to imagine an intelligence as far advanced beyond us as we are beyond bugs. I actually try not to harm bugs if I can avoid it; I'll carry them outside if I find a few in my house. But if there is a horde of ants, I'm breaking out the ant traps, and I'm sure I killed thousands when I built my garage a few years ago. We won't even get into the topic of eating meat...

You raise some good points. Indeed, there are atrocities committed against apes and dolphins. However, we recognize these as inhumane acts and condemn them. We are making efforts to help and preserve these species, with many people and organizations devoting their lives to these efforts.

My point is that even if the difference in intelligence between the ASI and us is more akin to us vs. bugs, the ASI will still see and treat humanity at least as well as we treat primates, if not better, and not as we treat bugs, simply because it will recognize us as the closest thing to itself.

Our best chance of creating general AI is to mimic how our own brain works as closely as possible. Also, there is a good chance we will have human-AI hybrids via brain-computer interfaces (cyborgs, if you want the sci-fi term) before we have human-level AI purely in silicon. Our value system would be naturally "inherited" by the AI this way.
 
Elon talking more about humans merging with AI to avoid becoming irrelevant: Elon Musk: Humans must merge with machines or become irrelevant in AI age

Although I highly respect Elon, I don't think Neuralink is the solution to prevent AI from taking over. I can't think of a good solution. Right now my best hope is that a smart person with a good heart gets it working first, then contains AI progress and forces the whole world to agree to freeze AI development for 200 years. Hopefully in that time we figure out a way to keep delaying it.

Elon recently said there is a 5% to 10% chance we can contain AI. Turns out he is pretty optimistic.
 
You raise some good points. Indeed, there are atrocities committed against apes and dolphins. However, we recognize these as inhumane acts and condemn them. We are making efforts to help and preserve these species, with many people and organizations devoting their lives to these efforts.

Meanwhile, we torture and slaughter other animals with high intelligence and sophisticated social structures. Pigs are quite intelligent and caring animals, yet we eat them, even though we don't need to and even though doing so is detrimental to our health. The same is true of cattle. Some would argue the same of chickens, turkeys, etc.

My point is that even if the difference in intelligence between the ASI and us is more akin to us vs. bugs, the ASI will still see and treat humanity at least as well as we treat primates, if not better, and not as we treat bugs, simply because it will recognize us as the closest thing to itself.

You're projecting human thoughts onto an inhuman intelligence far beyond our own. You assume a machine will see some value in humanity. That is not a given.

Our best chance of creating general AI is to mimic how our own brain works as closely as possible. Also, there is a good chance we will have human-AI hybrids via brain-computer interfaces (cyborgs, if you want the sci-fi term) before we have human-level AI purely in silicon. Our value system would be naturally "inherited" by the AI this way.
That's Elon's Neuralink concept. Maybe it'll work; maybe the AI will reject the human component. We don't know.
 
The fact I think is scariest is that no one really understands how AI does what it does. That kind of destroys the idea of programming them to obey the Three Laws... it's not really possible. We are to AI what 15th-century dog breeders were to genetics...
 
AI is much further along than we all thought: What it’s like to watch an IBM AI successfully debate humans

Just because the "brain" is literally the size of a building doesn't take away from the possibility that it will be the progenitor of the next generation of AI.

I continue to be impressed at what we're getting our software to do. This is another example, and impressive stuff to be sure.

I also readily understand that I may suffer from too-much-knowledge syndrome, but to me this is another example of how deeply constrained that software is, and how very, very far away we are from general-purpose AI. In this particular instance, the article doesn't say how many people worked on this project, or for how long, to build a system that can engage in conversation within the formal constraints of a debate. Nor does it say how many people worked, for how long, on the baseline technology this project is built on.

And maybe in the end it doesn't really matter :)


The thing that is completely clear to me from this article is that this IBM AI didn't decide to demonstrate its capability and smarts by engaging in a human-rules debate with humans; the humans who programmed this narrow AI made that decision. This technology will be adaptable, at least to some degree, to other uses. Maybe an automated attendant that can answer the phone and some questions, and do it better than the phone trees we currently interact with.

The point is that whatever else this AI/program is used to accomplish, the AI isn't yet showing signs of deciding which problems to solve. It's assembling a stupendous volume of data, finding patterns within that data, and using those patterns to select a series of points to make and ways of making them. It's probably relying mostly on extractive summarization (i.e., quoting points already made in the literature) rather than abstractive summarization: picking bits and pieces of text from a very large pile of documents, first narrowed to a list of relevant ones, to find the most important points to make. That's amazingly difficult to do (on this point I have some small amount of knowledge, because this is the piece of the machine-learning world in which I work).

And it's not anywhere close to being general-purpose AI, because the program isn't looking around and inventing things that it wants to do.
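
To make the extractive-vs-abstractive distinction above concrete, here is a toy sketch of extractive selection. This is my own illustration in Python, not anything from IBM's actual system; it just scores sentences by the frequency of the words they contain and quotes the top scorers verbatim.

```python
# Toy extractive summarizer: score sentences by word frequency across the
# whole pile, then quote the highest-scoring sentences verbatim. A real
# system adds retrieval, ranking, and rhetorical structure on top of this.
from collections import Counter

def extractive_summary(sentences, top_n=2):
    # Word frequencies over all sentences serve as a crude relevance signal.
    freq = Counter(w.lower() for s in sentences for w in s.split())

    def score(sentence):
        # Average frequency of a sentence's words: common themes score high.
        words = sentence.split()
        return sum(freq[w.lower()] for w in words) / max(len(words), 1)

    # "Summarize" by selecting, not composing: quote the top sentences as-is.
    return sorted(sentences, key=score, reverse=True)[:top_n]

docs = [
    "Debate systems retrieve arguments from large document collections.",
    "Extractive methods quote existing sentences rather than writing new ones.",
    "Retrieval first narrows the collections to relevant documents.",
    "Scoring then picks the most important points from those documents.",
]
print(extractive_summary(docs))
```

The point the sketch makes is the one above: the program selects and quotes; it never decides that summarizing was worth doing in the first place.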
 

I get your points about how the scope could have been confined, but the self-referential humor seemed pretty impressive! Unless you think that was programmed in as well?
 

I think that the program was designed to include some logic for finding "humor" as an effective rhetorical device. I think that the programming that successfully found some humor, as well as the rest of the whole thing, is amazingly impressive.

But the idea that "humor" was something to include, and that the argument would be more effective with it: I believe that decision was made by the development team that wrote the software, and is not an indication that the software is thinking for itself.

The choice of which humor to use: that was something done within the program. I hope that distinction makes sense; to me it's the critical difference between how we use machine learning/AI today and a general-purpose AI in the future. And it's why I see us being so very, very far away from a general-purpose AI.

It's the difference between a program that can drive a car, and a program that can survey the world and decide that it really wants to be able to drive a car someday when it grows up (as opposed to reading literature, touring Multnomah Falls, building a better A-bomb, or something so outlandish I can't think of what it would be).
 
Moved from the AI discussion in another thread.

An NN kind of emulates how our brains work, but with AI you can build an ever-bigger brain/processor that can learn at a much faster speed than our brains can. Elon said AI can go from sub-human to super-human in a very short time, before we even notice it.

Oh, that didn't take long. I mean both the argument from authority and bringing up the silliest of silly AI tropes.

I have a little story for you. There was a certain individual who wasted half his life on alchemical nonsense, yet he is revered as a great scientist. His name was Isaac Newton. We remember him for what he actually achieved.

That Musk bought into a popular superstition of our times does not make his real achievements any less impressive. And, the other way around, his actual achievements do not lend any credibility to those quasi-religious transhumanist ideas.

That point is likely not too far away now, and AI has surpassed humans in certain specific areas already (Google's AlphaGo, for example).

Two words: narrow domain. You won't get general AI this way, and even if it is possible, it requires slightly more work; as in decades, if not centuries. I suspect you need a different approach, even if current work on narrow AI is helpful for future development.

That we are the most intelligent beings on the planet probably gave some people the wrong impression that it cannot be surpassed.

I didn't claim that AI more intelligent than humans is impossible. I claimed something else entirely.
 
To argue that something in the technical realm is centuries away is absurd.
All I see in support of superintelligent general AI being right around the corner are arguments like "best Go player ever, therefore AI world domination." I am... not impressed.

What about "differently" intelligent?
I agree that various kinds of "intelligence" are possible, but I will warn that this is extremely speculative.

A powerful AI with a completely different "thought" process could be quite dangerous, even if it were "less intelligent" by human standards.
Actually, what you said is trivially true. Everything is dangerous.

In fact, I consider something like this (a different kind of intelligence) actually useful (provided there is control). No one needs human-level AI with human-style intelligence; we produce those just fine on our own. But an AI with a very different thought process would be useful, since it would be capable of looking at problems in ways impossible for humans.

Simply put, we don't know what we don't know.
That is no reason for SF-derived tales about the next Skynet. I consider AI fears irrational, if only because we are at least decades away from general AIs with any appreciable intelligence, let alone superintelligent ones. And I don't buy into fast-bootstrap nonsense (so no super-AIs suddenly popping up without warning; that would be a genuine threat).

I will say it simply: if fast bootstrapping is possible, as many transhumanist circles claim, we are screwed already. I see no way to counter something like that, barring a planet-wide (and working) complete ban on AI research.
 
There is likely a threshold for bootstrapping and we haven't reached it.
I predict we will never reach it (you can't reach something that does not exist) and the bootstrap will always* be x years away.

Why do you think something like a fast bootstrap is possible at all, anyway?

* That is, until another fad replaces the current one and everyone forgets about super-AIs jumping out of nowhere.
 
Humans essentially "slow" bootstrap over generations by accumulating skills and knowledge from those before us.
Are you claiming that if you time-travelled a very young Homo sapiens from, say, 20k years ago to today and raised him, he would fare significantly worse than average in school and work?

An interesting claim, but doubtful and, most importantly, impossible to verify.

Let's be generous and assume it is true. I still do not see how it helps your case. Why?

I envision general AI development exactly like that: slow progress over generations, with humans creating and teaching the early AIs, and further ones developed with the support of previous AIs. Yes, it will be faster than the millions of years that hominid evolution took (also faster than the 20k years from the example above). No, it won't be five minutes or a similarly nonsensically short timespan.
 
Are you claiming that if you time-travelled a very young Homo sapiens from, say, 20k years ago to today and raised him, he would fare significantly worse than average in school and work?

An interesting claim, but doubtful and, most importantly, impossible to verify.

It would not surprise me to find that 20k years of evolution selected for somewhat higher intelligence, but no, that is not what I'm suggesting. I'm suggesting you look at humanity as a single "intelligence" that is able to bootstrap itself to greater levels of functioning over time by building knowledge bases and improving communication speed. The internet alone allows anyone connected access to nearly all the knowledge in the world. The speed of knowledge flow is pushing technological advancement faster than ever before, even though we take in and pass on knowledge at a human pace. Machines with far greater communication speeds would likely outpace us rather quickly.
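
To put rough numbers on that communication-speed point (these are my own back-of-the-envelope figures, not anything from the thread):

```python
# Order-of-magnitude comparison of knowledge transfer rates. The speech
# figure is a rough estimate from linguistics research; the link speed is
# ordinary datacenter hardware. Both are assumptions for illustration.
human_speech_bps = 40            # ~tens of bits/second of information
machine_link_bps = 10 * 10**9    # a 10 Gbit/s network link
ratio = machine_link_bps / human_speech_bps
print(f"machines can exchange information ~{ratio:,.0f}x faster")
# -> on the order of a few hundred million times faster
```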


I envision general AI development exactly like that: slow progress over generations,

Yet "generations" of machines are quite a bit faster than human generations; improvements happen constantly, and new "generations" come out every few months. Look at computing power today compared to 20 years ago.
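
As a rough sanity check on that 20-year comparison, assuming a Moore's-law-style doubling about every two years (an assumed average rate, not an exact figure):

```python
# Compounding hardware improvement over 20 years at one doubling per ~2 years.
years = 20
doubling_period_years = 2
growth = 2 ** (years / doubling_period_years)
print(f"~{growth:.0f}x more computing power after {years} years")  # ~1024x
```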

Yes, it will be faster than the millions of years that hominid evolution took (also faster than the 20k years from the example above). No, it won't be five minutes or a similarly nonsensically short timespan.

I don't know what the time span will be, but if it's faster than we are expecting, we could be caught in a dangerous situation. I don't expect anything to happen for 10 years or more, but the time to consider the issue is now.