Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Are you with Musk or Hawking on AI

Johan

Ex got M3 in the divorce, waiting for EU Model Y!
Feb 9, 2012
7,473
9,545
Drammen, Norway
Seriously, has anyone seen Ex Machina yet? I want to discuss it so bad. I love how the character created the mappings of the AI brain and put it all together. Go see it!

So I just saw it. Cool film, and I like the key elements of the story, but to me the movie wasn't really great, because a lot of the social interactions that were crucial to the plot just weren't believable to me.

I don't want to give away too much of the plot, but if you're smart enough to create an AI, you would think you'd have given the control problem a bit more thought.
 

ZsoZso

Supporting Member
Apr 24, 2014
1,713
9,968
Brampton, Ontario
I liked the film (Ex Machina), but it was not as good as I expected. IMHO, "Transcendence" is a much better ASI film.
Ex Machina was selling AI capability too short, I think. The plot was much more of a social-engineering game, two humans plus an AI outsmarting / manipulating each other (I hope I made my statement vague enough not to spoil it for anyone), rather than an AI vs. human confrontation.
 

Cosmacelf

Well-Known Member
Mar 6, 2013
8,360
19,855
San Diego
I'm starting to dread all these rather silly AI movies (I just saw Ex Machina). Seriously guys, just ... carry ... a ... gun. At the end of the day, AI is embodied in hardware. One bullet will do wonders.

And don't say that the AI can escape to the network or the cloud. It just doesn't work that way. You need massive time and space coherence for the billions or trillions of synaptic-like computations and communications that need to occur every millisecond.

Anyways, all the doomsayers like Musk are doing is inviting government regulation into something that is a nascent pure research project. Not helpful.
 

Johan

Ex got M3 in the divorce, waiting for EU Model Y!
Feb 9, 2012
7,473
9,545
Drammen, Norway
Cosmacelf said:
I'm starting to dread all these rather silly AI movies (I just saw Ex Machina). Seriously guys, just ... carry ... a ... gun. At the end of the day, AI is embodied in hardware. One bullet will do wonders.

And don't say that the AI can escape to the network or the cloud. It just doesn't work that way. You need massive time and space coherence for the billions or trillions of synaptic-like computations and communications that need to occur every millisecond.

Anyways, all the doomsayers like Musk are doing is inviting government regulation into something that is a nascent pure research project. Not helpful.

I agree with you with regards to the movies.

When it comes to real life and the AI research actually being done, I do think it's telling that some of the top research groups have come together to discuss how to go about the research in a cautious and controlled way, so that you don't suddenly create strong AI without having considered the control problem well enough. It doesn't have to be government regulation; it could rather be wise researchers trying to think one step ahead.
 

JRP3

Hyperactive Member
Aug 20, 2007
19,540
42,967
Central New York
Cosmacelf said:
I'm starting to dread all these rather silly AI movies (I just saw Ex Machina). Seriously guys, just ... carry ... a ... gun. At the end of the day, AI is embodied in hardware. One bullet will do wonders.

And don't say that the AI can escape to the network or the cloud. It just doesn't work that way. You need massive time and space coherence for the billions or trillions of synaptic-like computations and communications that need to occur every millisecond.

Carry a gun? Really? You have simply not grasped any of the ways an AI might gain control, which is rather the whole point. Even those of us who have considered some of the many possibilities can't predict what it could do. You won't see it coming until it's too late.
 

Johan

Ex got M3 in the divorce, waiting for EU Model Y!
Feb 9, 2012
7,473
9,545
Drammen, Norway
JRP3 said:
Carry a gun? Really? You have simply not grasped any of the ways an AI might gain control, which is rather the whole point. Even those of us who have considered some of the many possibilities can't predict what it could do. You won't see it coming until it's too late.

But in this particular movie it would have actually sufficed to carry a gun... Which is why it wasn't a great movie.
 

Cosmacelf

Well-Known Member
Mar 6, 2013
8,360
19,855
San Diego
JRP3 said:
Carry a gun? Really? You have simply not grasped any of the ways an AI might gain control, which is rather the whole point. Even those of us who have considered some of the many possibilities can't predict what it could do. You won't see it coming until it's too late.

I always ask in these forums, give me an example. And no one ever does. If you can't explain it, it isn't a very good argument.
 

JRP3

Hyperactive Member
Aug 20, 2007
19,540
42,967
Central New York
Actually it's been explained in these forums, as well as in a number of books, but again, you missed the main point: None of us can likely explain what a higher intelligence may do, any more than a chimp could explain what humans might do.
 

Cosmacelf

Well-Known Member
Mar 6, 2013
8,360
19,855
San Diego
JRP3 said:
Actually it's been explained in these forums, as well as in a number of books, but again, you missed the main point: None of us can likely explain what a higher intelligence may do, any more than a chimp could explain what humans might do.

Ah yes, the fear of the unknown. Hard to argue against a complete unknown.

And no, it hasn't been explained in these forums, or if it has, please point me to the post (I mean, other than just saying that you can't come up with a scenario, i.e. fear of the unknown).

Look, no one is going to suddenly develop completely self-aware superintelligence. Like any engineering problem, AI will be built incrementally, small step by small step. And it won't have human desires, or heck, any desires, because what's the usefulness of building that? It would be like building a car designed to randomly swerve into oncoming traffic. No, smart machines will be purpose-built for specific tasks.
 

ecarfan

Well-Known Member
Sep 21, 2013
19,194
13,843
San Mateo, CA
I think it has been clearly explained that a recursively self-improving AI system could in theory develop superintelligence on its own, on a time scale that humans could not recognize. While at this point that is only a theory, the potential negative consequences could be so serious from a human point of view (an existential threat) that such a possibility must be considered.
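The fast-takeoff idea above can be sketched as a toy compound-growth model. This is only an illustration of the shape of the argument; the `gain` rate and cycle counts are invented parameters, not predictions:

```python
# Toy model of recursive self-improvement: each cycle the system improves
# itself in proportion to its *current* capability, so growth compounds.
# All numbers here are illustrative assumptions, not predictions.

def capability_trajectory(initial=1.0, gain=0.1, cycles=100):
    """Return the capability level after each self-improvement cycle."""
    levels = [initial]
    for _ in range(cycles):
        levels.append(levels[-1] * (1.0 + gain))
    return levels

levels = capability_trajectory()

# The curve looks nearly flat for a long time, then takes off: with these
# parameters the last 10 cycles add more capability than the first 90 combined.
early_gain = levels[90] - levels[0]
late_gain = levels[100] - levels[90]
```

On a linear plot the early portion looks like stagnation, which is one reason a takeoff could be hard to recognize until it is well underway.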
 

ggies07

Supporting Member
Nov 8, 2012
3,798
6,905
Ft. Worth, TX
Cosmacelf said:
Ah yes, the fear of the unknown. Hard to argue against a complete unknown.

And no, it hasn't been explained in these forums, or if it has, please point me to the post (I mean, other than just saying that you can't come up with a scenario, i.e. fear of the unknown).

Look, no one is going to suddenly develop completely self-aware superintelligence. Like any engineering problem, AI will be built incrementally, small step by small step. And it won't have human desires, or heck, any desires, because what's the usefulness of building that? It would be like building a car designed to randomly swerve into oncoming traffic. No, smart machines will be purpose-built for specific tasks.
Really? Have you read the theories in the book Superintelligence? It is very possible it will happen quickly once we get to a certain point.

I really liked the movie compared to Chappie, which I thought was going to be more thought-provoking but was more violent than anything.

Why would he carry a gun? There was no threat until there was. Did he think he would eventually be outsmarted? Yes, I think at some level he knew, but not with violence.
 

Cosmacelf

Well-Known Member
Mar 6, 2013
8,360
19,855
San Diego
ggies07 said:
Really? Have you read the theories in the book Superintelligence? It is very possible it will happen quickly once we get to a certain point.

I really liked the movie compared to Chappie, which I thought was going to be more thought-provoking but was more violent than anything.

Why would he carry a gun? There was no threat until there was. Did he think he would eventually be outsmarted? Yes, I think at some level he knew, but not with violence.

Why would he carry a gun? Because he willfully made an autonomously moving machine with a strong desire to escape, and he was its jailer. You'd have to be completely stupid not to realize that the machine might rise up against him.

And no, I haven't read Superintelligence, as no one has been able to succinctly outline a credible threat to me.
 

Johan

Ex got M3 in the divorce, waiting for EU Model Y!
Feb 9, 2012
7,473
9,545
Drammen, Norway
Cosmacelf said:
Why would he carry a gun? Because he willfully made an autonomously moving machine with a strong desire to escape, and he was its jailer. You'd have to be completely stupid not to realize that the machine might rise up against him.

And no, I haven't read Superintelligence, as no one has been able to succinctly outline a credible threat to me.

With all due respect, you should read it and give it some thought before you ridicule the potential danger of AI. Especially the chapters on the theoretical methods of giving the AI its motivations, and on the control problem.
 

Cosmacelf

Well-Known Member
Mar 6, 2013
8,360
19,855
San Diego
I'm not ridiculing, I'm asking. And so far no one has been able to give me a good example of the dangers. I may take a look at the book...
 

SteveS0353

Member
Aug 23, 2014
365
54
San Diego, CA
Johan said:
With all due respect, you should read it and give it some thought before you ridicule the potential danger of AI. Especially the chapters on the theoretical methods of giving the AI its motivations, and on the control problem.

I know it's been linked before in this thread and elsewhere. This 2-part summary has some thought provoking stuff in it.

The AI Revolution: Road to Superintelligence - Wait But Why
 

Cosmacelf

Well-Known Member
Mar 6, 2013
8,360
19,855
San Diego
The problem with superintelligence theories is that people forget you need very, very specialized hardware to run the AI on. And hardware always has limits. The first AGI machine won't be able to get super smart because of hardware limits. And a computer can't build anything unless you've given it a very advanced robotic body. Then there is the problem of motivation. An artificial brain isn't going to have all the legacy crap that a human brain has, like a will to live. It's only going to have what is useful.

I suppose in theory some really rich (because these machines aren't going to be cheap) mad genius could hack together a malevolent superintelligent machine ... but even then, one suitably targeted bullet or, if needed, bomb will take it out pretty easily.
 

JRP3

Hyperactive Member
Aug 20, 2007
19,540
42,967
Central New York
Your mistake is to assume malevolence, instead of a simple drive to complete a task in ways that may not be beneficial to humans. A simple analogy: consider a wood chipper. It's a dumb machine with no intelligence, designed simply to shred whatever is fed into it. Sometimes people get sucked in as well as wood. No bad intent needed. Also consider computer viruses that get released on the web and infect millions of machines; they can never be fully eradicated unless all connected computers on the internet are wiped clean or destroyed.
 
