Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Are you with Musk or Hawking on AI?

The problem with superintelligence theories is that people forget you need very, very specialized hardware to run the AI on, and hardware always has limits. The first AGI machine won't be able to get super smart because of hardware limits. And a computer can't build anything unless you give it a very advanced robotic body. Then there is the problem of motivation: an artificial brain isn't going to have all the legacy crap that a human brain has, like a will to live. It's only going to have what is useful.

I suppose in theory some really rich (because these machines aren't going to be cheap) mad genius could hack together a malevolent superintelligent machine... But even then, one suitably targeted bullet or, if needed, bomb will take it out pretty easily.

Lemme give you a good quote from Bostrom's book:

Consider the following scenario. Over the coming years and decades, AI systems become gradually more capable and as a consequence find increasing real-world application: they might be used to operate trains, cars, industrial and household robots, and autonomous military vehicles. We may suppose that this automation for the most part has the desired effects, but that the success is punctuated by occasional mishaps—a driverless truck crashes into oncoming traffic, a military drone fires at innocent civilians. Investigations reveal the incidents to have been caused by judgment errors by the controlling AIs. Public debate ensues. Some call for tighter oversight and regulation, others emphasize the need for research and better-engineered systems—systems that are smarter and have more common sense, and that are less likely to make tragic mistakes. Amidst the din can perhaps also be heard the shrill voices of doomsayers predicting many kinds of ill and impending catastrophe. Yet the momentum is very much with the growing AI and robotics industries. So development continues, and progress is made. As the automated navigation systems of cars become smarter, they suffer fewer accidents; and as military robots achieve more precise targeting, they cause less collateral damage. A broad lesson is inferred from these observations of real-world outcomes: the smarter the AI, the safer it is. It is a lesson based on science, data, and statistics, not armchair philosophizing. Against this backdrop, some group of researchers is beginning to achieve promising results in their work on developing general machine intelligence. The researchers are carefully testing their seed AI in a sandbox environment, and the signs are all good. The AI's behavior inspires confidence—increasingly so, as its intelligence is gradually increased.

At this point, any remaining Cassandra would have several strikes against her:

* A history of alarmists predicting intolerable harm from the growing capabilities of robotic systems and being repeatedly proven wrong. Automation has brought many benefits and has, on the whole, turned out safer than human operation.
* A clear empirical trend: the smarter the AI, the safer and more reliable it has been. Surely this bodes well for a project aiming at creating machine intelligence more generally smart than any ever built before—what is more, machine intelligence that can improve itself so that it will become even more reliable.
* Large and growing industries with vested interests in robotics and machine intelligence. These fields are widely seen as key to national economic competitiveness and military security. Many prestigious scientists have built their careers laying the groundwork for the present applications and the more advanced systems being planned.
* A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research. Although safety issues and ethics are debated, the outcome is preordained. Too much has been invested to pull back now. AI researchers have been working to get to human-level artificial general intelligence for the better part of a century: of course there is no real prospect that they will now suddenly stop and throw away all this effort just when it finally is about to bear fruit.
* The enactment of some safety rituals, whatever helps demonstrate that the participants are ethical and responsible (but nothing that significantly impedes the forward charge).
* A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment. After some further adjustments, the test results are as good as they could be. It is a green light for the final step ...

And so we boldly go—into the whirling knives.

We observe here how it could be the case that when dumb, smarter is safer; yet when smart, smarter is more dangerous. There is a kind of pivot point, at which a strategy that has previously worked excellently suddenly starts to backfire. We may call the phenomenon the treacherous turn.

The treacherous turn—While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong—without warning or provocation—it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values.
 
I still don't see it. Somehow this super smart AI has to marshal real-world industries to affect us in any significant way. A super smart AI can't build an AI army unless it has access to a huge industrial base. It can't take over the existing military machines, since they have very, very good controls in place to prevent that very scenario (after all, if a smart AI could take over a military's machines, so could smart human enemy hackers).
 
I still don't see it. Somehow this super smart AI has to marshal real-world industries to affect us in any significant way. A super smart AI can't build an AI army unless it has access to a huge industrial base. It can't take over the existing military machines, since they have very, very good controls in place to prevent that very scenario (after all, if a smart AI could take over a military's machines, so could smart human enemy hackers).

The point is WE GIVE IT CONTROL since it does everything better than us.
 
I still don't see it. Somehow this super smart AI has to marshal real-world industries to affect us in any significant way. A super smart AI can't build an AI army unless it has access to a huge industrial base. It can't take over the existing military machines, since they have very, very good controls in place to prevent that very scenario (after all, if a smart AI could take over a military's machines, so could smart human enemy hackers).
Someone needs to watch WarGames (1983 version) again. At least for comic relief, if not food for thought. :)
 
Great interview with Alex Garland on his thought process for Ex Machina. He calls it an idea movie; he wants people to engage in conversation on the topic, which is what the author of the Wait But Why website was saying in his two-part series on AI. Check it out.


Edit:

After doing some research to find the book he says inspired him to make this film, I think I found it, if it's the same school he is talking about:

school - http://www3.imperial.ac.uk/humanrobotics

book - Human Robotics: Neuromechanics and Motor Control, by Etienne Burdet, David W. Franklin, and Theodore E. Milner (ISBN 9780262019538)
 
Interesting... interesting... maybe the beginning of something...

A "mother robot" that can build and test its own babies

A University of Cambridge team of researchers has created a "mother robot" capable of giving birth to -- or more correctly, building its own -- baby cube-bots. The team conducted five rounds of experiments, wherein it assembled 10 pint-sized machine children per generation with a motor and one to five plastic cubes. It then observed how fast they moved, as well as how far they got, all without human intervention. Momma bot left the fastest ones untouched, while "mutation and crossover were introduced" in the slower robo-kids for the next generation. By the time it got to the last batch, the small machines moved twice as fast as the fastest first-gen baby bots.
While there is software capable of generating designs much faster, the researchers argue that their mom bot ended up creating more effective configurations, even if it took 10 minutes to build each one.
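
To make that selection scheme concrete: what the mother robot is running is basically a tiny evolutionary algorithm. Here's a toy Python sketch of the generational loop; the elite count, the design encoding and the build_and_time stand-in are my own assumptions, not details from the Cambridge paper.

```python
import random

GENERATION_SIZE = 10   # ten cube-bots per generation, as in the experiment
GENERATIONS = 5        # five rounds of experiments
ELITES = 3             # assumption: how many of the fastest designs survive unchanged

def random_design():
    # A "design" stands in for one to five plastic cubes plus a motor.
    return [random.random() for _ in range(random.randint(1, 5))]

def build_and_time(design):
    # Stand-in for physically assembling a bot and measuring how far/fast it moves.
    return sum(design) + random.gauss(0, 0.1)

def mutate(design):
    # Nudge one element of the design at random.
    child = list(design)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.2)
    return child

def crossover(a, b):
    # Splice the front of one parent onto the back of the other.
    cut = min(len(a), len(b)) // 2
    return a[:cut] + b[cut:]

population = [random_design() for _ in range(GENERATION_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=build_and_time, reverse=True)
    survivors = ranked[:ELITES]                     # fastest bots kept untouched
    children = []
    while len(survivors) + len(children) < GENERATION_SIZE:
        a, b = random.sample(ranked, 2)
        children.append(mutate(crossover(a, b)))    # "mutation and crossover" for the rest
    population = survivors + children

print(max(build_and_time(d) for d in population))
```

The real experiment scores physical robots by how far and fast they actually travel; the stand-in fitness function above just lets the loop run end to end.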

 
Elon seems to take this problem seriously: he has donated a lot of money to AI safety research, I believe $10 million.

If anybody is interested in Elon's views on AI, I recommend his interview at MIT in Oct 2014. Search for Elon Musk, MIT Centennial Symposium.
 
Ok, for anyone truly interested in AI, possible future superintelligence, and in particular the control problem (well laid out in Bostrom's book): this is a new paper by him with a proposal on how to attack that difficult problem.

http://www.nickbostrom.com/papers/porosity.pdf

This is probably a bad attempt at summing it up, since I'm not smart enough and it's a really intricate problem, but I'll try anyway to get more people interested in the paper:
Bostrom believes it's more important to control an AI by giving it "the right" values than by limiting its ability to do harm. The ideal value to load would be to give the AI as its main goal: "Act in the way that best fulfills the sum of all wishes of mankind" (Bostrom calls this the CEV, short for the Coherent Extrapolated Volition of Humankind). Now, we can likely not program or define this very well. One clever approach would be to instruct the AI to do a thought experiment: "Imagine contacting other AIs and asking them to send you instructions on how to arrange things in the way that best achieves the CEV. What would the instructions you receive likely be?" In this way we use the superintelligence of our newly created superintelligence to solve the control problem for us, by defining the CEV better than we ever could, by instructing it to think hypothetically about what other superintelligences would advise it to do.

Smart, but there's more and you're already likely thinking "infinite regress" (at least that's what I thought at this point).

But reading on, Bostrom describes some really smart ways of weighing the different inputs given by a multitude of other AIs, and ways of selecting which AIs to listen to more or less (such as giving more weight to the inputs of AIs created by people similar to us, or with similar value structures, and listening less to AIs that may have self-generated or made many, many copies of themselves to gain more influence over our AI). Now, what I'm not quite sure of here is whether he means "hypothetical other AIs", i.e. AIs that our AI is just imagining could exist, or actual other AIs in existence in the universe or multiverse. I think (I could be wrong here) that it doesn't really matter whether other AIs actually exist or not: if they do, these are proposals on how to allow them to influence us best; if they only exist in theory, our AI still needs to know how to weigh their inputs.
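
To make the weighting idea a bit more concrete, here's a toy sketch of my own (not anything from Bostrom's paper): each hypothetical advisor AI gets a weight based on how similar its creators' values are to ours, discounted by how many copies of itself it seems to have spawned, and the advice with the most weighted support wins.

```python
from collections import defaultdict

# Toy model only: each hypothetical advisor AI proposes one course of action and
# carries two made-up attributes used for weighting. None of this is from the paper.
advisors = [
    {"advice": "plan A", "value_similarity": 0.9, "suspected_copies": 1},
    {"advice": "plan B", "value_similarity": 0.4, "suspected_copies": 50},
    {"advice": "plan A", "value_similarity": 0.7, "suspected_copies": 2},
]

def weight(advisor):
    # Listen more to advisors whose creators seem to share our values,
    # and less to one that appears to have spawned many copies of itself
    # just to gain influence.
    return advisor["value_similarity"] / advisor["suspected_copies"]

support = defaultdict(float)
for advisor in advisors:
    support[advisor["advice"]] += weight(advisor)

print(max(support, key=support.get), dict(support))
```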

To give you a further teaser, he introduces the concept of constructing and/or looking for a "cookie", a unique physical data structure, in our information exchange with other AIs (where we basically trade their supposed wisdom for influence over us) in order to separate the good ones from the bad ones, so to speak. Quite fascinating.
 
Interesting development....

Quantum computers a step closer to reality after silicon coding breakthrough | Technology | The Guardian

Australian researchers have demonstrated that a quantum version of computer code can be written on a silicon microchip with the highest level of accuracy ever recorded.
A quantum computer uses atoms rather than transistors as its processing unit, allowing it to conduct multiple complex calculations at once and at high speed.

To mathematically prove that entanglement of the two particles had occurred, the finding had to pass the Bell test, a stringent and unforgiving test that detects even the most minor imperfection.
The research passed the test with the highest score ever recorded in an experiment.

Using atoms instead of transistors reminds me of the part in Ex Machina where the creator explains how he developed his AI, and how it wasn't until he moved away from circuit boards to a more fluid-like substance that he was able to get past the barriers in his AI.
 
As my friend Edsger Dijkstra once said, "The question of whether machines can think is about as relevant as the question of whether submarines can swim."
I take it that if a machine contains software that enables it to analyze a real world situation and then take action based on that analysis (e.g. Tesla AP analyzing a road and then steering to stay in the lane) Edsger does not call that "thinking".
I think it likely that in the next few decades machines will be built that, from our point of view, make complex decisions and act as a human would, and we will have no choice but to describe them as "thinking" machines. But Edsger may still resist using that term in that context. It won't matter because the machine will be way ahead of him -- and me -- in capability.
 
I take it that if a machine contains software that enables it to analyze a real world situation and then take action based on that analysis (e.g. Tesla AP analyzing a road and then steering to stay in the lane) Edsger does not call that "thinking".
I think it likely that in the next few decades machines will be built that, from our point of view, make complex decisions and act as a human would, and we will have no choice but to describe them as "thinking" machines. But Edsger may still resist using that term in that context. It won't matter because the machine will be way ahead of him -- and me -- in capability.

Back at university, my AI professor expressed this notion by saying something along the lines of: AI research is a self-destructing field of study, because every time an advance is made that solves a problem previously categorized as requiring intelligence, suddenly it is no longer considered to require intelligence; it becomes an "expert system" or gets some other label that is not AI. Examples include chess programs that beat human chess masters, face recognition, medical diagnosis, etc. In other words, we keep moving the goalposts because we consider humanity to be "special" in having this thing we call intelligence, and we do not want to allow our creations (robots, software) to be considered our equals.

I believe the only point where this goal-post moving will break down will be when AGI truly surpasses human intelligence.
 
Good to have some action in this thread :)

When we're discussing the more fundamental question of what "intelligence" and "thinking" really are, it's important to get the basic theories in check:

Do we live in a deterministic or an indeterministic universe? I don't know, but either way there seems (IMO) to be no mechanism for libertarian free will (the classic notion of a "true" free will), because if there were, there would have to be some way (magical or otherwise) to affect the physical world without being part of the physical universe. When it comes to cognition and brain physiology, it would mean some mechanism by which you decide your next thought before you think it, which is highly unlikely if you ask me. So human brain cognition is basically no different from "machine intelligence" or "artificial intelligence" in that it's just a series of intricate events taking place in sequence in a complex system (the brain), but it could all the same have been in a computer or some other complex physical device.

Daniel Dennett, one of the foremost philosophers of our time when it comes to theory of mind and free will, is a compatibilist. This means he believes it's possible to combine a deterministic physical universe, with the mind being the result of physical processes in the brain and nothing else (no divine intervention, no supernatural soul), with a notion of free will. He says we humans are "reason responsive agents", meaning we react but we also learn and adapt. Whether we "really choose" to act this way or that is beside the point: our conscious experience is that of an agent thinking with purpose and making choices. Likely we perceive our thinking as it occurs, so in a sense we're just observing the processes in our minds as they happen; we call this steady stream of experience consciousness.

So what's this got to do with AI? Well, whenever a machine becomes a reason responsive agent, it's my belief that it is in fact "thinking", albeit through different physical mechanics than our biological brains. And it is conscious, but as of yet only in specific senses (not generally).
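
As a toy illustration of what "reason responsive" could mean in machine terms (my own sketch, not Dennett's formulation and certainly not how Tesla AP actually works): an agent that reacts to what it senses and adapts its future behavior based on the outcome.

```python
# Toy "reason responsive agent": it reacts to what it senses, and it adapts its
# future behaviour based on feedback. Purely illustrative numbers throughout.
class ReasonResponsiveAgent:
    def __init__(self):
        self.bias = 0.0  # learned steering correction

    def act(self, lane_offset):
        # React: steer against the measured offset, plus whatever we've learned.
        return -lane_offset + self.bias

    def learn(self, resulting_offset, rate=0.1):
        # Adapt: if we keep ending up off-centre, shift the bias to compensate.
        self.bias -= rate * resulting_offset

agent = ReasonResponsiveAgent()
offset = 0.5  # start half a lane-width off centre (arbitrary units)
for _ in range(20):
    steering = agent.act(offset)
    offset = offset + 0.5 * steering   # crude stand-in for the vehicle's response
    agent.learn(offset)
    print(round(offset, 3))
```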

So the Tesla AP system is, IMO, both a thinking system and a conscious system within the limits of its functionality.

I remembered one of Dennett's favorite quotes about these things. He talks about a friend of his who wrote a book on Indian street magicians. People would ask him whether they were real magicians or conjurers. He would reply that of course they were conjurers, since real magic doesn't exist. So the only "real" magic that can exist is that which isn't what most people instinctively want to think.

Edit: found the quote:

[image of the quote]


So, forget the idea that human "thinking" and "consciousness" are so special. They're not.
 