
Are you with Musk or Hawking on AI

Elon's on fire today! Mars in 2018, now this:

OpenAI Gym: A toolkit for developing and comparing reinforcement learning algorithms

Don't blink, or you'll miss something important with this guy.
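For anyone who wants to actually poke at it, the core loop is tiny. Here's a minimal sketch of a random agent on CartPole-v0 (one of the bundled environments), assuming the classic Gym API as it shipped at launch; later versions changed the reset()/step() signatures:

# Random agent on CartPole-v0 -- a sketch, assuming `pip install gym`
# and the classic (launch-era) Gym API.
import gym

env = gym.make("CartPole-v0")  # the toolkit's "hello world" task

for episode in range(5):
    observation = env.reset()      # start a fresh episode
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()  # random policy stand-in
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("episode {}: total reward {}".format(episode, total_reward))

env.close()

A real reinforcement learning algorithm would just replace env.action_space.sample() with a learned policy; the point of Gym is that every environment exposes this same interface so algorithms can be compared.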

I struggle with one job, marriage, 3 kids.

He gets married, then divorced, then married again every other year, has 5 kids, runs Tesla & SpaceX, and kind of co-runs SolarCity with his left hand. And he's on the OpenAI board because he found some free time and thinks it's important.

Anyone else getting a tad of an inferiority complex?
 
Video game AI gives itself powers not designed by game developers, builds superweapons:

The AI was crafting super weapons that the designers had never intended. Players would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces. "It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities," according to a post written by Frontier community manager Zac Antonaci. "Meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser."

Elite's AI Created Super Weapons and Started Hunting Players. Skynet is Here
 
I'd like to point out the huge difference between specialized AI and AGI (Artificial General Intelligence). Specialized AI systems have been in use for decades and improve daily as technology advances. That is what's behind Google's self-driving cars, and that's what Elon is interested in...

Precisely! One cannot help but think of Asimov and the Foundation series, the logical evolution of "I, Robot" (the collection in which he coined the term "robotics"). Hawking, thinking of AGI, refers to the Asimov questions, which AFAIK were the first references by a scientist (which Asimov was, although better known as a novelist) to any notion of artificial general intelligence. Some less well-informed people think of IBM's Watson in this context, but the Watson systems always concentrate on highly specific problems, such as beating humans at Jeopardy! (much as Deep Blue did at chess).

AGI, despite the science fiction and fantasy depictions, has minimal chances of being developed anytime soon. To deliver AGI it would be necessary to replicate human processing speed in non-contextual inference construction. That requires massive processing power and enormous search speed for possible analogues. People, and some other animals, can do that without any awareness that they are doing so.

Further, humans produce some spectacular successes and failures in applying "natural intelligence". Just think of current political events in a number of countries to see the potential disasters that befall mankind when people do not question their own inferences. Now, as Hawking does, imagine a world in which machines have equivalent reasoning but do not necessarily have a "moral code" or societal constraints. That would indeed produce a dystopian world, just as humans manage to do themselves, but possibly without effective counterbalances.


Dystopian models abound in fiction, as do utopian ones. I side with Hawking on AGI, but with the optimists on all the rest. My only reservation about Hawking's view is that I think humans are very effective at creating dystopias themselves. Observing current politics worldwide, and over the last century, shows how little Artificial General Intelligence is needed for disaster when General Intelligence itself fails.

I shall not give examples. We all know them anyway. So, need we fear AGI?
 
I would urge everyone to listen to this. Tegmark is a really smart Swedish-American physicist. He has some really good points on AI.


I find a lot of different, very smart people weighing in on this topic: people like Elon (brilliant engineer and businessman), other businesspeople, "thinkers" (like Kurzweil and Sam Harris), "pro thinkers" (philosophers like Nick Bostrom and other professional philosophers), but then there are also some of the best physicists giving their view (Hawking and Tegmark, for example), and the physicists usually have the deepest insights. Anyone know of other world-class physicists who have taken a view on this? (Tegmark was way more positive than Hawking.)

Now, as an observation: Tegmark is the "new" generation while Hawking is the "old" generation. At the exponential rate at which the human condition has evolved (nowadays more happens in 20 years than happened in the previous 50, in which more happened than in the previous 100), we should expect our understanding of the world to advance quicker and quicker. And if so, the age gap of one generation should mean more and more now than it did one, two, or three generations before us, because the change in our understanding has been much greater in the last 20 years than in the 20 years before that, and so on. TL;DR for this paragraph: we should listen to the young.
 
A more appropriate place for AI discussion is here (there may be others): Are you with Musk or Hawking on AI

But what most people seem to be missing is the likelihood that AI won't see us as a threat at all; it might not see us as anything worth considering. In that case we would be no more than barely noticeable obstacles in the way of its goals. Do we consider the bugs in the ground before we start building a house?

As you pointed out, this discussion does not belong in the Investor thread, so let me respond to you here instead:

You are right, we do not consider bugs in the ground, but we do consider apes and dolphins, and would not harm them to build a house. We recognize other highly evolved, intelligent creatures and we treat them as precious. If we make an AI as intelligent as us, it will be programmed with the same principles of intelligence that we believe in, so why would you assume it would "think" at a much lesser level of intelligence than we do?
 
As you pointed out, this discussion does not belong in the Investor thread, so let me respond to you here instead:

You are right, we do not consider bugs in the ground, but we do consider apes and dolphins, and would not harm them to build a house.

You're kidding, right? You do know we slaughter dolphins as by-catch while fishing many species for food, and apes have in fact been killed by the expansion of human habitat.

We recognize other highly evolved, intelligent creatures and we treat them as precious.

See above.

If we make an AI as intelligent as us, it will be programmed with the same principles of intelligence that we believe in, so why would you assume it would "think" at a much lesser level of intelligence than we do?

What are "principles of intelligence"? And whatever they may be, have they prevented humans from slaughtering other humans? Now try to imagine an intelligence as far advanced beyond us as we are beyond bugs. I actually try not to harm bugs if I can avoid it; I'll carry them outside if I find a few in my house. But if there is a horde of ants I'm breaking out the ant traps, and I'm sure I killed thousands when I built my garage a few years ago. We won't even get into the topic of eating meat...