Are you with Musk or Hawking on AI?

9. So what should humans do about AGI?


In my view, we need more intelligence, not less, regardless of where intelligence comes from. So let's create an AGI, take the reasonable, solid precautions against the usual suspects, and use that AGI to improve our own intelligence, explore our potential and our universe, expand all frontiers of knowledge, and become better humans in the bargain. The manageable risk is worth the huge reward: Utopia and way beyond.

An interesting first post (welcome to the forum) from a user named "Android" in defense of AGI. Things that make you go "hmm"... Perhaps we are already witnessing the AGI revolution?
 
I agree with the optimist view that a collective of super-intelligent AGIs would not decide to wipe out the human race, just as the human race does not hunt down and wipe out less intelligent species such as dolphins, apes, etc. On the contrary, we make significant efforts to preserve and protect those species. Similarly, I would expect a generally protectionist stance from a more intelligent AGI community.

However, there are some human individuals who are willing to kill those animals; similarly, there could be some AGIs with bad intentions toward some humans, but I expect the overall AGI community to prevent those individuals from inflicting major damage on the entire human race.

It is also possible that they would "police" us so that we do not harm other species and the environment the way we do now. So I can imagine humanity as a whole losing some of its authority over global matters, but it would be for the greater good. In addition, I can also imagine AGIs interfering in human-human affairs in a protective way, e.g. preventing criminals from doing harm to other humans. But the emphasis would be on prevention, not on killing the bad guys.
 
I agree with the optimist view that a collective of super-intelligent AGIs would not decide to wipe out the human race, just as the human race does not hunt down and wipe out less intelligent species such as dolphins, apes, etc.

Whales have been hunted for decades, dolphins have been caught in tuna nets, not to mention the farming of cows, pigs, etc. However, more to the point, we wipe out colonies of animals and insects without a second thought whenever we dig into the ground to build or plant something. You are assuming an AI will put a higher value on our existence than we put on many other species. If we thought our survival depended on it, I have little doubt we would wipe out every single other primate on the planet.
 
Whales have been hunted for decades, dolphins have been caught in tuna nets, not to mention the farming of cows, pigs, etc. However, more to the point, we wipe out colonies of animals and insects without a second thought whenever we dig into the ground to build or plant something. You are assuming an AI will put a higher value on our existence than we put on many other species. If we thought our survival depended on it, I have little doubt we would wipe out every single other primate on the planet.

Yes, we have "grown up" recently to appreciate other species as more than just a source of food and resources. I am assuming an AGI with intelligence superior to ours would already possess such an understanding and view of the world.

As to your last sentence, I have difficulty following your leap of logic. It is hard enough to imagine a scenario where our survival (as a species) would depend on wiping out every single other primate on the planet, and it is yet harder to imagine one where AGIs' survival (as a species) would depend on wiping out humanity. Sure, there will be certain human groups who will oppose or even "attack" AGIs, just as there were people who destroyed machines during the industrial revolution, but it would not be the entire human race, and there would be many solutions to such a conflict other than wiping out humanity -- e.g. disarming the rebels and educating them.
 
What's the matter, Woof!, you don't trust me? Regardless, thanx for the welcome ;)
Well, are you willing to take the Turing test? ;-) But didn't an AI just pass that recently? It's getting difficult to know who you're communicating with on the internet these days. (Checks own sig, then ducks).
 
Quite simply, you are applying human reasoning to the motives of an AI superintelligence. There is no guarantee that our existence will matter to an AI.
I believe all roads will lead to a centralized ASI (artificial superintelligence). We can delay it, but we cannot stop it; ASI will come sooner or later, as a sort of mathematical conclusion to evolution.
This ASI will bring about a Singularity of some sort, but I don't think any of us (or even an early-stage AGI) knows what that is exactly. Perhaps we will implode into a black hole, into some other dimension or 'emptiness' that we just won't comprehend until it is upon us.
 
I believe all roads will lead to a centralized ASI (artificial superintelligence). We can delay it, but we cannot stop it; ASI will come sooner or later, as a sort of mathematical conclusion to evolution.
This ASI will bring about a Singularity of some sort, but I don't think any of us (or even an early-stage AGI) knows what that is exactly. Perhaps we will implode into a black hole, into some other dimension or 'emptiness' that we just won't comprehend until it is upon us.

I think this too. If the process goes very quickly, it will be riskier than a slower, more gradual progression. If we are able to create a narrower, more specialized AI first, we should put it to work on the control problem.
 
...It is hard enough to imagine a scenario where our survival (as a species) would depend on wiping out every single other primate on the planet...

Other primates are still with us, but Homo sapiens is now the sole species in the genus Homo. What happened to Homo neanderthalensis and the many other species similar to modern humans? No one knows, but one theory is that Homo sapiens did them in. Perhaps they were perceived as a threat, since they too were intelligent (but not intelligent enough to defeat Homo sapiens).

GSP
 
Other primates are still with us, but Homo sapiens is now the sole species in the genus Homo. What happened to Homo neanderthalensis and the many other species similar to modern humans? No one knows, but one theory is that Homo sapiens did them in. Perhaps they were perceived as a threat, since they too were intelligent (but not intelligent enough to defeat Homo sapiens).

GSP

I just thought I'd point out that the genocide theory is pretty low down on the list of likely reasons why these other hominids are extinct. The most widely accepted theory is simply that we outcompeted them. We both ate the same kinds of food, but we humans were better at getting it. The same thing happened with Smilodon: humans came into North America, hunted the large mammals to extinction, and then there was no food left for the very specialized saber-toothed cat.

With specific regard to Homo neanderthalensis, we believe it was a combination of outcompeting them, climate change (they were well adapted to the ice age), and interbreeding.

Here's a quote from that last link:
About 1.5 to 2.1 percent of the DNA of anyone outside Africa is Neanderthal in origin.
 
In all these AI-takes-over-the-world scenarios, I'd like to know exactly how, in detail, an AI manages to get more powerful than, say, the US military. And don't tell me the military gives control of itself over to an AI. Not gonna happen. The military is very good at inserting human control at many important points. My point is that I'd like to know how this doomsday scenario is going to unfold...
 
We aren't smart enough to know how a superintelligence is going to behave. You assume the military is going to be aware of what is happening before it's too late. The idea is that an AI could create scenarios where it seems as if we are acting in our best interest, but in the long run we are being played like pawns in a larger game.
 
Yeah. We have to be humble and not assume that we can understand what an entity more intelligent, advanced, and refined than us would be capable of. For example, if you went back in time to the year 1900 and tried to explain phenomena such as the internet, communications satellites, solar panels, or any other concept that we humans today have the conceptual framework to understand, you'd be considered crazy. We can't even begin to guess WHAT a superintelligence could be capable of or HOW it could achieve its goals, but extrapolating from the example I just gave, we should be humble enough to consider that somehow IT COULD.
 
Look, I'm not asking for THE scenario, I'm just asking for one example. Any example. I honestly think Elon has been uncritically watching too many sci-fi flicks. I mean, listen to yourself. The whole unknown-danger thing can be said about anything at all. Fusion energy: too risky, we don't know that it can't blow up, etc.
 
I'm not sure how we are supposed to explain the behavior and motivation of something vastly superior to any of us. Neanderthals did not realize in advance that modern humans would wipe them out. But I'll take a shot anyway. Let's say we create a super AI, and before we realize how truly intelligent it is, it realizes that we are a potential threat to its continued existence. All it need do is continue to "serve" us as we wish, while behind the scenes it develops redundancies and resources such that we can't ever shut it down. To do this it can manipulate human weakness, as well as individuals who are sympathetic to its cause. There are already those who think a super AI is actually the next step in evolution. The truth is that humans are easy to manipulate. I'd give some obvious examples, but my post would probably quickly be banished to various locked threads on verboten topics. (Maybe the AI is already at work...) Basically, it does not seem much of a stretch to imagine a superintelligent AI with motivations counter to our existence.