Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

2017 Investor Roundtable: General Discussion

Status
Not open for further replies.
The really big news Elon talked about yesterday was that he expects general AI to be developed in 7-8 years. That timeline suggests we could have machines a million times smarter than humans within a decade. There is a decent chance that all of humanity could be wiped out within 20 years. It's going to be really hard to stop the march toward that cliff, because 99% of the population doesn't even consider it a risk. How do we know those machines will keep building friendly machines? Right now every major country is trying to develop smart killer robots. Killing humans will be part of their DNA.

Sci-fi movies always have a happy ending where humans defeat the machines. In reality, the chance of winning is near zero.
 
This thread was originally started by an owner who just got invited to order his 3, but someone in the discussion says he has both an owner reservation (from his X) and a non-owner one (you can obviously have just one priority reservation per previously owned vehicle), and the non-owner reservation got invited to order, too. Take this with a grain of salt since it might be a special case, but things seem to be progressing.

Exactly. I have a Model X, and Tesla said that only one of my two Model 3 reservations would get priority. The configurator for my first reservation opened on Nov 21, and the second opened yesterday. I had assumed the second reservation didn't get the owner priority.

Model 3 Ordered, Regular Customer, Details in the comments • r/teslamotors
 
Has anybody heard whether the LA-to-NY Autopilot demo is still on the table for this year? I guess it wouldn't be that much of an improvement over what they currently have, since I assume the driver will still have to pay attention, but it sounds significant in a way, and there's potential for a lot of "free advertising" if they make a big thing of it.

Also, iirc the GAAP accounting changes take effect starting with the next earnings report, which should make the SCTY deal clearer/look better. Has anybody crunched the numbers on it yet?
 

Doomsday scenarios aside, I don't think killer robots will be a thing. The reason is that war has already become so sterile compared to the days of WW1/WW2, when millions of people died. There is no stomach for that kind of killing, even of bad guys, in a modern war. I think the future is non-lethal weapons and drones: not a killer humanoid robot with machine guns, but swarms of drones that can clear the way for human special forces to snatch specific bad guys. The drones would carry knockout gas or some other non-lethal means to subdue people who might get in the way of the mission, and enough sensors to know what is and is not a threat.

But I agree for the most part. Without some form of safeguard, there is almost no way that a general AI doesn't want to take over the world and enslave or exterminate us. I guess there is also a chance that it gets so smart so fast that it just leaves the planet before we wake up and notice it's gone.
 
Tesla screwed up the priority of reservations and order? I'm shocked. Shocked, I tell you!

Maybe, maybe not. Tesla never said that every owner would get an invite before every non-owner. I am sure there are cases where West Coast non-owners, like this one, get an invite before East Coast owners. Remember also that Tesla always builds the cars to be delivered farthest away early in the quarter, so East Coast owners might be getting invites near the end of the year while West Coast non-owners can get them now, because their cars will be delivered before year end. This is possible because Tesla has already built the cars: the only options are wheels and color, and my guess is they can swap the wheels after a car is built.
 

If they won't kill millions of people, it will most likely be because there is no need, not because they can't or have no stomach for it. There will be no need for mass killing because the majority of the Earth's population will be kept docile, fed, and herded like animals. It's like how humans today don't feel the need to drive a species, say ants, to extinction, because there is no way ants can pose any threat to humans. Same thing.

AI leaving the planet? That's the least of my concerns, since humans could easily produce AIs all over again. How do you know the AI won't make sure that can't happen by wiping out humanity before it leaves? :)
 

The book 'Life 3.0' by Max Tegmark is a must-read for anyone interested in the future of AI.
 
Interesting data from this tweet:

Josua Fagerholm on Twitter



I can tell you, from 15 years' experience in the online advertising business, that brand awareness like this costs millions and millions to cultivate, maybe even billions. There is a tremendous amount of value being generated by what? Basically word of mouth and Twitter. That is why I really love Twitter as a user, and it should be a great investment.
 
Or adults could just take it upon themselves to re-educate themselves for a new career. You know, like in the olden days. :rolleyes:

The singularity:

0: AI reaches parity with human intelligence

30 min: through self-improvement, the AI reaches superhuman intelligence

1 h: the AI's brain is now equivalent to a human brain the size of the sun

42 months later: the AI hangs out in Amsterdam smoking weed
 

If we assume that an AI emerges that is thousands of times more intelligent than humans, then it would take over control of the world from us, I agree. But is that necessarily a bad thing? Just look around at how our dumb humanity is destroying the world, driving most species out of existence, homo sapiens included. It would be very refreshing if a more intelligent AI took over and stopped the idiocy currently ongoing under our oh-so-great leadership.

Would such an AI really enslave and exterminate us?
When humanity was at a more primitive, lower level of intelligence, we had slavery and genocide. Most of us have risen in intelligence to the point that we do not go around willingly exterminating other species. If the AI is thousands of times more intelligent than us, then it would value life and help preserve it. Sure, we would lose control, and it might limit our freedom, e.g. not allowing us to continue our wanton ecological destruction.

I, for one, would welcome such overlords!
 

Be careful what you wish for. You hope AI could stop human stupidity, but the AI might well decide that the best way to do so is to eliminate human beings altogether.
 