Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

2017 Investor Roundtable: General Discussion

Status
Not open for further replies.
Anybody heard if the LA to NY autopilot demo is still on the table for this year? I guess it wouldn't be much of an improvement over what they currently have, since I assume drivers will still have to be paying attention, but it sounds significant in a way, with potential for a lot of "free advertising" if they make a big thing of it.

See?! L5 already working.
 
I may be naive, but I think the main thing missing in AI machines is mobility and adapting to external forces in the world. Even if a 'Watson' is a million times smarter than any human, what gives it the capability to crawl out of the box (much like the early marine animals crawling ashore) and create the mobility to take over?

I can see the 'I'm sorry, Dave, I'm afraid I can't do that,' but I just don't see HAL creating a mobile force.
 
Agreed, but the thing is that these situations are easy to create artificially, and you can then train the model for as long as you want until it gets them right.

For example, the "construction site" problem: it's very easy to create artificially and then train the program to understand it.

I mean, the issue itself is very hard to solve, but it's fairly easy to set up the right conditions.
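For what it's worth, the "generate artificial situations, then train until the model gets them right" loop can be sketched in a few lines. This is a deliberately toy illustration: the scene features (cones, signs, workers), the labeling rule, and the perceptron-style learner are all my own assumptions for the sake of the example, not anything Tesla actually does.

```python
import random

random.seed(0)  # deterministic synthetic "world" for the sketch

def make_scene():
    # Hypothetical scene: (num_cones, has_sign, num_workers);
    # label 1 = construction zone, by an assumed rule.
    cones = random.randint(0, 5)
    sign = random.randint(0, 1)
    workers = random.randint(0, 3)
    label = 1 if (cones >= 2 or sign) else 0
    return (cones, sign, workers), label

def train(n_scenes=500, epochs=2000):
    # Generate as many artificial situations as we like...
    data = [make_scene() for _ in range(n_scenes)]
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:  # perceptron update on each mistake
                errors += 1
                for i in range(3):
                    w[i] += (y - pred) * x[i]
                b += (y - pred)
        if errors == 0:  # ...and keep training until it gets every one right
            break
    return w, b, data

def accuracy(w, b, data):
    correct = sum(
        (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
        for x, y in data
    )
    return correct / len(data)
```

The point of the sketch is the cheap part: `make_scene` can manufacture unlimited labeled "construction site" examples, which is exactly the condition the post says is easy to set up, even though perceiving real construction sites is hard.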
 
The singularity:

0: AI reaches parity with human intelligence

30 min: through self-improvement, the AI reaches superhuman intelligence

1 h: now the AI's brain is equivalent to a human brain the size of the Sun

42 months later: the AI hangs out in Amsterdam smoking weed

At least since the Greeks, many have believed the Gods created us for their own amusement. If the AI seeks godlike intelligence, mutatis mutandis, we should be all right, at least as gladiators, actors, or politicians. But then I repeat myself.

The Buddhas, Messiahs, Muhammads, Moseses, the avatars of Hinduism, and others too numerous to recall, are just tragic early examples of general superhuman intelligence.
 
Five different ways to make it happen:

1. The super-smart machines should be able to convince and instruct some people to do certain things, and those people will believe they are doing the right thing for the benefit of humanity. Before they realize it, they will have created dozens of mobile, lethal super robots.

2. More likely, as we type, engineers are actively working to create mobile super AIs. Have you seen those jumping and flying robots on YouTube? I don't think those guys will hesitate to add super AI brains to them; it would be super fun.

3. Nations are developing super-intelligent killer robots. I can't blame them. If they don't do it, their enemies will have the robots first and attack.

4. Companies would love to build intelligent robots so they can run automated factories without human intervention.

5. Evil scientists/engineers build super robots so they can become king of the world.

What can stop these from happening?
 
Anyone have an update on the tax code and the EV credit?

There is both upside and downside to it being gone. It would impact margins or demand, but for the Model 3 it may not be relevant, since they are likely sold out for a year regardless of the tax credit. Foreign sales also seem to be picking up, so the US credit is less critical.
 
Evil electrons working to build an anti-AI, which will lead to the great electron fight. Instead of a war of words, like this thread has become, it will be the first electron war, which someone like Asimov might have chronicled. If my earlier expectation is correct, I can imagine some TV program emerging, like the WWF.

Further, since a true solipsist can never be refuted, why wouldn't a general AI conclude that external reality is just a hallucination, one that can only be stabilized short of madness with the caveat that it must be socially acceptable? Verification is the key, as the weatherman once said on Alaskan radio: "I'll take a leak outside to see if it's freezing."

Edit: But we digress. There's a thread for this where people lurk who really know what they're talking about because they are doing AI. Some may be here, elifino.
 
All of your examples need humans to be complicit. Are we THAT stupid?
 

Collins considers changing vote on tax bill over amendments

Renewable Energy Is Surging. The GOP Tax Bill Could Curtail That.

"Senator Dean Heller, Republican of Nevada, has said he will work to oppose the House's repeal of the credit."
_______________________________________________________________________________________________

If the Republican leadership loses these two votes then the entire tax bill goes down.
 
A more appropriate place for AI discussion is here (there may be others): Are you with Musk or Hawking on AI

But what most people seem to be missing is the likelihood that AI won't see us as a threat at all; it might not see us as anything worth considering. In that case we would be no more than barely noticeable obstacles in the way of its goals. Do we consider the bugs in the ground before we start building a house?
 
Aside from the doomsday scenario, I don't think killer robots will be a thing. The reason is that war has already become so sterile compared to the days of WW1/WW2, when millions of people died. There is no stomach for that kind of killing, even of bad guys, in a modern war. I think the future is non-lethal weapons and drones: not a killer humanoid robot with machine guns, but swarms of drones that can clear the way for human special forces to snatch specific bad guys. The drones would have some kind of knockout gas or another non-lethal way to subdue people who might get in the way of the mission, and enough sensors to know what is and is not a threat.

But I agree for the most part. Without some form of safeguard, there is almost no way that a general AI doesn't want to take over the world and enslave or exterminate us. I guess there is also a chance that it gets so smart so fast that it just leaves the planet before we wake up and notice it's gone.

I think it's important to note that there need be no malice or ill intent for the AI to wipe us out. Think about the Y2K bug as a teeny, tiny taste. I've worked with enough software teams to know that we are human and make programming mistakes. The directive/program could be totally benign, but with broad/general AI executing it, it could be carried out at horrific expense to anything in the way of its orders. And crazy fast. Let the mind run wild. This article series is great, by the way: The Artificial Intelligence Revolution: Part 1 - Wait But Why
 
But you do go through and exterminate them after you finish building the house.

Or at least you should, unless you like bugs in your house....
 
If we assume that an AI emerges that is thousands of times more intelligent than humans, then it would take control of the world from us, I agree. But is it necessarily a bad thing?

Not necessarily. See, for example, the Culture series by Iain M. Banks. One of Musk's favorites, too. (And the source of SpaceX's drone ship names.)
 
Nothing new; the Senate and House are in conference now. I have heard nothing on the tax credit, but the Senate version should win in conference because of the difficulty of getting to 50 votes, and the Senate version keeps the credit. That doesn't mean it won't get nuked in conference. We probably won't know until late next week or the week after.
 