NVIDIA Unveils "First" AI Computer for Level 5 Driverless Vehicles

I want these features from the existing hardware:

#1: An around-view display showing a bird's-eye view of what all the cameras are seeing. (Mainly for parking.)
#2: The ability to mine bitcoins for profit when the Autopilot hardware isn't actually driving the car.
 
They accept verbal directions NOW. Not sure what you think is hard here.

Authority. The commands they accept are authenticated by virtue of being spoken inside the car. But how does a car know to trust someone outside?

But let's not get hung up on this issue. When/if driverless cars with manual driver override are a thing, they'll receive outside orders simply through an authenticated network request and not through voice.
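To make that concrete, here is a minimal sketch of what an authenticated outside order might look like, assuming a shared-secret HMAC scheme. Every name here (`verify_override`, the secret, the command format) is hypothetical, and a real deployment would more likely use public-key signatures issued to an authority.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned into the car at manufacture.
SHARED_SECRET = b"example-secret-not-real"

def verify_override(command: bytes, signature: str) -> bool:
    """Accept an outside command only if its HMAC signature checks out."""
    expected = hmac.new(SHARED_SECRET, command, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_outside_order(command: bytes, signature: str) -> None:
    if verify_override(command, signature):
        print(f"Executing authority command: {command.decode()}")
    else:
        print("Rejected: could not authenticate the sender")

# An unauthenticated request is simply ignored:
handle_outside_order(b"CLEAR LANE: proceed to shoulder", "forged-signature")
```

The point is only that "trusting someone outside" reduces to verifying a signature, a solved problem, rather than judging a stranger's uniform through a camera.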
 
Yeah. I don't want to swap out my vehicle in 3-5 years. I paid for full autonomous driving up front because I'm reasonably sure that Musk will deliver it even if my car needs a hardware upgrade. I base this belief on what he's done in the past with these cars. (Yes, I understand it's just a belief.)
Full autonomy requires a lot of CPU power.

If Tesla is going to achieve it with just cameras... then the image-recognition and response software is going to have to be astounding. That's what makes humans so remarkable.
Our eyes are amazing in everything they do, from autofocus to self-cleaning; what's most astounding, though, is our brain and the accuracy with which it interprets the constant video the eyes provide it.
I program robots for a living, robots that also use camera images to do their work.

Anyway... let's say someone is driving a car and sees a tree off to the side of the road, standing tall and not bothering anyone. Fine.
Now let's say the same person sees a tree that has fallen down in the middle of the road. He sees the trunk, and rising from it the branches with the leaves still on them, stretched across the road, and he determines that his car would incur a lot of damage if he continued down the road.
How does a computer receive an image of a sideways tree across the road and determine what danger would occur if it continued? And what if the tree fell and was removed, and now there are a ton of leaves and small branches on the road - all the parts of a tree, but no longer a tree - so a car can drive over the leaves, but only at a very slow pace, because leaves are very slippery, unlike pavement?

I believe there is a reason states don't give out driver's licenses before a person is 18 or so. Even with our super brains... it takes a while for adaptive learning to shape each potential driver into someone who doesn't pose a risk to society. When a young person rides a bike, crosses a street, or rides with other people in cars, they are learning. And I'd argue that 18 years of this experience earns a person the opportunity to at least take a class and find out if they are fit for driving.

Driving itself is extremely easy. But recognizing, adapting, and adjusting to the environment, easy as it is for the brain, is not easy for a computer.
The question for programmers is: how is it possible to cram 18 years of adaptive learning into a single computer at $500 or so that can perform flawlessly with constant adaptive input?
 
Full autonomy requires a lot of CPU power.

If Tesla is going to achieve it with just cameras... then the image-recognition and response software is going to have to be astounding. That's what makes humans so remarkable.
Our eyes are amazing in everything they do, from autofocus to self-cleaning; what's most astounding, though, is our brain and the accuracy with which it interprets the constant video the eyes provide it.
I program robots for a living, robots that also use camera images to do their work.

Anyway... let's say someone is driving a car and sees a tree off to the side of the road, standing tall and not bothering anyone. Fine.
Now let's say the same person sees a tree that has fallen down in the middle of the road. He sees the trunk, and rising from it the branches with the leaves still on them, stretched across the road, and he determines that his car would incur a lot of damage if he continued down the road.
How does a computer receive an image of a sideways tree across the road and determine what danger would occur if it continued? And what if the tree fell and was removed, and now there are a ton of leaves and small branches on the road - all the parts of a tree, but no longer a tree - so a car can drive over the leaves, but only at a very slow pace, because leaves are very slippery, unlike pavement?

I believe there is a reason states don't give out driver's licenses before a person is 18 or so. Even with our super brains... it takes a while for adaptive learning to shape each potential driver into someone who doesn't pose a risk to society. When a young person rides a bike, crosses a street, or rides with other people in cars, they are learning. And I'd argue that 18 years of this experience earns a person the opportunity to at least take a class and find out if they are fit for driving.

Driving itself is extremely easy. But recognizing, adapting, and adjusting to the environment, easy as it is for the brain, is not easy for a computer.
The question for programmers is: how is it possible to cram 18 years of adaptive learning into a single computer at $500 or so that can perform flawlessly with constant adaptive input?
The same way it determines the difference between someone walking along the sidewalk and someone crossing the road: one is scenery and the other is obstructing the car's path. That seems pretty easy to me. The car has a predetermined route it is following; it sees a tree. Is the tree in my path? No --> continue on path. Yes --> stop. This is not to say that FSD will be easy - there are an infinite number of variables. I think the toughest part will be accounting for changes to the known path, such as construction areas.
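That decision rule is small enough to write down. Here is a toy sketch of it in Python; the genuinely hard part - producing reliable detections and path geometry in the first place - is stubbed out, and all the names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str             # e.g. "tree" or "pedestrian"; may be wrong or unknown
    in_planned_path: bool  # did the perception stack place it on our route?

def plan_action(detections):
    """The 'is it in my path?' rule from the post, and nothing more."""
    for d in detections:
        if d.in_planned_path:
            return "STOP"      # anything blocking the route halts the car
    return "CONTINUE"          # everything else is scenery

print(plan_action([Detection("tree", in_planned_path=False)]))  # CONTINUE
print(plan_action([Detection("tree", in_planned_path=True)]))   # STOP
```

Of course, filling in `in_planned_path` correctly from raw camera frames is where the billions of dollars go.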
 
The car has a predetermined route it is following; it sees a tree. Is the tree in my path? No --> continue on path. Yes --> stop.

Most of the processing goes into determining the immediate surroundings; the route is, at best, a background process - just like with human drivers. You can turn on EAP in its current incarnation without giving a destination, and it will happily drive.

Thank you kindly.
 
Maybe you missed the part where the accident happened 100 yards in front of me, closing the freeway with no exits between me and the accident. Cars ahead of me, cars behind, no way to go anywhere until the highway patrol gave instructions. With a couple of cars unable to follow instructions, nobody would have been going anywhere until tow trucks came to remove them.

No, you missed where I explained how that works. The highway patrol gives a command to the drivers of the old-fashioned cars, and to the AI in the autonomous car. I still don't see what's so hard.

If this is coming in the next couple of years, that must all already exist, right?

I don't think you understand how the 'future' works.

Authority. The commands they accept are authenticated by virtue of being spoken inside the car. But how does a car know to trust someone outside?

How do YOU know to trust someone outside? Remember, the reason we are being replaced as drivers is that we are so terrible at it that a machine can do better.

Thank you kindly.
 
...I program robots for a living, robots that also use camera images to do their work...
...I've already been hit by a lot of hail before due to the nature of my job... I have made many solar panels myself...

I've seen it as I was temporarily blinded by a colossal blunder with Super Capacitors on my job.

I'm not looking for everyone to agree. I get paid to develop AR at a major Telecommunications company here in the US.

One last post concerning the robot assembly lines I write python libraries for.

I have a business where I don't mind breaking even if it continues to do the same - my solar business. It's doing really well and I don't need profit from it.

I've been at my current job for a long, long time (29 years) and love it.

I get my ability to sift through insults because my job calls for me to "take it" from customers on a daily basis.
What exactly is your job title that entails programming python libraries for augmented reality solar panels that power super-capacitors in the hail while getting verbally abused by customers?

I only ask because you are CLEARLY an expert on this thread's subject, and I just want everyone else to get a grasp of your expertise.
 
The same way it determines the difference between someone walking along the sidewalk and someone crossing the road: one is scenery and the other is obstructing the car's path. That seems pretty easy to me. The car has a predetermined route it is following; it sees a tree. Is the tree in my path? No --> continue on path. Yes --> stop. This is not to say that FSD will be easy - there are an infinite number of variables. I think the toughest part will be accounting for changes to the known path, such as construction areas.
That's definitely NOT easy.

I can almost assure you that if you lay a person down sideways in the middle of the street, the computer won't know what it is.
 
That's definitely NOT easy.

I can almost assure you that if you lay a person down sideways in the middle of the street, the computer won't know what it is.

Nobody said it was easy, but this is literally the stuff they have been working on for many years now - starting with primitive sensor systems on cars built for university trials 10 or more years ago, and culminating in billions of dollars now being spent to get the technology good enough to deploy on public roads.

Object recognition is a big part of self-driving, but it's augmented by things like lidar. The car doesn't necessarily have to know that it's a person on the road; it just has to recognize that there's an object, start slowing down, and then, once it knows more about the object from other sensor information, decide what it's going to do.

In current cars with AP2, I believe that if a person were lying comatose on the road, the car would not run over them; it would slow down on recognizing there was an object on the road, then pull over and alert the driver to take control.
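A rough sketch of that "slow first, classify later" behavior, with made-up thresholds and labels purely for illustration (no claim that any real autopilot stack is structured this way):

```python
def respond_to_object(distance_m: float, label: str, confidence: float) -> str:
    """Decide a maneuver without requiring a confident classification."""
    if distance_m < 5.0:
        return "emergency_brake"          # too close to deliberate
    if label == "unknown" or confidence < 0.5:
        # Radar/lidar says *something* is there even if vision can't name it:
        # shed speed first, identify (or hand off to the driver) second.
        return "slow_and_alert_driver"
    if label in ("leaves", "plastic_bag"):
        return "proceed_slowly"           # drivable debris
    return "stop_and_wait"                # a person, tree, etc. in the lane

print(respond_to_object(40.0, "unknown", 0.2))  # slow_and_alert_driver
print(respond_to_object(40.0, "person", 0.95))  # stop_and_wait
```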
 
I hypothesize that Tesla (and other autonomous-vehicle providers) will have remote control centers with operators who can take command when a vehicle flags that it needs assistance (triggered either by a traffic anomaly or manually by a public servant).
The idea of having my car remote-controlled by some random employee is downright frightening...

Unmanned vehicles are a 20-plus-year-old industry. If they can do planes, cars will be no problem.
I seriously hope you are joking here...
The complexity, required reaction times, and data volumes of keeping to a flight route based on altitude, GPS, radar coordinates, etc., while monitoring the vehicle's functions are a joke compared to just crossing a busy street in any city with dozens of other cars, bikes, and people around you.
 
The idea of having my car remote-controlled by some random employee is downright frightening...
Both Waymo and Nissan are going to do this, and California's recent regulations are structured around this method.

I seriously hope you are joking here...
The complexity, required reaction times, and data volumes of keeping to a flight route based on altitude, GPS, radar coordinates, etc., while monitoring the vehicle's functions are a joke compared to just crossing a busy street in any city with dozens of other cars, bikes, and people around you.
I would say they're orthogonal; it's not necessarily that one is more complex than the other. In flight you contend with altitude, and the controls are far more complex (which is why you need a trained pilot, while a car needs minimal training). At altitude there is also a very real possibility of bit flips (thus requiring much more redundancy), while in a car that's far more unlikely.
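For readers unfamiliar with the redundancy point: the classic defense against a bit flip is to run the same computation on independent units and take a majority vote (triple modular redundancy). A toy sketch of just the voting part, with everything else assumed away:

```python
from collections import Counter

def vote(results):
    """Majority vote across redundant compute units; a single corrupted
    result is simply outvoted by the two healthy ones."""
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority - the units disagree entirely")
    return winner

# Simulate one of three units suffering a bit flip in its output:
print(vote([42, 42, 43]))  # -> 42
```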
 
I would say they're orthogonal; it's not necessarily that one is more complex than the other. In flight you contend with altitude, and the controls are far more complex (which is why you need a trained pilot, while a car needs minimal training). At altitude there is also a very real possibility of bit flips (thus requiring much more redundancy), while in a car that's far more unlikely.

Flying is pretty darned easy. Landing less so, but flying? It's much quicker to learn steady-state flying than it is to smoothly parallel park a manual-transmission car on a steep hill (let's throw in some impatient drivers wanting to get around you, too).
 
What exactly is your job title that entails programming python libraries for augmented reality solar panels that power super-capacitors in the hail while getting verbally abused by customers?

I only ask because you are CLEARLY an expert on this thread's subject, and I just want everyone else to get a grasp of your expertise.
 
Everyone I know in the aviation industry says, "If they can do autonomous cars, planes should be no problem!" And they all consider cars the harder task, because the environment they operate in is so much more variable.

For autonomous driving, this is 100% true. I'm referring to the remotely operated aspect: give a technician VR goggles and some controls and they can navigate situations the autopilot can't decipher.

I seriously hope you are joking here...
The complexity, required reaction times, and data volumes of keeping to a flight route based on altitude, GPS, radar coordinates, etc., while monitoring the vehicle's functions are a joke compared to just crossing a busy street in any city with dozens of other cars, bikes, and people around you.

Agreed! See my statement above about remote piloting - that prior post wasn't talking about the autonomous algorithms.
 
Full autonomy requires a lot of CPU power.

If Tesla is going to achieve it with just cameras... then the image-recognition and response software is going to have to be astounding. That's what makes humans so remarkable.
Our eyes are amazing in everything they do, from autofocus to self-cleaning; what's most astounding, though, is our brain and the accuracy with which it interprets the constant video the eyes provide it.
I program robots for a living, robots that also use camera images to do their work.

Anyway... let's say someone is driving a car and sees a tree off to the side of the road, standing tall and not bothering anyone. Fine.
Now let's say the same person sees a tree that has fallen down in the middle of the road. He sees the trunk, and rising from it the branches with the leaves still on them, stretched across the road, and he determines that his car would incur a lot of damage if he continued down the road.
How does a computer receive an image of a sideways tree across the road and determine what danger would occur if it continued? And what if the tree fell and was removed, and now there are a ton of leaves and small branches on the road - all the parts of a tree, but no longer a tree - so a car can drive over the leaves, but only at a very slow pace, because leaves are very slippery, unlike pavement?

I believe there is a reason states don't give out driver's licenses before a person is 18 or so. Even with our super brains... it takes a while for adaptive learning to shape each potential driver into someone who doesn't pose a risk to society. When a young person rides a bike, crosses a street, or rides with other people in cars, they are learning. And I'd argue that 18 years of this experience earns a person the opportunity to at least take a class and find out if they are fit for driving.

Driving itself is extremely easy. But recognizing, adapting, and adjusting to the environment, easy as it is for the brain, is not easy for a computer.
The question for programmers is: how is it possible to cram 18 years of adaptive learning into a single computer at $500 or so that can perform flawlessly with constant adaptive input?
Your one example is easy to handle with rule-based learning, which has been around since, what, the '70s? (OK, maybe since the beginning of time, but I mean with respect to computers.) But if you use rule-based learning to patch every possibility you come across, you'll be behind the keyboard forever.

Also, it's not all about image processing and cameras; there are multiple ways to skin a cat. And yes, your particular example is an easy problem.

As for cramming 18 years of adaptive learning into a single machine, the current school of thought is machine learning. One of the big approaches is neural networks - not necessarily because they're the best of the best at everything (each is a tool in a toolbox: a tool might fit this particular problem but not every problem), but also because people like that they're the most like the human brain (hey, the human brain has neurons, so this must work too! ;)). Three years from now, someone might invent a completely different way of learning, and that might be how we get to L5 autonomy. Or maybe they'll use NNs. Or maybe some sort of rule-based machine learning. Or maybe... or maybe... or maybe...
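To make the rule-based vs. learned distinction concrete, here is a toy contrast in Python. The "learned" side uses a bare-bones nearest-neighbour classifier rather than a neural network, purely because it fits in a few lines; the data, features, and names are all invented for illustration.

```python
def rule_based_brake(height_m, on_road):
    """Hand-written rule: brake only for tall objects on the road.
    Covers exactly the cases you anticipated and nothing else."""
    return on_road and height_m > 0.3

# Labeled examples: (height_m, on_road) -> should_brake
examples = [
    ((2.0, 1.0), True),    # fallen tree across the lane
    ((0.05, 1.0), False),  # scattered leaves
    ((2.0, 0.0), False),   # tree standing beside the road
    ((0.6, 1.0), True),    # person lying in the street
]

def learned_brake(height_m, on_road):
    """1-nearest-neighbour: copy the label of the most similar example,
    so new situations are judged by resemblance, not by an explicit rule."""
    def dist(example):
        (h, r), _ = example
        return (h - height_m) ** 2 + (r - on_road) ** 2
    return min(examples, key=dist)[1]

print(rule_based_brake(0.6, True))  # True
print(learned_brake(0.5, 1.0))      # True - closest example is the person
```

Even at this scale the trade-off is visible: the rule is transparent but only as good as the cases you wrote down, while the learned version depends entirely on the coverage of its examples.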

Please stop regurgitating nonsense you picked up reading TMC. At least go read Wikipedia before leaping from your claims of programming robots to figuring out how to solve L5 autonomy.
 
What exactly is your job title that entails programming python libraries for augmented reality solar panels that power super-capacitors in the hail while getting verbally abused by customers?

I only ask because you are CLEARLY an expert on this thread's subject, and I just want everyone else to get a grasp of your expertise.
You forgot one

My company is AT&T. We set standards.
 
I don't think you understand how the 'future' works.

I have no doubt it can be accomplished; all I ever said was "not any time soon." As in, there's no chance in hell that the current Model 3 will self-drive. These things will all take time to work out, and I think there will be government-mandated hardware on self-driving cars, for exactly these sorts of reasons, before they are ever allowed.
 
I have no doubt it can be accomplished; all I ever said was "not any time soon." As in, there's no chance in hell that the current Model 3 will self-drive. These things will all take time to work out, and I think there will be government-mandated hardware on self-driving cars, for exactly these sorts of reasons, before they are ever allowed.
I don't think anyone is arguing that it exists on the Model 3. The point is that once it does exist (on the Model 3 or another car), there will likely be a way for the car to get an override from law enforcement, a construction-zone operator, or whatever authority you want to plug in here.