Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

My car is learning a lot but.... what? and how?

I keep reading all over the place that Tesla cars "learn" as more miles are driven, that AI makes the learning process faster, that Tesla validates its software with shadow mode, etc. etc.

Can someone with knowledge on the subject please explain what exactly "learning" means and how it happens (or point to a page that explains it)?

So, I drove 20 miles today with Autopilot (TACC and Autosteer). What exactly did Tesla learn from this? Is the car recording the lanes? The GPS position? And if it does, is it only for my car, or does it upload the recording to the cloud? When? How often? What else, and how has the car "learned"?

During my drive there was a false collision warning alert. How did Tesla or the car learn from this? Does the car record from the cameras and send images to the cloud so that Tesla engineers can figure out what happened? Or did the car figure out on its own that this was a false alert and "learn" for the next time?

Is "learning" a manual process (engineers collect data and modify the software) or automatic (software corrects on its own somehow)?

:Confused:
 
You've asked a lot of questions that I believe no one on TMC really has the answers to. Many of them have been discussed, with the conclusion being that we just don't know.

Perhaps someone will jump in with better information, but I think that is unlikely.
 
:Confused:

Me too! Welcome to the club of confusion :)


Is the car recording the lanes? The GPS position? And if it does, is it only for my car, or does it upload the recording to the cloud? When? How often? What else, and how has the car "learned"?

1) Yes.
2) Yes
3) for your car and the cloud
4) all the time, whenever you are driving.
5) and many other things


Is "learning" a manual process (engineers collect data and modify the software) or automatic (software corrects on its own somehow)?

Your car works by Artificial Intelligence.

That means a programmer does not have to write a line of code saying: whenever you see this shape, it's a traffic cone, so don't hit it.

Instead, your car would hit the traffic cone, register that it just had a collision, and adjust its own programming so that it avoids one next time.

However, since your car is brand new, it would not be nice to let it bump into things just to learn.

That is why there is the cloud, where all the incidents, data, and lessons are uploaded: your brand-new car does not have to physically bump into a traffic cone to know better. It downloads all of that experience from the cloud.

That is what I understand.
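To make the distinction concrete, here is a toy sketch in Python. Everything in it is illustrative - the features, the tiny perceptron - none of it is Tesla's actual code. A hand-coded rule has to be written by a programmer; the learned version infers a similar rule purely from labeled examples:

```python
# Hand-coded approach: a programmer writes an explicit rule per object.
def is_cone_rule(height_m: float, is_orange: bool) -> bool:
    return is_orange and 0.3 < height_m < 1.0

# Learned approach: a tiny perceptron infers a rule from labeled examples.
# Features: [height_m, is_orange]; label: 1 = cone, 0 = not a cone.
examples = [([0.5, 1.0], 1), ([0.7, 1.0], 1), ([1.8, 0.0], 0), ([0.5, 0.0], 0)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(50):                        # repeated passes over the data
    for features, label in examples:
        pred = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
        err = label - pred                 # feedback: how wrong were we?
        weights = [w + lr * err * x for w, x in zip(weights, features)]
        bias += lr * err

# A new cone-like input, never hand-coded, is now classified as a cone.
print(1 if sum(w * x for w, x in zip(weights, [0.6, 1.0])) + bias > 0 else 0)
```

The point is the second half: nobody wrote the cone rule there - the weights drifted toward it from feedback alone.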
 
That is why there is the cloud, where all the incidents, data, and lessons are uploaded: your brand-new car does not have to physically bump into a traffic cone to know better. It downloads all of that experience from the cloud.
I'm not an AI expert, but my understanding is that there is more to it than this: the car is always watching what the human driver does, and it learns from that. Since humans generally avoid traffic cones, it learns over time that it should do so too - no collision required.

Using the false collision warning example from the OP: the human driver presumably overrides the car and powers through, which teaches the car that it was a false alarm. Have that happen enough times, and the AI learns more about which sensor input actually represents danger versus input that isn't as bad as it would otherwise have thought. The number of false alarms will go down, though I don't expect they will ever disappear entirely.
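A toy sketch of that feedback loop, with invented names and numbers (real collision-warning logic is vastly more complex and certainly not a single global threshold):

```python
# Hypothetical sketch: repeated driver overrides at similar sensor readings
# gradually raise the score needed to fire a collision warning.
threshold = 0.50          # a warning fires when danger_score exceeds this
LR = 0.10                 # how strongly a single event nudges the threshold

def record_override(danger_score: float) -> None:
    """The driver powered through: treat this score as a false alarm."""
    global threshold
    # Pull the threshold toward (just above) the score that cried wolf.
    threshold += LR * (danger_score + 0.05 - threshold)

for _ in range(30):       # thirty false alarms around a danger score of 0.60
    record_override(0.60)

print(threshold > 0.60)   # True: a 0.60 reading no longer triggers a warning
```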
 
Suggested reading (AP1): Exclusive: The Tesla AutoPilot - An In-Depth Look At The Technology Behind the Engineering Marvel, and maybe this on the Nvidia cards that provide input to the fleet neural network: Nvidia's new Tesla cards meet the needs of the growing capacities of AI services.

Mike_j, when you say you have seen no evidence of AI, what are you expecting to see? The vehicle makes better decisions on the same road - e.g. steering through curves, following road markings, etc. (well-documented observations on this site reflect this) - and that is a result of fleet learning, as just one example.

When a human takes over from TACC or AP, that is a learning event for the network. An individual event does not materially impact an already-learned response, but many 'corrections' received in the same or similar circumstances, locations, etc. will cause the response to be adapted.
 
...Does anyone have anything that's not anecdotal?...

As you can see in the Nvidia video, they did allow the car to hit traffic cones so it could learn for itself.

"In contrast to the usual approach to operating self-driving cars, we did not program any explicit object detection, mapping, path planning or control components into this car. Instead, the car learns on its own to create all necessary internal representations necessary to steer, simply by observing human drivers.

The car successfully navigates the construction site while freeing us from creating specialized detectors for cones or other objects present at the site. Similarly, the car can drive on the road that is overgrown with grass and bushes without the need to create a vegetation detection system. All it takes is about twenty example runs driven by humans at different times of the day. Learning to drive in these complex environments demonstrates new capabilities of deep neural networks.

The car also learns to generalize its driving behavior. This video includes a clip that shows a car that was trained only on California roads successfully driving itself in New Jersey."

 
Sorry, I'm just not drinking the Kool-Aid. Yes, Nvidia showed a car that avoided cones, and Tesla now uses Nvidia hardware. I don't make the leap of faith that Teslas now avoid traffic cones. How many posts have we seen of Autopark not avoiding concrete pillars? Hasn't it learnt by now?
 
...So far I have seen no evidence of AI....

This is what I think: the Nvidia demo car learns locally and reacts locally, without the benefit of downloading other cars' experiences from the cloud. Even though it learned in CA, it can still bring that experience to NJ.

So with that kind of car, you see the AI work immediately - "twenty example runs driven by humans at different times of the day" - and that is the advantage! You see your success right away after giving it 20 demonstrations. Yeah!

The problem with doing everything locally in your car is that you might damage the car while it is learning, for example by hitting traffic cones.

Thus, Tesla lets your car learn and react in the shadows first. For example, it might observe a human slow down from 65 mph to 25 mph every single time at a particular GPS position because of a tight corner. After 20 times, it says: I can do it, I can do it!!!

However, Tesla does not let your car imitate that particular human's behavior until headquarters has double-checked the results and approved them in a subsequent over-the-air update.

Thus, Tesla's disadvantage is that you don't see the result right away.

The advantage is that your car will not be physically damaged by behavior that headquarters has not yet checked and approved.
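The shadow-mode idea described in this post can be sketched as follows. All names are invented; the key property is that the model's prediction is only logged, never acted on:

```python
# Illustrative sketch of "shadow mode": the model predicts, the human acts,
# and only disagreements are logged for later upload - the model's output
# never touches the controls.
TOLERANCE_MPH = 5.0
disagreements = []

def shadow_step(location: str, model_speed_mph: float, human_speed_mph: float) -> float:
    if abs(model_speed_mph - human_speed_mph) > TOLERANCE_MPH:
        disagreements.append((location, model_speed_mph, human_speed_mph))
    return human_speed_mph        # the human's action is always what happens

# The tight corner from the example above: the model wants 65, humans do 25.
for _ in range(20):
    shadow_step("corner_gps_cell", 65.0, 25.0)
print(len(disagreements))         # 20 logged mismatches, ready for upload
```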
 
The "concrete pillar" thing is AP1, with its 12 sonar sensors.

Sonar has its limitations. Among other things, it is heavily dependent on the structure, angle, and material of the surface reflecting the signal (which is just high-frequency sound), and on that surface being perpendicular to the sensor.

AP2 has a couple of extra cameras that AP1 doesn't. With the cameras feeding the PX2 compute unit above the glove box, the system does object recognition and route planning, and calculates the so-called free space, i.e. where the car is allowed to drive (NOT into concrete pillars).

The PX2 uses machine learning - deep neural networks - trained on input data: your and our driving in "shadow mode".

"Experience" acquired from our collective driving is sent back over WiFi/4G to California, or wherever Tesla keeps its mothership Nvidia machine brain, which in turn can flash that "experience" back to the fleet's PX2s.

Simply put
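The round trip described above - cars upload, the mothership aggregates, and the result is flashed back - might be sketched like this (a toy stand-in with invented names, not Tesla's pipeline):

```python
# Toy sketch of the fleet loop: cars upload "experience", the mothership
# aggregates it, and an update is flashed back so every car benefits from
# every other car's miles.
fleet_logs = []                          # stands in for the cloud

def car_upload(car_id: str, event: str) -> None:
    fleet_logs.append((car_id, event))

def mothership_retrain() -> dict:
    # Aggregate: count how often each event type was seen fleet-wide.
    counts = {}
    for _, event in fleet_logs:
        counts[event] = counts.get(event, 0) + 1
    return counts                        # stands in for a new model

car_upload("car_A", "hard_brake_at_overpass")
car_upload("car_B", "hard_brake_at_overpass")
car_upload("car_C", "smooth_pass_at_overpass")
model = mothership_retrain()             # "flashed back" to all PX2s
print(model["hard_brake_at_overpass"])   # 2: two cars' experience, now shared
```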
 
Sorry, I'm just not drinking the Kool-Aid. Yes, Nvidia showed a car that avoided cones, and Tesla now uses Nvidia hardware. I don't make the leap of faith that Teslas now avoid traffic cones. How many posts have we seen of Autopark not avoiding concrete pillars? Hasn't it learnt by now?

You are right that it's too early to know how well Enhanced Autopilot works.

It is true that Nvidia is not Tesla.

But xsi123 was wondering, so I was just speculating about how it might work in theory.

Apparently there isn't enough learning yet, and that's why there has been a delay.

Not enough learning means the system can get into accidents like the ones you mentioned: autoparking into concrete pillars.

We expect it to get incrementally better, eventually to the point of a driverless option too, but of course right now we just don't know.

It would be nice if, at some point in the future, it could prove that it:

1) reliably stops for stationary vehicles (including the Florida case).
2) slows down and does not hit guard rails when cornering.
3) avoids traffic cones

and others...
 
Sorry, I'm just not drinking the Kool-Aid. Yes, Nvidia showed a car that avoided cones, and Tesla now uses Nvidia hardware. I don't make the leap of faith that Teslas now avoid traffic cones. How many posts have we seen of Autopark not avoiding concrete pillars? Hasn't it learnt by now?
Agreed. A few tweets and a carefully scripted video or two have created a lot of conjecture. I find it hard to believe that Tesla is advanced enough, and has enough talent, to create what has been speculated on TMC. I suspect there are still 'old-fashioned' lines of code being written to handle driving events.
 
I have had my S for 5 months now, so it's old :)
But the learning it has done in that time is obvious to see. I love that about the car!
One example (I intentionally repeated this over and over for 3 months): a parkway curve with an incline. The first few times, at the same speed of 55 mph, the S just could not handle it and would beep, beep, beep and generally panic for me to take over. :) After about 8 times it started to handle that same spot on AP1 without any human assistance. Cool!
How it learns, the details, etc.... I have no idea. Sure, it sends info and such, but I am not a programmer, so...
 
Agreed. A few tweets and a carefully scripted video or two have created a lot of conjecture. I find it hard to believe that Tesla is advanced enough, and has enough talent, to create what has been speculated on TMC. I suspect there are still 'old-fashioned' lines of code being written to handle driving events.
Had to tap the Disagree button on this. Tesla is far, far ahead of the competition, and there's no black magic involved at all! Quite the opposite: deep learning and neural networks are many-years-old technology. The hardware for autonomous driving has only recently become cheap (and small) enough to be realistically installed in production vehicles. By acquiring Mobileye's, and later Nvidia's, hardware for its cars, all Tesla needed to do was set up a dedicated team to work on this 24/7, gather data from thousands of cars, and let the machines do their job.

No question, HW2 will make significant progress in the coming months. On the other hand, I don't think we'll see HW2 reach Level 5 autonomy.
 
...conjecture. I find it hard to believe that Tesla is advanced enough and has enough talent to create what has been speculated on TMC. I suspect there are still 'old fashioned' lines of code being written to handle driving events.


Tesla is using the Nvidia platform, and as Nvidia states in its YouTube video:

"In contrast to the usual approach to operating self-driving cars, we did not program any explicit object detection, mapping, path planning or control components into this car. Instead, the car learns on its own to create all necessary internal representations necessary to steer, simply by observing human drivers."

You can disagree with their disclosure and believe it's a scripted video in which they manually wrote code for overgrown grass-covered dirt roads, conventional roads, traffic cones, turning left, turning right after so many feet, and so on...

But I don't think so, because Tesla is not the only company buying from Nvidia - others do too.

It is true that AP2 right now is not advanced enough, because it is still learning. That is why the rollout has been limited: at first there were only 1,000 cars, and even now the maximum Autosteer speed is 45 mph, on freeways only...

Tesla predicts it will improve drastically within six months to a year, so we'll have to wait and see once the feature limitations are lifted.
 
The entire fleet learns as one. It would make no sense if Tesla made it so that each car required its own special training and learning.

What's unknown is how many "takes" are required to whitelist an area, e.g. a certain hilltop, a freeway overpass, etc. If anyone experiences TACC braking where it's not supposed to, more than once at the same spot, we know it requires more than one "take" before the system knows the spot is safe... The question is how many.
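A minimal sketch of that "takes" bookkeeping, assuming a per-location counter and a purely speculative threshold of 20:

```python
# Sketch of whitelisting by repeated safe passes. The required count is
# unknown in reality; 20 is just the number the thread speculates about.
from collections import defaultdict

REQUIRED_TAKES = 20
takes = defaultdict(int)

def record_safe_pass(gps_cell: str) -> bool:
    """Returns True once the spot has accumulated enough safe passes."""
    takes[gps_cell] += 1
    return takes[gps_cell] >= REQUIRED_TAKES

whitelisted = [record_safe_pass("hilltop_37.77_-122.42") for _ in range(20)]
print(whitelisted[-1])   # True on the 20th pass, False on every earlier one
```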
 
Agreed. A few Tweets and a carefully scripted video or two has created a lot of conjecture. I find it hard to believe that Tesla is advanced enough and has enough talent to create what has been speculated on TMC. I suspect there are still 'old fashioned' lines of code being written to handle driving events.
Sorry, as others have said, this is so wrong it warrants a disagree. Although my current job role does not involve AI, I learned plenty about it during my studies.

Other than in very simple situations with few corner cases, it's drastically harder to manually program something that can handle pattern recognition than to just use a neural network (as both AP1 and AP2 do).

Neural networks are not a very complex concept. You can google for examples, but here is one with just 9 lines of code:
How to build a simple neural network in 9 lines of Python code – Technology, Invention, App, and More

A neural network learns from training data by assigning weights to various inputs in order to calculate an output. The difference between the actual output and the desired output is then calculated, and the weights are adjusted to bring the two closer together. Repeat this many times, and the network will answer perfectly when given the same training inputs, without the programmer ever manually programming how to handle specific situations. The challenges are formatting the data into something the network can recognize, and creating a way to tell it how wrong a particular output is.

Later on, new inputs (not in the training set) can be given to the network, and it will produce an answer based on what it has learned.
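In the spirit of the article linked above, the whole train-then-infer cycle fits in a few lines of Python (toy data, a single layer of weights, sigmoid activation):

```python
import numpy as np

# Four training examples, three inputs each; the answer happens to follow
# the first input - but the network is never told that rule.
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
weights = 2 * np.random.random((3, 1)) - 1     # random starting weights

for _ in range(10000):
    output = 1 / (1 + np.exp(-X.dot(weights)))         # forward pass (sigmoid)
    error = y - output                                 # how wrong were we?
    weights += X.T.dot(error * output * (1 - output))  # adjust the weights

# Inference on an input that was never in the training set:
new_output = 1 / (1 + np.exp(-np.array([1, 0, 0]).dot(weights)))
print(float(new_output))                               # close to 1
```

After training, the unseen input [1, 0, 0] scores very close to 1, because the weight on the first input grew large - exactly the pattern hidden in the training data.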
 
I don't know anything about how Tesla, specifically, is doing autonomous driving, but I do know a thing or two about deep learning (AI based on neural networks) and a bit about the different approaches for applying deep learning to things like autonomous driving.

The first thing to understand is that there are two 'steps' to deep learning: "training" and "inference". You can think of training as 'teaching' a neural network whatever it is you want to know. Inference is when you take the results of that training and use it to draw conclusions.

Training
Training is extremely compute-intensive. Nvidia's specialized training computer consumes about 3,200 watts and can do roughly 170 trillion operations per second; you would probably need about a dozen of those to do the kind of training autonomous driving requires, and they would probably still have to run almost full-time. Clearly, this is the kind of thing that requires a data center.

The way training works is that you create a set of data (for example, a few million images) along with labels ("this image contains a boston terrier"). You "show" the data to a neural network, and compare it against what the neural network thinks it saw. If it is wrong ("majestic bald eagle"), you give the network feedback ("no! bad neural network! this is a boston terrier"). Repeat this several million times, and the network will eventually learn what a boston terrier looks like.

Creating those labeled data sets is one of the most important parts of deep learning (and AI in general) - google "data is the new coal" to learn more about that. This is what I suspect Tesla is doing right now: collecting tons of video from our AP2 cars, along with all the other sensor data. The data then has to be labeled - someone or something has to point to objects, identify them as whatever they are, etc. This is a labor intensive process. Once you have a baseline of functionality, you can start taking shortcuts (more on this later) to accelerate the process.

Training tends to work very well in certain problem domains: for example, you can train a network by showing it video and identifying objects ("pedestrian", "sign", etc.). It is harder to train a network to do higher-level cognition ("this is a chicken crossing a road"), so engineers tend to build a bunch of lower-level neural networks and wire them into a higher-level brain. This is similar to how a brain is wired: there will be some object-detection 'lobes', some sensor 'lobes', etc., all feeding their conclusions to a 'decider lobe', which in turn controls the brakes, the steering wheel, and so on. Note that not all of these are necessarily neural networks - some might be more traditional control systems.
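That wiring - lower-level 'lobes' feeding a decider - can be sketched as plain functions (all names invented; in reality each lobe would be a trained network, and the decider might be one too or a traditional controller):

```python
# Sketch of composing lower-level "lobes" into a higher-level decider.
def object_lobe(camera_frame: dict) -> list:
    """Stand-in for an object-detection network."""
    return camera_frame.get("objects", [])

def range_lobe(radar_reading: dict) -> float:
    """Stand-in for a sensor network reporting the closest obstacle in meters."""
    return radar_reading.get("closest_m", float("inf"))

def decider(objects: list, closest_m: float) -> str:
    """The 'decider lobe': turns the lobes' conclusions into a control action."""
    if "pedestrian" in objects and closest_m < 15:
        return "brake"
    if closest_m < 5:
        return "brake"
    return "cruise"

frame = {"objects": ["pedestrian", "sign"]}
radar = {"closest_m": 12.0}
print(decider(object_lobe(frame), range_lobe(radar)))   # brake
```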

Next post: inference.