Why Tesla's fleet learning is a big deal


diplomat33

My perspective is that solving Full Self-Driving (L4 autonomy) is essentially a two stage process:

Stage 1: "Feature Complete"

This is the initial stage where you put the right hardware and software in the car and it is able to do all the basic functions that a self-driving car needs to be able to do (stay in the lane, move with traffic, make lane changes, stop at stop signs and red lights, navigate intersections, follow nav directions, etc.). During this stage, the system is not reliable enough to be L4, so driver supervision is required (a safety driver), but the goal at this stage is just getting the pieces in place.

This stage is now relatively straightforward, I think. Most self-driving companies, like Waymo, achieved this stage a while ago. Even newcomers seem to reach this stage relatively quickly, since the hardware and software are common. Get a powerful Nvidia or EyeQ4 computer, LIDAR, radar, some high-def cameras, and the right software, and you are pretty much good to go. LIDAR definitely helps you reach this stage quicker too, since it provides the car with high-accuracy mapping of its surroundings.

Tesla is just now finishing up this stage because they did things the hard way. They rejected LIDAR (for various reasons I won't rehash here because it is off topic) and went with camera + radar + ultrasonic hardware that puts much more emphasis on camera vision. Basically, camera vision has to do what the LIDAR would normally do, so the camera vision has to be much more sophisticated. But based on the Autonomy Investor Day event, it appears that Tesla has refined its camera vision to the point where it can detect and track objects with high accuracy.

Stage 2: "The March of 9's"

This is the perfecting stage. Once you have the pieces together, you have to make the system better and better until it gets so good and so reliable that the driver can safely stop paying attention. Of course, 99% might sound very good, but it is not good enough considering how many miles cars drive in the US every day. You really have to get to something like 99.9999% before your car is good enough to be L4 autonomous. Hence the term "march of 9's": you keep perfecting your self-driving, adding another 9 each time, until eventually you have enough 9's that your system is L4 autonomous.
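To make the scale concrete, here is a rough back-of-envelope sketch. The numbers are my own assumptions, not anything from Tesla: I treat a "failure" as one critical error per mile driven and take roughly 9 billion US fleet miles per day.

```python
# Rough back-of-envelope: why each extra 9 matters.
# Assumptions (mine, not from the post): a "failure" is one critical error per
# mile driven, and the US fleet covers roughly 9 billion miles per day.

US_MILES_PER_DAY = 9_000_000_000  # rough figure, for illustration only

for nines in range(2, 7):  # 99% up to 99.9999%
    reliability = 1 - 10 ** (-nines)
    failures_per_mile = 10 ** (-nines)
    miles_between_failures = 1 / failures_per_mile
    fleet_failures_per_day = US_MILES_PER_DAY * failures_per_mile
    print(f"{nines} nines ({reliability:.6%}): one error every "
          f"{miles_between_failures:,.0f} miles, "
          f"~{fleet_failures_per_day:,.0f} fleet-wide errors per day")
```

Under those assumptions, 99% still means tens of millions of fleet-wide errors per day, which is why the extra 9's matter so much.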

This is the tricky stage, and it is what the leaders in self-driving are currently working on. There are probably millions and millions of edge cases and driving situations that a car needs to learn how to handle in order to be L4 autonomous. Plus, human drivers can be very unpredictable, so it is difficult to teach an autonomous car to handle all the crazy situations it might face.

This is where I think Tesla's fleet learning will pay off big time. These edge cases happen all over the world, on different roads, in different weather conditions, etc. There is no easy way to solve all the edge cases and driving situations with simulations; they are just not going to cover everything. And a small fleet of cars, or a fleet that is very localized, will also miss a ton of driving situations. The best way, statistically, to cover the millions of driving cases is to have a huge number of cars spread out over the entire US, for example, because those cars will experience a much greater variety of driving situations every day. So this is why I think Tesla's fleet learning is a big deal. With hundreds of thousands of cars on roads all over the world, Tesla will get to all those edge cases and driving situations much faster. When Tesla gets to 1 million cars on the road in a couple of years, that data advantage will be even bigger.
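As a rough illustration of the statistics (my own assumptions, not Tesla's figures): suppose an edge case shows up once every 10 million miles on average and each car drives 30 miles a day. The chance that the fleet sees it at least once on a given day climbs quickly with fleet size.

```python
import math

# Illustrative only: probability that a fleet sees a rare edge case at least
# once in a single day. Assumptions (mine): the edge case occurs about once per
# 10 million miles, and each car drives 30 miles per day.

EDGE_CASE_RATE = 1 / 10_000_000   # occurrences per mile (assumption)
MILES_PER_CAR_PER_DAY = 30        # assumption

for fleet_size in (100, 10_000, 500_000):
    fleet_miles = fleet_size * MILES_PER_CAR_PER_DAY
    p_seen = 1 - math.exp(-EDGE_CASE_RATE * fleet_miles)  # Poisson approximation
    print(f"{fleet_size:>7,} cars -> {fleet_miles:>12,} miles/day, "
          f"P(seen today) ≈ {p_seen:.1%}")
```

With those assumed numbers, a 100-car fleet almost never encounters the case on a given day, while a 500,000-car fleet sees it most days.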

This is why I am optimistic that Tesla will achieve general full self-driving at some point. Solving the edge cases requires massive data, and Tesla has that. Now, other companies can get to local full self-driving by focusing on just one geographical area and training their cars to handle that specific area. This is what Waymo is doing, and I think they will be successful. But Tesla has an advantage in getting to full self-driving in a more general way because of the data that Tesla has. As the number of cars grows, so will the data, which will accelerate the machine learning. It is exponential. And really, exponential growth is the only way to solve a big data problem in a timely manner. It's like if I tell you to solve 100 billion puzzles. Trying to solve that many puzzles by hand would take forever. Even getting a large number of people, say 100,000, to solve them would still take way too long. But if I grow the number of people solving the puzzles exponentially, I will solve the puzzles in a reasonable amount of time.
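Putting rough numbers on the puzzle analogy (the per-solver rate and doubling period are my own assumptions, purely to show the shape of the curve):

```python
# The puzzle analogy in rough numbers. Assumptions (mine, purely illustrative):
# 100 billion puzzles, each solver clears 10 puzzles per day.

PUZZLES = 100_000_000_000
PER_SOLVER_PER_DAY = 10

# A fixed pool of 100,000 solvers:
fixed_days = PUZZLES / (100_000 * PER_SOLVER_PER_DAY)
print(f"Fixed 100,000 solvers: {fixed_days:,.0f} days (~{fixed_days / 365:.0f} years)")

# A pool that starts at 100,000 solvers and doubles every 90 days:
solvers, solved, day = 100_000, 0, 0
while solved < PUZZLES:
    solved += solvers * PER_SOLVER_PER_DAY
    day += 1
    if day % 90 == 0:
        solvers *= 2
print(f"Doubling every 90 days: done in {day:,} days (~{day / 365:.1f} years)")
```

The fixed pool takes centuries; the doubling pool finishes in a few years, which is the point of the analogy.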
 
I think that “Feature complete by the end of this year” is finally a timeline, although ambitious, that we can expect to be held to.

The reason I say this after all the missed predictions is that this is the first timeline Tesla has given Autopilot since EAP (in the configurator). Note that I say Tesla, and not Elon.

And the previous timeline wasn’t met, and Tesla was sued over it.
I’m sure they don’t want to be sued again... and putting the timeline in the configurator once again cannot be an accident. They must be sure of it, or perhaps they are even giving what they think is a conservative timeline.
 
Perhaps I missed it. Has anybody seen a formal definition of how you calculate the number of 9s for an L5 candidate? Similarly, is there a formal definition for human drivers? And lastly, I'd like to see what the number of 9s is for top 10%, average, and bottom 10% human drivers.

Here is an interesting paper on the subject. It's academic but if you have time, you should check it out:
https://www.rand.org/content/dam/rand/pubs/research_reports/RR1400/RR1478/RAND_RR1478.pdf
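The headline numbers in that RAND report come down to a statistical question: how many failure-free miles would you need to demonstrate, with a given confidence, that the true failure rate is below some target? Here is a minimal sketch of that kind of calculation (a Poisson/exponential approximation with zero observed failures, not the report's exact methodology):

```python
import math

# Minimal sketch of the kind of calculation behind the RAND report's headline
# numbers: how many failure-free miles are needed to claim, with confidence C,
# that the true failure rate is below a target? (Poisson/exponential
# approximation with zero observed failures; not the report's exact method.)

def miles_to_demonstrate(target_rate_per_mile: float, confidence: float) -> float:
    return -math.log(1 - confidence) / target_rate_per_mile

# Example: a fatality rate better than 1 per 100 million miles, at 95% confidence.
print(f"{miles_to_demonstrate(1e-8, 0.95):,.0f} miles")  # ~300 million miles
```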
 
Here is an interesting paper on the subject. It's academic but if you have time, you should check it out:
https://www.rand.org/content/dam/rand/pubs/research_reports/RR1400/RR1478/RAND_RR1478.pdf
Thanks. Mostly just skimmed it, but I think they may be missing an option in their analysis. Elaborating...

Repeatedly, they use the "100 test vehicles, driven 365x24x7, …" approach and end up with large durations like 400 years.

Sometimes the solution is obvious: change one of the parameters.

If a city with 10,000 drivers opted to sign on to be a test fleet then that 400 years becomes 4 years. If you bump that up to 100,000 drivers, it drops to 4.8 months.
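A quick check of that scaling, assuming the required total mileage is fixed and per-vehicle usage stays the same, so coverage scales linearly with fleet size:

```python
# Quick check of the scaling above, assuming the required total mileage is fixed
# and per-vehicle usage stays the same, so coverage scales linearly with fleet size.

BASE_FLEET, BASE_YEARS = 100, 400

for fleet in (10_000, 100_000):
    years = BASE_YEARS * BASE_FLEET / fleet
    print(f"{fleet:>7,} vehicles -> {years:.1f} years ({years * 12:.1f} months)")
```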

With this in mind, Tesla's approach of NoA with confirmation, and then without, aligns well. As does initially leveraging the redundant CPUs not for redundancy, but to test different neural net revisions against each other.
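For what it's worth, here is a purely illustrative sketch of what that kind of shadow-mode comparison could look like: only one network's plan ever controls the car, the other is just logged, and large disagreements flag the clip as interesting. The names, fields, and thresholds are hypothetical, not anything Tesla has described in detail.

```python
from dataclasses import dataclass

# Purely illustrative sketch of a "shadow mode" comparison between two network
# revisions. Names, fields, and thresholds are hypothetical, not Tesla's code.

@dataclass
class Plan:
    steering_deg: float
    target_speed_mph: float

def disagreement(active: Plan, shadow: Plan) -> float:
    """Crude scalar measure of how far the two plans diverge."""
    return (abs(active.steering_deg - shadow.steering_deg) / 10.0
            + abs(active.target_speed_mph - shadow.target_speed_mph) / 25.0)

def flag_for_upload(active: Plan, shadow: Plan, threshold: float = 1.0) -> bool:
    # Only the active plan ever controls the car; the shadow plan is just logged,
    # and a big disagreement marks the clip as interesting training data.
    return disagreement(active, shadow) > threshold

print(flag_for_upload(Plan(2.0, 45.0), Plan(14.0, 30.0)))  # True: flag this clip
```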
 
...
If a city with 10,000 drivers opted to sign on to be a test fleet then that 400 years becomes 4 years. If you bump that up to 100,000 drivers, it drops to 4.8 months.
...

Yes, and imagine when Tesla hits 500,000 cars or 1 million cars? The time will drop even lower.
 
...
If a city with 10,000 drivers opted to sign on to be a test fleet then that 400 years becomes 4 years. If you bump that up to 100,000 drivers, it drops to 4.8 months.
...
And thinking a bit more about it... if these 10,000 were Tesla employees who opted in using their personal vehicles, California might want to consider it.

All the insurance, liability, and repairs would fall squarely on Tesla. Nice and simple.
 
There is no definition for feature complete so it isn't much of a timeline.

I'm quite certain, as directly implied (or even stated outright, I can't remember) during the autonomy event, that "feature complete" entails the car doing 100% of all driving tasks while still requiring supervision.

I take it to mean that it will be similar to what we have on the freeways, but on surface roads. Essentially (again, my interpretation), the car will technically be able to handle all situations, but we may need to intervene once in a while if it takes a longer route or is too cautious when making certain maneuvers.
 
... but we may need to intervene once in a while if it takes a longer route or is too cautious when making certain maneuvers.
Good luck with that interpretation. It will not be able to handle all situations, not even close, by the end of the year. It will not handle someone giving hand gestures (the cameras are low-res), will not handle partially occluded objects like a stop sign blocked by tree growth, and will not handle adversarial attacks, like when people learn that putting a little white sticker on a stop sign makes Tesla misread it, etc. Elon has been wrong so many times on FSD, and even he admits his timelines are overly optimistic.
 
This is you in 12 months


No, this will be me in 12 months, except in my Model 3:

 
My perspective is that solving Full Self-Driving (L4 autonomy) is essentially a two stage process:

Stage 1: "Feature Complete"


Stage 2: "The March of 9's"

I completely agree with you, but would modify the 'Feature Complete' stage slightly, which makes it more intuitive. I would say feature complete is when the AI CAN, as you say, cope with most/all situations that it will come across in the normal course of driving. BUT it's still a NOVICE, just like a teenager who has just passed their test: OK most of the time, but still with some learning to do. It needs some mentoring, some supervision, and some limits while it's gaining experience.

During the March of the 9's, it's gaining experience and knowledge and becoming a more and more expert driver. At some point, its capability will exceed that of most of its mentors. At that point, it needs to migrate to more 'self-learning' to progress.

It keeps learning, and at some point becomes better than the 'average' driver (who, of course, thinks they are better than 95% of the rest of the drivers on the road). And at that point, you might as well let the AI drive, as it's probably better than you (unless you want to drive, that is). At which point the insurance companies give you a bonus for letting the AI drive.

It keeps learning

At some point it's better than all but the most expert drivers, at which point the insurance companies penalize you if you want to drive yourself.

Somewhere along that line, the 'driver' of the car trusts the AI to drive more than themselves, and figures there isn't any point in monitoring it, as it's better than they are.

Somewhere along that line, the numbers will show that the AI (singular) is better than the millions of individuals trying to learn how to drive, and at that point the AI is ready for Level 5.

The reason I think this is inevitable is that the AI has better sensors than us, and a more capable process than us (at the driving task). Past the 'feature complete' stage, we don't need to 'program' it any more; it just needs to learn and gain experience. And since it's a single AI learning with millions of eyes all at once, it's going to do that pretty quickly. (Or we will find out pretty quickly that it doesn't have the capacity to learn enough, and needs a hardware update to push it along to the next stage.)
 
I remember watching an Nvidia video a while back on their system, and it mentioned that getting source video was the hardest thing: they had the system working, but feeding the machine-learning beast was a non-trivial problem.

I think people/Wall Street/analysts are seriously underestimating, or really just missing completely, what it means to have a fleet of cars sending back video to feed into the machine learning.

You can have the hardware...
You can have the software...

But you really, really need the data, and lots of it.

And Tesla is the only company that has it right now.
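For intuition, here is a purely illustrative sketch of how a fleet can feed the "beast" without streaming everything: cars upload short clips only when something labeling-worthy happens. The event names and the confidence threshold are hypothetical, not a description of Tesla's actual triggers.

```python
# Purely illustrative sketch of a fleet data "trigger": upload a short clip only
# when something labeling-worthy happens, instead of streaming all video.
# Event names and the confidence threshold are hypothetical.

INTERESTING_EVENTS = {
    "driver_intervention", "hard_brake", "rare_object", "detector_disagreement",
}

def should_upload(event: str, detector_confidence: float) -> bool:
    if event in INTERESTING_EVENTS:
        return True
    # Low-confidence detections are also worth sending back for a human label.
    return detector_confidence < 0.5

clips = [("driver_intervention", 0.9), ("normal_cruise", 0.95), ("normal_cruise", 0.3)]
print([should_upload(event, conf) for event, conf in clips])  # [True, False, True]
```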
 
I think that “Feature complete by the end of this year” is finally a timeline, although ambitious, that we can expect to be held to.

The reason I say this after all the missed predictions is that this is the first timeline Tesla has given Autopilot since EAP (in the configurator). Note that I say Tesla, and not Elon.

And the previous timeline wasn’t met, and Tesla was sued over it.
I’m sure they don’t want to be sued again... and putting the timeline in the configurator once again cannot be an accident. They must be sure of it, or perhaps they are even giving what they think is a conservative timeline.
I disagree on the "although ambitious" part. At least in software, if you tell the public, or your VP, SVP, or EVP, "100% absolutely we'll have it out this year," you've already got it and are just going from a highly successful limited beta to a bigger-audience beta. Their employee volunteers have been running with it for, what, six months? It's almost mid-year. I'd expect the EAP to either have it or be about to have it. But that may be perturbed because several people broke their NDAs. TWT, it's May next week.
 
There is no definition for feature complete so it isn't much of a timeline.
I would be highly doubtful that's the case. I've done software development for 45 years. I cannot think of one time there wasn't a list of features: an MVP (minimum viable product) declaration, must-haves, nice-to-haves, and if-we-have-times. I'd guess at least the software side is doing Agile. They'll have an MVP defined, a burn-down list, and a burn-up list.
As for telling us what it is, yeah, I'd guess they have: NoA for non-highways, plus auto park and auto summon. But as far as spelling it out explicitly, no, they will never expose the lists.
 
Perhaps I missed it. Has anybody seen a formal definition of how you calculate the number of 9s for an L5 candidate? Similarly, is there a formal definition for human drivers? And lastly, I'd like to see what the number of 9s is for top 10%, average, and bottom 10% human drivers.
Exactly. Where are the goalposts?
Taking a nap in the car's back seat may be a perfectly fine criterion for Elon Musk, self-proclaimed Engineer Extraordinaire, but regulators, legislators, and technical professionals need to hammer out specifics.
I get that Tesla's accomplishment will be nothing short of extraordinary and world-changing no matter how many 9's, but the sales/finance/marketing people need to step back and the technical people need to step up and hammer out quantifiable standards that the regulators and legislators can work with to do their necessary jobs. Much of this is probably going on, or will happen in a scramble, to keep Teslas from being blocked from being used in FSD mode by their owners. A classic "disruptive" strategy if ever there was one.
The takeaway from Monday is that the collective Tesla technical team is VERY confident in their work and their ability to release a solid FSD system in the near future. Their progress is very hard to believe, even for experts in the field, and the data collection system using the Tesla fleet is genius: clearly planned, but still amazing and nearly impossible to compete with. It gave the competition and investors plenty to think about. It would appear that Tesla's risky gamble on deep neural networks and machine learning is going to pay off in a unique, elegant solution.
 
@Paddy3101 - I think you missed a wrinkle. Elaborating...

As the performance of the "L5 FSD candidate system" improves, insurance companies and regulators will likely be motivated to incentivize humans to "let the car do it" to remove the unpredictable humans from the roadways. I expect much of the tail of edge cases to be directly caused by the "cohabiting" human drivers. If/when the human drivers are removed, the self-driving car performance numbers will improve overnight. I suspect that can be proven statistically in test markets pretty easily.

In other words, if you can get Jimmy to let the car drive, he becomes safer and the self-driving cars around him become safer.
 
Exactly. Where are the goalposts?
Taking a nap in the car's back seat may be a perfectly fine criterion for Elon Musk, self-proclaimed Engineer Extraordinaire, but regulators, legislators, and technical professionals need to hammer out specifics.
...
I saw a Tweet just a few minutes ago where the guy said he was there and took the demo ride and it was absolutely phenomenal.
As for regulatory approval, remember this also: the real push is not coming from Tesla owners; it's coming from UPS, FedEx, and long-haul companies everywhere. There is a LOT of money at stake here.
 