
Autonomous Car Progress

The bright side is that once Tesla widely releases FSD Beta to the US fleet, our questions will be answered.

What questions will be answered?

Who to believe and what approach is correct
I'm a little suspicious that if Tesla released V9.1 as is you would say it proves that Tesla's approach is correct. Or is there some quantifiable performance level that you'll be looking for in the wide release?
I think it's important to set expectations now so we can avoid confirmation bias.
 
I'm a little suspicious that if Tesla released V9.1 as is you would say it proves that Tesla's approach is correct. Or is there some quantifiable performance level that you'll be looking for in the wide release?
I think it's important to set expectations now so we can avoid confirmation bias.

Unfortunately, it's mostly just my opinion. We can see from the videos that FSD Beta makes maneuvers that require constant vigilance. In Dave Lee's recent video about FSD Beta, he also says it's a long way from wide release, simply because of the potentially dangerous maneuvers and decisions it makes.

The regulatory pressure on a widely released FSD Beta would be immense, so Tesla's performance threshold for wide release is likely very high. Again, it's just my conjecture.

I also want to add that it would actually be a bad idea for Tesla to widely release FSD Beta... There's a good chance a true wide release never happens (because of regulatory risk), but the beta program will likely be expanded a great deal.
 
Ok wow lol

He's right. You refuse to take off the Tesla glasses and open your eyes. Here is Waymo's Head of Research indirectly talking about Mobileye's True Redundancy.

"There's early fusion where you take all the signals from all sensors and you mesh them and throw them into one deep network and detection and tracking comes out. There's late fusion which is you process each stream independently eventually you combine signals from all of them to track the objects with...What is the benefit of late fusion? The benefit is that you train three model typically, a radar model, a lidar model and a camera model. Each of those have strengths and weakness and they are trained in different situation and ultimately they will have very different failures. So if you have all of them giving you detections and then you have something that combines them. They have very independent mode of failures and that's a good property. Because you can guarantee that if one model misses something the other one is very likely to pick it up. By the law of independent factors you can multiple the failure rate. So that's great. Typically if you are doing academic research style work, if you fuse everything early and throw it into a big deep network. You will do the best in some academic objective. But you will lose the independent failure. What you are risking is that you have a single model that essentially fuse data in such a way (biases from co-training) that it will make correlated mistakes and you don't have a way to hedge against it. If you did them independently it might come out right." - Drago Anguelov

Is Drago, Waymo's Head of Research, who agrees with Amnon, also a stupid idiot?
 
This is an interesting issue. The two systems analyze the situation independently, so the probability of failure is assumed to be random in each, and therefore the failures are assumed to be statistically uncorrelated. If that were clearly true, the combined probability calculation would be defensible.

However, this kind of statistical predictive calculation is fraught with major possibilities for error. It would be correct if the possibility of failure in each had nothing to do with the external input, but only with some kind of random noise-generated failure within the systems themselves. IMO it is, however, quite likely that each has a chance of perceptual error that is highly influenced by challenging external scenarios.

Each (vision and fused radar/lidar) may actually do much better than 10^-4 (1 in 10,000) failure when presented with the everyday scenarios it was trained for, but significantly worse when presented with unusual, unknown and/or untrained scenarios. Thus, in the set of unusual events (edge cases, as we like to call them), the probability of failure may be far higher than 10^-4 for each, and since the major causative factor was the occurrence of the edge-case scenario, which is challenging for both systems, we can reasonably conclude that the probabilities of edge-case failures will not be statistically independent. This has two very important implications:
  • The 10^-4 baseline assumption is invalidated for this set of edge cases (maybe it's now 10^-3 (1 in 1,000) or 10^-2 (1 in 100) for this troublesome set).
  • Further, and very importantly, we can no longer take comfort in the multiply-them-together estimation method, because the cause of failure was not random between them but exposed possible vulnerabilities in each. And it is in this very set of edge cases where that is likely to be true.
This is a fairly well-known concept in system reliability engineering. Non-random (here, edge-case-scenario) events may be Special Cause failures that do not obey the statistical distribution assumptions. In a supposedly redundant system, Special Cause failures that can affect both modules of the 2x-redundant architecture are then known as Common Cause failures. This is highly likely in simple redundancy where the two modules are identical. In this ME case, the two modules are very different ("not that kind of redundancy", to quote an earlier exchange about ME), so admittedly it's not clear that Special Cause failures will become Common Cause failures. However, it's also not at all clear that they won't, because both sides are encountering unusual input and can err for related or independent reasons.

Post-event analysis of accidents and failures often reveals the unexpected ways that engineering assumptions crumbled in the face of assumed-independent failures that turned out not to be independent. Events that were assumed to be statistically independent became related by a chain-of-unfortunate-events scenario that defied the comforting statistical predictions.
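A quick back-of-the-envelope calculation makes the concern concrete. The scenario mix and per-channel failure rates below are invented for illustration only:

```python
# Sketch of the common-cause concern: condition on the scenario first,
# then multiply within each condition. All numbers are hypothetical.

p_edge = 0.01          # fraction of encounters that are edge-case scenarios
p_fail_nominal = 1e-4  # per-channel failure rate in everyday scenarios
p_fail_edge = 1e-2     # per-channel failure rate in edge-case scenarios

# Naive independence claim: just multiply the headline 1e-4 rates.
naive_joint = p_fail_nominal ** 2

# Scenario-aware estimate: the same hard scene degrades both channels at once.
joint = (1 - p_edge) * p_fail_nominal ** 2 + p_edge * p_fail_edge ** 2

print(f"naive independent estimate: {naive_joint:.1e}")   # 1.0e-08
print(f"scenario-aware estimate:    {joint:.1e}")         # ~1.0e-06
# The edge-case term (0.01 * 1e-2 * 1e-2 = 1e-6) dominates, roughly a
# hundred times worse than the naive product suggests.
```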

I'm not going all the way to say that Amnon is pushing a clear fallacy, but I'm strongly cautioning that this use of statistical prediction contains important assumptions that deserve to be challenged and re-examined. It sounds technically comforting but I'm not ready to accept the randomness premise behind it.

And in this whole discussion, we're glossing over the issue of how the outputs of these independent perception modules are eventually fused into a driving decision. That is a further non-statistical process. Voting on which side to believe is difficult when there can be serious errors in the confidence level of each side's perception.

One of the best posts in this thread: very well laid out, brings up even more concerns about Amnon's claims. He's made many fallacious claims, at least 5 in that video alone.
 
The issue is that only the manufacturer has the simulation tools to figure out whether a disengagement was necessary to avoid a collision. Tesla is going to have the same issue once FSD gets a few orders of magnitude better. How can an FSD Beta user determine whether or not a collision would have occurred had they not disengaged?
Once systems are actually deployed and driving billions of miles I agree that the DMV can just track accident data and miles driven and determine safety. We won’t need to rely on the analysis done by the manufacturer.
The only things that ultimately matter, in my view, are injuries and deaths per million miles while using any technology, from humans driving to driver assist to lame-o L4. (Property loss $/MM arguably could be another valuable metric.)

The internals of widely divergent technologies are interesting, but ultimately irrelevant in the real world. Those showing worse metrics than others or humans will fail in the market.
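As a rough sketch of what that metric looks like in practice, and why it takes an enormous number of miles before it says anything conclusive (all figures below are hypothetical):

```python
# Quick sketch of the "injuries and deaths per million miles" metric,
# with a crude illustration of the statistical uncertainty at fleet scale.
import math

def rate_per_million_miles(events: int, miles: float) -> float:
    """Events (injuries, deaths, or property-loss incidents) per million miles."""
    return events / (miles / 1_000_000)

# Hypothetical fleet data
miles_driven = 250_000_000
injuries = 20

rate = rate_per_million_miles(injuries, miles_driven)
print(f"rate: {rate:.3f} injuries per million miles")

# Rough 95% interval assuming incidents are Poisson-distributed: with only
# 20 events the count is uncertain by roughly +/- 2*sqrt(20) events, so the
# measured rate could easily be ~45% off in either direction.
low = (injuries - 2 * math.sqrt(injuries)) / (miles_driven / 1e6)
high = (injuries + 2 * math.sqrt(injuries)) / (miles_driven / 1e6)
print(f"approx. 95% range: {low:.3f} to {high:.3f} per million miles")
```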
 
I was one of the first ones who called out Audi and BMW years ago when it was clear they were just in it for the PR. I even made a thread about it.
That's not true from my memory. I remember responding to one of the threads where you were talking about the Audi system and you pretty much did the same thing (counting eggs before they hatched). Plenty were saying the timeline for release was significantly farther back than you suggested.
First L3 Self Driving Car - Audi A8 world premieres in Barcelona
Audi Piloted Driving actually had a good system, but management gobbled it up like they did the four other development projects that came after Piloted Driving was shut down.
Most people think I'm biased, but I'm not. I have ripped into just about every traditional automaker out there.
Usually people who are unbiased don't have to call themselves unbiased ;). Other people are the proper judge of that, not oneself. From that thread alone, I think very few people will agree that you are unbiased. And ripping into other automakers does not make one unbiased; being unbiased would mean showing no preference at all for any company, but there are obviously companies you favor (there is nothing wrong with that, BTW).

About the Huawei phone OS: they were forced to make a phone in a couple of months without Google Android, so they forked the Android Open Source Project (which a lot of companies do) and rebranded it, with plans to improve and differentiate it over time. This is actually what makes them a tech company. An automaker wouldn't do that; they would take 5 years and then come out with absolutely nothing. Tesla did the same thing by utilizing GoogLeNet. Anyway, this is different and meaningless to the discussion. There is no good open-source SDC software to fork.
That's not an accurate take on Huawei's move at all. In 2018 there was already a government ban on Huawei (along with ZTE), signaling that Huawei needed to find a replacement. In March 2019, they announced they were working on an in-house replacement for Android (called Hongmeng or Harmony) and that they had been developing it since 2012.
Criticism of Huawei - Wikipedia

That turned out to be a bald-faced lie, going by the article linked. All they apparently did was take AOSP and do a find-and-replace on any references to Android. So not only is it not the in-house effort they claim, but in the two years they had since announcing a new in-house OS, they did almost nothing significant. If they had instead straight-up said they were going to fork it (like Amazon's Fire OS) and their efforts showed in the fork, that would be a whole different story. Instead they tried to hide their tracks in the most low-brow way possible (anyone who has worked in software development can see what I mean).
If that doesn't ring any alarm bells about Huawei's claims of software prowess, and about any forward-looking claims they make, I don't know what does.
I'm talking about Elon claiming Level 5 in two years for the past 6 years, and others either claiming Level 5 is impossible or that it will take a long, long time, some mentioning 2030+.
But Elon kept claiming he would do it in 2018.


Neither Nissan nor I ever promoted ProPilot 2.0 as L4; it was always presented as L3 in every related piece of marketing. Even then, the system as originally shown was never released. I'm not even talking about whether it's L2 or L3. I mean the original system had 4 lidars and 12 surround cameras; the release hardware had 0 lidars and 3 forward cameras.
The quoted statement from your own thread suggests L4. "Complete autonomous driving for all driving situation on the highway." L3 is not complete autonomous driving.
It definitely was disappointing. But it's par for the course for traditional automakers. I made this post over 2 years ago and it's still accurate. There's a reason there's still no automaker with a reliable OTA update system after years upon years, yet startups perfect it from day one.

Note that Mercedes actually ended up giving up and going completely Nvidia for both hardware and software, for both ADAS and AV.
VW is still stuck creating teams, fumbling the development, nuking everything and creating another team. Rinse and repeat.

It's not even just because of safety risk; they absolutely have no ambition. You could hand them an L5 self-driving system today for free and they would find a way to fumble the rollout and spend 3 years just deciding what to do with it, then spend another 3 rolling out 100 cars in a city. Safe to say they are hopeless.
It's not the engineers; it's the 80-year-old business-suit heads calling the shots.
The only hope is the start-ups. These two articles are good reads...

The reason for the long deployment time is the traditional automakers. Mobileye already came out and said it takes traditional automakers and Tier 1s 3-4 years to integrate. EyeQ4 went into production in Q4 2017 and it took NIO until the first half of 2018 to integrate and deploy it, so less than a year. Yet it took Ford and GM 4 and 3 years respectively to deploy EyeQ4, and Ford with a crap driving policy at that. For EyeQ5, Geely's new Zeekr brand is integrating and deploying less than a year after its production date.

So blame the traditional automakers. However, the ProPilot 2.0 that was eventually released in Japan is coming to the US this year.
Anyway, if you want the latest and greatest Mobileye system, you will only find it on EV startups' cars, not traditional automakers'.
I mean, you will literally see the latest and greatest SuperVision on the Zeekr this year.
So now your egg counting is based on startups? For myself, my criteria remains the same as it was in 2017, it's not here until it is actually released and consumers are using the software at the actual level claimed (whether we are talking about end-to-end L2, L3, L4, or L5).

BTW, I looked up NIO the last time you discussed it. They are far less impressive than you make them out to be. Although they launched the vehicle in June 2018, they didn't even have ACC in the vehicle until April 2019.
MASTER THREAD: FSD Subscription Available 16 Jul 2021

I would also be careful with any PR announcements from them. They may announce they "released" a feature, like for example NOP (their version of Tesla's NOA) below in April 2020, when they actually hadn't yet.
NIO Introduces Navigation On Pilot And Revamped Parking Assist
NOP wasn't actually released until October 2020 (and judging from articles discussing previous releases, the initial release is not necessarily equivalent to industry leaders).
NIO OS 2.7.0 released, brings NOP driver assistance feature - CnTechPost
 
Just my opinion of course but I think we are probably 3-5 years away from large scale deployment of L4 (geofenced) robotaxis in the US. I base this opinion on the big improvements we have seen in city autonomous driving, the recent announcement from Ford/Argo that they are planning a robotaxi service in several cities in the next couple years, Waymo and Cruise close to public deployment in SF, Zoox starting production on their roboshuttle, Mobileye testing their AVs in several US cities, Aurora also announcing plans for a robotaxi service in a few years and others. It just seems like we are likely to see several companies deploy autonomous ride-hailing services in multiple cities in the next few years.
 
That's not true from my memory. I remember responding to one of the threads where you were talking about the Audi system and you pretty much did the same thing (counting eggs before they hatched). Plenty were saying the timeline for release was significantly farther back than you suggested.
First L3 Self Driving Car - Audi A8 world premieres in Barcelona
I wasn't wrong. I quoted a direct statement from Audi after their announcement in 2017.
Heck, the statement is still on their site.
  • Piloted driving functions to be rolled out in production Audi A8 versions from 2018
Journalists get it wrong all the time: they call L2 "L5" and call L3 "L2". They sometimes call lidar "radar" and radar "lidar".
They butcher things left and right. You wouldn't notice unless you did your own research rather than just reading articles.
For example, two days ago I spent a couple of hours doing a refresher on what Volvo was up to. Almost every article I read got it wrong about what they were doing.

Sorry, I'm not a Tesla fanatic like you; I won't disparage every company except Tesla.
Usually people who are unbiased don't have to call themselves unbiased ;). Other people are the proper judge of that, not oneself. From that thread alone, I think very few people will agree that you are unbiased. And ripping into other automakers does not make one unbiased; being unbiased would mean showing no preference at all for any company, but there are obviously companies you favor (there is nothing wrong with that, BTW).
I have absolutely no favorites; I have reported on and researched almost every single SDC company. You, on the other hand, have one and only one favorite: Tesla.
That's why you disparage every company other than them, and you have been doing it for over 5 years.
That's not an accurate take on Huawei's move at all. In 2018 there was already a government ban on Huawei (along with ZTE), signaling that Huawei needed to find a replacement. In March 2019, they announced they were working on an in-house replacement for Android (called Hongmeng or Harmony) and that they had been developing it since 2012.
Criticism of Huawei - Wikipedia

That turned out to be a bald-faced lie, going by the article linked. All they apparently did was take AOSP and do a find-and-replace on any references to Android. So not only is it not the in-house effort they claim, but in the two years they had since announcing a new in-house OS, they did almost nothing significant. If they had instead straight-up said they were going to fork it (like Amazon's Fire OS) and their efforts showed in the fork, that would be a whole different story. Instead they tried to hide their tracks in the most low-brow way possible (anyone who has worked in software development can see what I mean).
If that doesn't ring any alarm bells about Huawei's claims of software prowess, and about any forward-looking claims they make, I don't know what does.
Elon/Tesla lies literally every day, and not once, at least from what I recall, have you called any of his lies "lies", let alone a "bald-faced lie".
Everything Elon/Tesla says is later excused by his fans with "oh, they were just being optimistic". He could come out and say "I will have a fully functioning time machine by the end of 2022, this is a fact, this will happen, etc. etc." Fast-forward to 2030 and he has absolutely nothing, and you and the rest of the Tesla fans would come out saying "oh, it doesn't mean he was lying, he was just being optimistic". FSD was one of Elon/Tesla's biggest lies, worse than any lie I have ever seen from a company, and you completely excuse it away and have the nerve to call someone else's statement a "bald-faced lie".

Do you realize that there could have been a department/team at Huawei since 2012 that worked on and attempted to create a phone OS? Heck, just about every tech company has had some kind of development team attempting that. That isn't a lie. Yet you would call that a lie while excusing the robbery and blatant lies from Elon. GTFO.


The quoted statement from your own thread suggests L4. "Complete autonomous driving for all driving situation on the highway." L3 is not complete autonomous driving.
The word "Complete" doesn't have any meaning here. We know that L3 car is a self driving car and L3 is autonomous driving.
L3 is considered as "Conditional Autonomy" with "Some driving mode" supported. You can't just look at the first 3 words, you have to look at the entire sentence.
"for all driving situation on the highway". So that's where the complete comes in at. Not that its L4. But that it will support all driving mode on the highway.
We also know that Nissan told Mobileye and Mobileye presented Propilot 2.0 as a L3 car and the thread title literally says "L3 cars". So no there is no confusion.


So now your egg counting is based on startups? For myself, my criteria remains the same as it was in 2017, it's not here until it is actually released and consumers are using the software at the actual level claimed (whether we are talking about end-to-end L2, L3, L4, or L5).
No, your criteria have always been: be pro-Tesla and disparage any development that others are working on.
When announcements don't pan out, or when I actually do research and find out things are not going well, I make a post about it and adjust my viewpoints. That's how it should be, unlike Tesla fans.
When Volvo sent the Drive Me participants the regular ADAS from their consumer cars that anyone could buy, I knew they had nothing and I came out and said it. Again, that's how it should be. But almost all Tesla fans, 6 years later, still believe every word Elon says like it's the word of god.
BTW, I looked up NIO the last time you discussed it. They are far less impressive than you make them out to be. Although they launched the vehicle in June 2018, they didn't even have ACC in the vehicle until April 2019.
MASTER THREAD: FSD Subscription Available 16 Jul 2021

Everything you said in this post is false and a blatant misrepresentation. This is what I'm talking about; this is what makes you a Tesla fanatic.
You take one article and one issue (when I can find a thousand equivalents for Tesla) and try to disparage an entire company with it.

First of all, it was not just ACC that launched in 2019; it was the entire NIO Pilot, the equivalent of Tesla Autopilot, with Highway Pilot, Traffic Jam Pilot, Auto Lane Change (ALC), LKA and more. When Tesla unveiled Autopilot in October 2014, it took them over a year to release AP. That's a similar timeline to NIO Pilot's. Stop disparaging other companies because of your love of Tesla.

There are thousands of posts about Tesla OTA crashes and system crashes.
Imagine if I selected one to paint the entire company. GTFO. You Tesla fans never cease to amaze me with the lows you will go to.

I would also be careful with any PR announcements from them. They may announce they "released" a feature, like for example NOP (their version of Tesla's NOA) below in April 2020, when they actually hadn't yet.
NIO Introduces Navigation On Pilot And Revamped Parking Assist
NOP wasn't actually released until October 2020 (and judging from articles discussing previous releases, the initial release is not necessarily equivalent to industry leaders).
NIO OS 2.7.0 released, brings NOP driver assistance feature - CnTechPost

You are making up BS again. They didn't announce that they had released it; they announced the package and said it was coming in an OTA update. For a Tesla fan to criticize that, when the entire company you worship is built on claiming it released something that is still not actually released even years later, not just in a marketing statement but ON THE ORDER PAGE!

You Tesla fans are something else.
 
This is an interesting issue. The two systems analyze the situation independently, so the probability of failure is assumed to be random in each, and therefore the failures are assumed to be statistically uncorrelated. If that were clearly true, the combined probability calculation would be defensible.

However, this kind of statistical predictive calculation is fraught with major possibilities for error. It would be correct if the possibility of failure in each had nothing to do with the external input, but only with some kind of random noise-generated failure within the systems themselves. IMO it is, however, quite likely that each has a chance of perceptual error that is highly influenced by challenging external scenarios.

It's impossible to understand the independence until you understand the differences between each and every sensor.
In fact, we can't even move forward until that is clearly understood. For example, lidar sees in total pitch darkness and in bright sunlight. This is great at night in inner-city neighborhoods with no street lights, or in rural places, and when dealing with black objects and pedestrians in dark clothes. Secondly, as you can see in the video, lidar shoots out lasers; it's an active sensor, so it actually sees the accurate dimensions, shape and distance of an object without the need for ML, while a camera, for example, is completely blind without ML. This is part of the starkly unique nature of lidar. Again, failing to understand the things that lidar can do that no other sensor can do means you will never grasp the independent nature of several perception streams.

 
In fact, we can't even move forward until that is clearly understood. For example, lidar sees in total pitch darkness

If your headlights are not functional you probably shouldn't be driving at night.


Secondly, as you can see in the video, lidar shoots out lasers; it's an active sensor, so it actually sees the accurate dimensions, shape and distance of an object without the need for ML, while a camera, for example, is completely blind without ML.


This is not correct. A camera is, obviously, not "blind" without ML. Otherwise pictures would always be blank.

ML is what lets the computer understand what the camera sees.

Which is the same thing as with LIDAR

In both cases it's simple for the computer to know there's "something" seen by LIDAR or camera.

In both cases it requires ML to make an estimation what the something is-- and then further decision making regarding what, if anything, the vehicle needs to do about it.


The primary thing lidar provides, that previously was not being done with vision, is providing accurate distance to the objects.

Tesla is now doing that with vision.

If they can obtain accuracy needed for safe driving then LIDAR no longer adds any value at all.
 
If your headlights are not functional you probably shouldn't be driving at night.
Headlights don't let you see everything.
This is not correct. A camera is, obviously, not "blind" without ML. Otherwise pictures would always be blank.

It is. Camera data (vision) is just a bunch of numbers from 0-255, and without ML you can't make sense of them. Your vision system is completely blind.


ML is what lets the computer understand what the camera sees.

Which is the same thing as with LIDAR
No, it isn't. You don't need ML to understand lidar. Again, lidar gives you precise measurements of the objects in your path and their distances.
You only need ML for accurate classification. This is why lidar alone was used for accurate detection in the early days of self-driving cars.
In both cases it's simple for the computer to know there's "something" seen by LIDAR or camera.
This is simply false, as I have proven.
In both cases it requires ML to make an estimation what the something is-- and then further decision making regarding what, if anything, the vehicle needs to do about it.
Again false. Most SDC companies have a system that uses the raw/processed lidar input to determine whether there's an obstacle in the way that they need to stop for or drive around, in case their ML models on lidar inputs fail. If you did robotics as a kid in school, you would know this.
The primary thing lidar provides, that previously was not being done with vision, is providing accurate distance to the objects.
This is blatantly false. A computer can't tell there's an object in a picture without ML; it's literally called object detection. Lidar provides accurate detection, shape and distance.
Tesla is now doing that with vision.

If they can obtain accuracy needed for safe driving then LIDAR no longer adds any value at all.
Find me any ML system with 99.999999% accuracy. Newsflash: It doesn't exist!
 
Headlights don't let you see everything.

That wasn't the point, though: you were touting the ability to see in complete darkness... which is a condition you ought never drive in to begin with.



It is. Camera data (vision) is just a bunch of numbers from 0-255, and without ML you can't make sense of them. Your vision system is completely blind.


You seem unclear on what the word blind means.

"I see something but do not know what it is" is not blind.


No, it isn't. You don't need ML to understand lidar. Again, lidar gives you precise measurements of the objects in your path and their distances.


So it gives you.... a bunch of numbers.

Same as you just admitted a camera does.


This is simply false, as I have proven.

You may think you have, you have not.


This is blatantly false. A computer can't tell there's an object in a picture without ML; it's literally called object detection. Lidar provides accurate detection, shape and distance.

Sure it does.

With ML.

Just like vision.

In both cases you get a bunch of data from a sensor, but need ML to know what you're "looking" at.


There's Waymo discussing some of the ML tasks they're using on LIDAR data, for example.
 
In both cases you get a bunch of data from a sensor, but need ML to know what you're "looking" at.
The point is that with LIDAR you don't need to use ML to detect objects. You can write regular old procedural code to prevent you from running into another car. Name a car-sized object that you might want to run into; you don't always need to know what you're looking at.
Obviously you can feed LIDAR data or any other data into NNs too.
Just think of LIDAR as a superior, more expensive form of the VIDAR that Tesla is working on.
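As a rough illustration of the kind of non-ML fallback being described, here is a minimal sketch; the point format, corridor width and thresholds are assumptions for illustration, not anyone's actual implementation:

```python
# Minimal sketch of a procedural (non-ML) obstacle check on lidar returns:
# each return already carries 3D position, so a plain geometric filter can
# flag "something big is in my lane, stop" without classifying it.
from typing import List, Tuple

Point = Tuple[float, float, float]  # (x forward, y left, z up) in meters, ego frame

def obstacle_ahead(points: List[Point],
                   lane_half_width: float = 1.5,
                   max_range: float = 40.0,
                   min_height: float = 0.3,
                   min_hits: int = 10) -> bool:
    """Return True if enough returns fall inside the corridor ahead of the car."""
    hits = [p for p in points
            if 0.0 < p[0] < max_range          # in front of the car
            and abs(p[1]) < lane_half_width    # within our lane corridor
            and p[2] > min_height]             # above the road surface
    return len(hits) >= min_hits

# With a camera, the analogous raw data is just a grid of 0-255 intensities;
# there is no geometry to filter on until an ML model has interpreted it.
frame = [(12.0, 0.2, 0.8)] * 25 + [(60.0, 5.0, 0.1)] * 100
print(obstacle_ahead(frame))  # True -> brake or plan around, no classifier needed
```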
 
The primary thing lidar provides, that previously was not being done with vision, is providing accurate distance to the objects.

Tesla is now doing that with vision.

If they can obtain accuracy needed for safe driving then LIDAR no longer adds any value at all.

You are forgetting velocity. It is also important to accurately calculate the velocity of objects.

Camera vision can do this too of course. But it requires complex ML. I am not sure what the accuracy is. The accuracy would probably decrease in low light conditions. Lidar is able to calculate the precise velocity of an object using the Doppler effect in both day and night:

FirstLight Lidar
To travel fast, you need to see far. Our FirstLight Lidar is engineered to see more than twice the distance of even the most advanced conventional lidar systems, making it the only lidar that allows trucks to travel safely at high speeds. FirstLight eliminates virtually all interference from sunlight and other sensors—and using the Doppler effect, it can track the velocity of moving objects and people at distances and with levels of accuracy unmatched in the industry. The Aurora Driver | Self-Driving Technology

Lidar can provide other advantages too, like accurately detecting drivable space and localizing precisely on a map, and again it works perfectly in both day and night. So I think it is very simplistic and flawed to say that if camera vision can calculate distances safely enough, lidar loses all value.
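For reference, the Doppler relationship an FMCW lidar exploits is simple; this sketch assumes a 1550 nm operating wavelength and uses illustrative numbers only:

```python
# Back-of-the-envelope sketch of the Doppler relationship a coherent (FMCW)
# lidar uses: radial velocity v = (doppler_shift * wavelength) / 2.

WAVELENGTH = 1550e-9  # meters; an assumed, commonly cited FMCW lidar wavelength

def radial_velocity(doppler_shift_hz: float) -> float:
    """Closing speed (m/s) toward the sensor implied by a measured Doppler shift."""
    return doppler_shift_hz * WAVELENGTH / 2.0

# A car closing at 30 m/s (~67 mph) produces a shift of roughly 38.7 MHz:
shift = 2 * 30.0 / WAVELENGTH
print(f"{shift/1e6:.1f} MHz -> {radial_velocity(shift):.1f} m/s")

# The measurement comes per return, per frame; a camera has to infer the same
# velocity from how pixels move across successive frames, which is where the
# complex ML comes in.
```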
 
The point is that with LIDAR you don't need to use ML to detect objects. You can write regular old procedural code to prevent you from running into another car

Without ML you don't know it's a car.


Name a car-sized object that you might want to run into


A large enough bag or tarp would be an example. In fact, most of the arguments about sensor fusion cite cameras as being quite important for figuring out which of the large things LIDAR might not understand are NOT things you necessarily want to slam on the brakes for.



You are forgetting velocity. It is also important to accurately calculate the velocity of objects.

Camera vision can do this too of course. But it requires complex ML


That's why I'm not forgetting it; the same thing applies to both.

Vision can do the task.

Humans do that task with vision right now.

So does Tesla's vision-only system.

Lidar can provide other advantages too like detecting drivable space, localizing precisely on a map

This is only relevant if you rely on mm-accurate HD maps, which of course Tesla does not, and doing so makes scaling your system incredibly expensive and difficult (see again Waymo's failure to make it out of one tiny suburb with this issue).




Find me any ML system with 99.999999% accuracy. Newsflash: It doesn't exist!


For distance? Why would it need to?

You don't design a driving system to always stop 0.000001% short of hitting the car in front of it, after all. So if LIDAR can tell me the car is EXACTLY 22.6748991 meters away, but vision can "only" tell me it's about 22.67 meters away, that's still PLENTY accurate for the task at hand.


You want to design it to stop with a MUCH larger distance between objects than that, both because you can't be 100% certain of stopping distance given variable conditions, and because you don't want to be shoved into the car in front if the guy behind didn't brake in time.

So once again, not imposing mm-level requirements makes the system generally easier to scale without requiring otherwise unneeded and expensive sensors.
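A rough sanity check of that argument, using assumed reaction-time and braking values rather than anyone's real parameters:

```python
# Rough illustration of why centimeter-level range error is noise compared to
# the margins a planner has to keep anyway. Assumed values: 1.0 s of
# perception + reaction latency and 7 m/s^2 of maximum comfortable braking.

def stopping_distance(speed_mps: float,
                      reaction_time_s: float = 1.0,
                      decel_mps2: float = 7.0) -> float:
    """Distance covered before the car comes to rest from speed_mps."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)

v = 30.0  # m/s, roughly highway speed
d = stopping_distance(v)
print(f"stopping distance at {v} m/s: {d:.1f} m")   # ~94 m

range_error_vision = 0.05  # assume vision ranging is off by 5 cm at that distance
print(f"error as a share of the needed margin: {range_error_vision / d:.4%}")
# ~0.05% - well inside the following-distance buffer the system keeps anyway.
```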