Welcome to Tesla Motors Club

Autonomous Car Progress

Not sure if you guys have seen this work from Jack Stilgoe from 2021, but it's pretty spot on imho:

" How can we know a self-driving car is safe?

Self-driving cars promise solutions to some of the hazards of human driving but there are important questions about the safety of these new technologies. This paper takes a qualitative social science approach to the question ‘how safe is safe enough?’ Drawing on 50 interviews with people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The second approach, safety-by-design, starts with the challenge of safety assurance and sees the technology as intrinsically problematic. The first approach is concerned only with performance—what a self-driving system does. The second is also concerned with why systems do what they do and how they should be tested. Using insights from workshops with members of the public, I introduce a further concern that will define trustworthy self-driving cars: the intended and perceived purposes of a system. Engineers’ safety assurances will have their credibility tested in public. ‘How safe is safe enough?’ prompts further questions: ‘safe enough for what?’ and ‘safe enough for whom?’"


Favorite quotes from interviewees:

"Very few Silicon Valley companies have ever had to ship a safety critical thing..."

"I’m not sure I’d be rude to the AI people but often all of them working in this area don’t understand a lot of standard safety engineering"
 
If you need another article that explicitly points it out, here's one:
"NHTSA only grants 2,500 such exemptions each year, but there is legislation to increase that number to 25,000."
Cruise nears approval to mass-produce robotaxis with no steering wheel, pedals | TechCrunch

If you need a direct NHTSA source, here's a press release from earlier applications:
"There is an exemption cap of 2,500 vehicles per manufacturer."
U.S. Department of Transportation Seeks Public Comment on GM and Nuro Automated Vehicle Petitions | NHTSA
That’s from 2019, before they finalized the rules. NHTSA Finalizes First Occupant Protection Safety Standards for Vehicles Without Driving Controls | NHTSA
A cursory search doesn’t show any limit. Not sure why there would be a limit once the rules were finalized.
 
An early pioneer of autonomous vehicles takes FSD V12.3.3 out for a drive. I can only imagine how nerve-wracking it must have been to let the car drive, knowing how limited the earliest heuristic systems were.

He also relates a tale of how he was at a trade show to talk about one of the last vehicles he worked on, and Elon was there with the Roadster. Elon even invited him to join Tesla.

 
That’s from 2019, before they finalized the rules. NHTSA Finalizes First Occupant Protection Safety Standards for Vehicles Without Driving Controls | NHTSA
A cursory search doesn’t show any limit. Not sure why there would be a limit once the rules were finalized.
That seems to amend crash safety standards to apply to AVs, but it still does not make such AVs FMVSS compliant. If the Cruise vehicle were FMVSS compliant, there would have been no need to apply to NHTSA, since the vehicle could be self-certified (as has always been done for conventional vehicles). That Cruise article was from 2023, and Cruise was still applying for approval and thus subject to the same 2,500-unit limit.
 
That seems to amend crash safety standards to apply to AVs, but it still does not make such AVs FMVSS compliant. If the Cruise vehicle were FMVSS compliant, there would have been no need to apply to NHTSA, since the vehicle could be self-certified (as has always been done for conventional vehicles). That Cruise article was from 2023, and Cruise was still applying for approval and thus subject to the same 2,500-unit limit.
What would be the point of amending the FMVSS in a way only applicable to AVs and still not allow AVs to be compliant?
Now I’m curious in what way the Tesla Robotaxi won’t be compliant with FMVSS.
 
End to end is not a magic bullet, according to ME CTO:

"
End-to-end AI is great. Let's just not be religious about each fancy new method. What really works in applications that require high precision is a combination of methods, each with its own advantages. For more details, watch the video :)
"

I do think Shai makes a valid point. Trying to achieve 99.99999% reliability with just pure vision end-to-end and nothing else seems very unlikely. It's just a matter of statistics. "Magic bullets" by definition are very unlikely to work. But Elon seems to like "magic bullets". He is betting that somehow, with enough data and enough training, they can eventually get pure vision end-to-end to 99.99999%. But even if it does work, it will likely take a lot longer than Elon thinks.

Personally, I believe end-to-end should be a useful part of the solution but should not be the entire solution. It's why I like Mobileye's focus on redundancy. We see in other industries, like aviation, that redundancy is key to achieving a super high MTBF. So I think it only makes sense to do the same with AVs. Yes, you should use state-of-the-art (SOTA) techniques like end-to-end, but you should have redundancies like extra sensors and different complementary ML techniques, in order to make your AV more robust. It's really common sense IMO. What's more likely to be 99.99999% reliable? One single system with no backup if it fails, or multiple systems that can "catch" each other's mistakes? Clearly, the latter will be more reliable.
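The redundancy argument above can be sketched with some toy arithmetic. This is only an illustration, not anything from the post: it assumes the redundant systems fail *independently*, which real sensor stacks do not (correlated failures are exactly the hard part), so treat it as a best-case bound.

```python
# Toy illustration: failure probability of a single system vs. N
# independent systems that can catch each other's mistakes.
# "Independent" is a strong (best-case) assumption.

def combined_failure_rate(per_system_failure: float, n_systems: int) -> float:
    """All N independent systems must fail simultaneously for the stack to fail."""
    return per_system_failure ** n_systems

single = 1e-4  # a hypothetical 99.99%-reliable subsystem
print(combined_failure_rate(single, 1))  # ~1e-4, i.e. four nines
print(combined_failure_rate(single, 2))  # ~1e-8, i.e. eight nines
```

Under the independence assumption, two mediocre systems multiply out to far more nines than one; the open question in the thread is how close to independent the failure modes really are.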
 
That seems to amend crash safety standards to apply to AVs, but it still does not make such AVs FMVSS compliant. If the Cruise vehicle were FMVSS compliant, there would have been no need to apply to NHTSA, since the vehicle could be self-certified (as has always been done for conventional vehicles). That Cruise article was from 2023, and Cruise was still applying for approval and thus subject to the same 2,500-unit limit.
This is probably the reason Cruise needs an exemption:

"This final rule only applies to ADS-equipped vehicles that have seating configurations similar to non-ADS vehicles, i.e., forward-facing front seating positions (conventional seating). Thus, NHTSA focused on conventional seating in this rulemaking, noting that additional research is necessary to understand and address different safety risks posed by vehicles with unconventional seating arrangements (e.g., rear-facing seats or campfire seating)."

I wonder if having luggage in the same area as passengers will be an issue with pod vehicles.
 
End to end is not a magic bullet, according to ME CTO:

"
End-to-end AI is great. Let's just not be religious about each fancy new method. What really works in applications that require high precision is a combination of methods, each with its own advantages. For more details, watch the video :)
"

Now that is a really stupid take.

Ask anyone in any field where deep learning has been applied to a raw data stream: the accuracy of deep learning models is superior given enough data and compute.

Models that restrict deep learning to only a portion of the pipeline are superior only when you lack data and compute.

Self-driving is no different from any other complicated problem that machine learning is solving, except that it requires the highest of accuracy.

Before end-to-end, Mobileye was dissing deep learning in general. I’ll watch the video, but Mobileye has always been biased given their long history of classical methods and lack of capital to do massive computing.
 
Now that is a really stupid take.

Ask anyone in any field where deep learning has been applied to a raw data stream: the accuracy of deep learning models is superior given enough data and compute.

Models that restrict deep learning to only a portion of the pipeline are superior only when you lack data and compute.

Self-driving is no different from any other complicated problem that machine learning is solving, except that it requires the highest of accuracy.

Before end-to-end, Mobileye was dissing deep learning in general. I’ll watch the video, but Mobileye has always been biased given their long history of classical methods and lack of capital to do massive computing.

Yeah, watched a bunch of the video. No real substance, just hand-waving about redundant systems and the claim that "multiple models" give better accuracy than one. Get back to me when he addresses why all the best language translation, voice-to-text, image generation, NLP, and even time-series detection tasks like heart arrhythmias are all deep learning models based on raw data, because they are the most accurate.

Yet somehow self driving cars are a special, mystical case where the trends don't apply. :rolleyes:

BTW nothing I state implies that Tesla is winning the self driving race, this is a pure commentary on data science / algorithmic approaches.
 
Now that is a really stupid take.

Ask anyone in any field where deep learning has been applied to a raw data stream: the accuracy of deep learning models is superior given enough data and compute.

Models that restrict deep learning to only a portion of the pipeline are superior only when you lack data and compute.

Self-driving is no different from any other complicated problem that machine learning is solving, except that it requires the highest of accuracy.

Before end-to-end, Mobileye was dissing deep learning in general. I’ll watch the video, but Mobileye has always been biased given their long history of classical methods and lack of capital to do massive computing.

To be clear, Mobileye is not saying that deep learning is not useful, or that deep learning is not accurate. On the contrary, they acknowledge that ML and deep learning is a powerful tool and can be very accurate. But they are making the very specific argument that they don't believe a pure vision, end-to-end only approach (ie vision-only, no radar, no lidar, no HD maps, just a single deep neural network from sensor input to control) can achieve the high 99.999999% reliability needed for eyes-off. They are arguing against a "magic bullet" approach to solving FSD, ie this one single ML method alone, with no redundancies, will solve everything. Their argument is that you need ML + redundancies to achieve the 99.999999% needed for eyes-off.
 
Wow, in only half a day the requirements increased by another 9. What are you measuring with your 9s? (microseconds without a death / total microseconds)... you need more 9s. (trips without any problems / total trips)... you have an absurd number of 9s.

If you take 1 safety critical intervention per 1M miles and convert it to a percentage, you get 99.999999%. That is what Mobileye says you need to remove driver supervision (safer than humans).
 
To be clear, Mobileye is not saying that deep learning is not useful, or that deep learning is not accurate. On the contrary, they acknowledge that ML and deep learning is a powerful tool and can be very accurate. But they are making the very specific argument that they don't believe a pure vision, end-to-end only approach (ie vision-only, no radar, no lidar, no HD maps, just a single deep neural network from sensor input to control) can achieve the high 99.999999% reliability needed for eyes-off. They are arguing against a "magic bullet" approach to solving FSD, ie this one single ML method alone, with no redundancies, will solve everything. Their argument is that you need ML + redundancies to achieve the 99.999999% needed for eyes-off.

There are two separate things - the amount/variety of sensor inputs and the modeling approach. I don't like the term "redundant"; that makes it sound like it's a backup/safety-critical system, when all it really means is that there is some orthogonality to the data streams, so a sensor-fusion technique can improve reliability when using multiple inputs vs. just one type. No argument from me that more inputs give you the opportunity for a more accurate model.

But that has nothing to do with end-to-end. You can still feed all that data into an end-to-end model. The end-to-end model will figure out how to optimally fuse the sensors; you don't need separate models for each. Multimodal deep learning models are a pretty well-studied thing!
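A minimal sketch of that point, with made-up shapes and random stand-ins for learned weights (nothing here comes from an actual AV stack): a single end-to-end model can consume several sensor streams at once, with "fusion" reduced to concatenating the per-sensor embeddings and applying one set of learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings from per-sensor encoders inside one model:
camera_feat = rng.standard_normal(64)   # e.g. vision backbone output
radar_feat = rng.standard_normal(16)    # e.g. radar encoder output

# "Sensor fusion" inside an end-to-end model: just concatenate...
fused = np.concatenate([camera_feat, radar_feat])   # shape (80,)

# ...and let learned weights (random stand-in here) map the joint
# representation to control outputs, e.g. [steering, acceleration].
W = rng.standard_normal((2, fused.size))
control = W @ fused                                  # shape (2,)
print(control.shape)
```

In a trained network the concatenation-plus-weights step is where the model learns how much to trust each stream; adding sensors changes the input width, not the end-to-end structure.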
 
The hard part of self-driving isn't achieving redundancy, it's not the hardware, it's not getting the permits, it's not the sensors. The hard part is 100% the software. The big improvements to come will be from how the model is trained, going beyond imitation learning. The more of the world the model is able to digest and generalize the more 9's you will get.
 
I personally don't think ML alone is ready to take on safety-critical applications this decade. See, for example, radiology. Also, (a) how many companies can you name from the Valley that have shipped safety-critical tech? (b) Based on pure ML?

a) a handful
b) zero

Not even radiology on still images has removed the human from the loop.

With regard to (pure) e2e: to properly scale self-driving, you need to be able to adjust it to different climates, regulatory domains, and cultural driving differences. I don't see how a single large NN will be able to accommodate that.
 
If you take 1 safety critical intervention per 1M miles and convert it to a percentage, you get 99.999999%. That is what Mobileye says you need to remove driver supervision (safer than humans).
Just change the unit from mile to meter and you can add three nines.

Imo it's just a figure of speech, probably based on Six Sigma.
Six Sigma is a set of methodologies and tools used to improve business processes by reducing defects and errors, minimizing variation, and increasing quality and efficiency. The goal of Six Sigma is to achieve a level of quality that is nearly perfect, with only 3.4 defects per million opportunities.
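The unit-dependence point is easy to check with toy numbers (assuming, as upthread, 1 intervention per 1M miles; the meters-per-mile constant is the only other input):

```python
# The "number of nines" depends entirely on the unit of exposure you pick.
interventions = 1
miles = 1_000_000
meters = miles * 1609.34  # ~1.6 billion meters

def nines(success_fraction: float) -> str:
    return f"{success_fraction * 100:.7f}%"

per_mile = 1 - interventions / miles     # 0.999999 -> six nines
per_meter = 1 - interventions / meters   # ~0.9999999994 -> nine-ish nines

print(nines(per_mile))
print(nines(per_meter))
```

The same performance reads as six nines per mile or roughly nine nines per meter, so a bare percentage says nothing until the denominator is fixed.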

Imo Tesla is asking the right question, because they have actually thought it through. Basically:
1. Be safer than the average human -> you can argue that FSD should be allowed, i.e. it makes the streets safer
2. Be ~10x safer than the average human -> at this point it gets hard to argue that FSD should not be allowed; you can start to prove it is significantly safer than the average human
3. Be safer than all humans -> now it's impossible to argue that FSD should not be allowed
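The three tiers above can be sketched as a simple comparison against a human baseline. Every number here is hypothetical (the real baseline rates, and where regulators draw the lines, are open policy questions, not the figures below):

```python
# Hypothetical incident rates, per million miles:
AVG_HUMAN_RATE = 2.0    # stand-in for the average driver
BEST_HUMAN_RATE = 0.05  # stand-in for the very best drivers

def safety_tier(system_rate: float) -> int:
    """0 = not safer than average; 1 = safer than average;
    2 = ~10x safer than average; 3 = safer than all humans."""
    if system_rate >= AVG_HUMAN_RATE:
        return 0
    if system_rate <= BEST_HUMAN_RATE:
        return 3
    if system_rate <= AVG_HUMAN_RATE / 10:
        return 2
    return 1

print(safety_tier(1.0))   # tier 1: safer than average
print(safety_tier(0.15))  # tier 2: ~10x safer than average
print(safety_tier(0.01))  # tier 3: safer than all humans
```

The hard part, of course, is not the arithmetic but measuring `system_rate` and the human baselines with enough statistical confidence to place a system in a tier.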