A 1% error rate! That would be huge, considering a human is probably processing tens of things per second, and so is even today's neural-net-based system.
Yep.
But you probably also realize that even a 1% error rate may be way too high for true autonomous driving (FSD-branded or not). Depending on the situation, you may want a <0.01% error rate (e.g., correctly detecting and stopping at red lights), which is a ridiculously high engineering reliability bar to aim for.
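To see why the gap between 1% and 0.01% matters so much, here is a back-of-the-envelope sketch. The encounter rate (10 red lights per day of driving) is an illustrative assumption, not a figure from this thread:

```python
# Back-of-the-envelope: expected missed red lights per year of daily driving.
# The 10 red lights/day encounter rate is an assumed, illustrative number.

def expected_misses_per_year(error_rate, encounters_per_day=10):
    """Expected number of missed red lights over a year of daily driving."""
    return error_rate * encounters_per_day * 365

for rate in (0.01, 0.0001):  # 1% vs 0.01% per-encounter error rate
    print(f"{rate:.4%} error rate -> "
          f"{expected_misses_per_year(rate):.1f} expected misses/year")
```

At a 1% error rate that works out to dozens of missed red lights per driver per year, while 0.01% brings it down to well under one, which is why the reliability bar for safety-critical detections is so extreme.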
L1-5 are not nebulous and meaningless name tags. They are very well defined by the SAE.
SAE L1-5 definitions are bureaucratic, meaningless BS.
No automaker is developing to L#. Sales people may be selling to L#s, but engineers develop features.
Tesla, like all OEMs, is developing features that either work well, or not so well, yet.
And then they keep improving them. Step by step.
Whether or not those features add up to some bureaucrat's definition of an L#, is only interesting to sales people and consultants.
I think I get what you are saying. Basically you want safety alerts to prove to you as the driver that the car can detect these safety issues and therefore the car has what it takes to do FSD. But what you are forgetting is that a lack of alerts does not necessarily mean that the car lacks the safety feature.
I am pretty sure that Tesla lacks the hardware (not just software) for rear cross traffic alerts, rear emergency braking, and infrared forward night vision. Never mind Lidar.
Elon thinks those are unnecessary.
Maybe he is right, maybe he is wrong.
The only way to find out is by observing how well those "interim" alert features cover those capability gaps. However buggy and annoying, having those alerts (that you can, hopefully, turn off) indicates progress towards FSD goals.
Lack of alerts indicates lack of feature development progress.
A good way that we can gauge FSD progress is whether the features work. We don't necessarily need alerts for everything. For example, if the car successfully stops at red lights, we know Tesla has implemented that FSD feature. We don't need the car to beep at every red light to know that the car can see red lights.
Judging by past features' reliability and the hilarious warnings and disclaimers in the Model 3 manual (LKAS, ACC, etc.), I have ZERO trust in Tesla's ability to stop for red lights correctly with a greater than 99% success rate.
The downside of not stopping for a red light is severe (accident with major injuries, if not deaths), so there is no way in hell I would blindly "trust" Tesla to get it right for my family.
Either those red-light alerts are displayed reliably in advance (visual indicators, beeps, seat / steering wheel vibration, whatever), or Elon can train the NN on his own family, friends, and employees.
They won't be risking the lives of mine, that's for certain!