A drop from 99.99% to 95% uptime as a result of heavy precipitation implies that extremely heavy rainfall occurs nearly 5% of the time, over 430 hours a year. In reality, such rainfall events occur on the order of 0.1% of the time or less. The overwhelming majority of those cases are under intense cumulonimbus cells and are therefore fleeting; most last on the order of ten minutes.
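The back-of-envelope arithmetic here is easy to verify; the only assumption is a standard 8,760-hour (365-day) year:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# If heavy rain really caused 5% downtime, that rainfall would have to
# occur for this many hours per year:
implied_rain_hours = 0.05 * HOURS_PER_YEAR    # ≈ 438 hours/year

# At the claimed real-world prevalence (~0.1% of the time or less):
actual_rain_hours = 0.001 * HOURS_PER_YEAR    # ≈ 8.8 hours/year

print(implied_rain_hours, actual_rain_hours)
```

The two figures differ by a factor of fifty, which is the crux of the objection.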
I think I wasn't quite clear: 95% was my estimate of current FSD uptime, not my estimate of torrential-rain prevalence. FSD currently fails (at least in my cars) in much milder weather conditions; even the slightest drizzle can cause "FSD degraded" warnings and failure. It also gets blinded by sun glare, throws false "camera blocked or blinded" warnings in very dark conditions, etc.
If an autonomous car has to pull over during an extremely heavy precipitation event (as many humans do), it would have almost no impact on robotaxi uptime. How many people are going to demand to load or unload a robotaxi during such an event, or be unwilling to pull over to the side of the road for the ten or fifteen minutes it takes to pass?
Agreed, but the system has to handle the onset of such weather gracefully. Waymo (to my knowledge) does; FSD does not. If it can't handle it gracefully, then it must err on the side of caution and not attempt to begin a drive when such weather is even remotely possible, which would add up to a lot more than 0.01 percent of the time.
Further, just because you get the red hands of death on the current system doesn't mean a robotaxi-ready version of the software would immediately give up and quit. That's just how it's currently programmed.
Correct, but it's not yet clear whether the limiting factor (the reason it's not solved yet) is the sensor suite, the compute hardware, or the programming/training. No doubt an order-of-magnitude improvement is still possible on HW3/HW4, but several orders of magnitude will be needed to reach robotaxi readiness.
AI can already detect cancer cells in medical imagery before oncologists can. There are many documented cases of this. There’s no reason to believe a camera-based system can’t handle the task of pulling over as an extreme rainfall event begins.
Oncologists have an 80% success rate at detecting cancer cells; AI has a 90% success rate. (Or something in that ballpark.) Robotaxi will need a 99.999% success rate at avoiding situations that might cause it to fail. Achieving near-perfection is completely different from achieving better-than-average.
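The gap being described is clearer in terms of failure rates rather than success rates. Using the ballpark figures above (which the post itself flags as rough):

```python
# Failure rate = 1 - success rate, using the rough figures from the post.
oncologist_fail = 1 - 0.80     # 20% miss rate
ai_imaging_fail = 1 - 0.90     # 10% miss rate
robotaxi_fail   = 1 - 0.99999  # 0.001% failure rate required

# AI imaging roughly halves the oncologist's failure rate...
print(oncologist_fail / ai_imaging_fail)   # ~2x improvement

# ...but robotaxi-grade reliability demands ~10,000x fewer failures
# than that "better than the experts" AI.
print(ai_imaging_fail / robotaxi_fail)     # ~10,000x gap
```

A 2x improvement over humans and a 10,000x improvement over the best current AI are very different engineering problems, which is the point about near-perfection versus better-than-average.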
Many humans are able to and do pull to the side of the highway and park during extreme rainfall events. The remaining ones engage their hazards and drive slowly. It’s just not a major concern.
Of course not (for humans). The question is whether HW3/HW4 will ever be capable of failing as gracefully and reliably.
This reminds me of all the concerns and hand-wringing people had about mud caking the cameras and rendering the FSD system inoperable. My second Tesla was one of the first AP1 vehicles (an October 2014 build Model S). In all the years since, I have never seen a single reported case of someone's AP or FSD system failing as a result of mud caking the lenses.
My Model Y recently experienced chronic FSD failures as a result of residue buildup inside the camera housing, effectively a dirty lens (likely due to a manufacturing process that resulted in plastic-residue offgassing). My Model 3 experienced image-quality-related failures due to some sort of mineral residue buildup on one of the external cameras, which required a camera replacement. Neither of these is a sudden mud splash per se, but the car must be robust to ALL of these types of image-quality failures, not just one specific one.
Much like range anxiety, these are issues manufactured by the human mind that are inflated to be a larger concern than they actually deserve to be.
Strong disagree. A robotaxi must have fewer than one safety-critical failure per several million miles to be viable at scale. Granted, most specific failure modes (e.g. mud splashes or pigeon bulls-eyes) may be quite rare, to the point that most drivers will never experience them, but even that level of rarity can still be a statistical showstopper for the fleet, because there are thousands of such black-swan possibilities and their probabilities are additive. Redundancy is the solution, and multiple sensor modalities (e.g. radar, lidar, ultrasonic) with different strengths and failure modes are a far more robust form of redundancy than merely having multiple visual cameras.
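The additivity argument can be sketched with deliberately invented numbers (nothing here is a measured failure rate; it only illustrates how many individually negligible risks combine):

```python
# Hypothetical illustration -- all figures invented for the sketch.
# Suppose there are 1,000 distinct black-swan failure modes, each so rare
# that it strikes only once per 10 million miles.
n_modes = 1_000
per_mode_rate = 1e-7  # failures per mile, per mode

# For rare, roughly independent events, per-mile rates are approximately
# additive, so the fleet-wide rate is the sum over all modes:
fleet_rate = n_modes * per_mode_rate      # ~1e-4 failures per mile

miles_between_failures = 1 / fleet_rate   # ~10,000 miles
print(miles_between_failures)
```

Each individual mode would be invisible to any one driver, yet the combined rate (one failure per ~10,000 miles in this sketch) falls orders of magnitude short of the one-per-several-million-miles bar.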