I personally think Tesla should make a much bigger/more visible reference to this statistic:
Not sure why Tesla would want to make rear ends more visible until they fix phantom braking. Not saying there is a link..... but not feeling there isn't.
there does not seem to be object permanence.
What is visualized is not the same as what the computer thinks. It is just where the car thinks the probability is above a certain threshold at the moment. It might think there is a 40% chance that a car is at coordinate XY and not visualize it, but it will still drive a bit cautiously until it knows.

More concerning, if you look at the FSD visualizations, there does not seem to be object permanence. Cars blink in and out of existence on the screen based on what the cameras can see in the moment. We can see it more in busy traffic when cars block the view of other cars, rendering them temporarily invisible to the Tesla. The road edges of the intersection also seem to flicker when traffic is blocking the cameras from seeing the entire intersection.
Maybe it is just a glitch in the visualizations?
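To make the threshold point above concrete, here is a toy sketch in Python. It is not Tesla's pipeline and none of the names or numbers are real: the display threshold, the per-frame confidence decay, and the update_tracks / drawn_on_screen functions are invented purely to show how rendering only high-confidence detections makes an occluded car "blink" off the screen while a persistent track can still be carried at lower confidence.

# Toy sketch only -- not Tesla's code. Illustrates rendering detections above a
# confidence threshold vs. keeping a decaying track for occluded objects.

DISPLAY_THRESHOLD = 0.5   # hypothetical: only draw objects at or above this confidence
DECAY_PER_FRAME = 0.3     # hypothetical: confidence lost for each frame an object is unseen

def update_tracks(tracks, detections):
    """tracks / detections: dicts mapping object id -> confidence in [0, 1]."""
    updated = {}
    for obj_id in set(tracks) | set(detections):
        if obj_id in detections:
            # The cameras see it this frame: take the fresh detection.
            updated[obj_id] = detections[obj_id]
        else:
            # Occluded this frame: keep the object alive with decayed confidence
            # instead of forgetting it outright (object permanence).
            remaining = tracks[obj_id] - DECAY_PER_FRAME
            if remaining > 0:
                updated[obj_id] = round(remaining, 2)
    return updated

def drawn_on_screen(tracks):
    # The display layer only renders high-confidence objects, so a car the
    # planner is still being cautious about may not be visualized at all.
    return sorted(obj_id for obj_id, conf in tracks.items() if conf >= DISPLAY_THRESHOLD)

tracks = {}
frames = [
    {"car_A": 0.9, "car_B": 0.8},   # both visible
    {"car_A": 0.9},                 # car_B occluded by car_A
    {"car_A": 0.9},                 # still occluded
    {"car_A": 0.9, "car_B": 0.85},  # car_B reappears
]
for i, detections in enumerate(frames):
    tracks = update_tracks(tracks, detections)
    print(f"frame {i}: tracked={tracks} drawn={drawn_on_screen(tracks)}")

In frame 2, car_B is still tracked at confidence 0.2 but no longer drawn, which is roughly the "40% chance but not visualized" situation described above; a display fed only by raw per-frame detections would make it blink even sooner.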
Probably somewhere around an accident probability of one every 100-200k miles. This would be a ridiculous achievement. I don't know what Tesla's internal goal is.
A frustrating problem viewable in the FSD videos is that the car, when turning left, often almost drives into a stationary object. The driver yelps, takes control and, a few deep breaths later, restarts FSD. Déjà vu of Teslas hitting stopped vehicles on divided highways...
This issue makes me think the radar sensor and the cameras' depth perception are not working together. Perhaps the training data just has a small number of images of angled cars as viewed during a turn; that can be improved with more usage of FSD. But why isn't the radar assisting in "seeing" a large metal object nearby?
Is this the "reason" Lidar is a popular sensor in other driver-assistance implementations? I don't want to go there, but the Tesla literally should not drive into stationary objects...
Tesla's Autopilot (including Navigate on Autopilot) registers an accident every 4-5 million miles. "Probability of every 100-200k miles" is the opposite of safe. It is worse than actual driving. Human driving has an accident every 400,000 miles.
When will FSD be safe? When it achieves an accident probability of one every 10-20 million miles.
The 500,000-mile metric is for police-reported accidents. Many accidents are not reported to the police.
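Just to put the figures quoted in this thread side by side (taking every number at face value; they are not directly comparable, since some count police-reported accidents only and Autopilot miles skew heavily toward highway driving), here is the same comparison expressed as accidents per million miles:

# Back-of-envelope only, using the figures quoted above; none of them verified here.
miles_per_accident = {
    "FSD guess (100-200k mi)": 150_000,           # midpoint of the 100-200k guess
    "Human, all accidents (~400k mi)": 400_000,
    "Human, police-reported (~500k mi)": 500_000,
    "Autopilot + NoA (4-5M mi)": 4_500_000,       # midpoint of the range cited above
    "Proposed 'safe' target (10-20M mi)": 15_000_000,
}
for label, miles in miles_per_accident.items():
    print(f"{label:36s} {1_000_000 / miles:5.2f} accidents per million miles")

On those numbers, one accident per 100-200k miles is roughly 2-3x worse than the human baseline, while the Autopilot figure is about an order of magnitude better, which is exactly the gap the posts above are arguing about.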
They are deep questions.
Do you not already have electronic mirrors? There are several implementations from various manufacturers in Europe.
Remember 'your entertainment system failing (MCU1) is not a safety issue', except when it affects demisting, wiper control, light control, the backup camera, etc.?
Personally I think there is a good case for retaining old-fashioned reflective mirrors for as long as there could be a human driver. The rear-view mirrors with integrated monitors that I have seen do have some advantages, but also drawbacks that on balance leave the old-fashioned mirror with plenty of value.
Of course there's no harm at all in adding bird's-eye view and other cool display/camera features, but with cameras still prone to problems like B-pillar cam condensation, I don't want to drive a car that relies on them.
True. I agree Tesla will likely be first to a widely deployed FSD. The question is how advanced it will be. IMO, a wide deployment of FSD that requires driver interventions every 50 miles is less impressive than, for example, a wide deployment that is truly driverless. Personally, I am more impressed by a limited deployment of driverless FSD than by a wide deployment of FSD that requires constant driver supervision. For me, driverless FSD is the true prize, not how many cars you put FSD on.
I'm not sure "every 50 miles" should be regarded as "constant driver attention"
If you're not paying attention, how will you know to disengage it?
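A rough way to see why "every 50 miles" still means paying attention the whole trip: if interventions are assumed to arrive independently at an average of one per 50 miles (a Poisson-process assumption made up here for illustration, not a measured FSD property), the chance of needing at least one takeover grows quickly with trip length.

import math

MEAN_MILES_PER_INTERVENTION = 50  # assumed average, per the posts above

for trip_miles in (5, 15, 30, 100):
    # Probability of at least one intervention on a trip of this length,
    # under the independent-arrivals assumption stated above.
    p_at_least_one = 1 - math.exp(-trip_miles / MEAN_MILES_PER_INTERVENTION)
    print(f"{trip_miles:3d}-mile trip: ~{p_at_least_one:.0%} chance of at least one takeover")

That works out to roughly 45% for a 30-mile drive, and you can't know in advance which mile will be the one that needs you.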
We also need a new name for this hybrid. Any suggestion?
Not a good suggestion, but just to perhaps spark some ideas: Automatic Driving On City Streets (ADOCS).
"Probability of every 100-200k miles" is the opposite of safe. It is worse then actual driving. Human driving has an accident every 400,000 miles.
It already has a name: it's SAE Level 3.
But do we know in how many of these cases the car would actually have caused a crash?
That's why it's important for engineers to go back and simulate each disengagement and try to figure out what would have happened. Right now the disengagement rate seems far too high for that to be practical. They need to focus on making the car drive smoothly and predictably first, before they start trying to measure theoretical driverless safety.
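A minimal sketch of what that review loop could look like, assuming you have a disengagement log and a simulator that can replay each event. The Disengagement fields, outcome labels, and numbers below are invented for illustration and are not any real Tesla tooling.

from dataclasses import dataclass

@dataclass
class Disengagement:
    event_id: str
    simulated_outcome: str  # e.g. "collision", "traffic_violation", "benign"

def counterfactual_crash_rate(events, total_fsd_miles):
    """Miles per counterfactual collision, counting only replays that end in a crash."""
    crashes = sum(1 for e in events if e.simulated_outcome == "collision")
    return None if crashes == 0 else total_fsd_miles / crashes

events = [
    Disengagement("d1", "benign"),             # driver took over out of caution
    Disengagement("d2", "traffic_violation"),  # would have blocked an intersection
    Disengagement("d3", "collision"),          # would have hit a stationary object
]
print(counterfactual_crash_rate(events, total_fsd_miles=30_000))  # 30000.0 miles per crash

The point is that raw disengagement counts mix cautious takeovers with genuine saves, so the counterfactual crash rate (miles per simulated collision) is the number that would actually speak to driverless safety.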
Many a true thing said in jest.....
Since Tesla needs to PROVE that FSD is safer than a human driver by a decent margin, you could easily see discounts applied the more miles you have FSD engaged.
Slightly difficult for Tesla; they might need to distinguish between highway and city performance of FSD. Millions of freeway miles without disengagements don't represent city performance, and it's kinda annoying to have to focus specifically on humans still being safer around town.
I've lost track of what disengagements get reported. Are they already categorized?
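If the miles and the disengagements were each tagged by road type, the highway/city split asked about above is a one-liner. Everything here (field names, the 900k/100k mileage mix, the event list) is made up to show the shape of the calculation, not any real reporting format.

from collections import Counter

miles_by_road_type = {"highway": 900_000, "city": 100_000}
disengagements = ["city", "city", "city", "highway"]  # road type tagged on each event

counts = Counter(disengagements)
for road_type, miles in miles_by_road_type.items():
    events = counts.get(road_type, 0)
    rate = miles / events if events else float("inf")
    print(f"{road_type}: {events} disengagements over {miles:,} miles "
          f"(~{rate:,.0f} miles per disengagement)")

Blended together, this example would report 250,000 miles per disengagement, which hides that the city bucket sits around 33,000 -- exactly the highway-masking effect described in the post above.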