FSD rewrite will go out on Oct 20 to limited beta

Another informative video of FSD Beta:


We can see a couple of disengagements because FSD did not change lanes quickly enough to make a turn.

More concerning, if you look at the FSD visualizations, there does not seem to be object permanence. Cars blink in and out of existence on the screen based on what the cameras can see in the moment. We can see it more in busy traffic when cars block the view of other cars, rendering them temporarily invisible to the Tesla. The road edges of the intersection also seem to flicker when traffic is blocking the cameras from seeing the entire intersection.

Maybe it is just a glitch in the visualizations?
 
there does not seem to be object permanence.

I've been watching that too with objects in general. It was one of the areas I thought 4D HAD to address. If 4D is only on city streets, that would explain highway glitches (i.e. no change), but you'd think objects in general might glitch in and out at a distance; by the time they are in the foreground they surely must be stable. How else can you make safe driving decisions? Lanes / turns / drivable space / cones / vehicles / pedestrians should surely all be very stable by the time they could affect your car?

It hardly makes sense not to show a realistic visualization. That is worrying.
 
More concerning, if you look at the FSD visualizations, there does not seem to be object permanence. Cars blink in and out of existence on the screen based on what the cameras can see in the moment. We can see it more in busy traffic when cars block the view of other cars, rendering them temporarily invisible to the Tesla. The road edges of the intersection also seem to flicker when traffic is blocking the cameras from seeing the entire intersection.

Maybe it is just a glitch in the visualizations?
What is visualized is not the same as what the computer thinks. It only shows objects where the car thinks the probability is above a certain threshold at the moment. It might think there is a 40% chance that a car is at coordinate XY and not visualize it, but it will still drive a bit cautiously until it knows.

I assume that with more data and some tweaking of the forgetting factor in the temporal network, the blinking in and out will lessen.
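If that guess is right, the fix is conceptually simple. As a minimal sketch of the idea (the threshold, the forgetting factor, and the function names here are all invented for illustration, not anything from Tesla's actual stack), the visualization could smooth each object's per-frame detection confidence over time and only draw objects whose smoothed confidence is above a threshold:

```python
# Hypothetical sketch: smooth per-frame detection confidence so tracked objects
# fade out gradually when occluded instead of blinking off instantly.
# All numbers and names are invented for illustration.

FORGETTING_FACTOR = 0.7   # weight kept from the previous smoothed confidence
SHOW_THRESHOLD = 0.5      # only draw objects above this smoothed confidence


def update_confidence(prev_smoothed: float, frame_confidence: float) -> float:
    """Exponential moving average: old belief decays, new evidence blends in."""
    return FORGETTING_FACTOR * prev_smoothed + (1 - FORGETTING_FACTOR) * frame_confidence


def visible_objects(tracks: dict, frame_confidences: dict) -> list:
    """Update every track with this frame's confidence (0.0 if not detected)
    and return the IDs that should be drawn on screen."""
    shown = []
    for obj_id in tracks:
        tracks[obj_id] = update_confidence(tracks[obj_id], frame_confidences.get(obj_id, 0.0))
        if tracks[obj_id] >= SHOW_THRESHOLD:
            shown.append(obj_id)
    return shown


# A car that was seen with 0.9 confidence, then occluded: it stays on screen for
# one more frame and then fades out, rather than vanishing the instant it is blocked.
tracks = {"car_1": 0.9}
for frame in range(4):
    print(frame, visible_objects(tracks, {}))
```

A higher forgetting factor would keep occluded cars on screen longer at the cost of showing stale objects; presumably the real planner uses something richer than this, but it shows why a single tuning knob could change how much the visualization flickers.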
 
A frustrating problem viewable in the FSD videos is that the car often, when turning left, almost drives into a stationary object. The driver yelps, takes control and, a few deep breaths later, restarts FSD. Deja vu of Teslas hitting stopped vehicles on divided highways...

This issue makes me think the radar sensor and the cameras' depth perception are not working together. Perhaps it's a small number of training images of angled cars as viewed during a turn. That can be improved with more usage of FSD. But why isn't the radar assisting in "seeing" a large metal object nearby?

Is this the “reason” Lidar is a popular sensor in other driver assistance implementations? I don’t want to go there, but the Tesla literally should not drive into stationary objects....
 
Probably somewhere around an accident probability of every 100-200k miles. This would be a ridiculous achievement. I don't know what Tesla's internal goal is.

Tesla's Autopilot (including Navigate on Autopilot) records an accident every 4-5 million miles. "Probability of every 100-200k miles" is the opposite of safe. It is worse than actual driving. Human driving has an accident every 400,000 miles.

When will FSD be safe? When it achieves an accident rate of one every 10-20 million miles.
 
A frustrating problem viewable in the FSD videos is that the car often, when turning left, almost drives into a stationary object. The driver yelps, takes control and, a few deep breaths later, restarts FSD. Deja vu of Teslas hitting stopped vehicles on divided highways...

This issue makes me think the radar sensor and the cameras' depth perception are not working together. Perhaps it's a small number of training images of angled cars as viewed during a turn. That can be improved with more usage of FSD. But why isn't the radar assisting in "seeing" a large metal object nearby?

Is this the “reason” Lidar is a popular sensor in other driver assistance implementations? I don’t want to go there, but the Tesla literally should not drive into stationary objects....

Radar does not work with stationary objects.
 
Tesla's Autopilot (including Navigate on Autopilot) records an accident every 4-5 million miles. "Probability of every 100-200k miles" is the opposite of safe. It is worse than actual driving. Human driving has an accident every 400,000 miles.

When will FSD be safe? When it achieves an accident rate of one every 10-20 million miles.

I don't think you are really looking at the context behind those statistics.

AP's accident rate is the combination of a good driver assist and an alert driver that can intervene to prevent accidents. AP by itself, with no human in the car, would not have an accident rate of 1 per 4-5M miles. It only looks that good because an alert driver intervened often to save it from accidents.

Remember that FSD has to be able to operate without anybody in the driver's seat. So you need to look at what the accident rate would be if the driver had not intervened.

Also, we have to look at the causes of the accidents and how severe they are. As we learned from the Waymo report, AVs won't get into the same types of accidents as humans do. AVs won't get distracted and go off the road like humans do. AVs won't run red lights and T-bone other cars as humans do. AVs will be in much less severe accidents than humans.

So an accident rate of 1 per 100k miles might look bad compared to the human rate, but keep in mind that most of the AVs' accidents will be minor with no injuries, whereas the human rate of 1 per 400k includes a lot of fatal and severe crashes. So which is better: 1 per 100k miles that is mostly minor incidents, or 1 per 400k miles that includes a lot of serious crashes and fatalities?
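One back-of-the-envelope way to frame that trade-off (the rates, severity mixes, and harm weights below are invented for illustration; they are not from the Waymo report or any other source) is to weight each accident class by severity and compare the expected weighted harm per mile:

```python
# Back-of-the-envelope severity-weighted comparison. Every number here is an
# illustrative placeholder, not a measured statistic.

def weighted_harm_per_million_miles(accidents_per_mile, severity_mix, severity_weights):
    """Expected 'harm' per million miles = accident rate x average severity weight."""
    avg_weight = sum(severity_mix[k] * severity_weights[k] for k in severity_mix)
    return accidents_per_mile * avg_weight * 1_000_000

weights = {"minor": 1, "injury": 20, "fatal": 500}  # invented harm weights

# AV: 1 accident per 100k miles, assumed to be mostly minor
av = weighted_harm_per_million_miles(1 / 100_000,
                                     {"minor": 0.95, "injury": 0.049, "fatal": 0.001}, weights)

# Human: 1 accident per 400k miles, assumed to include more severe crashes
human = weighted_harm_per_million_miles(1 / 400_000,
                                        {"minor": 0.80, "injury": 0.19, "fatal": 0.01}, weights)

print(f"AV weighted harm per 1M miles:    {av:.1f}")
print(f"Human weighted harm per 1M miles: {human:.1f}")
```

With these made-up numbers the two come out roughly even, which is really the point: the answer depends entirely on the severity mix, not just the headline accident rate.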
 
Tesla's Autopilot (including Navigate on Autopilot) records an accident every 4-5 million miles. "Probability of every 100-200k miles" is the opposite of safe. It is worse than actual driving. Human driving has an accident every 400,000 miles.

When will FSD be safe? When it achieves an accident rate of one every 10-20 million miles.
The 500,000-mile metric is for police-reported accidents. Many accidents are not reported to the police.
Achieving one accident per 10-20 million miles is impossible because you can't always stop people from running into you.
It also might be acceptable to have a higher accident rate than humans if the type of accidents are less severe. It's a very complicated question! Check out this safety paper from Waymo: https://storage.googleapis.com/sdc-...Waymo-Public-Road-Safety-Performance-Data.pdf
On the other hand, if the car kills a pedestrian every 50 million miles, that would be completely unacceptable, since the total pedestrian fatality rate is about one per 485 million miles in the US (I think).
 
They are deep questions.

Do you not already have electronic mirrors? There are several implementations from various manufacturers in Europe.

Remember 'your entertainment system failing (MCU1) is not a safety issue', except when it affects demisting, wiper control, light control, the backup camera, etc.?

Personally I think there is a good case for retaining old-fashioned reflective mirrors for as long as there could be a human driver. The rear-view mirrors with integrated monitor that I have seen do have some advantages, but also drawbacks that on balance leave the old-fashioned mirror with plenty of value.

Of course there's no harm at all in adding bird's-eye and other cool display / camera features, but with cameras still prone to problems like B-pillar cam condensation, I don't want to drive a car that relies on them.

Another example: my screen has gone totally black while driving. Nothing but sound. It happened twice, lasting the whole trip home until I reset it. Tesla told me they couldn't find anything wrong.
 
True. I agree Tesla will likely be first at a widely deployed FSD. The question is how advanced it will be. IMO, a wide deployment of FSD that requires driver interventions every 50 miles is less impressive than, for example, a wide deployment of FSD that is truly driverless. For me personally, I am more impressed with a limited deployment of driverless FSD than with a wide deployment of FSD that requires constant driver supervision. For me, driverless FSD is the true prize, not how many cars you put FSD on.

I'm not sure "every 50 miles" should be regarded as "constant driver attention". Given that FSD is going to be L2/L3, the unanswered question here is: what is an acceptable rate of disengagement? And, just as important, under what circumstances? Every 50 miles would be bad on a freeway, not bad driving in town.

I think I'm the opposite of you .. I'm more impressed with a system that can be used anywhere with a low but acceptable level of disengagement than with a system that can be used in a very restricted deployment but requires no intervention (and I also question the utility of such a system). Perhaps both have their place for different uses, but ultimately a geofenced system isn't going to cut it for private vehicle use imho.
 
This argument will go on for a long time, and I'm not going to wade in until we have a better idea of how well the new FSD works, which won't be until next year.
What I wonder is when we get to the point where the driver has to remain behind the wheel but is not expected to take over immediately. Examples: it's snowing and the cameras are blocked, parking garages, accident scenes, or a road outage. You get the picture.

Then, once regulators agree on statistics confirming that FSD is safer than even the best drivers, why couldn't Tesla be given government approval so that drivers can be behind the wheel but look at their phone or whatever else distracts them?
In this scenario FSD does not have to be ready as a robotaxi service, since the driver's sole purpose is to take over within a minute or two to deal with very unique edge cases. This would add real value to FSD to justify a $10k price tag and be a significant advantage for Tesla.

Too many people are getting bogged down with the L4/L5 argument. The scenario above would be seen as a real positive by many, yet it would not eliminate the driver completely. Would it be 100% FSD? No, but getting there in steps would be a major improvement, and it would avoid the all-or-nothing framing too many people are stuck on. I could see something like this happening sometime in 2022, once there is enough time to collect and analyze statistics.

We also need a new name for this hybrid. Any suggestion?
 
If you're not paying attention, how will you know to disengage it?

There are two types of disengagement .. driver-initiated and car-initiated, and it's important to distinguish them. Obviously, car-initiated disengagement requires less vigilance on the driver's part. So the issue isn't "disengagements per miles driven", it's "driver disengagements per miles driven".

In the current beta we are seeing a lot of driver disengagement. Why? At first glance, you might say "because the car made a mistake". But actually, if you watch, many of the disengagements are "I thought the car was going to make a mistake". This is hardly surprising; it's a beta, and the drivers have been warned to be vigilant (and are obviously anxious not to get into a crash etc). So in any dubious situation they quickly shut off FSD. I know I would do the same. But do we know in how many of these cases the car would actually have caused a crash?

I don't pretend to know the answer to this, but I'm willing to bet that as the beta is shaken down, the number of driver disengagements will go way down. First, because the car will get better, and second, because drivers will realize that the car sometimes has better judgement than the human. After all, it has far faster reaction times, and can judge distances to another car to within an inch or two .. better than a human.
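One way to make that measurable (purely hypothetical bookkeeping; none of these categories or numbers come from Tesla) would be to tag each disengagement with who initiated it and whether a later review judged it actually necessary, then report rates per category:

```python
# Hypothetical sketch: split disengagements by initiator and by whether a
# post-hoc review decided the car would actually have caused a problem.
from dataclasses import dataclass


@dataclass
class Disengagement:
    initiator: str   # "driver" or "car"
    necessary: bool  # did review conclude the intervention was actually needed?


def rates(events, miles):
    driver = [e for e in events if e.initiator == "driver"]
    necessary = [e for e in driver if e.necessary]
    return {
        "miles_per_driver_disengagement": miles / max(len(driver), 1),
        "miles_per_necessary_driver_disengagement": miles / max(len(necessary), 1),
    }


# Made-up data: 10 driver disengagements over 500 miles, only 3 judged necessary.
events = [Disengagement("driver", i < 3) for i in range(10)]
print(rates(events, 500.0))
```

If only a minority of driver disengagements turn out to have been necessary, the effective miles-per-needed-intervention figure looks a lot better than the raw one.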

Remember emergency braking before ABS? You were taught NOT to slam on the brakes but to brake hard, though not so hard that the car would skid. Along came ABS, and the rule became "slam on the brakes", because the ABS system will perform optimum braking better than a human could. At some point, we are going to see similar rules as cars take over more driving functions... "don't grab the steering wheel, the car can figure out a safe way to get out of an accident better than you can, and do it faster" isn't too far in the future.
 
"Probability of every 100-200k miles" is the opposite of safe. It is worse then actual driving. Human driving has an accident every 400,000 miles.

I think the 1 accident per 400k miles is only counting serious accidents.

According to this article, if you count all collisions, the human accident rate is 40-60 accidents in 6.1M miles or 1 accident per 101k-152k miles.

"Nationally, 6.1 million miles of driving by a good driver should result in about 40-60 events, most of which are small dings, 22-27 or which would involve an insurance claim, 12 which would get reported to police and 6 injury crashes."

Waymo Data Shows Superhuman Safety Record. They Should Deploy Today

So an AV with 1 accident per 100k-200k miles would actually be as safe as or safer than human drivers.
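As a quick sanity check on the arithmetic behind that quote:

```python
# 40-60 events over 6.1 million miles, from the article quoted above.
miles = 6_100_000
for events in (40, 60):
    print(f"{events} events -> about 1 per {miles / events:,.0f} miles")
# 40 events -> about 1 per 152,500 miles
# 60 events -> about 1 per 101,667 miles
```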
 
We also need a new name for this hybrid. Any suggestion?
It already has a name, it's SAE Level 3.
You could add a description of the operational design domain. For example, Audi calls their non-existent system "Traffic Jam Pilot" because it only works in slow traffic on the highway.
But do we know in how many of these cases the car would actually have caused a crash?
That's why it's important for engineers to go back and simulate each disengagement and try to figure out what would have happened. Right now the disengagement rate seems far too high for that to be practical. They need to focus on making the car drive smoothly and predictably first, before they start trying to measure theoretical driverless safety.
 
Many a true thing said in jest.....

Since Tesla needs to PROVE that FSD is safer than a human by a decent margin, you could easily see discounts applied the more miles you have FSD engaged.

Slightly difficult for Tesla; it might need to distinguish between highway and city performance of FSD. Millions of freeway miles without disengagements don't represent city performance, and it's kinda annoying to have to focus specifically on humans still being safer around town.

I've lost track of which disengagements get reported. Are they already categorized?
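If they aren't, one simple way to slice them (hypothetical bookkeeping only, not how Tesla actually reports anything) is to tag each disengagement with the road type it happened on and compute a separate miles-per-disengagement figure for each domain:

```python
# Hypothetical sketch: per-domain disengagement rates, since highway and city
# performance shouldn't be lumped together. All numbers are made up.
from collections import Counter

miles_by_domain = {"highway": 4_000.0, "city": 600.0}
disengagements = ["city", "city", "city", "highway"]   # road type where each occurred

counts = Counter(disengagements)
for domain, miles in miles_by_domain.items():
    n = counts.get(domain, 0)
    rate = miles / n if n else float("inf")
    print(f"{domain}: {n} disengagement(s) in {miles:,.0f} miles -> 1 per {rate:,.0f} miles")
# highway: 1 disengagement(s) in 4,000 miles -> 1 per 4,000 miles
# city: 3 disengagement(s) in 600 miles -> 1 per 200 miles
```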

Why not let FSD do a road test to get a driver’s licence like everyone else? If it passes, it is good enough..