FSD Beta Videos (and questions for FSD Beta drivers)

I completely agree that there are different classes of errors. The first one is the most critical since it is safety-related. The second one is still bad: even though you say it is not a direct safety hazard, it involves bad driving that could become a problem. The last one is trivial and unimportant since it is neither bad driving nor a safety issue. So the first two classes of errors need to be minimized as much as possible. It is hard to put an exact number on it, but the first, safety-related class would probably need to be in the range of 1 error per 100k-200k miles IMO.

Ultimately, autonomous driving means no human intervention is allowed. So the autonomous driving system needs to operate the vehicle safely and smoothly even when no human intervention is possible (as if the steering wheel and pedals were removed).

The second one can easily get you a ticket, and doing something like 35mph in a 50 (where people are often going even faster) does cause a safety issue due to the large difference in speed. Plus it could lead to road rage.

Just as important is that it impacts acceptance of autonomous driving. What I hate most about EAP/FSD is that it's extremely embarrassing to use. It's basically out to embarrass me.

So I agree that it should be minimized as much as possible, even though it's less severe than the first case.

For the third one, I think this could be reduced simply by allowing more flexibility in adjusting its parameters. In fact, I want a "drive like I do" recording capability for common drives (a rough sketch of what that could look like is below). That way it knows to drive 20mph in the 25mph zone near my house (lots of kids playing), 42mph in a 35mph zone on the main road (the current percentage-based set speed works great for this), and then 35mph in the 25mph zone where a bunch of rich people decided to make it 25mph despite it not being a 25mph-type zone. They did it for noise, not safety. It used to be 35mph, and so 35mph is what it shall be. It's the most heavily enforced road in all of Western WA, so don't go anything over 35mph. :p

The "driving style" recording would also include places to ignore the nav because the nav doesn't always show the easiest route.
 
For some reason navigation and/or FSD Beta gets quite confused by this intersection:

[Attached image: path uturn.jpg]


I'm guessing the quick loop-around route (left + 3 rights) makes it think it has already progressed to the same intersection, so it should return to the street it just "accidentally" exited. But even then, it's quite a strange path, wanting to U-turn directly in front of oncoming traffic.
 
This problem has been around for ages. I'm surprised it never seems to get any better. They will really need to get object tracking across camera views rock solid to get rid of these phantom panic attacks.
Oh definitely, the issue of combining predictions from multiple cameras has been a big one, especially with general-release production Autopilot. The visualization logic there seems to render objects detected by the main, pillar, and repeater cameras independently, since they have mostly non-overlapping views, but this has issues with large objects like trucks and buses that span more than one view.

I believe a big portion of the 4D rewrite was training the birds-eye-view layers of the neural network. This means each individual camera's network layers still predict objects, and the network now additionally takes on the hard work of figuring out which detections across main/fisheye/pillar are actually the same object, so it can output a single prediction. As with other predictions, there can be mistakes that ideally get fixed with better training data, and one would expect Tesla to have a general trigger to detect these phantom duplicate objects and automatically improve on this, even from shadow mode.
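
To make the duplicate-object problem concrete, here is a toy sketch of the kind of cross-camera deduplication the birds-eye-view layers would have to learn. To be clear, this is hand-written geometry purely for illustration; in the real system the fusion is reportedly learned inside the network, and the function name and merge threshold here are my own invention.

```python
# Toy cross-camera dedup: detections already projected into a shared
# top-down vehicle frame get greedily merged if they are close enough.
import math

def dedup_detections(detections, merge_radius_m=1.5):
    """detections: list of (camera, x_m, y_m) tuples in the vehicle frame."""
    merged = []
    for cam, x, y in detections:
        for obj in merged:
            if math.hypot(x - obj["x"], y - obj["y"]) < merge_radius_m:
                obj["cameras"].add(cam)  # same physical object seen twice
                break
        else:
            merged.append({"x": x, "y": y, "cameras": {cam}})
    return merged

# A truck straddling the main and pillar views should merge into one object:
dets = [("main", 10.2, 1.0), ("pillar", 10.6, 1.3), ("repeater", -4.0, 2.0)]
print(dedup_detections(dets))  # 2 objects; the first seen by two cameras
```

Get the projection or the threshold slightly wrong and you see exactly the symptom discussed above: one bus rendered as two phantom vehicles.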
 
It is hard to put an exact number on it, but the first, safety-related class would probably need to be in the range of 1 error per 100k-200k miles IMO.

Again you are plucking numbers out of the air. What is the basis for these numbers? Probably the only factual statement that can be made is that autonomous (or mostly autonomous) driving systems should be at least as safe as the average human driver (though what that means would need careful quantification), or possibly safer than average by some defined factor.

Possibly, you could add that it should also pass some minimal basic driving competencies, but again that would need careful definition (and would probably cause no end of to-and-fro between makers of self-driving cars as they strive to get their systems passed at the expense of others).
 
Again you are plucking numbers out of the air. What is the basis for these numbers? Probably the only factual statement that can be made is that autonomous (or mostly autonomous) driving systems should be at least as safe as the average human driver (though what that means would need careful quantification), or possibly safer than average by some defined factor.

Possibly, you could add that it should also pass some minimal basic driving competencies, but again that would need careful definition (and would probably cause no end of to-and-fro between makers of self-driving cars as they strive to get their systems passed at the expense of others).

I am not plucking numbers out of thin air. The numbers are based on human safety. According to national data, a "good driver" gets into an accident every 101,667 to 152,500 miles. So I picked 100k-200k to put the autonomous car in the "good driver" range of safety.
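
For scale, those figures are easy to convert into the per-million-mile rates often used when discussing AV safety:

```python
# Converting miles-between-accidents into accidents per million miles.
for miles_between in (101_667, 152_500):
    rate = 1_000_000 / miles_between
    print(f"1 accident per {miles_between:,} miles ~= {rate:.1f} per million miles")
# 1 accident per 101,667 miles ~= 9.8 per million miles
# 1 accident per 152,500 miles ~= 6.6 per million miles
```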
 
FSD may go to the center lane because it can see farther in that lane. The fence is blocking the view of the rightmost lane somewhat. If there were an obscured car in the rightmost lane, moving to the center lane would let it pass as Chuck accelerated in the center lane.
 
I am not plucking numbers out of thin air. The numbers are based on human safety. According to national data, a "good driver" gets into an accident every 101,667 to 152,500 miles. So I picked 100k-200k to put the autonomous car in the "good driver" range of safety.

Good, but you should say that. Never quote numbers without some form of reference or justification. So much made-up opinion these days is posted as fake fact; it's good to keep things rational.
 
I am not plucking numbers out of thin air. The numbers are based on human safety. According to national data, a "good driver" gets into an accident every 101,667 to 152,500 miles. So I picked 100k-200k to put the autonomous car in the "good driver" range of safety.

Doesn't using this statistic mean we should only count interventions that would have led to an accident?
 
#FSDBeta 10.1 version 2020.48.35.6 video showing some unprotected right turns, with a good discussion on route prediction vs route visualization in addition to lane choice logic.
Camera setup really helps. I wish there was some way to set up a camera for the wide-angle fisheye lens up front too, but there's no easy way to know how much that one can see, so no easy way to decide where to place a camera beyond just wild-ass guessing.
 
Camera setup really helps. I wish there was some way to set up a camera for the wide-angle fisheye lens up front too, but there's no easy way to know how much that one can see, so no easy way to decide where to place a camera beyond just wild-ass guessing.

[Attached image: merged.jpg]


The top three images are the front-facing camera views. You can see that the wide angle is clipped somewhat by the sides of the windshield mount. Mounting a camera with a 150-degree field of view on the roof close to the windshield should give a good approximation.
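
For anyone placing a camera, here's the quick arithmetic behind that suggestion: under a simple pinhole/flat-projection approximation (which gets rough for fisheye lenses as you approach 180 degrees), the width of the strip a camera sees at a given distance follows from the horizontal field of view. The 150-degree figure is the suggested aftermarket camera spec above, not a measured value for Tesla's fisheye.

```python
# Width of ground covered at distance d by a camera with horizontal
# FOV theta: w = 2 * d * tan(theta / 2) (pinhole approximation).
import math

def coverage_width_m(fov_deg: float, distance_m: float) -> float:
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

for d in (2, 5, 10):
    print(f"at {d} m: ~{coverage_width_m(150, d):.0f} m wide")
# at 2 m: ~15 m wide
# at 5 m: ~37 m wide
# at 10 m: ~75 m wide
```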
 
Doesn't using this statistic mean we should only count interventions that would have led to an accident?

No. Avoiding accidents is just the bare minimum. You also want your autonomous car to be a good driver, follow the rules of the road, and respect other road users. So you also want to count interventions from illegal maneuvers, driving mistakes, etc., even if they would not necessarily have caused an accident, so that you can fix those issues and make your autonomous car a better driver.

Of course, not all interventions are equal. Some will be more serious than others. An AV maker should probably look at interventions on a case-by-case basis and prioritize fixes based on how serious they are.

Cruise put out a blog a while ago that lists four types of interventions that should be counted so they can be addressed (a rough sketch of tagging these follows the list):

1) Naturally occurring situations requiring urgent attention
These are interventions where the safety driver is just reacting to a fast-developing situation. Maybe the intervention was not needed, but in the moment you want to play it safe. For example, if a kid jumps out into the middle of the road, you are not going to wait and see if the autonomous car will brake in time; the driver needs to intervene immediately. You can always analyze the intervention later in simulation to see whether it was really necessary.

2) Driver caution, judgement, or preference
Sometimes the safety driver intervenes out of caution or just personal preference. Maybe the driver sees an upcoming construction zone and decides to disengage because they are not sure the AV can handle it. You would want to analyze the intervention later to see if it was necessary. Or maybe the safety driver prefers to be in the middle lane, so they disengage even though there was nothing wrong. If there was nothing unsafe or bad about what the autonomous car did, then you can probably ignore those interventions.

3) Courtesy to other road users
Some interventions might be caused by the autonomous car doing something "rude" to other road users, like cutting off another car. It might not have caused an accident, but you would still want to fix those interventions.

4) True AV limitations or errors
These are interventions where the autonomous car did something truly bad or unsafe that would have caused an accident. These might be caused by a perception or planning error, like the autonomous car not seeing another car or turning into oncoming traffic. These interventions are very serious and need to be fixed ASAP.

Source: The Disengagement Myth
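
As a sketch of how those four categories could drive triage, here's one possible tagging scheme. The enum names and the `needs_fix` logic are my own paraphrase of the blog's categories, not Cruise's actual tooling.

```python
# Tagging disengagements with the four categories above and deciding
# which ones feed the bug-fix queue. Names/logic are illustrative only.
from enum import Enum

class Intervention(Enum):
    URGENT_REACTION   = 1  # naturally occurring situation, urgent attention
    DRIVER_PREFERENCE = 2  # driver caution, judgement, or preference
    DISCOURTESY       = 3  # rude to other road users
    TRUE_AV_ERROR     = 4  # true AV limitation or error

def needs_fix(kind: Intervention, confirmed_necessary: bool) -> bool:
    """confirmed_necessary: result of replaying the event in simulation."""
    if kind in (Intervention.URGENT_REACTION, Intervention.DRIVER_PREFERENCE):
        return confirmed_necessary  # replay in sim; ignore if the AV was fine
    return True                     # discourtesy and true errors always count
```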
 
No. Avoiding accidents is just the bare minimum. You also want your autonomous car to be a good driver, follow the rules of the road, and respect other road users. So you also want to count interventions from illegal maneuvers, driving mistakes, etc., even if they would not necessarily have caused an accident, so that you can fix those issues and make your autonomous car a better driver.

Sure. But if you want to count all of those, you need to somehow count all the dunderheaded mistakes humans make and compare the interventions to that number. These will happen much more frequently than accidents, so comparing all interventions to accident numbers is not valid.
 
Sure. But if you want to count all of those, you need to somehow count all the dunderheaded mistakes humans make and compare the interventions to that number. These will happen much more frequently than accidents, so comparing all interventions to accident numbers is not valid.

No, you don't compare all interventions to accidents; that's apples and oranges. You only count "necessary interventions" (both safety and non-safety), and you prioritize fixing them based on how serious they are.
 
No, you don't compare all interventions to accidents; that's apples and oranges. You only count "necessary interventions" (both safety and non-safety), and you prioritize fixing them based on how serious they are.
We seem to be talking at cross purposes. I don't disagree that all types of interventions should be minimized. My only disagreement is that you wanted to use accident rates as a benchmark for interventions, including interventions that would not otherwise have led to accidents. For the most part, I agree with what you are saying.