Welcome to Tesla Motors Club

FSD Beta 10.13

How is it flawed? The more interventions, the lower the score, weighted by miles. Interventions are also weighted by severity, which is good, because interventions that are more safety-critical should lower the score more than those that are less safety-critical. The score also covers all the important types of interventions. And "perfect drives" improve the score.

Perhaps you don't understand how the scoring works?

Add number of interventions where FSD Beta would have hit something multiplied by 15
Add number of interventions where FSD Beta attempted an illegal act multiplied by 10
Add number of interventions where FSD Beta caused confusion in other drivers multiplied by 3
Add number of interventions where FSD Beta did an incorrect act multiplied by 2
Add number of interventions where driver was not confident in FSD Beta action but it was not illegal or unsafe multiplied by 1
Minus number of "zero intervention" drives greater than 3 miles multiplied by 2

That basically gives you a total of "bad driving" points. Divide by total miles to get "bad driving" points per mile. Then multiply by 100 and subtract it from 100 to get a percentage of "good driving" per mile.

Here is an example:
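As a rough sketch, the arithmetic described above can be written out in code. The weights come from the list; the sample drive log is invented for illustration:

```python
# Sketch of the intervention-weighted scoring described above.
# Weights match the list; the sample drive log is hypothetical.

WEIGHTS = {
    "would_have_hit": 15,  # would have hit something
    "illegal_act": 10,     # attempted an illegal act
    "confused_others": 3,  # caused confusion in other drivers
    "incorrect_act": 2,    # incorrect (but legal, safe) act
    "low_confidence": 1,   # driver not confident, nothing unsafe/illegal
}
PERFECT_DRIVE_CREDIT = 2   # per zero-intervention drive over 3 miles

def good_driving_score(interventions, perfect_drives, total_miles):
    """Percentage of 'good driving' per mile, as described above."""
    bad = sum(WEIGHTS[kind] * n for kind, n in interventions.items())
    bad -= PERFECT_DRIVE_CREDIT * perfect_drives
    return 100 - (bad / total_miles) * 100

# Hypothetical 500-mile log: 1 near-hit, 2 incorrect acts,
# 3 low-confidence takeovers, 5 "perfect" drives over 3 miles.
score = good_driving_score(
    {"would_have_hit": 1, "incorrect_act": 2, "low_confidence": 3},
    perfect_drives=5,
    total_miles=500,
)
# (15 + 4 + 3 - 10) = 12 bad points over 500 miles -> 97.6% good driving
```

Note how a single near-hit (weight 15) moves the score more than everything else combined, which is exactly the interpretation issue debated below.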

The flaw is in the interpretation. Just one example: how many times did I intervene, thinking I would hit something, only to wait a moment and see FSD make the adjustment? Lots. With a multiplier of 15, that makes a significant difference in the overall score.
 
The flaw is in the interpretation. Just one example: how many times did I intervene, thinking I would hit something, only to wait a moment and see FSD make the adjustment? Lots. With a multiplier of 15, that makes a significant difference in the overall score.

Sure but if you thought you were going to hit something, that is still a huge fail on the part of FSD. Good AVs should detect objects with enough advance notice that they don't need to make last minute adjustments to barely avoid hitting an object. The kind of autonomous driving that is needed for safe, reliable robotaxis should drive safely and smoothly and not need to make a last minute adjustment to avoid hitting something unless it is a situation where the other human driver did something crazy and unexpected.
 
Zero chance. First it will be released to a few YouTubers/influencers, and after a few days, if all goes well, it will start being SLOWLY rolled out over a couple of weeks while they check for bugs. Also, since Elon tweeted that 10.13 "still needs a few tweaks," that likely means there will be a 10.13.1, or at least some delay, before it goes to the YouTubers/influencers. So probably still 2 to 4 weeks before most of us get it.
Yeah, they gotta finish hard coding in that one unprotected left turn.
 
The only subjective metric is E. Collisions are not subjective. Unsafe, illegal and incorrect acts are not subjective.

A - "Collision Avoidance" is a judgement call on the part of the human. If the collision does not occur, it requires a prediction of what the car would have done, as well as what other drivers would have done in response. If this metric were simply "caused a collision," I would accept it as objective. But let's not test it that way!

C - Tell me how to objectively measure or determine other drivers' confusion without interviewing the other driver. Purely subjective.

D - An "incorrect act", as opposed to an illegal one, is almost pure judgement call. Aside from missing a turn, or taking a wrong turn, an incorrect act might be considered passing a car "too close" to an upcoming turn, where it may make the driver uncomfortable because of the possibility of missing the turn. Or, perhaps getting into the right lane too soon and sitting behind a slower moving vehicle. All judgement calls.

E - "Perfect drive" is absolutely a subjective metric, since all of the subjective measurements are embedded in the concept of a "perfect drive". Imperfection might be construed as a little jerkiness on a turn, braking too hard, accelerating too gently or too hard, etc.
 
I wouldn't consider "smart routing" to be geofencing, as a lot of human drivers (including me) do this from time to time to avoid difficult maneuvers that aren't necessary to get from point A to point B. I'm sure we've all seen Waze routing want you to take a side street and then a UPL (unprotected left turn) to avoid a traffic light where the left turn may be backed up. I generally ignore that routing and opt to wait a couple of cycles at the light, for example, if I know the UPL crosses several lanes on a busy street. I may be stuck at that UPL longer than I would have waited at the traffic light...not to mention the protected turn at the light is much safer.
For me it includes all of Paris. :)

In the entire history of mankind there has never been a better deal than the cost of an average Uber ride in Paris.
 
Sure but if you thought you were going to hit something, that is still a huge fail on the part of FSD. Good AVs should detect objects with enough advance notice that they don't need to make last minute adjustments to barely avoid hitting an object. The kind of autonomous driving that is needed for safe, reliable robotaxis should drive safely and smoothly and not need to make a last minute adjustment to avoid hitting something unless it is a situation where the other human driver did something crazy and unexpected.
Except that now, knowing how it drives, I don't expect FSD will actually hit an object. Should it be better? Yes. But with the highest multiplier of 15, scores can be incorrectly skewed depending on how the driver handles this aspect of FSD.
 
I wouldn't consider "smart routing" to be geofencing, as a lot of human drivers (including me) do this from time to time to avoid difficult maneuvers that aren't necessary to get from point A to point B. I'm sure we've all seen Waze routing want you to take a side street and then a UPL (unprotected left turn) to avoid a traffic light where the left turn may be backed up. I generally ignore that routing and opt to wait a couple of cycles at the light, for example, if I know the UPL crosses several lanes on a busy street. I may be stuck at that UPL longer than I would have waited at the traffic light...not to mention the protected turn at the light is much safer.
Waze would often try to get me to do an UPL across a busy street where there was no chance of a break in traffic. I think that to succeed, aside from improving its driving skills, FSD is going to need navigation that takes its limitations into account.

For instance, my commute to work is mostly on a freeway. Exiting the freeway requires four sequential lane changes to get into a left turn lane. Getting on that freeway requires the same sort of thing in the other direction. I don't think the maneuver can be made legally, given the distance over which CO requires a lane change to be signaled. Human drivers just do it anyway. Maybe the other traffic is too thick, and you miss the left turn and have to screw around and come back.

When it gets to that "regulatory approval" time, it's going to be like the rolling stop brouhaha. States won't approve it unless it follows the traffic laws. FSD is going to have to do what I and the other drivers -should- do. Go one more exit down. It's then a right turn onto the access road. Also there are no stop lights. The "shorter" route has two long stop lights. I suspect on average the longer distance route is shorter time.
 
The UPL problem can be solved by avoiding them in those cases, and just turning right, and adjusting route info. It may add a few minutes to the drive, but is much safer. There are a few busy UPLs in my area that most humans won't try, except for the bold. A few of the busy ones actually had construction over the last few years and turned into traffic controlled lights.

The question is, can this be programmed into the map info, so that the route avoids busy UPLs?
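It seems like it could be. As a toy sketch (the graph, drive times, and penalty value are all invented, not real map data), a shortest-path router can charge edges tagged as busy UPLs an extra time cost, so the planner prefers protected alternatives without forbidding UPLs outright:

```python
import heapq

# Toy routing sketch: each edge carries a drive time (seconds) plus a
# maneuver tag, so a busy unprotected left can be penalized in the map
# data. The graph, times, and penalty are invented for illustration.
graph = {
    "A": [("B", 60, "upl"), ("C", 90, "right")],
    "B": [("D", 30, "straight")],
    "C": [("D", 45, "straight")],
    "D": [],
}

def best_route(graph, start, goal, upl_penalty=120):
    """Dijkstra where UPL edges cost extra seconds, steering around them."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, secs, maneuver in graph[node]:
            extra = upl_penalty if maneuver == "upl" else 0
            heapq.heappush(queue, (cost + secs + extra, nbr, path + [nbr]))
    return None

# With the penalty, the right-turn route A->C->D wins even though the
# UPL route A->B->D has less raw drive time.
```

Setting the penalty per intersection (based on traffic data, say) rather than globally would let the router still take quiet UPLs.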
 
Sure but if you thought you were going to hit something, that is still a huge fail on the part of FSD. Good AVs should detect objects with enough advance notice that they don't need to make last minute adjustments to barely avoid hitting an object. The kind of autonomous driving that is needed for safe, reliable robotaxis should drive safely and smoothly and not need to make a last minute adjustment to avoid hitting something unless it is a situation where the other human driver did something crazy and unexpected.
It’s tricky though, isn’t it? With enough smarts, an AV could make very safe decisions that would still scare a human passenger .. especially when AVs are common enough that they coordinate amongst themselves. Or the car could squeeze quite easily through a gap that a human couldn’t really attempt.

This is related to phantom braking, in that some percentage of PB is probably not phantom at all .. it’s a genuine braking event, but the human driver missed something the car saw as a potential danger.

In other words, there is a difference between “the car driving safely” and “the car driving like a safe human driver”. We’re not at that point yet, but it’s going to be interesting as we approach it. After all, with much faster reaction times, why does an AV have to restrict itself to the same speed limits as a human?
 
It’s tricky though, isn’t it? With enough smarts, an AV could make very safe decisions that would still scare a human passenger .. especially when AVs are common enough that they coordinate amongst themselves. Or the car could squeeze quite easily through a gap that a human couldn’t really attempt.

This is related to phantom braking, in that some percentage of PB is probably not phantom at all .. it’s a genuine braking event, but the human driver missed something the car saw as a potential danger.

In other words, there is a difference between “the car driving safely” and “the car driving like a safe human driver”. We’re not at that point yet, but it’s going to be interesting as we approach it. After all, with much faster reaction times, why does an AV have to restrict itself to the same speed limits as a human?
If a system needs to be monitored by humans, it needs to drive within the safety envelope of humans. Once you remove the human, then you can push the safety envelope. You also don't want to cause panic in other drivers, so to really push it you need to remove all human drivers from the road.
 
The UPL problem can be solved by avoiding them in those cases, and just turning right, and adjusting route info. It may add a few minutes to the drive, but is much safer. There are a few busy UPLs in my area that most humans won't try, except for the bold. A few of the busy ones actually had construction over the last few years and turned into traffic controlled lights.

The question is, can this be programmed into the map info, so that the route avoids busy UPLs?
Agree with that. If FSD avoids all UPLs and substitutes a right turn and a U-turn, as long as it is done safely, that is still FSD. The first objective should be to get to the destination safely, even if it means taking a longer route than a human would. Optimization of routes and UPLs can wait.
 
Agree with that. If FSD avoids all UPLs and substitutes a right turn and a U-turn, as long as it is done safely, that is still FSD. The first objective should be to get to the destination safely, even if it means taking a longer route than a human would. Optimization of routes and UPLs can wait.
I was driving a few weeks ago during rush hour (manually, not on Beta) and came to an UPL. Traffic was so heavy that I sat at the intersection for nearly 3 mins before deciding it just wasn't worth it and turned right. Added several minutes to my drive to navigate around it, but saved my blood pressure from sitting at that UPL for so long.
 
I was driving a few weeks ago during rush hour (manually, not on Beta) and came to an UPL. Traffic was so heavy that I sat at the intersection for nearly 3 mins before deciding it just wasn't worth it and turned right. Added several minutes to my drive to navigate around it, but saved my blood pressure from sitting at that UPL for so long.
Agree, and a key problem with your example is that you attempted the UPL, wasted a lot of time and then gave up. Also, bailing on the turn and re-merging into traffic isn’t always so easy to do safely, not even mentioning the drive time wasted in attempting, waiting and then giving up on the UPL.

To me, the takeaway from this discussion is that the navigation team needs to put significant emphasis on identifying such difficult upcoming maneuvers, and weighting the alternatives more on safety and predictability, less on "time saved but only if the tricky maneuver succeeds". The logic of this decision would be quantified by a figure of merit comparing alternative routings and combining probabilities of
  • safety risk
  • time risk of the "ego" path
  • traffic interference risk (e.g. contributing to a left turn queue that blocks the main traffic lane)
  • risk of being unable to extricate ego from the attempted maneuver
This weighted decision logic doesn't have to be "software 1.0". It seems very compatible with ML techniques, though I'm not clear how much that is directing navigation logic now.
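That figure-of-merit idea can be sketched as a weighted sum of the listed risks, computed per candidate route and minimized. The weights and risk estimates below are invented placeholders, not anything from an actual planner:

```python
# Sketch of the figure-of-merit comparison above: each candidate route
# gets a weighted sum of its estimated risks (lower is better). The
# weights and risk numbers are invented placeholders, not real data.

RISK_WEIGHTS = {
    "safety": 10.0,        # safety risk
    "time": 1.0,           # time risk of the "ego" path
    "interference": 3.0,   # blocking or impeding other traffic
    "extrication": 5.0,    # getting stuck mid-maneuver
}

def figure_of_merit(route):
    """Lower score = better route under the weighted risk model."""
    return sum(RISK_WEIGHTS[k] * route["risks"][k] for k in RISK_WEIGHTS)

upl_route = {
    "name": "short route with busy UPL",
    "risks": {"safety": 0.6, "time": 0.3, "interference": 0.5, "extrication": 0.4},
}
detour_route = {
    "name": "longer route, protected turns only",
    "risks": {"safety": 0.1, "time": 0.5, "interference": 0.1, "extrication": 0.05},
}

best = min([upl_route, detour_route], key=figure_of_merit)
```

An ML-based planner could learn the weights (or the risk estimates themselves) from intervention data instead of hand-tuning them, which fits the "doesn't have to be software 1.0" point.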

Finally, there needs to be an easy facility allowing the driver/FSD passenger to influence the route planning. Waypoints are better than nothing, but the ability to request preferred route segments, and to veto problematic ones, would significantly improve the FSD experience.
 
Finally, there needs to be an easy facility allowing the driver/FSD passenger to influence the route planning. Waypoints are better than nothing, but the ability to request preferred route segments, and to veto problematic ones, would significantly improve the FSD experience.
This would be a great solution - putting a setting in the navigation options, just like HOV and Toll access, for "Unprotected Left Turns": perhaps either a toggle (meaning it will never attempt them) or a selection, like Chill, Standard, and Assertive, for when it will attempt them.
 
This would be a great solution - putting a setting in the navigation options, just like HOV and Toll access, for "Unprotected Left Turns": perhaps either a toggle (meaning it will never attempt them) or a selection, like Chill, Standard, and Assertive, for when it will attempt them.
I’ve thought about this too as I’ve watched FSD Beta. Close to my home it can take two routes to the main road that are two blocks apart. One leads to a UPL, the other to a traffic signal. The car always takes the UPL because it’s the shorter route. A human (like me) knows the light adds only 1 minute to the drive but makes it much easier .. a routing option to “Prefer Controlled Turns” would seem sensible.