FSD Beta 10.69

Stopping at stop signs is fine. It is not a problem and is not slow when done correctly.

You’ll recall Tesla rolled through stop signs slowly. Extremely pointless. Now they stop for stop signs, slowly. The stopping part is not the problem.
Yes, it's a problem, because people sometimes honk at me and it slows everyone down. And no, Tesla did not go through stop signs slowly. There were videos showing it rolling at 5-7 mph.
 
I think you are missing my point - the behavior on approach to stop signs really has not substantially changed, even though the behavior on actually stopping has.

As such, the approach is (and was) highly non-optimal: slowing down too early much of the time (though not always), and so on.

I wasn’t saying that rolling through the stop was slow.

In any case, stopping for stop signs does not need to be slow. A human can do it optimally very easily (with adjustable g force).

The point is to start slowing down when you need to and then slow all the way to a stop, rather than slowing down, then coasting, etc., which is both annoying and slow.
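To put rough numbers on that single-ramp profile (my own back-of-envelope sketch; the speed and deceleration figures are illustrative assumptions, not anything from Tesla):

```python
# Back-of-envelope sketch (not Tesla's planner): for one smooth braking
# ramp with no coasting phase, braking should begin d = v^2 / (2a)
# before the stop line. All numbers are illustrative.
v = 13.4   # m/s, roughly a 30 mph approach speed
a = 2.5    # m/s^2, a comfortable deceleration (~0.25 g, the "adjustable g force")

d = v**2 / (2 * a)   # distance at which braking should begin
t = v / a            # time the whole stop takes
print(f"start braking {d:.0f} m out; stopped in {t:.1f} s")
# ~36 m and ~5.4 s -- starting to brake earlier than this just adds crawl time.
```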

Yes, it's a problem because people sometimes honk at me and it slows everyone down

Yes, this is because it is stopping for the stop sign slowly and irregularly, not because it is stopping at the stop sign. It can also sometimes remain stopped for too long, of course. But stopping is not the problem.

I think you, and anyone who has used FSD with stop signs, are familiar with the behavior I describe.
 
Don't agree. No matter how well you slow down, the added stop instead of a 2-3 mph slow roll is a pain. That is why I sometimes get honked at even when I drive manually and stop as you describe. Not much patience where I drive.
 
The behavior I notice is that the car approaches the stop sign quite slowly, comes to a brief complete stop with the nose right at the stop sign, but then creeps forward (again slowly) until the nose of the car is almost at the creep limit, i.e. the actual edge of the crossing road. So if there's any cross traffic to wait for, it's effectively a double stop. For anyone behind me, it would be at least a little confusing and probably annoying.

Perhaps I've been doing it wrong all these years, but I treat the stop sign as the stop command, and if there's no crosswalk, painted stop line, or pedestrian traffic, I come to a brief stop or near-stop closer to the crossing road edge, i.e. I stop only once, far enough forward that I can clearly see the cross traffic.

So I wonder, when NHTSA forced Tesla to do the complete stop, did they mandate that it must be at or behind the stop sign? That forces the excruciatingly slow double stop, because around here the stop signs are set quite far back from the actual cross street.
 

The worst part is I often see it do this when there's no reason to creep... there's nothing at all impairing visibility to the sides
 
Maybe it's partly related to the B-pillar cameras. They don't seem to work well, especially while the ego vehicle is in motion. I had my son walk diagonally alongside the vehicle, and the UI showed him moonwalking in all directions, with only the general trend being correct. One can only imagine the huge noise involved in estimating his velocity and direction from that data. Said another way: garbage in, garbage out.
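A toy illustration of that last point (generic signal processing, not Tesla's actual pipeline; the update rate and jitter figures are invented):

```python
import numpy as np

# Toy illustration: differentiating a jittery position track amplifies
# the jitter enormously, which is one way a steadily walking pedestrian
# can end up rendered as "moonwalking" in every direction.
dt = 0.1                                  # assume 10 Hz track updates
t = np.arange(0, 5, dt)
true_pos = 1.4 * t                        # pedestrian walking at 1.4 m/s
noisy_pos = true_pos + np.random.normal(0, 0.3, t.size)  # 30 cm of jitter

raw_vel = np.diff(noisy_pos) / dt         # naive finite difference
print(f"raw velocity noise: ~{raw_vel.std():.1f} m/s on a 1.4 m/s walk")

# Smoothing (here a crude exponential filter) tames the noise, but only
# by trading it for lag -- garbage in, garbage out either way.
alpha, v, smooth = 0.1, 0.0, []
for sample in raw_vel:
    v = alpha * sample + (1 - alpha) * v
    smooth.append(v)
print(f"smoothed velocity noise: ~{np.std(smooth):.1f} m/s")
```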
 
I had something of an epiphany with FSDb recently. After seeing the camera report button disappear and the "Early Access" moniker get dropped, I got to checking what the car uploads after drives.
Previously, when I got home to wifi, the car would upload significant amounts of data after I had tapped the camera to report issues.
Now my car uploads no more than a few KB after most drives; only occasionally does a drive trigger a larger upload.
So I tried to correlate which drives cause uploads, and I have (for me at least) an idea of which routes don't cause uploads.

To get to the point, eventually:
That means there is no point in me using FSDb on the routes where it fails but doesn't trigger an upload - so I just drive them manually :)
Why go through the frustration of a bad FSD experience when there is no point?
Once a route stops triggering uploads, I can quit using FSD on that route unless it handles it well.

As an aside, that suggests that they are looking for specific types of data.
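For what it's worth, the bookkeeping this amounts to is something like the sketch below. The route names, sizes, and threshold are all made up; there's no API for this, I'm just watching the wifi upload counter after each drive.

```python
from collections import defaultdict

# Hypothetical log of (route, bytes uploaded after the drive).
drives = [
    ("home-to-work",  2_400),
    ("home-to-work",  1_900),
    ("downtown-loop", 850_000_000),   # a big clip upload
    ("downtown-loop", 3_100),
    ("school-run",    2_700),
]

largest = defaultdict(int)
for route, upload_bytes in drives:
    largest[route] = max(largest[route], upload_bytes)

THRESHOLD = 1_000_000   # call anything over 1 MB "Tesla wanted this data"
for route, size in largest.items():
    verdict = "keep running FSDb" if size > THRESHOLD else "just drive manually"
    print(f"{route}: max upload {size:,} B -> {verdict}")
```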
 
Maybe it's partly related to the B-pillar cameras. They don't seem to work well, especially while the ego vehicle is in motion. I had my son walk diagonally alongside the vehicle, and the UI showed him moonwalking in all directions, with only the general trend being correct. One can only imagine the huge noise involved in estimating his velocity and direction from that data. Said another way: garbage in, garbage out.
This could be as far downstream as the UI image rendering and have nothing to do with the cameras, or with anything related to FSD processing at all.
 
As an aside, that suggests that they are looking for specific types of data.
I think you're right, but the specific scenarios they are looking for can change over days and weeks, so if you give up on using FSDb for a route, you could still miss the chance to contribute later.

Of course this is entirely up to you, and I'm not suggesting one thing or another; I'm just noting this point since you implied a motivation to contribute test data.
 
I know some of us were tracking which intersections had lane guidance data... But based on an experience I had today, I don't think the 10.69 branch actually uses that data to navigate yet.

On this one intersection, FSDb always chooses the far-left turn lane despite having a right turn less than a mile afterwards. So I manually positioned the car into the middle lane and reactivated FSDb. The navigation still showed the left turn and indicated that a left was possible from the middle lane, but FSDb would only go straight from that lane and refused to turn left. I took a photo just because it was so odd to see the lane guidance data and FSDb's behavior clashing so obviously.

[Attached photo: PXL_20230114_133617230~3.jpg]
 
It seems quite clear that FSDb is not using exactly the same map information and routing as the UI navigation, or at least can become out of sync with it.

The simplest way to see this is to invoke a nav route that includes a U-turn. For me, the navigation on screen will show the U-turn maneuver up until FSDb declines to do it, typically at the point where the only thing I can really do is take over. At that point, the on-screen nav route will show some kind of crazy re-routing to accomplish its goal, but by then the critical decision point has passed. The point here is not just that U-turns need to be implemented, but that they highlight some of the underlying conflicts and contradictions within the overall system.

I've seen a few other examples, a little harder to describe, like where FSDb suddenly decides that the destination is on the opposite side of the main road, i.e. requiring a left instead of a right, all while the nav UI continues to correctly show (and verbally announce) the originally calculated and correct destination on the right.

Despite these frustrations and my inability to figure out exactly what is going on behind the scenes, I take it as potentially good news - that so many foibles of FSDb could be solved by better route mapping and a more reliable reconciliation of its planning with the usually correct on-screen map.

There is real potential for a significant share of these problems to be resolved in the near future. Sorry if that runs counter to the general "FSD sucks" narrative, but I'm looking hard for these improvements (without excusing the failures).
 
My best explanation for this behavior is that the road-semantics NN is trained on 1+ million intersections (AI Day 2), so when it approaches an intersection, it predicts all lane semantics based on all sorts of inputs (what it sees, and maybe what it saw). Sometimes this prediction is wrong.

The planning behavior is not human-like. The car doesn't look and make judgments based on single visual cues (like "oh I just saw a left turn arrow below, so I'm definitely in the left turn lane"). It simply processes the entire camera data stream and makes a prediction about the road rules, including all the other lanes as well...
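A loose conceptual sketch of that kind of prediction (not Tesla's architecture; the lane classes and numbers below are invented for illustration):

```python
import numpy as np

# Rather than a rule like "saw a left-arrow marking => left-turn lane",
# a network emits a probability over possible semantics for each lane,
# conditioned on the whole image stream.
lane_classes = ["left-only", "left-or-straight", "straight-only", "right-only"]

logits = np.array([0.3, 1.1, 2.6, -0.5])        # pretend output for the middle lane
probs = np.exp(logits) / np.exp(logits).sum()   # softmax

for cls, p in zip(lane_classes, probs):
    print(f"{cls:18s} {p:.2f}")
# If "straight-only" wins, the planner refuses the left turn even though
# the map (and a painted arrow) says a left is legal from this lane.
```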
 
The navigation still showed the left turn and indicated that a left was possible from the middle lane, but FSDb would only go straight from that lane and refused to turn left
There's a difference between lane-selection control before an intersection is even visible and what FSD Beta perceives of the intersection in order to drive through it. For example, Navigate on Autopilot exit selection has always been plain map-data control, since one might need to switch multiple lanes miles ahead of an interchange; FSD Beta inherited this, resulting in unnecessary lane changes when the map data makes it seem like you need to get out of an upcoming turn lane. Whereas the perception of the intersection now makes sure FSD Beta follows road markings like turn arrows, where early versions of FSD Beta (before Safety Score) would happily drive straight from a turn lane and cut off other people because navigation said to go straight.

In your case, the FSD Beta visualization makes the intersection look like 3 lanes in your direction could very well go straight. Maybe it does actually look like that, and many roads with that type of lane structure do not have the middle lane also making a left turn. Even if the neural network "saw" the turn-arrow road markings, it was probably too weak a signal for it to predict that there were 2 lanes that could make left turns; theoretically, more training data with this example would fix it.

Potentially "starting to make use of neural nets for vehicle navigation & control" helps address some of the more common "unnecessary lane change" issues that seem to be from too strictly navigating with bad/incomplete map data.
 
This could be as far downstream as the UI image rendering and have nothing to do with the cameras, or with anything related to FSD processing at all.
It could be, but given the AMD Ryzen upgrade, the release notes about Ethernet throughput improvements, and other circumstantial evidence, I doubt it. I think this is one reason for spectacular release notes about improved target path/velocity estimation and yet only slight real-world improvement.
 
After Tesla swapped the computer, I am hardly getting any nags on the highway or local roads (FSD). Same driver, but a different computer. What a difference! Now I know those were all false FSD strikes.
Interesting.

So, it could be that a subset of the Grumpy-The-FSDb-Can't-Do-Anything-Right crowd is suffering from the effects of a malfunctioning computer. bhakan had issues with nags and FSD strikes that seem, now, to have cleared. And while I've had the occasional nag, it was nothing like what he was complaining about.

Note that whatever self-diagnostics ran on the computer that was replaced, they weren't good enough to catch a bad computer. And that could imply that there are other malfunctioning computers out there with different things wrong inside. Which might explain, well, some (but I'm sure not all) of the complaints around here.

Hmm. I'm getting to be retired these days, but when I was designing hardware for telecom, that hardware had factory diagnostics, out-of-service diagnostics, in-service diagnostics, and background diagnostics. The general idea was kind of like that bit about a tree falling in the woods: if there was a failure, one should be able to catch it.

As time went on, I've run into systems where, well, there might be an error bit somewhere that wasn't monitored by software; in fact, given the push to get hardware out the door, there might be a lot of those error bits. Usually the logic is that if something goes bad on Device A, it will very likely make Device B fail in an obvious way. Hence, if Device B failures are what you monitor, then in principle one doesn't have to monitor Device A. Which is nice when it works, and it saves development time. But it only works well for the failures one knows about (as compared to the unexpected ones that one doesn't).
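A toy sketch of that gap (the register layout and bit names are invented for illustration):

```python
# Pretend hardware status register: bits 1-2 are set, bit 0 is clear.
STATUS_REG = 0b0000_0110

MONITORED_BITS = {
    0: "device B parity error",   # software actually checks this one
}
UNMONITORED_BITS = {
    1: "device A ECC error",      # nobody ever reads this bit
    2: "device A clock drift",    # a silent failure waiting to happen
}

def run_in_service_diagnostics(reg: int) -> None:
    for bit, name in MONITORED_BITS.items():
        if reg & (1 << bit):
            print(f"ALARM: {name}")
    # Device A faults only surface later, if and when they break
    # device B in some visible way -- exactly the gap described above.

run_in_service_diagnostics(STATUS_REG)   # prints nothing despite two latent faults
```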

And I don't want to get into the area where hardware designers don't even put diagnostic hardware into the design. Silent failures, anybody?

As if we don't have enough trouble getting things to work right. Hmm...