Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

FSD Beta 10.69

I think the problem is FSD doesn’t look far enough ahead. When a human drives and the lane starts to widen you look 50 feet down the road and see that it’s actually becoming a turn lane so you pick the lane you want and head towards it. FSD seems to focus on the next 10-20 feet and then suddenly realizes there’s 2 lanes that it needs to decide between.
This! They should have a meta-planner doing a coarse route plan many seconds ahead. It could use map data or cached history (just like a human would) and serve as an input to the more "short term" planner. Obviously safety and comfort, as dictated by the immediate, actual situation, would override its recommendations. This wouldn't be that different from their recent incorporation of map data. The language-processing approach they revealed at AI Day 2 might lend itself to this.

The other approach is brute force: have the local planner look much further ahead. They basically said that some of the improvements came from optimizations that let them "focus" the available compute on the right objects. It wouldn't surprise me if in simulation they can solve the problem with something like an HW4 driving computer, but obviously they want to try to fit it in the current one.
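As a rough sketch of the two-level idea (a coarse, map-informed plan feeding a short-term planner that keeps a safety veto), something like the following. All names and data structures are hypothetical, purely to illustrate the division of labor:

```python
# Hypothetical sketch: a coarse, long-horizon pass picks a target lane
# from map data, and a short-term pass follows it unless the immediate
# scene vetoes it for safety. Illustrative only, not Tesla's code.

def coarse_plan(lanes):
    """Long-horizon pass: among lanes visible well ahead, prefer one
    that continues on the route (i.e. is not a forming turn lane)."""
    on_route = [lane for lane in lanes if lane["continues_on_route"]]
    return on_route[0]["id"] if on_route else lanes[0]["id"]

def short_term_plan(target_lane, blocked_lanes, open_lanes):
    """Short-horizon pass: follow the recommendation unless that lane
    is blocked right now -- the immediate situation always wins."""
    if target_lane in blocked_lanes:
        return open_lanes[0]
    return target_lane

# A widening road: lane 0 is becoming a turn lane, lane 1 continues.
lanes = [
    {"id": 0, "continues_on_route": False},
    {"id": 1, "continues_on_route": True},
]
target = coarse_plan(lanes)      # picks lane 1 well in advance
lane = short_term_plan(target, blocked_lanes=set(), open_lanes=[0, 1])
```

The point of the split is that the coarse pass can commit to a lane 50+ feet out (like a human), while the short-term pass only deviates when the immediate scene forces it.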
 
I think the problem is FSD doesn’t look far enough ahead. When a human drives and the lane starts to widen you look 50 feet down the road and see that it’s actually becoming a turn lane so you pick the lane you want and head towards it. FSD seems to focus on the next 10-20 feet and then suddenly realizes there’s 2 lanes that it needs to decide between.
But the AI and the ML and the giga processors and the DOJO and the single stacks and all of the cutting edge tech that was showcased on AI day.

And yet... FSD still can't decide on a lane and can't consistently distinguish red lights from green: stopped in the left turn lane, the other lights go green while the left turn signal stays red, and the chime still goes off indicating a green light.

However, I am enjoying my newly improved smart summon that came out 09/30/2022 as promised by Elon.

Oh wait...🤣
 
....And yet... FSD still can't decide on a lane and can't consistently distinguish red lights from green: stopped in the left turn lane, the other lights go green while the left turn signal stays red, and the chime still goes off indicating a green light.....
FUD. The red light courtesy chime has absolutely NOTHING to do with FSD Beta. If you had Beta you would know that Beta does an excellent job identifying green turn signals. I would say it has been 100% at green turn signals for me over the last year.

I have had Beta try to run a couple of red lights while still moving (not stopping), but I have NEVER had it sit at a green light. Trying to run reds is a TRUE Beta problem; sitting at greens is NOT.

EDIT: Just to add, on the current Beta it will even make a left turn on a flashing yellow left-turn arrow, even when the straight-through lights have turned red. It did this last week and surprised me that it understood the situation.
 
FUD. The red light courtesy chime has absolutely NOTHING to do with FSD Beta. If you had Beta you would know that Beta does an excellent job identifying green turn signals. I would say it has been 100% at green turn signals for me over the last year.
I agree the red light chime appears to be largely unrelated. Sometimes it's right, sometimes it's wrong; sometimes FSD is right, sometimes FSD is wrong. (I get the sense that FSD is better at its determinations than the red light chime; the below is a counterexample, but N=1 doesn't show FSD is worse than the chime.)

But anyway, it will screw up from time to time. No idea why!

 
I agree the red light chime appears to be largely unrelated. Sometimes it's right, sometimes it's wrong; sometimes FSD is right, sometimes FSD is wrong. (I get the sense that FSD is better at its determinations than the red light chime; the below is a counterexample, but N=1 doesn't show FSD is worse than the chime.)

But anyway, it will screw up from time to time. No idea why!
Yes, running red lights seems to be Beta's biggest flaw (as I stated in my post). No doubt that is far more serious than sitting at a green. I'm fine with ACTUAL criticism of Beta's mistakes, but making stuff up and posting it as a negative is wrong. Let's critique Beta for what it actually does wrong, not unrelated hyperbole.

If you don't have Beta you shouldn't be criticizing it.
 
Yes, running red lights seems to be Beta's biggest flaw (as I stated in my post). No doubt that is far more serious than sitting at a green. I'm fine with ACTUAL criticism of Beta's mistakes, but making stuff up and posting it as a negative is wrong. Let's critique Beta for what it actually does wrong, not unrelated hyperbole.
Moving this response into a reply to this message, as you added to your original post:
I have had Beta try to run a couple of red lights while still moving (not stopping), but I have NEVER had it sit at a green light. Trying to run reds is a TRUE Beta problem; sitting at greens is NOT.
Thanks for the clarification. But I think false positives (going on red) are worse than false negatives (not going on green) in this case.

(As you state in the response above which you posted after I posted the above. We seem to agree!)
 
Moving this response into a reply to this message, as you added to your original post:

Thanks for the clarification. But I think false positives (going on red) are worse than false negatives (not going on green) in this case.

(As you state in the response above which you posted after I posted the above. We seem to agree!)
Here is a post I made a couple of weeks ago about Beta trying to run a red light. I tried this same drive Saturday and it tried the same stunt. Navigation is saying it is a right turn, and Beta is "believing" this over its own "eyes" showing it is straight through on red.

 
...this guy just said the only people who should be concerned with FSD RUNNING RED LIGHTS AND PLOWING THROUGH INTERSECTIONS are the people who have FSD.

Not anyone else on the road.

you cant make this up. 🤣
Making stuff up is what you do about Beta, since you don't have it or any first-hand experience operating it.

I'm the driver of the car and 100% responsible for what the car does, and I take over before it happens. So it doesn't actually run red lights; it just tries to (for me, probably 3 or 4 times in a year of testing). So it would be the driver RUNNING RED LIGHTS AND PLOWING THROUGH INTERSECTIONS, not FSD Beta.
 
The issue of the car centering itself on the expansion of the seemingly drivable area has been around for over a year. It is one of my major complaints: if it occurs where the added pavement is for a turn lane, it signals to drivers possibly entering from the right, and to those behind you, that you intend to turn right; then suddenly the car moves back to the left and continues straight. I am certainly not an AI expert, but it seems like a delay in lane centering would go a long way toward eliminating this.
What would really solve this, and many other problems, is mapping: use crowdsourced maps of where human-driven cars actually drive and average their paths. This isn't high-bandwidth data; it's low-bandwidth.

Another problem is improper lane choice for turns. Again, solved by crowdsourced path mapping: what do humans do, on average, to get from 'here' to 'there'? Sure, the machine learning should be able to guess OK without maps, but most cars drive most of the time in crowded areas.

I suspect the barrier is an ideological directive from the top.
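The path-averaging idea above can be sketched in a few lines. This is a hypothetical illustration, not anything Tesla has described: each recorded drive is a list of (x, y) points already resampled to the same length, and the "map" is just their element-wise mean.

```python
# Hypothetical sketch of crowdsourced path averaging: each path is a
# list of (x, y) points resampled to the same length; the consensus
# path is their element-wise mean. Illustrative only.

def average_path(paths):
    """Element-wise mean of equal-length (x, y) traces."""
    n = len(paths)
    return [
        (
            sum(p[i][0] for p in paths) / n,
            sum(p[i][1] for p in paths) / n,
        )
        for i in range(len(paths[0]))
    ]

# Three human drives through the same widening section
# (x = distance along the road in meters, y = lateral offset):
paths = [
    [(0.0, 0.0), (10.0, 0.4), (20.0, 1.1)],
    [(0.0, 0.1), (10.0, 0.6), (20.0, 0.9)],
    [(0.0, -0.1), (10.0, 0.5), (20.0, 1.0)],
]
consensus = average_path(paths)  # final point is approximately (20.0, 1.0)
```

And that is why the bandwidth is low: each car only needs to upload a sparse point trace per road segment, not video.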
 
I’m one of the most recent additions into FSDb.

One thing that I’ve noticed that is divergent from all of my previous AP experience and something that I so far really dislike:

If I’m going, say, 45 in FSDb and I see something ahead that I want to slow down for, previously I would scroll the speed down with the right scroll wheel, say to 25. In my previous experience with AP the car would gradually slow down. With FSDb the car seems to completely ignore the right scroll wheel input IF the speed is reduced below the current speed.

Anyone else experience this?
Yes, mine does this, really badly. I've emailed the FSDb team about it. It usually happens when I'm approaching a speed zone: I roll the speed down manually to give the car more time to slow, and it still blows past the speed limit sign at 30+ over the limit. It almost got me a ticket once. Now I roll it down, give it until the speed limit sign to slow, and when it doesn't, I disengage with the brake and send a snapshot to make sure it's being recorded.
 
it doesn't actually run through red lights but just tries to
So we have the green light chime on non FSD cars that works poorly/inconsistently.

And the response to that is "that's not the FSD code/stack! That is TOTALLY different! The FSD code/stack has AI and ML and Gigaprocessing and the superior-to-radar DOJO Vision Only and No Lidar and NNs, and it handles the red/green lights differently, as it sometimes will actually attempt to drive THROUGH the RED LIGHT".

 
I don't think any driving behavior is determined directly by a neural network at the moment, is it?

I think only perception is driven by neural networks, and personally I haven't noticed any degradation in perception performance in varying scenarios. Unless there's a drastic reduction in perception confidence or trajectory forecasting that's not visualized on the display.

But assuming perception quality is relatively equal across all circumstances, what would explain day-to-day variation in FSDb performance?

Actually it is.

They take the output from their perception system (environment/objects/dynamic actors) and now also industry-standard multi-modal prediction of the dynamic actors (they moved away from running a copy of FSD on the other dynamic actors, which I criticized them for last year).

Trajectory Generation
  1. They generate physics-based trajectory optimizations from classical robotics control algorithms.
  2. They also generate trajectories from a trained ML (neural) planner (imitation learning) as seeds.

These trajectories are candidates, and there are hundreds/thousands of them; at this point quantity trumps quality, but the car has to narrow them down and pick one to execute.

They then run a parallel tree search algorithm on these trajectory candidates.

Trajectory Scoring

Finally, after generating a ton of trajectories, they run checks on each candidate to see which ones pass.

Scoring Algorithms:
  1. Collision checks (will this maneuver lead to a collision?)
  2. Comfort checks (is the steering/acceleration/braking profile of this trajectory comfortable?), etc.
  3. A discriminator neural network trained on pure human driving data. Its job is to determine whether the trajectory candidate is close to one a human would drive given the input.
  4. A neural network trained on bad examples (intervention data). Its job is to determine whether it's likely the human driver would take over. Think about when you disengaged because the car got too close to a pedestrian, was about to mount the curb or clip a parked car, or made some herky-jerky wild movement, etc. The network's goal is to train on these bad examples to figure out whether the trajectory candidate could lead to a human disengagement. The hope is that this network generalizes to figure out whether a trajectory is uncomfortable, unsafe, etc.
The goal of these checks is to filter out the bad/suboptimal trajectories, leaving the good ones to pick from.

Tesla wasn't doing this in 2021; they have completely abandoned their previous driving-policy architecture, which made me wonder how much Cruise's 2021 planning-architecture reveal influenced their new one.

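The generate-then-score pipeline described above can be sketched roughly as below. The scorers are simple stand-in heuristics for the real collision checks and neural scorers; every name, feature, and threshold is illustrative only:

```python
# Hypothetical sketch of the generate-then-score pipeline: produce many
# candidate trajectories, filter with hard checks, then rank survivors
# with a "human-likeness" score. All heuristics/thresholds illustrative.

def generate_candidates():
    """Stand-in for the physics-based optimizer + neural planner; each
    candidate carries the features the scorers will inspect."""
    return [
        {"id": "a", "min_gap_m": 0.2, "max_jerk": 1.0, "humanlike": 0.9},
        {"id": "b", "min_gap_m": 2.5, "max_jerk": 4.0, "humanlike": 0.8},
        {"id": "c", "min_gap_m": 3.0, "max_jerk": 1.2, "humanlike": 0.7},
    ]

def passes_collision(t):
    """Check 1: keep a minimum clearance to other objects."""
    return t["min_gap_m"] > 0.5

def passes_comfort(t):
    """Check 2: reject harsh steering/accel/braking profiles."""
    return t["max_jerk"] < 3.0

def humanlike_score(t):
    """Check 3: stand-in for the discriminator network's score."""
    return t["humanlike"]

def pick_trajectory(candidates):
    """Filter with the hard checks, then pick the most human-like
    survivor. (A real planner would also score predicted
    disengagement risk, check 4 above.)"""
    survivors = [t for t in candidates
                 if passes_collision(t) and passes_comfort(t)]
    return max(survivors, key=humanlike_score)

best = pick_trajectory(generate_candidates())  # only "c" passes both checks
```

Note how the hard checks act as gates (candidate "a" fails the gap check, "b" fails comfort) and the learned score only ranks whatever survives, which matches the filter-then-pick structure described above.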
 
Actually it is.

They take the output from their perception system (environment/objects/dynamic actors) and now also industry-standard multi-modal prediction of the dynamic actors (they moved away from running a copy of FSD on the other dynamic actors, which I criticized them for last year).

Trajectory Generation
  1. They generate physics-based trajectory optimizations from classical robotics control algorithms.
  2. They also generate trajectories from a trained ML (neural) planner (imitation learning) as seeds.

These trajectories are candidates, and there are hundreds/thousands of them; at this point quantity trumps quality, but the car has to narrow them down and pick one to execute.

They then run a parallel tree search algorithm on these trajectory candidates.

Trajectory Scoring

Finally, after generating a ton of trajectories, they run checks on each candidate to see which ones pass.

Scoring Algorithms:
  1. Collision checks (will this maneuver lead to a collision?)
  2. Comfort checks (is the steering/acceleration/braking profile of this trajectory comfortable?), etc.
  3. A discriminator neural network trained on pure human driving data. Its job is to determine whether the trajectory candidate is close to one a human would drive given the input.
  4. A neural network trained on bad examples (intervention data). Its job is to determine whether it's likely the human driver would take over. Think about when you disengaged because the car got too close to a pedestrian, was about to mount the curb or clip a parked car, or made some herky-jerky wild movement, etc. The network's goal is to train on these bad examples to figure out whether the trajectory candidate could lead to a human disengagement. The hope is that this network generalizes to figure out whether a trajectory is uncomfortable, unsafe, etc.
The goal of these checks is to filter out the bad/suboptimal trajectories, leaving the good ones to pick from.

Tesla wasn't doing this in 2021; they have completely abandoned their previous driving-policy architecture, which made me wonder how much Cruise's 2021 planning-architecture reveal influenced their new one.


Thanks for this input, but two things.

First, what's your source on them moving away from running their planning network from the point-of-view of other dynamic objects? I know they discussed physics-based simulations for highway scenarios, but I was still under the impression that they're running the planner for slow-speed scenarios (e.g. negotiating which car goes first along a narrow road).

And second, while this may explain variation in behavior with a lot of dynamic objects, it doesn't explain variation in scenarios without them. I have a protected right turn along one of my routes with a left turn about a mile after, and FSD will occasionally turn right into the right-most lane and occasionally into the middle lane. Same set speed, same weather conditions, and the turn is protected such that there are no vehicles possibly in the path of the car.
 
Is anyone who has FSD Beta incredibly disappointed with it? It gives you a violation and stops working with literally no warning: no sound, no visual cue. It also seems like you have to exert way too much force on the steering wheel when prompted, often taking it out of Autopilot.

It spends way too much time thinking about turns, especially right on red.

Overall, I turned it off, because it’s actually worse than EAP.
You'd have to give us something comparable to use as a yardstick. What else have you experienced that caused you to be "incredibly disappointed"?
 
Much as it was hyped I'd kinda hope 69.3 is on a new base firmware anyway
Has it been hyped? I certainly hope I get my movable repeater camera locations with that build though. That's going to be the huge huge major earth-shattering deal with 69.3 hopefully.

Anyway, other than that, 69.3 is not going to be anything special. I think we can all readily agree that it will be no big deal, right? No major changes, just incremental improvements, with most of the issues remaining. Now that is progress! They will just keep on making progress! Very exciting, just not really any change to the fundamental product quality.

Also hoping to get another beer. Need to cash in on my second still, before there is a chance for it to get nullified!
 
So we have the green light chime on non FSD cars that works poorly/inconsistently.

And the response to that is "thats not FSD code/stack! That is TOTALLY different!

I mean...it's true. The FSDb version of green/red light detection is much much better than whatever code the chime uses. Green light chime very regularly goes off when the adjacent light changes, while FSDb rarely if ever has any such issues.

I don't see why that's complicated to understand?
 
I mean...it's true. The FSDb version of green/red light detection is much much better than whatever code the chime uses. Green light chime very regularly goes off when the adjacent light changes, while FSDb rarely if ever has any such issues.

I don't see why that's complicated to understand?
sorry...the "going through RED lights"...got me. I apologize. LOL
 
Is anyone who has FSD Beta incredibly disappointed with it? It gives you a violation and stops working with literally no warning: no sound, no visual cue. It also seems like you have to exert way too much force on the steering wheel when prompted, often taking it out of Autopilot.

It spends way too much time thinking about turns, especially right on red.

Overall, I turned it off, because it’s actually worse than EAP.
It is Beta for a reason
 