FSD Beta 10.69

Slow down sooner for stop signs?! Mine abruptly drops from 45 to 10 like 300-400 feet from the stop sign and then crawls toward it at 10 mph the rest of the way. It’s super bizarre and pisses off anyone behind me, so I frequently have to disengage when someone is following.

FSD needs to be trained to use regen braking to slow smoothly to a stop at the right distance, the way an experienced Tesla driver does.
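For a sense of scale, here is a back-of-the-envelope constant-deceleration sketch (the ~2 m/s² "comfortable regen" figure is my own assumption, not a Tesla number): starting to slow roughly 330 ft out from 45 mph is about right for one smooth stop, so the odd part isn't where it starts slowing, it's dropping straight to 10 mph instead of tapering.

```python
# Back-of-the-envelope stop-distance check, assuming constant deceleration.
# The 2 m/s^2 "comfortable regen" figure is an assumption, not a Tesla spec.

MPH_TO_MS = 0.44704
M_TO_FT = 3.28084

def stopping_distance_ft(speed_mph: float, decel_ms2: float = 2.0) -> float:
    """Distance (ft) needed to stop from speed_mph at a steady decel_ms2."""
    v = speed_mph * MPH_TO_MS           # convert to m/s
    return v ** 2 / (2 * decel_ms2) * M_TO_FT

print(round(stopping_distance_ft(45)))  # ~332 ft: about where slowing should begin
print(round(stopping_distance_ft(10)))  # ~16 ft: crawling at 10 mph from 300 ft out is far too early
```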
Yep, that is weird. I don't know why mine likes to accelerate toward a stop sign just 150 feet away and then brake harshly.
 
Just downloaded 10.69.2.3; slightly worse than before. Trying FSDb at night:

1. I was in the rightmost lane, within 1,000 ft of a right turn, with clear traffic. It decided to move into the left lane and then, within 200 ft, cut sharply back across the right lane and into the turn lane to make the turn.
2. With less than 100 ft before the right turn, it decided to take the left lane and then slowed to a halt at the junction, still trying to make the right turn. I took over.
3. I think this is the same as before: on poorly marked lanes it tends to drive in the middle if there is no car in the next lane. To make it worse, the road had just been patched, with two different colors of old and new concrete, some clear markings and some not so good, though the visualization always looks fine :)
4. Why does it have to brake hard when the car in front makes a right turn into a gas station or shop? It could just slow down and still be safe. Sometimes it even taps the brakes when a car cuts into the lane with plenty of distance, where even maintaining speed would be safe.

Oh well, wait for the next one ...
 
No, not exactly. Your head is far ahead of the B-pillar camera. You can peek out before the B-pillar camera can, so FSD needs to creep out further than a human does.

Exactly…so the sensor suite is insufficient…

The cameras are at vantage points that are better in almost all cases than a human in the driver's seat. Anyway, Elon and the AP team have constantly evaluated the cameras, their placement, and all that. Elon has said a couple of times already that the cameras and their placement are good enough for their goals (2-10x human safety). That's why you see the same cameras in the same locations in all the cars, past and future, up through 2023-2024 (Cybertruck).

The only argument I see constantly from fsd detractors is the argument for perfection. They'll come up with some hypothetical situation and then say fsd will never be able to navigate it. That's fine, but Tesla's goal isn't perfection.
 
Blind spots are solved with software, creeping techniques and whatnot. If humans can deal with blind spots, so can FSD, with the right software. There are many instances where I am totally blind in a certain direction, but I can still manage by creeping out slowly and hoping whoever is coming will stop for me; if not, I back up again.

Getting blinded by sunlight or rain: this will be an issue regardless of how many cameras you have.
Humans don't deal well with blind spots! A top-down full 360° view is a very desirable feature, well liked by many human drivers, and it helps with parking without collisions.

Cameras mounted flush against the windscreen are more likely to be blinded by sunlight and distorted by rain on the glass than human eyeballs sitting 75 cm back. These are known optical issues in photography, which is why lenses use long dark baffles to preclude glare.
 
2.3 isn't that great for me either, vs 2.2. Check out this 2.3 fail (I wonder what it's thinking because it clearly sees those cross cars and highlighted them in blue):


It sees them in front, but not sufficiently well at a wide angle. This could be a perception problem: the resolution at angles far from center, where it has to use the wide-angle camera, is worse than human vision, and it possibly didn't see the cars from far enough away because they were fuzzy blobs. This is where a good imaging radar with wide angular coverage could be helpful.

It has to nudge itself out into traffic to get a better look (a possible danger). Front corner cameras covering the left and right arcs would have removed the need to do so and improved its judgment and perception of oncoming cars.

It also clearly doesn't have good maps with speed limits.
 
It sees them in front, but not sufficiently well at a wide angle. This could be a perception problem: the resolution at angles far from center, where it has to use the wide-angle camera, is worse than human vision, and it possibly didn't see the cars from far enough away because they were fuzzy blobs.

It also clearly doesn't have good maps with speed limits.

I've watched 90% of his test loop videos, and there are versions where it does fine with that turn, with cross traffic in either direction.

I think 2.3 has some changes; in this case, 2.3 decided to wait in the road (had he not taken over). We've seen this type of behavior in prior versions.

CORRECTION! FSD Beta seems to think the middle part of that road is a suicide lane, but his refresh S dash doesn't show it that well. So this isn't really a perception problem with cross traffic; it's more of a training problem with interpreting certain roads with suicide lanes:

 
Something else correctable through mapping. Humans map for themselves, in addition to having better semantic understanding and stereo vision.

The thing is, I've seen humans do exactly what fsd beta was trying to do here. I personally wouldn't do it myself (use the middle space of the road to wait).

I don't think fsd beta should do what it did (for my driving style), but it doesn't really have anything to do with humans and semantic understanding IMO. There is large variability in human driving behavior.
 
The thing is, I've seen humans do exactly what fsd beta was trying to do here. I personally wouldn't do it myself (use the middle space of the road to wait).

I don't think fsd beta should do what it did (for my driving style), but it doesn't really have anything to do with humans and semantic understanding IMO. There is large variability in human driving behavior.

Humans might do that because they aren't sure how wide the 'center' lane is or haven't driven there enough. Good mapping would help it do better than the average human, especially one who isn't familiar with the area. It would know, for sure, whether it's prudent to use the center to stop in.

I think it's necessary to use technology and fleet awareness to do better than humans in many common cases, because the AI will perform worse in other, diverse, unusual cases.
 
Yep, that is weird. I don't know why mine likes to accelerate toward a stop sign just 150 feet away and then brake harshly.
The most likely scenario is that you saw the stop sign before the car did, because its perception or computational speed isn't as good. And it's not using a map-based prior to look for the stop sign, which would let it identify the sign earlier from otherwise ambiguous visual data. Underneath, there are probabilistic classifiers looking for various objects and outputting continuous scores that reflect the relative estimated likelihood of an object and its specific classification among the various options.

With a map-based prior, you would expect a sign at a certain location, which could be inexpensively (computationally) translated into an expected location on the sensor. That would increase the score at that location for the specific class of target object, so it would be identified earlier. It would never force a detection where no visual evidence exists.
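A minimal sketch of how that score boost could work (all names and numbers here are hypothetical, not Tesla's actual pipeline): the map projects an expected sign location into the image, and any nearby candidate that already has some visual evidence gets its stop-sign score nudged up so it crosses the detection threshold earlier; a zero score is never boosted, so the prior can't invent a sign.

```python
# Hypothetical sketch of boosting a detector's class score near a
# map-predicted sign location. Not Tesla's pipeline; names are made up.

from dataclasses import dataclass

@dataclass
class Detection:
    region: tuple   # (x, y) image location of the candidate object
    scores: dict    # class name -> raw classifier score in [0, 1]

def apply_map_prior(det: Detection, expected_xy: tuple, cls: str,
                    radius: float = 50.0, boost: float = 0.15) -> float:
    """Return the (possibly boosted) score for `cls`.

    The boost only raises an existing score; with zero visual evidence
    (raw score == 0) the prior never forces a detection.
    """
    raw = det.scores.get(cls, 0.0)
    dx = det.region[0] - expected_xy[0]
    dy = det.region[1] - expected_xy[1]
    near_expected = (dx * dx + dy * dy) ** 0.5 <= radius
    if raw > 0.0 and near_expected:
        return min(1.0, raw + boost)
    return raw

# A fuzzy, distant stop-sign candidate near where the map says one should be:
d = Detection(region=(410, 220), scores={"stop_sign": 0.42})
print(apply_map_prior(d, expected_xy=(400, 215), cls="stop_sign"))  # 0.57
```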

The fusion of a map-based prior with current data could be done even more intelligently with a machine learning model, assuming it had ground truth from manual or auto-labelling that reversed time. (Just as fusion of radar could be done with good ML.) This would also help it learn how to deal with cases where the map is wrong because the world changed (you train with some wrong mapping inputs).
 
Humans might do that because they aren't sure how wide the 'center' lane is or haven't driven there enough. Good mapping would help it do better than the average human, especially one who isn't familiar with the area. It would know, for sure, whether it's prudent to use the center to stop in.

I think it's necessary to use technology and fleet awareness to do better than humans in many common cases, because the AI will perform worse in other, diverse, unusual cases.

Ok, I'm looking at that intersection, and it does seem like the middle part is big enough to act as a suicide lane (so the perception is accurate in this case). Again, I wouldn't do it myself:

[Attached image: Screen Shot 2022-10-08 at 6.56.07 PM.png]


 
I haven't either, and I'd be delighted to skip 69.2.3 and be an early 69.3 recipient. While some are feeling "placebo" improvements, even Musk dismissed 69.2.3 as minor. And if Musk doesn't promote it even a little, then it MUST be insignificant.

I believe the placebo effect is very easy to fall for with Beta. You can have one drive on 10.69.2.3 (or whatever version) that feels like it's from the future, like version 15.x.x. Then you can repeat the same drive and it will feel like you're back on version 1. If you have a few drives like the former, you're quick to believe it's a significant improvement and post that. Then next week you'll have a few drives more like the latter and realize it wasn't such a big improvement after all, or any improvement at all.

Beta is just too dynamic to easily quantify nuanced differences, though as humans we are all guilty of trying.
A model is just a model. Remember linear regression:

y = a + bx, right?

Wrong. It's actually:

y = a + bx + an error term.

I'm simplifying things drastically, but it's the same with FSDb. With every firmware update, Tesla reports a % reduction in errors for specific components of FSDb. But there's still a residual error term. Its magnitude probably varies with geography, traffic, lighting, time of day, and maybe even specific vehicles. The error term will never be zero. It just has to be a lot better than the "error term" that human drivers have.
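As a toy illustration of that residual term (just a sketch with made-up numbers, nothing to do with FSD internals): fitting a line to noisy data always leaves residual scatter, and better estimates of a and b only shrink it toward the noise floor, never to zero.

```python
# Toy regression showing the irreducible error term; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)  # y = a + bx + error

b, a = np.polyfit(x, y, 1)            # fitted slope and intercept
residuals = y - (a + b * x)
print(f"fit: y = {a:.2f} + {b:.2f}x, residual std = {residuals.std():.2f}")
# The residual std stays near 1.0 no matter how well a and b are estimated;
# a better model can only shrink it, never eliminate it.
```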

And beyond errors in the NN model, every driver has a unique disengagement threshold. I'm too careful, and my wife is even quicker to disengage FSDb. And then there are a few influencers who take risks with FSDb. The variety in human behavior also affects how each driver perceives FSDb.
 
Had FSD do something neat while letting it drive me back from picking up pizza. A kid was pacing in a restaurant parking lot near the road, walking toward the road and then back toward his mom, sort of soldier-like. He pivoted and was marching boldly back toward the road, still probably 10-15 feet from it, when FSD saw him up ahead, predicted he might keep going into the street, and slowed down significantly to keep from potentially running a kid over.

After the car slowed down, the kid got chewed out by his mother for freaking me out (really FSD). It was exactly what I would have done, because the kid looked like he might just march right out into the road, and it's better to slow down and have more reaction time if a kid does something stupid.

Fsd has also done quite well at giving space to pedestrians walking parallel to the road that are close to the curb or walking on the shoulder.

I also noticed that my car, when I engaged FSD, was able to see the pedestrian and stop. However, a pedestrian who had started to cross the road and did not see my car making a left turn (a big truck was parked to the right of my car) was scared, fell down, and was very upset. FSD did an amazing job, but the pedestrian was startled. See the dash cam video here.

 
10.69.2.3 appears not to weave toward parked cars on wide residential streets. The earlier versions would drift close to the edge of the street, narrowly missing parked cars, drift back to center, and then drift toward the street's edge again. Very odd, un-humanlike behavior; this version seems to get rid of most of that.
 
I also noticed that my car, when I engaged FSD, was able to see the pedestrian and stop. However, a pedestrian who had started to cross the road and did not see my car making a left turn (a big truck was parked to the right of my car) was scared, fell down, and was very upset. FSD did an amazing job, but the pedestrian was startled. See the dash cam video here.

Really want to see what happened before this, or at least a Google pin of the location. This was a very bad line for the car to take, and unnatural, which is (probably) why the pedestrian was upset (though a bit of an overreaction, I feel; it's very hard to tell velocities and such in the video, and it's possible the car was just going too fast and that was an appropriate reaction).
 
This was a very bad line for the car to take and unnatural
Well, the truck blocking the view of the pedestrian was parked in a no-parking zone. It's probably intentionally "no parking" so that pedestrians standing on the curb extension are more visible. The path that FSD Beta took wasn't that unreasonable, as the bike sharrow is basically centered.

[Attached image: startled truck.jpg]
 
Well, the truck blocking the view of the pedestrian was parked in a no-parking zone. It's probably intentionally "no parking" so that pedestrians standing on the curb extension are more visible. The path that FSD Beta took wasn't that unreasonable, as the bike sharrow is basically centered.

Can’t really tell what it did without more video. Impossible to judge vs. normal human.

The truck in the no-parking zone is fine (normal, OK). Obviously in such situations you use caution. No idea here; it's hard to tell from the TeslaCam video.

Just don't know why it wasn't on the yellow line, which is basically what everyone would do in this case. Was it after a turn? Google link (I have not tried to find one)?
 