Welcome to Tesla Motors Club

FSD Beta 10.69

I've been saying since the very beginning that reporting does nothing.

Tesla neither responds to the most frequent reports nor addresses them in any way in the release notes.

Maybe someone has evidence to the contrary.

You would at least think that if we're sending clips of the car doing stuff wrong, those would be among the thousands of clips used for training and noted in the release notes. Yet you're right: in almost two years there are certain areas where FSDb continues to make the same mistakes, even with the report button pushed dozens of times.

I bet the reason is that the FSD team is not using the data we provide to improve FSDb as a whole. They are only training against one narrow problem at a time: X% recall/precision improvement for ULTs, VRU detection, latency, unmarked roads, being able to scoot into the bike lane for right turns, and so on.
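For anyone unfamiliar with the metrics mentioned above, recall and precision are the standard detection/planning quality measures. A minimal illustration (the counts are made up, purely for example):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical: out of 100 unprotected left turns (ULTs), the system
# handles 90 correctly (TP), misjudges 10 (FN), and falsely flags
# 5 safe gaps as unsafe (FP).
p, r = precision_recall(tp=90, fp=5, fn=10)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.95 recall=0.90
```

An "X% recall improvement for ULTs" would mean fewer of those misjudged turns, without necessarily touching any other behavior.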
 
I’m pretty convinced follow distance and chill/average/assertive settings don’t do a single thing when on FSDb
I agree, they don't seem to do anything. I've never seen any change in behavior, though I haven't spent tons of time investigating all scenarios so who knows. They certainly don't seem to impact follow distance.

I've been saying since the very beginning that reporting does nothing.

Tesla neither responds to the most frequent reports nor addresses them in any way in the release notes.

Maybe someone has evidence to the contrary.
I wouldn't be surprised. But for me it acts as an indicator of how poorly it's doing; if I'm constantly pressing it, that's a good sign things aren't going well. Kind of cathartic, and the cost is minimal anyway. It also seems like it might matter once they decide to work on a particular problem area.

Anyway, I figure in a couple years they'll cover most of the issues and go through these clips if they're useful to them.

They should just use the FSD Pro mode suggested elsewhere though, and feed that right back to the mothership.
 
I know the FSD team is never going to do this, but I wish for the following:

A "record drive" button where, given a pre-determined route, the car "remembers" all the lanes you drove manually; then, when FSD Beta is activated in the future, it simply attempts to move into those lanes as soon as possible (and stays there unless some obstruction or double-parked car is in the way).
 
I know the FSD team is never going to do this, but I wish for the following:

A "record drive" button where, given a pre-determined route, the car "remembers" all the lanes you drove manually; then, when FSD Beta is activated in the future, it simply attempts to move into those lanes as soon as possible (and stays there unless some obstruction or double-parked car is in the way).

That's essentially HD mapping: doable, and a promising approach used by other AV companies, but not the approach Tesla is trying to take.
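For what it's worth, the "remembered lanes" idea boils down to a tiny lookup table keyed by route segment, which is exactly the kind of prior an HD map encodes. A purely illustrative sketch (segment ids and lane indices are hypothetical):

```python
def record_drive(segments_driven):
    """Build a lane memory: route-segment id -> lane index the human used.
    (Hypothetical segment ids; any stable map-segment key would work.)"""
    return {seg: lane for seg, lane in segments_driven}

def preferred_lane(memory, segment, default=None):
    """Later, under FSD, look up the remembered lane for this segment;
    fall back to normal lane selection when there's no memory."""
    return memory.get(segment, default)

memory = record_drive([("main_st_0", 2), ("main_st_1", 2), ("oak_ave_0", 1)])
print(preferred_lane(memory, "main_st_1"))  # 2
```

The hard part isn't the table; it's localizing the car to the right segment and deciding when an obstruction justifies overriding the memory.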
 
Tesla neither responds to the most frequent reports nor addresses them in any way in the release notes
If you're referring to the video snapshot button, I use it pretty extensively and typically upload 20GB+ each day for situations that FSD Beta perceives incorrectly or could have predicted earlier with more confidence. I've definitely seen improvements since 10.2 and especially with 10.69 such as correctly getting into lanes and not making unnecessary lane changes for places I've sent back video snapshots. I typically count 10 seconds after something of interest before pressing the button so that autolabelling can "see the future" to train new networks such as "deep lane guidance" or improve existing ones, e.g., forking lanes.

I don't bother pushing the button for logic / software 1.0 related issues, but I do press it even for various interventions and disengagements as above. I'm pretty sure video snapshots are not automatically recorded for many disengagements, but Tesla probably reports a trip summary including total disengagements by type, etc.

Tesla is building a system that currently expects human supervision, so the types of reports to address are prioritized differently than for a system that can't rely on a human. Issues like logic for slowing down in school zones have clearly been lower priority than perception issues that result in dangerously driving into oncoming or cross traffic.
 
I've definitely seen improvements since 10.2 and especially with 10.69 such as correctly getting into lanes and not making unnecessary lane changes for places I've sent back video snapshots. I typically count 10 seconds after something of interest before pressing the button so that autolabelling can "see the future" to train new networks such as "deep lane guidance" or improve existing ones, e.g., forking lanes.

Good advice on the 10-second rule. I see many YouTubers press the button the instant something happens, but I assume the AP team has enough premature videos to adjust the record-button programming to include some extra buffer (e.g., 10-20 seconds after the button press).
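The buffering being described is a standard pre/post-trigger recording pattern: keep a rolling window of recent frames, and on a button press save that history plus the next several seconds. A rough sketch of how such a recorder could work (this is a generic illustration, not Tesla's actual implementation; fps and window lengths are made-up parameters):

```python
from collections import deque

class SnapshotRecorder:
    """Rolling pre-trigger buffer: a button press captures the last
    pre_s seconds plus the following post_s seconds of frames."""

    def __init__(self, fps=30, pre_s=20, post_s=10):
        self.fps = fps
        self.post_s = post_s
        self.pre = deque(maxlen=fps * pre_s)  # oldest frames fall off automatically
        self.post_left = 0
        self.clip = None

    def on_button_press(self):
        self.clip = list(self.pre)            # freeze history at press time
        self.post_left = self.fps * self.post_s

    def on_frame(self, frame):
        """Feed every camera frame here; returns a finished clip or None."""
        self.pre.append(frame)
        if self.post_left:
            self.clip.append(frame)
            self.post_left -= 1
            if self.post_left == 0:
                done, self.clip = self.clip, None
                return done                   # clip complete, ready to upload
        return None
```

With a scheme like this, pressing the button "late" is harmless as long as the event is still inside the pre-trigger window, which is presumably why the team can tolerate both instant and delayed presses.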

As for lanes, I doubt the improvement has been due to the report button; more likely from coarse mapping along with better perception (general lane connectivity training) to confirm the coarse maps, etc.
 
but I assume the AP team has enough premature videos to adjust the record button programming to give some extra buffer (e.g. 10-20 seconds after button press).

Yep, no need to worry about it. If the team needs that they’ll make the adjustment. They would expect people to make the report promptly. No need to overthink it. Just report, mash the accelerator and carry on.
 
First drive on 10.69.2 today. It seemed smoother and more confident than 10.2, and it's the very first time it's managed the three-mile drive to my daughter's preschool with zero disengagements. (Tricky narrow winding streets; I've tried a LOT of times.)

But I'd still call it only an incremental update. Drove about 20 miles today and had several disengagements: randomly swerving lanes in an intersection, trying to turn left from a middle lane of traffic, trying to turn right from a middle lane of traffic, a puzzling FCW when there was nothing in front of the car, and then one random panic/take-over-immediately with no apparent cause on a perfectly normal street.

I do wonder whether the car is able to recognize retroactively when it's made a mistake, such as a phantom FCW? It would be great to have a special "sorry, my bad" chime :) (And to remove the ding from the safety score!) Few things more annoying than getting phantom FCW safety-score dings from tree shadows.
 
randomly swerving lanes in an intersection, trying to turn left from a middle lane of traffic, trying to turn right from a middle lane of traffic

I don't get how, with all this time, effort, and beta versions, things like this, which seem so common, straightforward, and just "simple," are such a challenge to get working 100%.
 
I don't get how, with all this time, effort, and beta versions, things like this, which seem so common, straightforward, and just "simple," are such a challenge to get working 100%.
It's definitely an interesting mystery; only Tesla knows! It's a very complicated problem, though: distilling a noisy picture of the environment from multiple cameras, with different alignments on each vehicle, into something the computer can easily understand, without any errors. For us this is effortless! With the very little I know about neural networks, I'm surprised it works as well as it does. Of course, the path planning doesn't seem to use NNs, so programming that must be incredibly complex. So many cases!

FSD still doesn't merge into the bike lane when taking rights. This is all kinds of messed up, since people expect that when driving in OC. Hopefully with better pedestrian awareness it will come soon.
Yep, it either needs to get way over onto the line and/or get into the lane when legal to do so. A real problem and a safety issue. Signaling in a timely manner would be helpful too (not something it typically does).
 
This is why they incorporate vision as well. However, we humans remember past drives of the same path and don't make the same mistake picking a route twice. Currently, because of bad map data, FSD makes the same mistakes over and over again. They at least have to try to make the map data a little better. You can't just use the excuse that "map data will never be 100% correct." As humans, we make lane changes before we ever see the arrows showing it's a turning lane, because map data is ingrained in our brains.

By this logic humans are incapable of driving in different cities/states. I used to be a management consultant and lived in 17 cities in 3 years. Saw plenty of weird ass roads and crazy weather. I just drove wildly conservatively in a new situation. Did I get honked at many times? Yes. I don't care, because I adapted. FSD is meant to adapt with a universal approach.

Also... I question how much of these complaints are just people's driving preferences. Since you're behind the wheel, you still have a lot of the same feelings as when you're the one in control, nannying the system. Very different from when you Uber and you're on your phone the whole ride, not really paying attention to the road.
 
I don’t get how with all this time, effort and beta versions things like this that are seeming so common, straightforward and just “simple” are such a challenge to get working 100%.
It's trivial to program a rule that says "don't turn right from the middle lane of traffic". The difficult part is determining, based on pure vision alone, whether the lane to your right is actually a real lane, and whether/when you're required/expected to get into it in order to make the turn. Tesla is intentionally trying to solve these problems "the hard way", without explicitly mapping/labeling each specific instance, in order to be able to adapt with maximum flexibility to changing roadways and unmapped/novel environments. In the short term (and medium term) this approach will invariably result in some embarrassing fails, but in the longer term Tesla is betting that it will make the system far more robust than it could be made any other way. I agree with them, though I think we still have at least a decade before they achieve L4 autonomy on city streets. 2030 at the earliest.
 
FSD should profile the way you drive, acceleration curve, following distance, turning radius, speed based lane changes, etc... and then it should do its best to emulate your driving style.

The end goal of FSD is robotaxi (and not getting into accidents), not emulating the exact way you want to drive. While I can see some tweaks in slider settings based on location (e.g., California vs. Atlanta vs. NYC), I doubt it will ever go to individual preference, nor do I want it to. It's a distraction.
 
I've been saying since the very beginning that reporting does nothing.

Tesla neither responds to the most frequent reports nor addresses them in any way in the release notes.

Maybe someone has evidence to the contrary.

The autoresponse email you get when you write to fsdbeta at tesla dot com has consistently changed over time. I don't see why they would bother adjusting this email if "reporting does nothing."

I wouldn't say I'm a heavy reporter (email); since 10.2, I've sent maybe 15 issues. But a few things in that list have been addressed, most notably, performance on unmarked residential roads. This is working a lot better on 10.69.2 in terms of positioning itself more consistently to the right.

The latest phrasing in the autoresponse is interesting:

We appreciate your participation and feedback; all feature related feedback will be forwarded to the appropriate engineering team. If any additional information is needed, we will reach out directly.

It suggests that issues related to the firmware, even those not strictly about FSDb, will get forwarded to the corresponding teams. This is great news to me; it gives me another avenue to report UI hardships.

Also, there's a reason why some of the horrible changes made for v11 UI are being undone over time. User feedback.

Just because they don't respond doesn't mean they aren't using the feedback. There's no point trying to guess how they prioritize these issues.


-edit-

Regarding the snapshot button: given the insane amount of data generated from each press, it seems crazy to me that Tesla would do nothing with it. Even if they aren't using my video clip to fix my specific issue, I bet they are finding value in the clip to train for some other purpose. That seems fine to me. Again, we will never guess how or why they prioritize what they work on.
 
FSD drove me 21 miles to the Tysons Tesla service center. For the most part it was totally fine, except at the toll plazas. The first one was OK, but I was almost hitting the toll booth to the right; Tesla is favoring the right side of the road again. The directions were actually right in nav for once. The tricky part is Spring Hill Road, which goes to the dealership: it's a right-hand exit, unmarked on the right side, at the end of the toll booth. The car didn't know what to do, just quit, gave me the red hands, and shut down. I even gave it the turn signal to coach it along. Everything else was fine, though. Hope to get my car back today and drive some more. It was nice to see nav pick the proper route for once.

Last night my car charged into a traffic circle, ready to get T-boned by traffic in the circle. The car showed the blue cars on the instrument cluster, so why didn't it stop??? I sent both clips off to Tesla. Otherwise the car really has been a lot smoother.
 
It's trivial to program a rule that says "don't turn right from the middle lane of traffic". The difficult part is determining, based on pure vision alone, whether the lane to your right is actually a real lane, and whether/when you're required/expected to get into it in order to make the turn. Tesla is intentionally trying to solve these problems "the hard way", without explicitly mapping/labeling each specific instance, in order to be able to adapt with maximum flexibility to changing roadways and unmapped/novel environments. In the short term (and medium term) this approach will invariably result in some embarrassing fails, but in the longer term Tesla is betting that it will make the system far more robust than it could be made any other way. I agree with them, though I think we still have at least a decade before they achieve L4 autonomy on city streets. 2030 at the earliest.

Elon has stated this multiple times, and yes, a fully autonomous system needs to be capable of detecting lanes in the absence of map data or when the map is wrong. Still, map data can be beneficial: it's the silicon equivalent of long-term memory. Like a human driver with local knowledge, the car can select the best lane far ahead, even when visual cues are poor (e.g., snow cover, degraded markings, other cars blocking the view), conditions that would impact even a superhuman vision stack.

From various reports, I gather that older versions of FSD gave more weight to map data, or perhaps lane-level map data is currently ignored entirely. To me, Tesla should seek to improve the map data, at least in a certain reference area, bringing it to a level where it's almost always correct, say 99.9%. Given that there already is a feedback channel, this would not take a lot of effort, especially if done for a limited region only. That would not only improve the user experience but also make it easier to detect cases where FSD gets it wrong.
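One illustrative way a planner could treat the map as a prior rather than ground truth (this is a generic sketch, not Tesla's actual stack; lane ids, scores, and the `map_trust` weight are all made up) is a confidence-weighted blend per candidate lane:

```python
def pick_lane(vision_scores, map_scores, map_trust=0.3):
    """Blend per-lane confidence from live vision with a map-derived prior.
    map_trust discounts the map when it may be stale or wrong.
    Both inputs are dicts: lane id -> confidence in [0, 1]."""
    lanes = set(vision_scores) | set(map_scores)
    blended = {
        lane: (1 - map_trust) * vision_scores.get(lane, 0.0)
              + map_trust * map_scores.get(lane, 0.0)
        for lane in lanes
    }
    return max(blended, key=blended.get)

# Degraded markings: vision is nearly a coin flip, the map breaks the tie.
choice = pick_lane(
    vision_scores={"left": 0.45, "right": 0.50},
    map_scores={"left": 0.95, "right": 0.05},
)
print(choice)  # "left"
```

With a scheme like this, better map data raises the quality of the prior without ever overriding confident vision, which is roughly the "long-term memory" role described above.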


The Mobileye-based Autopilot was first introduced in October 2014. Tesla started from scratch in July 2016 after the breakup with Mobileye, so what we're experiencing today is the outcome of six years of R&D at Tesla. According to Wikipedia, the FSD Beta started in October 2020.

Is your experience so bad that it feels only like 20% done?