Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

FSD Beta 10.69 Release Notes

Teslascope has confirmed the release of FSD Beta 10.69 on Twitter:


Amid this exciting release, one Twitter user shared the release notes they received from Tesla.


FSD Beta v10.69's release notes are as follows:

- Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, e.g. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

- Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.

- Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic ("Chuck Cook style" unprotected left turns). This was done by allowing optimizable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.

- Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.

- Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

- Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.

- Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

- Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

- Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

- Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

- Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

- Enabled creeping for visibility at any intersection where objects might cross ego's path, regardless of presence of traffic controls.

- Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

- Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.

- Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

- Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.

- Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego's lane, especially in intersections or cut-in scenarios.

- Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

- Reduced latency when starting from a stop by accounting for lead vehicle jerk.

- Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.

This is FSD Beta's largest update yet, with many exciting changes! Some of the most notable include creeping for visibility at intersections, fewer false slowdowns at crosswalks, improved unprotected left turns, and smoother protected right turns and driving overall. The update also boasts a 44% reduction in lane-topology error rate, achieved by fusing coarse map data with video so the car can adapt to road changes.
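The "O(objects) instead of O(space)" claim in the kinematics note can be made concrete with a toy back-of-the-envelope sketch. All numbers below are hypothetical illustrations, not anything from Tesla's actual stack: the point is only that a head run once per detected object scales with the handful of objects in a scene, while a dense spatial head pays for every voxel whether or not it is occupied.

```python
# Toy compute comparison: dense per-voxel kinematics vs. a two-stage
# design that runs a small kinematics head once per detection.

def dense_head_cost(grid_cells_per_axis: int, flops_per_voxel: int) -> int:
    # Stage that regresses kinematics at every voxel: O(space).
    return grid_cells_per_axis ** 3 * flops_per_voxel

def two_stage_cost(num_objects: int, flops_per_object: int) -> int:
    # Small head run once per detected object: O(objects).
    return num_objects * flops_per_object

# Hypothetical numbers: a 100^3 voxel grid vs. ~20 detected vehicles.
o_space = dense_head_cost(grid_cells_per_axis=100, flops_per_voxel=50)
o_objects = two_stage_cost(num_objects=20, flops_per_object=5000)

print(o_objects / o_space)  # per-object compute is a tiny fraction
```

Even with a generous per-object budget, the per-object path uses a small fraction of the dense path's compute, which is consistent with the release note's "one tenth of the compute" framing.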


diplomat33

- Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, e.g. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

I wonder if this is designed to solve some of the map issues where the car does not recognize the correct lane? I have an issue on 10.12.2 where FSD Beta completely ignores a left turn lane. It never moves over to the left turn lane to make a left turn and just continues straight and misses the turn. I wonder if 10.69 will fix this issue?
 
- Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
What? Are flying objects an issue for FSD Beta algorithms? As an aside, I think the government has replaced UFO with UAP (unidentified aerial phenomena).
 

tivoboy

Oddly, the occasional hard slowdown before a crosswalk (when nobody was there at all, often in the middle of the night or at dawn) was a frequent issue around here, and it will be one I should be able to verify easily, since it happened every single time I went down Main Street.
 
What? Are flying objects an issue for FSD Beta algorithms? As an aside, I think the government has replaced UFO with UAP (unidentified aerial phenomena).

I assumed that was in reference to an Elon quote from a long time ago (possibly years) saying FSD could respond to objects it didn't recognize. Something along the lines of "a UFO could land in the middle of the road and FSD would be able to avoid it..." I think it may have been on an earnings call or AI day, but don't recall specifically.

Edit - looks like I may have misremembered that. It may have just been this tweet:

 

KArnold

Does anyone know if 2017 models made before Sept 2017 will get the same update? I currently have FSD but am not sure the car has enough cameras for it.
See Wiki - FSD’s Earliest Adopters Still Waiting. Cars prior to 9/17 need MCU2 (not free) and AP3 and updated cameras. The latter two are free if you bought or are subscribed to FSD - but there is a long wait. Some are still waiting after nearly a year in queue.
 
I assumed that was in reference to an Elon quote from a long time ago (possibly years) saying FSD could respond to objects it didn't recognize. Something along the lines of "a UFO could land in the middle of the road and FSD would be able to avoid it..." I think it may have been on an earnings call or AI day, but don't recall specifically.

Edit - looks like I may have misremembered that. It may have just been this tweet:

Ok, thanks. Elon's tweet puts the release notes UFO comment into perspective.
 
This is my first major update. Is it normal for the download to take this long? About how long should I expect? It's been 40 minutes so far and it's at 60%.
Yes, as Supcom says. And to add: the progression is not linear. For example, the % completion might progress smoothly, yet seem to stall at 50% or 60% for a while. Don't panic. In fact, don't watch it at all. Just go to lunch and come back in an hour. :)
 
Some context for understanding the release notes. When you read the word 'recall', it refers to half of a standard performance measurement (precision/recall) for a machine learning model that tries to predict a binary ground-truth condition (yes/no) by emitting a score that scales with probability. So when you see "recall improved by NN%", it means that the neural network predicting a certain internal value has been improved.

On its own the recall % isn't enough; you'd need the recall at a given fixed precision to know the true performance, but of course they don't discuss that publicly. Presumably they chose an operating point at some precision and optimize against it, but they don't tell us what it is.

Conceptually, you can build a precision-recall measurement by taking the many scores (floating-point numbers emitted by the net) and the corresponding ground-truth binary values, sorting by score, and then looking at the cumulative counts/fractions of the corresponding 0's and 1's. An ideal error-free model would rank all the 1's ahead of all the 0's, but that never happens in practice (and if it did, it would likely mean you had a bug and data leakage in your computation).

Precision-recall is one statistical curve you can compute this way (asymmetric in 0/1); the receiver operating characteristic is another (symmetric in 0/1); and there are single numbers, such as area under the curve, that summarize them. But publishing those would give away absolute performance measurements, which nearly nobody shares without an NDA.
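The sort-and-accumulate computation described above fits in a few lines of Python. This is a toy sketch of the generic metric, not anything from Tesla's stack; the scores and labels in the usage example are made up:

```python
def precision_recall(scores, labels):
    """Return (precision, recall) lists, one point per score threshold.

    scores: floats emitted by the model (higher = more confident "yes").
    labels: ground-truth 0/1 values, aligned with scores.
    """
    # Sort predictions by score, most confident first.
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    total_pos = sum(labels)
    tp = 0
    precision, recall = [], []
    # Sweep the threshold down one prediction at a time, tracking
    # the cumulative count of true positives above it.
    for k, (_, y) in enumerate(pairs, start=1):
        tp += y
        precision.append(tp / k)        # of the k flagged, how many are real
        recall.append(tp / total_pos)   # of all real, how many are flagged
    return precision, recall

# Toy example: a model that mostly, but not perfectly, ranks
# positives above negatives.
p, r = precision_recall([0.9, 0.8, 0.6, 0.4, 0.2], [1, 1, 0, 1, 0])
```

At the loosest threshold the model reaches recall 1.0 (every positive flagged) but precision drops, which is exactly the trade-off the operating-point discussion above is about: quoting a recall improvement without the matching precision hides where on this curve the system sits.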
 
- Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, e.g. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

Sooo, they are finally doing what everyone told them they needed to be doing: maps. Not high res lidar maps for positioning, but semantic maps of who goes where to help actual driving beyond basic navigation.

Also, the low resolution of the front cameras means they really need this supplementation (since people can discern things further ahead). Given their large fleet, if they started crowdsourcing these maps from manual driving (like Mobileye does) and enhancing them with soft constraints to drive similarly to humans, they could have a very good product. There are enough manual fleet drivers to keep maps updated reasonably swiftly.

After lanes, learn static obstructions and overpasses, and then lane rules (no right turn on red 7am-4pm).
 

About Us

Formed in 2006, Tesla Motors Club (TMC) was the first independent online Tesla community. Today it remains the largest and most dynamic community of Tesla enthusiasts. Learn more.
