FSD Beta 10.69

This makes me wonder what is happening with all the training data Tesla is receiving from the fleet
The fleet video snapshots are most likely used not only to train a given network architecture to its natural learning limits but also to design new architectures that can capture the nuances. In some sense, that could also be why Tesla doesn't need more frequent FSD Beta releases: continuous data collection, even from the "outdated" 10.12 build, still yields useful auto-labeled training data.

While the specific "Chuck Cook style" intersection was the most publicly watched, Tesla now has a huge corpus of unprotected left turns that can be queried to specially train occupancy, e.g., using Radiance Fields (with neural networks [NeRF] or without [Plenoxels]), and to evaluate whether a new approach that determines an appropriate creep limit from visibility would work, without even needing to deploy it via shadow mode.
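
As a toy illustration of the "creep limit based on visibility" idea (entirely my own sketch; the grid representation, cell size, and function names are assumptions, not anything Tesla has described): given a 2D occupancy grid around the ego car, creep the ego cell forward until a straight sightline to a point on the cross street is free of occupied cells.

```python
import numpy as np

def sightline_clear(occ: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> bool:
    """True if the straight line between two grid cells crosses no occupied
    cell. occ[r, c] == 1 means occupied (e.g., a hedge or a parked truck)."""
    n = max(abs(r1 - r0), abs(c1 - c0), 1)
    for i in range(1, n):
        r = round(r0 + (r1 - r0) * i / n)
        c = round(c0 + (c1 - c0) * i / n)
        if occ[r, c]:
            return False
    return True

def creep_limit_m(occ, ego, target, cell_m=0.5, max_creep_m=5.0):
    """Advance the ego cell forward (decreasing row index; grid assumed
    large enough) until `target` on the cross street becomes visible;
    return that creep distance in meters, capped at max_creep_m."""
    (r, c), (tr, tc) = ego, target
    for step in range(int(max_creep_m / cell_m) + 1):
        if sightline_clear(occ, r - step, c, tr, tc):
            return step * cell_m
    return max_creep_m
```

With logged clips of real turns, a candidate policy like this could be scored offline against what actually became visible, rather than waiting on shadow-mode statistics.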
 
Many of the snapshots are likely worthless because there may not be an obvious issue. Since there's no good way to explain why the snapshot was recorded, I'm sure that many of my snapshots would be difficult for a person to decipher, let alone for an automated process that receives millions of clips per day.
 
This is mostly what I'm suspecting: that the snapshots might be used for training object detection, but not necessarily actual maneuvers.

All this makes me wonder if the actual maneuver training, etc., is happening the old-fashioned way: with humans employed by Tesla out in test vehicles, running scenarios in the real world.
 
I'm sure that many of my snapshots would be difficult for a person to decipher, let alone for an automated process that receives millions of clips per day
Categorizing what happened and what's problematic in a snapshot is indeed difficult, but unless Tesla is trying to accurately quantify the types of failures, just having any video can be quite valuable. As part of auto-labeling a snapshot, with the ability to look into the future, the system can automatically determine whether the real-time evaluation would have resulted in a misprediction. These evaluations don't even need to come from the same version of the network that originally sent back the video; they could even come from an in-development network.
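
To make that concrete, here is a minimal sketch of the idea (the clip format and names are my own invention, not Tesla's): replay a logged clip, compare the prediction the car made in real time at each frame against an "oracle" label computed offline with access to future frames, and keep the disagreements as training candidates.

```python
# Hypothetical sketch: find frames where the real-time prediction
# disagrees with an offline label computed using future context.
def flag_mispredictions(online_preds, oracle_labels, min_gap=0.3):
    """online_preds / oracle_labels: dicts mapping frame index to a score
    in [0, 1] (e.g., probability that a lane connects straight through
    the intersection). Returns (frame, online, oracle) tuples where the
    two disagree by at least min_gap."""
    flagged = []
    for frame, pred in sorted(online_preds.items()):
        truth = oracle_labels.get(frame)
        if truth is not None and abs(pred - truth) >= min_gap:
            flagged.append((frame, pred, truth))
    return flagged
```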

For example, maybe you pressed the button because you prevented FSD Beta from making an incorrect lane change leading up to an intersection. Ideally the video includes the time when you were actually closer to or past the intersection, where the offline auto-labeler would be able to determine the correct lane connectivity, and now Tesla can build a "deep lane guidance" module that correctly picks a lane, say, 10 seconds earlier. The automated process could look for situations where there was a late or sudden change to the lane prediction; while this won't correctly capture every scenario sent back, it likely finds enough examples in their corpus to be useful.
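
A query for that late-change pattern could be as simple as scanning the logged lane predictions for a flip that happened close to the intersection (again purely illustrative; the data layout is assumed):

```python
def has_late_lane_flip(lane_preds, late_m=30.0):
    """lane_preds: list of (distance_to_intersection_m, predicted_lane_id)
    pairs in driving order. True if the predicted lane changed after the
    car was already within late_m meters of the intersection."""
    prev = None
    for dist, lane in lane_preds:
        if prev is not None and lane != prev and dist <= late_m:
            return True  # late flip: a candidate clip for auto-labeling
        prev = lane
    return False
```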

Even if your snapshot isn't used now, presumably it's stored somewhere so that in the future, if your specific problem still exists and is actively being looked at, a new query could retrieve your data to address the current priorities. Although that's a whole separate topic: how does Tesla prioritize what to focus on, given that there are long-standing issues people have complained about in FSD Beta?
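
In that world, retrieving old snapshots later is just a metadata query over the stored corpus; something like this hypothetical filter (the index schema is invented for illustration):

```python
def query_corpus(index, scenario="unprotected_left", needs_occlusion=True,
                 limit=10_000):
    """index: iterable of clip-metadata dicts. Pull clips matching the
    current engineering priority, regardless of which build recorded them."""
    hits = (clip for clip in index
            if clip["scenario"] == scenario
            and clip["has_occlusion"] == needs_occlusion)
    return [clip for _, clip in zip(range(limit), hits)]
```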
 
...
Now that the occupancy network has been added, I hope we see features like this added to the visualization. It could even help manual driving.
This is an interesting sub-discussion about when and whether the occluded objects that the Occupancy Network knows about will be shown in the visualization. To me there's an interesting question regarding the safety of using the visualization as a manual driving aid. We can start by assuming that Tesla does not yet have near-100% confidence regarding false positives and false negatives coming from the Occupancy Network.

So if they activate a visualization that might have false negatives (failing to show occluded objects), one could argue that's no worse than what we have now (never showing them). But if it shows them more than 90% of the time yet still misses some, that's more dangerous, because people start to over-trust a visualization that has false negatives. And a few false positives might seem less problematic, but they could actually contribute to the over-trust syndrome, i.e., people thinking the system is super-careful in looking out for them, when actually it's just imperfect.

Personally, I find the (non-Beta) visualization interesting, but I'm very uncomfortable looking down and over at it vs. looking at the road. And the increasingly active cabin camera isn't comfortable with me doing that either! I look at it more when sitting at a stoplight than in any kind of complex driving scenario.

So considering all of the above, I could understand if Tesla made an active decision to wait for more development and testing before turning on Occupancy Network visualizations. It could also be that there's been no active decision, and the Occupancy Network's connection to the visualization module simply isn't finished yet.
 
The focus on visualizations has always struck me as odd for those ^^ reasons. They would make more sense right in front of you in a HUD, rather than over on a screen, when you're using a system that requires eyeballs forward and will potentially ding you for looking away from the road. Mercedes has a really excellent augmented-reality HUD + ADAS interface.

At least FSD's visualizations are useful for armchair analysis of videos uploaded by Beta YouTubers, but otherwise it feels a bit like putting the cart before the horse.

I'm a bit wary about this new blue creep barrier too, because I already see people in the 10.69 comments saying things like "Let the vehicle do its thing, the blue barrier is there!" The driver should be looking forward, and this imaginary barrier could give a false sense of confidence and let the car do something it shouldn't.
 
The visualizations are best used for post-drive analysis, as we see with Chuck Cook's videos. They certainly should not be watched while executing a dangerous maneuver! Eyes on the road, of course.

However, I do like the creep barrier because it could help people gain some confidence in the car. Without it, I expect that many people disengage as the car is creeping because they fear the car will creep into the intersection. By placing the creep barrier in the visualization and showing the car's relationship to it, people can see that the nose of the car is really not (hopefully!) sticking into the lane of traffic.

Chuck Cook often commented about not being comfortable with how far his car crept, even though the drone footage clearly showed the car outside of the cross lane. Once the creep barrier showed up, he seems to be much more comfortable with the creep.
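
What the barrier communicates is simple geometry. A toy check (my own framing and parameter names, not anything from Tesla's renderer), in a road frame where the car points in +y and the barrier is the line y = barrier_y:

```python
def nose_past_barrier(rear_axle_y: float, wheelbase_m: float,
                      front_overhang_m: float, barrier_y: float) -> bool:
    """True if the front bumper pokes past the creep barrier line."""
    nose_y = rear_axle_y + wheelbase_m + front_overhang_m  # bumper position
    return nose_y > barrier_y
```

Rendering that one comparison on screen arguably tells the driver more about how exposed the nose is than a glance out the windshield does.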
 
The focus on visualizations has always struck me as odd for those ^^ reasons. They would make more sense right in front of you in a HUD, rather than over on a screen, when you're using a system that requires eyeballs forward and will potentially ding you for looking away from the road. Mercedes has a really excellent augmented-reality HUD + ADAS interface...
I can say that living in the middle of a big city with pedestrians, scooters, bikes, construction, trucks, cars, double parking, and my CRAZY-ass Beta, I never look at the visualizations. The only time I look at them is in the burbs, on the highway, or when stuck in traffic or at a red light.

Just look at this obtuse Beta driving yesterday (Sunday morning). Would you be watching the visualizations? 😲😳

 
It didn't seem risky at all to me. It was an easy turn, and with the correct capabilities it should be possible to make it. Remember that previously there was ample evidence that it could not "understand" occluded vehicles and would go at incorrect times (which led to disengagements).

Now the question is whether it can be done reliably and safely, now that it seems to have the basic capabilities and framework to be able to successfully proceed. Can it do it "100%" of the time as Elon said? Right now it clearly has issues but are they just going to be resolved with simple tuning, or are there issues with capability (not clear what that would be at the moment)?
You keep saying all that: Oh, it's an easy turn. It isn't. Even Chuck in his recent video refers to another ULT "that has better visibility" than "his" ULT. You just want to belittle Tesla's achievement by saying that it was actually an easy maneuver.
 
Man, I would love to have a look into the processes and what's happening behind the scenes; I'm pretty envious that you're involved in any capacity.
Well sir, I am more of an electronics design guy than a software one. While it may look interesting and exciting from the outside, there are lots of people pulling their hair out in a stressful long-hours environment. Young people a lot smarter than me. I try to cut them some slack any way I can. I'm no one special. Your job is actually probably more exciting and rewarding :)
 
How can you trust the creep barrier when the car goes right through it, then pretends there was no creep barrier? So much for that trust.

[Attached screenshots: Creep1.jpg, Creep2.jpg]
 
Oh, it's an easy turn. It isn't. Even Chuck in his recent video refers to another ULT "that has better visibility" than "his" ULT. You just want to belittle Tesla's achievement by saying that it was actually an easy maneuver.
We'll have to disagree. It really is easy. For a human. You can see that in his videos: he talks about human drivers doing complex maneuvers there all the time. I did not claim that this turn has the best visibility of any turn, so of course there are turns with better visibility (no creeping required). This one simply has good visibility, completely adequate to complete the maneuver safely after a creep.

I think accomplishing this successfully a lot of the time is an excellent step forward for Tesla. Now they need to perfect it, and start by getting to the first 9. (Or maybe the second, depending on how you count failures. Again, I am a stickler about counting failures because I am predicting when I would take over. Remember Chuck is not taking over except for safety issues since he is demonstrating the system, so sometimes he does not intervene when any normal human would. I’m not really interested in a driving system where I have to take over every half mile, hence I apply reasonable requirements about what is acceptable).

I’m obviously supportive of what Tesla is trying to do here, and I don’t know why you think otherwise.
 