Welcome to Tesla Motors Club

FSD Beta 10.69

I'd recommend you watch Ashok Elluswamy's CVPR presentation, if you haven't already. But this tweet summarizes the part relevant to your comment:


The new volumetric occupancy network does seem to be designed to persist vehicles through occlusions. Hopefully the performance of those predictions will be improved in 10.69.1 and 10.69.2.
Yes, I saw his comments; that's why I was hoping to see something in the visualizations. But the videos I've seen so far do not display any tracking through occlusions. Tesla originally introduced this at AI Day, as I recall. If it were actually doing it, I would expect the visualization to show it.
 
I was hoping to see evidence of the car tracking vehicles through an occlusion
Yeah, it seems like the labelled moving-object detection and visualization still lacks the video module, whereas the lane geometry and now the occupancy network have it. That could mean FSD Beta 10.69 is using the new occupancy flow to decide whether it's safe to cross even with temporary occlusions, but the visualization happens to be using the old behavior.

This could also be the reason for the "extra cautious" comment: if FSD Beta is now depending on the new occupancy network to determine potential collisions, that is quite a significant departure from relying on the existing moving-object predictions. Then again, to be extra cautious, they're probably limiting the scope of the new network, using it only for "low-speed moving volumes" and still relying on the old behavior for high-speed cross traffic, even though that behavior can be confused by occlusions.
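The "occupancy flow for crossing decisions" idea can be sketched roughly like this. Everything below is hypothetical (function names, grid resolution, time horizon); Tesla's actual planner interface is not public. Given the positions of occupied voxels and a per-voxel velocity estimate, roll them forward and check for overlap with the cells the ego car will use:

```python
import numpy as np

def crossing_is_safe(occupied_xy, velocity_xy, ego_path_cells,
                     horizon_s=3.0, dt=0.5):
    """Roll occupied voxels forward under constant velocity and check
    whether any predicted position lands on a cell the ego car will use.

    occupied_xy:    (N, 2) xy positions (m) of voxels deemed occupied
    velocity_xy:    (N, 2) per-voxel velocity estimates (m/s)
    ego_path_cells: set of (x, y) integer cells (1 m grid) on the ego path
    """
    for t in np.arange(0.0, horizon_s + dt, dt):
        future = occupied_xy + velocity_xy * t
        cells = {tuple(np.rint(p).astype(int)) for p in future}
        if cells & ego_path_cells:
            return False  # predicted occupancy intersects the ego path
    return True
```

The point of wiring the planner to occupancy rather than labelled detections: a vehicle hidden behind a truck but still carried by the occupancy network would contribute voxels here, letting the planner refuse a gap that a single-frame detector would call clear.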
 
Yeah, it seems like the labelled moving-object detection and visualization still lacks the video module, whereas the lane geometry and now the occupancy network have it. That could mean FSD Beta 10.69 is using the new occupancy flow to decide whether it's safe to cross even with temporary occlusions, but the visualization happens to be using the old behavior.

This could also be the reason for the "extra cautious" comment: if FSD Beta is now depending on the new occupancy network to determine potential collisions, that is quite a significant departure from relying on the existing moving-object predictions. Then again, to be extra cautious, they're probably limiting the scope of the new network, using it only for "low-speed moving volumes" and still relying on the old behavior for high-speed cross traffic, even though that behavior can be confused by occlusions.
The Chuck Cook video showing the car threading through stopped traffic had his car move into a lane where a hidden car could have T-boned him. It might have been luck that the lane was clear. Neither the car nor Chuck could see this before committing unless the car could track through an occlusion. Otherwise, this was a poor decision that just happened to work out.

Without some evidence that tracking through occlusions is working, I would not permit my car to make that maneuver. I would not make it myself unless I could verify the lane visually as well.
 
Neither the car nor Chuck could see this before committing unless the car could track through an occlusion.
No, I watched the video, and at least from the camera shown, it was clear there was no traffic. Given the vehicles doing the occluding I think this was possible from the driver’s seat in the car as well.

Note I am not suggesting that it works.
 
Yes, I saw his comments; that's why I was hoping to see something in the visualizations. But the videos I've seen so far do not display any tracking through occlusions. Tesla originally introduced this at AI Day, as I recall. If it were actually doing it, I would expect the visualization to show it.

Maybe the expansion of the tweet occluded the second part of my message, but I was saying that since we know the occupancy network is operating in 10.69, we can hope that the wider release versions of 10.69.1 and 10.69.2 would improve the accuracy of those occluded vehicle predictions. I'm assuming the visualizations are programmed to not display vehicle predictions with low confidences.
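That assumption about the display is simple enough to sketch (the field names here are made up; nothing below is from Tesla's code): render a track only while its confidence stays above some threshold, so an occluded vehicle whose confidence decays over time silently drops off the screen even though the tracker may still be carrying it.

```python
def visible_tracks(tracks, min_confidence=0.6):
    """Keep only predicted vehicle tracks confident enough to render.

    tracks: list of dicts with 'id' and 'confidence' in [0, 1]. A vehicle's
    confidence would typically decay while it is occluded, so it disappears
    from the display before the tracker actually drops it.
    """
    return [t for t in tracks if t["confidence"] >= min_confidence]
```

Under that scheme, "no occluded cars in the visualization" would tell us nothing about whether the planner is still tracking them.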
 
No, I watched the video, and at least from the camera shown, it was clear there was no traffic. Given the vehicles doing the occluding I think this was possible from the driver’s seat in the car as well.

Note I am not suggesting that it works.
There was one car that came through on the center lane just before Chuck's car crossed. That car was invisible on the display until it became visible in the camera. Had Chuck's car made its move before that car was displayed, it could have been a problem, especially if it had been moving at normal traffic speed.

There was a little bit of time to see a car since the occluding pickup was a full lane width to the left, but I still don't think it was enough and I would have been very uncomfortable. Too many wrecks happen in just that situation.
 
I don't understand all the negative/smart a** comments in this thread. This release is IMPRESSIVE! It way outperforms my expectations, so far.

I hope what I'm seeing in various videos will also produce similar results in my area.

It's not 100% perfect, but it sure is improving in the right places.

This is very exciting!

Sure, it's impressive for beta-test software, but it's not worth $15k, and there's no way in hell they will complete FSD by the end of the year at this rate, which is why everyone is roasting it.
 
Maybe the expansion of the tweet occluded the second part of my message, but I was saying that since we know the occupancy network is operating in 10.69, we can hope that the wider release versions of 10.69.1 and 10.69.2 would improve the accuracy of those occluded vehicle predictions. I'm assuming the visualizations are programmed to not display vehicle predictions with low confidences.
Perhaps so. Logically, we know that another vehicle that becomes occluded is still out there somewhere and, depending on its speed and time occluded, can have its position predicted with a quantifiable error ellipse.

Now that the occupancy network has been added, I hope we see features like this added to the visualization. It could even help manual driving.
 
Sure, it's impressive for beta-test software, but it's not worth $15k, and there's no way in hell they will complete FSD by the end of the year at this rate, which is why everyone is roasting it.

Only people who held unrealistic expectations are disappointed about this release. You don't strike me as the kind of person to hold unrealistic expectations, but somehow now your expectations far exceed reality. So either you're a closeted mega-fan, or you're a hypocrite doing a very sloppy job of moving the goalposts.
 
I guess we all have to move to Chuck's neighborhood so the cars will drive better. :oops:
The key might be getting Elon's or the FSD team's attention so they send people to your neighbourhood.

This makes me wonder what is happening with all the training data Tesla is receiving from the fleet. Chuck has been grinding away at this turn for ages, wearing out that Camera button, and now it has seemingly been solved in the very next update, after we watched Tesla's team physically out there working on the turn.

Is Tesla regularly sending its own teams to different spots to run through scenarios and build in improvements?
 
Before I purchased the FSD package, all I had to do was watch about 10 FSD Beta videos to realize it's not even close to being ready for street use by the masses. I still purchased it because I wanted the experience and to ride this wave, and it sure is exciting. This is about much more than getting good value out of a car.

I suppose different people have different wants/expectations from current generation FSD Beta.
 
The key might be getting Elon's or the FSD team's attention so they send people to your neighbourhood.
Yeah, you've got a point. My comment that we should all move to Chuck's neighborhood to get better driving was a joke. I think you realize that.
Actually, several of us up in my neighborhood either work for Tesla or consult for Tesla, and still I can't get the roundabouts to work well. :)
 
Remind me again how we know the occupancy network is operating in 10.69?
There are multiple mentions of the occupancy network in the 10.69 release notes:
  • Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
  • Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.
However, like things presented last year at AI Day, there can be a difference between what's actually implemented and deployed versus what's still in-progress internal development. And even if these features are implemented, some aspects might only be available for shadow-mode evaluation.
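To make the release-note wording concrete, an occupancy grid with per-voxel velocity ("occupancy flow") can be pictured like this. All shapes, resolutions, and thresholds below are illustrative guesses, not Tesla's actual parameters:

```python
import numpy as np

# Illustrative shapes only: a coarse 3D voxel grid around the car
X, Y, Z = 160, 160, 16  # e.g. ~0.5 m resolution horizontally

occupancy = np.zeros((X, Y, Z), dtype=np.float32)  # P(occupied) per voxel
flow = np.zeros((X, Y, Z, 3), dtype=np.float32)    # velocity (m/s) per voxel

# A "low-speed moving volume": a blob of voxels occupied and drifting at 1 m/s
occupancy[80:84, 90:94, 0:4] = 0.9
flow[80:84, 90:94, 0:4] = [0.0, 1.0, 0.0]

# Find occupied voxels that are moving, but slowly (thresholds invented here)
speed = np.linalg.norm(flow, axis=-1)
slow_moving = (occupancy > 0.5) & (speed > 0.1) & (speed < 2.0)
```

"Added control for arbitrary low-speed moving volumes" would then mean the planner treats any such voxel cluster as an obstacle with a velocity, whether or not the object detector has put a labelled cuboid on it, which is why odd shapes ("slow-moving UFOs") no longer need a cuboid primitive.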
 
Yeah, you've got a point. My comment that we should all move to Chuck's neighborhood to get better driving was a joke. I think you realize that.
Actually, several of us up in my neighborhood either work for Tesla or consult for Tesla, and still I can't get the roundabouts to work well. :)
Man I would love to have a look into the processes and what's happening behind the scenes, pretty envious that you're involved in any capacity
 