**IF** this is true, then Tesla is fully closed loop (aka NO HUMANS IN THE LOOP FOR TRAINING and FULLY UNSUPERVISED).

Huge step!


I track my Model 3's data usage on WiFi, and I can confirm that upload volumes are no longer tied to how often I press the clip button. You can see a sample of the data usage for last week, and despite pressing the clip button at least once per day, it was autonomously deciding what data to upload and when:

 
So, pressing the clip button doesn’t do anything anymore?
 
Really enjoyed this video with Omar (Whole Mars Catalog) doing a full drive with Cruise.

Seems like Cruise is not doing UPLs or UPRs (defining these as turns at lights or stop signs where you have the right of way); in other words, it is NOT making ANY decisions around moving target vehicles on a collision vector for turns.

After their accident earlier this year while doing a UPL, they completely disabled UPLs for a few weeks, I think, but restored them later after adjusting the software.
 
What does it take to get FSD beta, currently?

I was hanging around 95 on the safety score, and then a road trip to/from LA and dealing with the traffic there killed my average. Had our Model Y about 2 months, still haven't been invited in.
 
Reviewing some FSD Beta 10.69.3 footage... I have to admit, it handled this intersection extremely well! If the time stamp doesn't work, it's about 7 min into the video.


I liked how it waited for the train crossing lights to go out, like you are supposed to do, unlike the other drivers who did not, and one of them almost got a barrier arm dropped on them because of it.
 
So, I've been going over the 10.69.3 release notes and trying to put them into more easy to understand language.

Does any of this below help? Or what do y'all think of it?

Tesla release note, translated into layman's terms and how it applies to the user experience:

Release note: Upgraded the Object Detection network to photon count video streams and retrained all parameters with the latest autolabeled datasets (with a special emphasis on low visibility scenarios). Improved the architecture for better accuracy and latency, higher recall of far away vehicles, lower velocity error of crossing vehicles by 20%, and improved VRU precision by 20%.
In plain terms: Object detection takes the output of the Occupancy map and identifies, classifies and probably prioritizes each item as an "Object of Interest" that is tracked by the Occupancy Flow map. This is the new way that 'raw-as-you-can-get-from-the-camera-sensor' photons are digitized to pixels and then turned into a 3D representation of the world.

The autolabeled dataset is what the NN is trained from; it is constantly refreshed as new data comes in from the fleet and trained on huge GPU clusters. Tesla has opted to prioritize "low visibility scenarios" like fog, mist, low light, blinding sun rays, etc. The new dataset requires retraining of the NNs, and Tesla has done this on "all parameters", which means everything the NNs are looking for has essentially been reset and is new to the model. Nothing has been kept or held over from the last trained model; Tesla has opted to start completely fresh.

The new architecture refers to Tesla changing how the NN layers are built. NNs have many layers, each with different properties for inputs and outputs. These layers could have been modified to interact differently, making them more efficient. This is common practice when building out NNs.

The reason for all of this is to get better at detecting cars at the very limit of what the cameras can capture, better at estimating the speed of vehicles in crossing lanes, and to decrease false positives (i.e. seeing things that aren't there: ghosts and improper slowdowns) for VRUs (pedestrians, bikes... Vulnerable Road Users).
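Since these notes lean heavily on "precision", "recall" and "false positives", here's a tiny illustrative Python sketch of what those metrics mean for a detector (my own toy example, nothing to do with Tesla's code): fewer false positives means fewer ghosts and phantom slowdowns, higher recall means fewer missed far-away vehicles or VRUs.

```python
# Toy detector metrics (illustrative only, not Tesla code).
# TP = real object correctly flagged, FP = "ghost" (flagged but not real),
# FN = real object the detector missed.

def precision(tp: int, fp: int) -> float:
    """Of everything flagged, how much was real? Higher precision = fewer ghosts."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Of everything real, how much was caught? Higher recall = fewer misses."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Example: 90 pedestrians correctly detected, 10 ghosts, 5 missed.
print(precision(tp=90, fp=10))  # 0.9
print(recall(tp=90, fn=5))      # ~0.947
```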
 
Just got notified and installed a FSD beta update. Went for a test drive.

Surprise, surprise: I almost did not need to touch the steering wheel at all for the 9-minute drive and didn't get any warning to keep my hands on the wheel.

Almost zero disengagements. The only time I touched the wheel was on a left turn on a red in heavy traffic (I was already waiting inside the intersection): as my 3 started to move and crossed two lanes, a car in the shoulder lane decided to run the red. My 3 froze in the intersection for a split second and I decided to take over.

Overall this version feels far better than previous versions.



Follow-up: to be sure FSD Beta really no longer needs hands on the wheel, I took a second test drive after dinner. Drove 15 minutes and did not need to touch the wheel at all, and there was no warning to keep hands on the steering wheel. The only exceptions (about 8 times in total) were when the fog was so heavy that even I could not see the road clearly: FSD disengaged itself with the big red steering wheel icon, loud chirps, and a message asking me to take over due to poor visibility. A few seconds later I was able to re-engage each time. Overall, based on these initial drives, I would give FSD Beta a high mark so far.

Photo below was when the fog was not that heavy and little traffic. Photo taken on a cell phone.
 
Sounds good, but I am having a hard time visualizing the situation you described?
 
- I was in the intersection in heavy traffic waiting to turn left
- opposing side had two lanes and a left turn lane - 3 in total
- lights turned amber but traffic still flowed
- lights turned red and the opposing cars in the left turn lane and passing lane stopped. No car was in the curb lane
- my 3 started to complete the left turn
- a car on the curb lane came to the intersection and ran the red
- FSD beta braked to a full stop
- even after the curb lane car passed, my 3 stayed in the intersection blocking traffic for perhaps 0.5 to 1 second without moving

Edit: I reviewed the snapshot recording and it was not exactly how I described above, but close enough that I won't bother spending more time on this. Overall FSD had zero chance of causing or becoming involved in an accident in this one case, and that is what's important.
 
Here's my full notes for 10.69.3.

This is a big update! And it is cool as it is focused on comfort, rather than safety.

Anywho, my summary and things to look for when driving this build are at the end...

Tesla release notes, translated into layman's terms and how each item applies to the experience:

Release note 1: Upgraded the Object Detection network to photon count video streams and retrained all parameters with the latest autolabeled datasets (with a special emphasis on low visibility scenarios). Improved the architecture for better accuracy and latency, higher recall of far away vehicles, lower velocity error of crossing vehicles by 20%, and improved VRU precision by 20%.
In plain terms: Object detection takes the output of the Occupancy map and identifies, classifies and probably prioritizes each item as an "Object of Interest" that is tracked by the Occupancy Flow map. This is the new way that 'raw-as-you-can-get-from-the-camera-sensor' photons are digitized to pixels and then turned into a 3D representation of the world.

The autolabeled dataset is what the NN is trained from; it is constantly refreshed as new data comes in from the fleet and trained on huge GPU clusters. Tesla has opted to prioritize "low visibility scenarios" like fog, mist, low light, blinding sun rays, etc. The new dataset requires retraining of the NNs, and Tesla has done this on "all parameters", which means everything the NNs are looking for has essentially been reset and is new to the model. Nothing has been kept or held over from the last trained model; Tesla has opted to start completely fresh.

The new architecture refers to Tesla changing how the NN layers are built. NNs have many layers, each with different properties for inputs and outputs. These layers could have been modified to interact differently, making them more efficient. This is common practice when building out NNs.

The reason for all of this is to get better at detecting cars at the very limit of what the cameras can capture, better at estimating the speed of vehicles in crossing lanes, and to decrease false positives (i.e. seeing things that aren't there: ghosts and improper slowdowns) for VRUs (pedestrians, bikes... Vulnerable Road Users).

Release note 2: Converted the VRU Velocity network to a two-stage network, which reduced latency and improved crossing pedestrian velocity error by 6%.
In plain terms: Ego should not wait as long after a ped has crossed in front, and should be more confident about proceeding before a ped crosses in front. (A guess at what a "two-stage" network can look like is sketched in code right after this table.)

Release note 3: Converted the NonVRU Attributes network to a two-stage network, which reduced latency, reduced incorrect lane assignment of crossing vehicles by 45%, and reduced incorrect parked predictions by 15%.
In plain terms: Ego should be able to make turns faster when cars are in lanes other than the lane that ego is entering or crossing. Ego should be able to tell when a car is parked vs waiting in traffic or waiting for its turn to proceed.

Release note 4: Reformulated the autoregressive Vector Lanes grammar to improve precision of lanes by 9.2%, recall of lanes by 18.7%, and recall of forks by 51.1%. Includes a full network update where all components were re-trained with 3.8x the amount of data.
In plain terms: Huge update to the lane connectivity graph to make the car perform better when lanes intersect, merge and diverge. Ego should not miss as many turns due to confusing lanes.

Release note 5: Added a new "road markings" module to the Vector Lanes neural network which improves lane topology error at intersections by 38.9%.
In plain terms: Road markings are painted words and symbols like "Stop", "Turn Only", a turn line, or turn and straight lines. Ego should miss less turns due to being in the wrong lane.

Release note 6: Upgraded the Occupancy Network to align with road surface instead of ego for improved detection stability and improved recall at hill crest.
In plain terms: Ego will understand the world much better as it comes over a hill and have less unnecessary slowdowns.

Release note 7: Reduced runtime of candidate trajectory generation by approximately 80% and improved smoothness by distilling an expensive trajectory optimization procedure into a lightweight planner neural network.
In plain terms: Ego will determine which way is the best way to go much faster, and smoothness of turning is improved as it is now controlled by a fast, simple NN. (A sketch of what this kind of "distillation" means is also given right after this table.)

Release note 8: Improved decision making for short deadline lane changes around gore areas by richer modeling of the trade-off between going off-route vs trajectory required to drive through the gore region.
In plain terms: Ego will go out of the lane in order to make lane changes when lanes diverge or merge and it needs to happen quickly. The gore region is simply the area between two lanes where they diverge or merge.

Release note 9: Reduced false slowdowns for pedestrians near crosswalks by using a better model for the kinematics of the pedestrian.
In plain terms: Like #2, this improves how ego deals with peds crossing, where it will 'do the right thing' more often. In other words, it will go when it should and NOT go when it shouldn't.

Release note 10: Added control for more precise object geometry as detected by the general occupancy network.
In plain terms: Guessing here as this description is pretty vague, but this seems like the first mention of using smaller voxels for parking situations. As it becomes more precise at close distances and slower speeds, it will be able to measure distance more accurately. Should become way better than the sonar pucks as this control gets better.

Release note 11: Improved control for vehicles cutting out of our desired path by better modeling of their turning/lateral maneuvers, thus avoiding unnatural slowdowns.
In plain terms: Oh thank goodness for this one. Ego should not wait for a car to completely leave the lane in order to proceed. Currently, ego will wait until the target car is completely out of the lane of travel when the target car makes a right turn.

Release note 12: Improved longitudinal control while offsetting around static obstacles by searching over feasible vehicle motion profiles.
In plain terms: Ego should proceed at a more natural speed when a car passes in a tight scenario. Currently, ego will come to a complete stop on unmarked roads, so hoping this is much improved.

Release note 13: Improved longitudinal control smoothness for in-lane vehicles during high relative velocity scenarios by also considering relative acceleration in the trajectory optimization.
In plain terms: This is another big one and really needed. Ego will now act more normally when coming up to a stopped car that has just started accelerating, as well as during the #11 scenario of a car turning right and braking for the turn.

Release note 14: Reduced best case object photon-to-control system latency by 26% through adaptive planner scheduling, restructuring of trajectory selection, and parallelized perception compute. This allows us to make quicker decisions and improves reaction time.
In plain terms: Just like it says in the last sentence... ego will overall be making faster decisions.
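For the two "converted to a two-stage network" rows (#2 and #3): this is just my guess at what "two-stage" means here, since the notes don't say, but a common pattern is a heavy first stage that turns each detected object into a compact feature vector and a small second stage that predicts per-object attributes (velocity, parked or not) from those features, which cuts latency compared with one monolithic network. A minimal PyTorch-style sketch of that pattern, with made-up sizes:

```python
# Toy two-stage sketch (my guess at the general pattern, not Tesla's architecture):
# stage 1 does the expensive feature extraction per object, stage 2 is a cheap
# head that predicts attributes from those features.
import torch
import torch.nn as nn

class StageOneFeatures(nn.Module):
    """Heavy network: turns an image crop around each detected object into features."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, feat_dim))
    def forward(self, crops):
        return self.backbone(crops)           # (num_objects, feat_dim)

class StageTwoAttributes(nn.Module):
    """Small head: predicts per-object attributes, e.g. a 2-D velocity."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 2))
    def forward(self, object_feats):
        return self.head(object_feats)         # (num_objects, 2) -> vx, vy

stage1, stage2 = StageOneFeatures(), StageTwoAttributes()
feats = stage1(torch.randn(4, 3, 64, 64))      # 4 object crops from the scene
velocities = stage2(feats)                      # cheap per-object pass
print(velocities.shape)                         # torch.Size([4, 2])
```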
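And for the trajectory row (#7): "distilling an expensive trajectory optimization procedure into a lightweight planner network" generally means letting the slow optimizer label data offline and training a small net to imitate it, so at runtime you only pay for one cheap forward pass. A toy sketch of that idea (again just my illustration, with a fake optimizer standing in for the real one):

```python
# Toy distillation sketch (illustrative only): a slow "teacher" optimizer produces
# trajectories offline; a small MLP learns to reproduce them.
import torch
import torch.nn as nn

def expensive_optimizer(states: torch.Tensor) -> torch.Tensor:
    """Stand-in for a slow trajectory optimizer (fake, fixed nonlinear mapping)."""
    weights = torch.randn(8, 20, generator=torch.Generator().manual_seed(0))
    return torch.tanh(states @ weights)          # 20-point "optimal" trajectory

planner = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 20))
opt = torch.optim.Adam(planner.parameters(), lr=1e-3)

for step in range(200):                          # distillation loop
    states = torch.randn(256, 8)                 # random driving states
    with torch.no_grad():
        teacher = expensive_optimizer(states)    # slow teacher, run offline
    loss = nn.functional.mse_loss(planner(states), teacher)
    opt.zero_grad(); loss.backward(); opt.step()

# At runtime: one cheap forward pass instead of running the optimizer.
fast_trajectory = planner(torch.randn(1, 8))
print(fast_trajectory.shape)                     # torch.Size([1, 20])
```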
SUMMARY
Nearly all of this stuff is for comfort, rather than safety. Currently, ego is acting slow and too conservative. These changes allow ego to act much more human-like by making quicker, more confident and smoother decisions. Lots of these updates are targeted at less unnatural slowdowns.
Things to test / pay attention to with this build:
- When cars make rights in front of ego: should be much more human-like and not cause someone behind to honk
- When cars leave the lane of travel while braking: should be much more human-like and not cause someone behind to honk
- When peds are around the lane of travel
- What ego does on unmarked roads: it should continue at a more natural speed when another car passes
- What ego does in a tight space while a car is coming the opposite way: it should continue at a more normal speed
- Does ego make faster decisions overall?
- Does ego make smoother turns?
- Are there less unnatural slowdowns overall?
- Does ego make turns into a lane even though there is a car in the adjacent lane? It hasn't done this well in the past.
 
Thanks! A few questions.

Is it fair to characterize replacing chunks of open code with neural nets instead, as sort of going from what a novice driver does (thinking about each tiny thing) to what an experienced driver does (reacting intuitively based on experience)? So, faster and cheaper decision-making leading to a better drive. I'm thinking specifically of "distilling an expensive trajectory optimization procedure into a lightweight planner neural network". If so, can we expect lots of similar speedups to be coming?

Will this version reduce the number of totally bogus forward collision warnings when I am doing the driving? Meaning, is it the same software in the safety system?

Making decisions faster will mean reacting sooner. So not only should this be safer, but it should also be more wiggly. I mean, unless something has been done to smooth things out by looking ahead, this moment's small tweak will be reversed by the next moment's small tweak in the opposite direction. So before, where it wouldn't have done either small tweak, now it will do both because it's faster. At least it might, if nothing has been done to prevent it. Looks to me like Kim Paquette saw some of that in her test drive of 10.69.3 (see here from 8:05-8:50).

Lastly, purely for the benefit of your readers, please use "fewer" rather than "less" when appropriate. It hurts my eyes. Thanks again for your contribution and consideration. And, if you anticipate this explanation ever being seen by non-aficionados, you might want to consider removing jargon (e.g. "ego") and abbreviations, as it won't make your explanations much longer than they are.
 
Thank you for the feedback and for the above, the answer is less, I mean yes! :cool:

The old way was direct coding, otherwise known as heuristics (tolerance bands): a very basic way of saying, if you stay in the center of the lane all the time, everything will be great. Well, we know that is not true; we apex turns, we don't jerk the steering wheel around every 0.5 seconds when we are 1 cm off the middle of the lane, and we generally drive in a more comfortable manner. The NNs (Neural Nets) are doing the latter in a much better way.

And yes again, we can expect many more speedups in the future as they take giant, clunky versions of older NNs and heuristics and turn them into much lighter-weight (faster) NNs. There is seemingly no limit to making this faster; first principles would say that, in theory, you just need two frames of voxels (the Occupancy map showing the 3D world) in order to make a prediction about what to do in the third (next) frame. I'm talking fast fast!
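To make the "two frames of voxels" idea a bit more concrete, here's a toy sketch (purely my own illustration with made-up shapes, nothing like the real stack): stack two consecutive occupancy frames as channels and have a tiny network predict the occupancy of the next frame. The point is just how small the input can be once the world is already an occupancy grid:

```python
# Toy next-occupancy predictor (illustrative only, made-up sizes): two consecutive
# top-down occupancy frames in, predicted occupancy for the next frame out.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 2 frames of a 2-D occupancy grid, stacked as channels.
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # occupancy probability
        )
    def forward(self, frame_t_minus_1, frame_t):
        x = torch.stack([frame_t_minus_1, frame_t], dim=1)  # (batch, 2, H, W)
        return self.net(x).squeeze(1)                        # (batch, H, W)

model = NextFramePredictor()
f0, f1 = torch.rand(1, 128, 128), torch.rand(1, 128, 128)
predicted_next = model(f0, f1)
print(predicted_next.shape)   # torch.Size([1, 128, 128])
```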

And yes again, it will reduce unnecessary slowdowns (i.e. ghosting and false positives, so better precision), but NOT the forward collision warnings while the human is driving, as that is separate code. Hopefully they will use this for the safety system in the future, but that code has to be uber, ultra cautious and must have far fewer false negatives (a missed warning for something that was actually on a collision course), i.e. very high recall.
 
Kim's drive that @Bet TSLA linked above is great!

I wrote up some notes...

9:00 - Left at a blind stop sign was great! Demonstrates how the front camera has a much better vantage than the human driver!

11:04 - Notice that the planner (the wiggly blue line) goes grey, and that is why the car stops. This is a great learning case for the neural planner and object detection NN, as it needs to have more confidence in some tight situations so it does NOT come to a full stop due to losing a planned path.

11:29 - Two stop signs which are offset longitudinally about 10' from each other seem to be causing the creep wall to be placed too far forward into the intersection, probably by ~10'. I don't know what is causing this, as I don't know what inputs the creep wall NN is ingesting. Could be the lane connectivity graph or the object detection NN. Regardless, this might be easily addressed by putting it into simulation (like placing the sign 20' back, or 10' into the intersection, or having three stop signs all in different places... you get the idea), training out the behavior, and/or ensuring that the creep wall placement is NOT determined by where the stop sign sits relative to the intersection.

12:40 - Notice the blue van on the left? It is actually parked in a no parking zone, obscuring the turn. Regardless, FSD has enough room to make this turn. However, the Neural Planner loses confidence to make the left turn as it has a false positive (sees a car that isn't there) on the opposite side of the road from the blue van. Great example of the object detection network having a false positive!

13:00 - Stop signs are again offset by about 10', but this time it seems to put the creep wall in the correct spot. Why? I'm guessing that since the road is bigger, the placement isn't perfect but feels a lot better to the driver. An interesting data point for figuring out the issue at 11:29.

13:14 - FSD seems to detect and control for peds prior to visualizing them on the screen. Or is it just slowing down from 24 to 23 MPH due to the car in front waiting at the stop light? Regardless, FSD handles this really well!!!

13:29 - So cool to see a smooth U-turn!!!