Welcome to Tesla Motors Club

Decoding FSD Beta 9.2 release notes

Terminator857

Active Member
Aug 5, 2019
1,397
1,565
Ca
> Clear-to-go boost through turns on minor-to-major roads (plan to expand to all roads in v9.3)
Accelerates faster during turns.

> Improved peek behavior where we are smarter about when to go around the lead vehicle by reasoning about the causes for lead vehicle being slow.
When does it peek? When it is needed most? Right before a decision needs to be made?

> v1 of the multi-modal predictions for where other vehicles (are) expected to drive. Partial implementation
Multi-modal means multiple modes: FSD will carry predictions for the different possibilities of what a car will do. For example: 70% chance it will continue straight, 20% chance it will turn, 10% chance it will stop. Multi-modal may also mean that the algorithm used depends on the situation, for example when near an intersection.
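As a toy illustration of that 70/20/10 idea (not Tesla's actual model), a prediction head can score a few candidate maneuvers and normalize the scores into probabilities with a softmax:

```python
import math

# Toy multi-modal prediction: score a few candidate maneuvers and turn
# the scores into probabilities with a softmax. The maneuver names and
# logits are invented for illustration.
def softmax(scores):
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

modes = ["continue_straight", "turn", "stop"]
logits = [2.0, 0.75, 0.05]               # hypothetical network outputs

probs = softmax(logits)                  # roughly 0.70 / 0.20 / 0.10
prediction = dict(zip(modes, probs))
# A planner can then hedge against every mode above some probability
# threshold instead of committing to the single most likely one.
```

The key point is that the output is a distribution over possible futures, not a single guess.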

> New lanes network with 50k more clips (almost double) from the new auto-labeling pipeline.
More lanes mapped? Is this a right turn lane, a bike lane, etc...

> New VRU velocity model with 12% improvement to velocity and better VRU clear-to-go performance.
VRU = vulnerable road user: pedestrian, bike, e-scooter, etc. Thanks to Daniel in SD for that.

> Model trained with "quantization-aware training" (QAT), an improved technique to mitigate int8 quantization.
Networks are trained in floating point, but for inference the weights and activations are quantized to int8 to save memory and compute. That conversion introduces rounding errors. QAT simulates the int8 rounding during training so the network learns to compensate, which gives better overall performance once the model actually runs quantized.
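A minimal sketch of the int8 round-trip that QAT has to cope with, assuming simple symmetric quantization (the scale choice, names, and values are illustrative, not Tesla's code):

```python
# Floats are mapped to the signed int8 range with a scale factor, then
# mapped back ("fake quantization"). QAT runs this round-trip during
# training so the network learns weights that survive it.
def quantize_int8(x, scale):
    q = round(x / scale)
    return max(-128, min(127, q))   # clamp to the signed int8 range

def dequantize(q, scale):
    return q * scale

weights = [0.123, -0.457, 0.891, -1.2]
scale = max(abs(w) for w in weights) / 127.0   # simple symmetric scale

roundtrip = [dequantize(quantize_int8(w, scale), scale) for w in weights]
errors = [abs(w - r) for w, r in zip(weights, roundtrip)]
# Each error is bounded by half a quantization step; this rounding error
# is exactly what QAT exposes the network to during training.
```

Post-training quantization applies this round-trip only after training; QAT's advantage is that the optimizer sees the error and adjusts for it.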

> Enabled inter-soc synchronous compute scheduling between vision and vector space processes.
They have a new task scheduler that is synchronous. In other words, discrete time blocks are allocated to the computer-vision and vector-space processes, coordinated across the two SoCs.
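Assuming the plain reading of "synchronous compute scheduling", here is a toy Python sketch of fixed per-frame time slots; the task names and millisecond budgets are invented:

```python
# Toy sketch of synchronous compute scheduling: each process gets a
# fixed time slot per frame, in a fixed order, instead of being
# preempted at arbitrary points, so latencies are deterministic.
def build_timeline(schedule, frames, frame_period_ms):
    timeline = []
    for frame in range(frames):
        t = frame * frame_period_ms
        for task, budget_ms in schedule:
            timeline.append((task, t, t + budget_ms))  # (name, start, end)
            t += budget_ms
        # All slots must fit inside one frame period.
        assert t - frame * frame_period_ms <= frame_period_ms
    return timeline

# "vision" always runs (and finishes) before "vector_space" in every
# frame, so the downstream consumer sees complete, fresh inputs.
schedule = [("vision", 12), ("vector_space", 8)]
timeline = build_timeline(schedule, frames=3, frame_period_ms=25)
```

The design win is determinism: every frame, each stage knows exactly when its inputs are ready and how long it has to run.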

> Planner in the loop is happening in v10.
The planner decides what FSD is going to do. "In the loop" suggests it will also be given a discrete amount of time to compute, as opposed to an unbounded amount of time that gets interrupted as needed by the task/process scheduler.

> Shadow mode for new crossing/merging targets network which will help improve VRU control.
There is a new algorithm being tested in shadow mode (running without controlling the car) that predicts VRUs (pedestrians, bikes, e-scooters, etc.) crossing the car's path or merging into it.
 

diplomat33

Average guy who loves autonomous vehicles
Aug 3, 2017
9,452
13,419
Terre Haute, IN USA
> Improved peek behavior where we are smarter about when to go around the lead vehicle by reasoning about the causes for lead vehicle being slow.
When does it peek? When it is needed most? Right before a decision needs to be made?

I am thinking peek behavior refers to figuring out the best time for initiating a passing maneuver. So if the car decides that yes, it needs to pass a slow-moving lead car, it then needs to decide when is the best time to start passing it.

> New lanes network with 50k more clips (almost double) from the new auto-labeling pipeline.
More lanes mapped? Is this a right turn lane, a bike lane, etc...

Yeah. Tesla is using more clips for auto-labeling. More data will make the NN better at detecting different types of lanes.
 

> Improved peek behavior where we are smarter about when to go around the lead vehicle by reasoning about the causes for lead vehicle being slow.
When is peak? When it is needed most? Right before a decision is needed to be made?
Peek !== Peak

I'm guessing they added code so the car will steer to the left or right a bit to try to peek ahead and see what's going on further down the road.

> New lanes network with 50k more clips (almost double) from the new auto-labeling pipeline.
More lanes mapped? Is this a right turn lane, a bike lane, etc...
I think "network" refers to a new NN. Likely meaning they have a new network to replace an older one, trained on double the data. The most significant aspect of this sentence to me is the reference to the auto-labeling pipeline, which I believe means Dojo.
 
Wow, they are in early development of their prediction model; they are further behind than I thought.

Is the glass half-full or half-empty? ;)

Like I said before, you can't really develop the best prediction/planning algorithms if your perception algorithm is crap.

Maybe, now, finally, their perception algorithm is progressing to "better than crappy". Maybe.

It may not take them 5 years to catch up on prediction and planning though.

Maybe.
 

Bladerskb

Senior Software Engineer
Oct 24, 2016
2,695
4,519
Michigan
I think "multi-modal" (not model) refers to the different modes of transportation sharing the roads: cars, motorcycles, bicycles, busses, semis, pedestrians, etc.
Not quite: it basically means the prediction of multiple future trajectories of an agent, in contrast to a single trajectory. For example, vehicles are multi-modal because they can do multiple things: go straight, turn left, turn right, etc.
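That definition can be sketched in code: one agent, several candidate trajectories, each with a probability. All numbers and mode names here are invented for illustration:

```python
# Sketch of multi-modal prediction as a *set* of candidate future
# trajectories per agent rather than a single path.
def predict_trajectories(agent_state):
    x, y, speed = agent_state
    # Three hypothetical modes over a 3-step horizon.
    straight = [(x + speed * t, y) for t in range(1, 4)]
    left = [(x + speed * t, y + 0.5 * t * t) for t in range(1, 4)]
    right = [(x + speed * t, y - 0.5 * t * t) for t in range(1, 4)]
    return [
        ("straight", 0.7, straight),
        ("turn_left", 0.2, left),
        ("turn_right", 0.1, right),
    ]

modes = predict_trajectories((0.0, 0.0, 5.0))
# A planner checks its own path against every mode, weighting the cost
# of a conflict by that mode's probability.
```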
 
Wow, they are in early development of their prediction model; they are further behind than I thought.

> Enabled inter-soc synchronous compute scheduling between vision and vector space processes.
They have a new task scheduler that is synchronous. In other words, discrete time blocks are allocated to the computer-vision and vector-space processes, coordinated across the two SoCs.


yeah, that's a redo of something pretty fundamental. that should have been tested and validated at the POC level, really early.

the stuff about 8bit int and floating point, that also worries me. a lot.

what the heck is going on? are you guys just stumbling in the dark? sure seems like it.
 
The new prediction and planning NN will require compute power. Will the current FSD computer be good enough?
my hint take-away is that they are now trying to 'fit' things into their compute model.

if you convert FP to int, that's because of speed. your stuff is too slow. it's not about memory. the memory diff between 1 byte and a float (32-bit) or even a double (64-bit) is not that much, unless we're talking about most of RAM being used for this set of structs.

but speed is the thing that comes to mind, when I see a dev going from fp to int.

that has all kinds of issues and should not be done lightly. it's not just round-off, but also timing diffs. and ALL the tests have to be re-validated (not just re-run but re-validated).

pretty big oops, here.
 
it's even int8, which is one byte.
and since it's signed (if we take that literally) it's not a full 0-255 range, but -128 to +127. so the range of absolute values is half, with signed data types vs unsigned.
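The signed int8 range is easy to check directly; this little snippet (using Python's standard ctypes module) shows the -128..+127 range and the wrap-around at the top:

```python
import ctypes

# Signed int8 covers -128..+127 (two's complement has one more negative
# value than positive), not 0..255. ctypes shows the wrap-around that a
# plain Python int would hide.
lo, hi = -128, 127
span = hi - lo + 1                       # 256 distinct values in one byte
wrapped = ctypes.c_int8(hi + 1).value    # 128 wraps around to -128
```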

it's definitely a speed-up when you do this, but... it makes me wonder WHY they did this. this is huge. like when Y2K happened and we had to look at all the code that made assumptions about data width and re-test it with unit tests that may not actually have been written.

you don't just change data types like that toward the end without a really good reason. it adds a lot of risk and needs lots more time to validate, until you get confidence that the new data model works as well as the old (just faster).

I wish them well on this. it would not be something I would want to do unless there was no other choice. or it's paying technical debt now, instead of a more costly fix later on.
 

MP3Mike

Well-Known Member
Feb 1, 2016
18,209
45,125
Oregon
my hint take-away is that they are now trying to 'fit' things into their compute model.

My take-away is that you don't have any idea what is going on.

if you convert FP to int, that's because of speed. your stuff is too slow. it's not about memory. the memory diff between 1 byte and a float (32-bit) or even a double (64-bit) is not that much, unless we're talking about most of RAM being used for this set of structs.
Nothing in those release notes says anything about changing a data type.
 

emmz0r

Senior Software Engineer
Jul 12, 2018
1,277
1,112
Norway
Yup. Tesla's NPU/inferencing engine was designed around int8.

It's laughable what that Linux guy posts. S/he's pretending and doing very badly since it's so obvious.

Hint: It's digital therefore it's quantized.

 
