Welcome to Tesla Motors Club

Decoding FSD Beta 9.2 release notes

From what I read, it's the training that has changed to account for int8 (int8 support itself has been there for I don't know how long); maybe verygreen knows.
The OP also says this:

Read the article and I would agree with that conclusion; they are saying that if you train knowing the target is 8-bit int instead of 32-bit float, then the error from reducing precision isn't so bad. If you didn't train with that in mind, it's not acceptable.
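The round-trip error being discussed can be illustrated with a minimal sketch (plain Python, not Tesla's code) of symmetric per-tensor int8 quantization: fp32 weights are mapped to integer codes via a single scale factor, and dequantizing them back shows the precision loss, which is bounded by half the scale for in-range values:

```python
# Sketch only: symmetric per-tensor int8 quantization of fp32 weights,
# showing the round-trip error introduced by reducing precision.

def quantize_int8(weights):
    """Map fp32 weights to int8 codes with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate fp32 values from int8 codes."""
    return [v * scale for v in q]

weights = [0.91, -0.44, 0.032, -1.27, 0.0005]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)        # int8 codes in [-128, 127]
print(max_err)  # round-trip error, at most scale / 2 for in-range values
```

Quantization-aware training addresses exactly this: by simulating the int8 rounding in the forward pass during training, the network learns weights that still perform well after the precision is reduced, instead of being quantized blindly after the fact.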
 
If you convert FP to int, that's because of speed: your stuff is too slow. It's not about memory. The memory difference between 1 byte and a float (32-bit) or even a double (64-bit) is not that much, unless we're talking about most of RAM being used for this set of structs.
Your high level comment about "speed" is generally right, but you were probably referring to actual cycle time difference between integer and floating point operations. The memory / bandwidth aspect of using 1 byte per weight is quite significant for neural networks where I would guess "this set of structs" for Autopilot is actually on the order of hundreds of millions of values, so transferring that data and improving caching behavior with more compact int8 neural networks has a big impact on performance.
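A back-of-the-envelope sketch of that memory point, using a hypothetical 100-million-weight network (the parameter count is an assumption for illustration, not a known Autopilot figure):

```python
# Sketch: memory footprint of one set of network weights at different
# precisions. 100M weights is a hypothetical size, not a confirmed one.

def footprint_mb(num_weights, bytes_per_weight):
    """Total storage for the weights, in megabytes."""
    return num_weights * bytes_per_weight / 1e6

n = 100_000_000  # hypothetical parameter count
for name, width in [("fp64", 8), ("fp32", 4), ("int8", 1)]:
    print(f"{name}: {footprint_mb(n, width):,.0f} MB")
```

At that scale, int8 weights take a quarter of the fp32 footprint, so every inference pass streams 4x less data from memory and far more of the working set fits in cache, which is where much of the speedup comes from on bandwidth-bound hardware.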
 
Correction from reddit on lane mapping:
They trained a neural network to identify lanes, lane choice options, and lane purpose. They used 50k video clips to train this network. The clips were labeled by their proprietary auto-labeling software that runs (essentially) really fancy regression/confirmation testing.

The car can now better understand a lane and its purpose without explicit nav data.

Would be nice if the system would allow me to edit the original post with this info.
 