Welcome to Tesla Motors Club
L4 doesn't require geofencing. Its ODD can be limited in other ways.

EDIT: Here is the SAE chart:

[attachment 1019133: SAE J3016 levels chart]

Here is the Synopsys chart:

[attachment 1019135: Synopsys levels chart]

Just for kicks, here is terminology:

[attachment 1019136: terminology chart]
I know all of what you pointed out, but when some people say "level 4" they refer specifically to the definition in that diagram, geofencing included. They do not mean "SAE level 4" or J3016's definition of level 4 (where, as you point out, geofencing is optional). That's why I had to clarify which definition they were using (and found out it was the Synopsys one). I tend to use SAE's definition myself in discussion, but many (most?) people don't.
 
Geez, I was out of town for the week. Just logged in to find I was 15 pages behind.
After a few taps of the ignore button, got it down by 5 pages.
Nonsense arguments are being made, and replying to the offenders just gives them more fuel for their straw-man arguments. How about a new thread for whoever may be interested in these OT discussions?
Does anyone have any information about a potential expansion date for V12 to posters in this thread?
I was off for a day and I'm 3 pages behind!
 
Ignorance is not a defense. Just because some people misunderstand or mischaracterize the SAE levels does not mean we should accept bad interpretations of them.
As a consumer, I don't care about the levels, I care about what the car will do for me. If 'Level 11' is when I can sit and read a book while the car cruises down the highway then I want SAE level 11.
 
The SAE levels tell you what the car will do for you. And you don't need "level 11": there is already an SAE level that gives you what you describe. SAE Level 4 with a highway ODD means you could sit and read a book while the car cruises down the highway.
 
I disagree. The Synopsys levels are not a different classification system from SAE's. Synopsys's chart is simply a bad interpretation of the SAE levels; it does mean the SAE levels. Using the SAE levels wrongly is not an acceptable alternative version of the levels. And the NHTSA levels are a pre-SAE scheme from before the SAE formalized the levels; I've never heard anyone use them since the SAE system replaced them.
Synopsys's definition may be a bad interpretation of SAE's, but my point is that a person using it means exactly that diagram when they say "level 4"; that is what the term means to them.

They are not using the "SAE level 4" definition in the J3016 diagram. So if you assumed they meant "SAE Level 4" instead of "Synopsys Level 4" during a discussion, you would have misunderstood what they meant.

You arrive at a situation similar to the whole "drive" discussion above. The OP meant the vehicle can move around the block with no user intervention, not that it would operate at SAE Level 4. If you insist that things can only be discussed in SAE terms, you end up in that whole argument about what "drive" means. Or you can clarify with the OP what he means and find out it has nothing to do with SAE.

To bring it back to the point: when Elon says "level 5", does he truly mean SAE Level 5, or does he mean SAE Level 4 with no geofencing (but other ODD limits allowed, rain for example), or something else entirely (door-to-door L3)? If he said "SAE Level 5" we could assume he means that. If not, he is likely using a different or more colloquial definition, as many people do (including in this forum). That's my main point.
 
Exactly. He will not relate it to a level himself; according to him, it is up to you what level you want to relate it to.
That's a good point, and it's why he doesn't outright say "Full Self Driving, namely, Level 5" (even if you set aside the whole point above about SAE versus other definitions): he says "what you want to call it" and offers two levels (4 or 5). That's why I said that quote is not the one that makes the case that he promised FSD would be SAE Level 5.

But I remember there was some other interview where the person asking said specifically the term "SAE" and some level (don't remember if it was 4 or 5). Maybe some better sleuths can dig it back up.
 
The lack of skittishness is certainly welcome
Now that I've counted the 12 back-and-forth steering wheel jerks during a single turn in an 11.x video, I'm noticing it a lot more when driving around, especially when just going straight. I previously suggested people count the number of these odd behaviors they notice per city-street mile to have a reference when experiencing 12.x. Perhaps my previous threshold was too lenient without including this skittishness aspect, but now, including these micro-adjustments, I could easily see how even an early wide deployment of end-to-end would improve my oddness metric by an order of magnitude or more.
 
Remember when you had to count the number of jerks in a single turn?
 
Um. Individual cells in a Neural Network are analog computers.

Say a cell has 100 connections to cells in the previous stage. Each input to the cell has a weight on it; think of a digital weight value, a digital-to-analog converter, and the analog value thus generated going into an analog multiplier.

So, for this hypothetical scenario: 100 inputs, some at zero, some at one, some at intermediate values; each input gets multiplied by a weight from the DAC; then the sum of the resulting 100 products goes up against a comparator (whose other input probably comes from yet another DAC); if the summed value is bigger than yea, then the output of the cell goes to 1; else, it's 0.

This is purpose built hardware. It doesn't resemble a digital cpu without a lot of squinting. Typically, one applies a zillion inputs on one clock; waits the propagation delay (which is short, this is all analog), samples the outputs, then loads in the next set of weights. The loading of ye weights is probably Extremely Pipelined.
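The weighted-sum-and-compare behavior of one such cell can be sketched numerically (a hypothetical 100-input cell with made-up random weights; the real thing described above would be analog hardware, not Python):

```python
# Sketch of one NN cell: 100 inputs, each multiplied by a DAC-supplied
# weight, summed, then compared against a threshold to give a 1-bit output.
import random

def nn_cell(inputs, weights, threshold):
    """One neuron: weighted sum of inputs run against a comparator threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

random.seed(0)
inputs = [random.random() for _ in range(100)]        # analog levels in [0, 1]
weights = [random.uniform(-1, 1) for _ in range(100)]  # made-up weight values
print(nn_cell(inputs, weights, threshold=0.0))         # prints 0 or 1
```

In real silicon the 100 multiplies and the sum would all settle in one analog propagation delay; the loop here just models the arithmetic.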

I'm guessing, since I'm not really somebody who's ever worked on these things, but one can see how it goes: These NN's probably have huge numbers of inputs and outputs and can do multiple jobs at the same time: One looking for dogs, another for humans, another for horses, cars, and on and on.

Heh. Sort of reminds me of using ROM to do complicated logic functions. Say one has a ROM chip with, I dunno, 16 address bits and 8 output bits. Some of the data bits can be turned around and fed into some of the address bits, or all the address bits can be input data. One can write up a bunch of boolean equations that would max out a Xilinx device and simply use them to program the ROM, so long as the number of inputs is no more than 16 (minus whatever feedback one has in mind). On one ROM read cycle, then, one gets 8 outputs, and those 8 outputs are any function of the 16 address bits. Handy for nasty-looking state machines, and it runs at ROM cycle speeds, which can be fast.
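The ROM trick can be sketched in software (the two output functions below are made-up examples; any boolean functions of the 16 address bits would do):

```python
# Sketch of ROM-as-logic: precompute an 8-bit output for every 16-bit
# address, then "evaluate" arbitrary boolean functions of the 16 inputs
# with a single table lookup per read cycle.
def program_rom():
    rom = bytearray(1 << 16)            # 16 address bits -> 64K entries
    for addr in range(1 << 16):
        a = addr & 0xFF                 # low input byte
        b = (addr >> 8) & 0xFF          # high input byte
        # Eight output bits, each some function of the inputs (illustrative):
        out = (a ^ b) & 0xFE            # bits 1-7: XOR of the two bytes
        out |= 1 if a > b else 0        # bit 0: magnitude comparator
        rom[addr] = out
    return rom

rom = program_rom()
# One "ROM read cycle": all eight outputs arrive at once.
print(hex(rom[(0x12 << 8) | 0x34]))    # -> 0x27
```

Feedback from data bits back into address bits, as mentioned above, is what turns this lookup table into a clocked state machine.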
The neural network implementations, in almost all machine learning hardware applications as of 2024, are binary digital logic machines. They are not implemented with analog neurons with associated DACs, multipliers and ADCs (or specialized nonlinear analog decision comparators or whatever). Nor are they pulse-density analog machines.

There have indeed been such concepts, and research performed in that vein - I myself am interested in that and think it could have a real future - but that isn't the way it's being done in any serious large scale commercial ML/NN deployment that I'm aware of.

The ML computers are very high-speed arrays of familiar synchronous-logic processing units making up the "NPU", architecturally specialized to perform multiply-accumulate (dot product) operations, with associated high-speed memory to hold dynamically changing operation results as well as the NN "program" in terms of the associated weights. All of this uses relatively coarse fixed-point-like numerical representations (I read that Tesla came up with their own preferred number format, but I don't know where in the training/inference universe it's actually being used). The closest widely deployed silicon that met these requirements, as of a few years ago, was found in graphics cards and their core GPUs. That's why you often hear and read about giant GPU training clusters, and about lots of business for Nvidia, the leading example of a company that somewhat lucked into the huge hardware market of the current AI boom.
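As a rough illustration, the multiply-accumulate core of such a unit boils down to something like this (the int8 range and the sample values are illustrative assumptions, not any particular NPU's format):

```python
# Sketch of an NPU-style multiply-accumulate (dot product) in a coarse
# int8 representation, with products accumulated in a wide register so
# the sum never overflows the narrow input format.
def int8_dot(activations, weights):
    """Dot product of two int8 vectors with a wide accumulator."""
    acc = 0                                         # wide (int32-style) accumulator
    for x, w in zip(activations, weights):
        assert -128 <= x <= 127 and -128 <= w <= 127  # int8 operand range
        acc += x * w                                # each product fits in 16 bits
    return acc

activations = [12, -7, 127, 3]   # made-up quantized activations
weights = [5, 9, -2, 100]        # made-up quantized weights
print(int8_dot(activations, weights))   # -> 43
```

An NPU performs thousands of these in parallel per clock; the sequential loop here just shows the arithmetic contract (narrow operands, wide accumulator).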

With each generation of development beyond the early graphics-card arrays, I think the architecture is becoming more refined towards purpose-built ML computing. I personally don't know a lot about this, nor at what point people will stop calling them "GPUs". Tesla's own Dojo project is a non-Nvidia example of custom silicon and extensive support hardware, but again it's an extremely high bandwidth digital computer module intended for efficient expansion, and in the meantime Tesla is buying tons of Nvidia along with most if not all other big players.

On a smaller scale, but with impressive computing power and efficiency, the same comments hold for the inference processors within the car. Tesla designed a fairly impressive and power-efficient autopilot computer for HW3, a better (but higher-power) one for HW4, and I wouldn't be surprised if they already have prototype silicon for HW5. In terms of volume deployment I think Tesla is currently the leader in this regard. Nvidia, Qualcomm, Intel-controlled Mobileye with its EyeQ chips, and others including Huawei et al. in China are working on these things.

Most of these projects are not just silicon processor development; I think these companies all have in-house self-driving platform efforts beyond just the idea of selling chips or computer boards to carmakers. I think some of the recent and existing-generation robotaxi companies don't have particularly efficient in-car computing; we hear about trunks stuffed full of computers and cooling equipment. But their volume is currently low and I'm sure they will take advantage of supplier developments in this space.

To finish this by throwing in the requisite v12 content: a big question that hangs over the v12 approach is whether the couple of million HW3 computers already out there, or even the faster HW4, have enough compute (inference) power to achieve the goal. People make a lot of pronouncements here in the forum, but I don't think the answer is completely known even inside Tesla, much less to the rest of us. There's a lot of talk these days about the ratio of training compute and data size to the inference compute assets. The field is moving rapidly and there are very encouraging reports that massive and properly targeted training effort can result in a very compact, efficient and capable inference implementation, i.e. could work very well on HW3. The counterpoint is that the training investment could be too high to achieve that goal, and that a more tractable training infrastructure (and training cycle time) could be enabled if the in-car hardware were better than HW3|4 by some factor. I'm far from being knowledgeable enough to make a prediction, but of course I have my hopes!
 
I had almost given up reading this thread because the signal to noise ratio is so bad. Then @JHCCAZ goes on to make this excellent post.

The last paragraph is especially interesting. Those here who proclaim that robotaxi could never work on HW3 or even HW4 simply don't know. Nobody knows.

Another interesting point in that last paragraph is one I hadn't thought about much. That is the relationship between training cycle time and the required performance of the in-car hardware. You seem to be saying that training cycle time could be reduced if the in-car hardware was faster/beefier/more efficient. Is that correct? And do you think the inverse is also true?
 