LowlyOilBurner
Active Member
I want full level 6 autonomy. You don’t hear any talk of level 6. It has to be better than level 5, since 6 is a higher number.
Did he actually say that? I am not going to twitter to watch that…
Where? When? No way he will ever relate FSD to any SAE level.

He is not sure if FSD would be L4 or L5. But he does refer to it as one of the SAE levels.
I know all of what you pointed out, but when some people say "level 4" they refer specifically to the definition in that diagram, and specifically geofenced. They do not mean "SAE level 4" or J3016's definition of level 4 (where geofencing is optional, as you point out). That's why I had to clarify specifically which definition they are using (and find out they were using the Synopsys one). I tend to use SAE's definition myself in discussion, but many (most?) people don't.

L4 doesn't require geofencing. Its ODD can be limited in other ways.
EDIT: Here is the SAE chart:
View attachment 1019133
Here is a better synopsis chart:
View attachment 1019135
Just for kicks here is terminology:
View attachment 1019136
I was off for a day and I'm 3 pages behind!

Geez, I was out of town for the week. Just logged in to see 15 pages behind.
After a few updates to the ignore button, got it down by 5 pages.
Nonsense arguments are being made, and replying to the offenders just gives them more fuel for their straw man arguments. How about a new thread for whoever may be interested in these OT discussions?
Does anyone have any information about a potential expansion date for V12 to posters in this thread?
As a consumer, I don't care about the levels, I care about what the car will do for me. If 'Level 11' is when I can sit and read a book while the car cruises down the highway then I want SAE level 11.

Ignorance is not a defense. Just because some people misunderstand or mischaracterize the SAE levels does not mean we just accept bad interpretations of the SAE levels.
As a consumer, I don't care about the levels, I care about what the car will do for me. If 'Level 11' is when I can sit and read a book while the car cruises down the highway then I want SAE level 11.
The Synopsys definition may be a bad interpretation of SAE's, but what I mean is that the specific person using it means exactly that diagram when they say "level 4"; that is what they mean when they use that term.

I disagree. The Synopsys levels are not a different classification system than SAE. Synopsys is simply a bad interpretation of the SAE levels. They do mean the SAE levels. Using the SAE levels wrong is not an acceptable alternative version of the levels. And the NHTSA levels are a pre-SAE version from before the SAE formalized the levels. And I've never heard anyone use the NHTSA levels since the SAE system replaced them.
Exactly. He will not relate to it. It is up to you (according to him) what level you want to relate it to.

Elon: "I think we will achieve full self-driving, depending on what level you want to call it, 4 or 5, I think by the end of this year."
That's a good point, which is why he doesn't outright say "Full Self Driving, namely, level 5" (even if you ignore the whole point above about SAE or otherwise), and he says "what you want to call it" and gives two levels (4 or 5). That's why I said that quote is not the one that makes the case that he said FSD will be SAE Level 5.

Exactly. He will not relate to it. It is up to you (according to him) what level you want to relate it to.
You didn’t miss anything

I was off for a day and I'm 3 pages behind!
Actually, you have missed something. It turns out that TMC doesn't limit your ignore button from level 1 through level 5.

You didn’t miss anything
Now that I've counted the 12 back-and-forth steering wheel jerks in a single turn in the 11.x video, I'm noticing it a lot more when driving around, especially when just going straight. I previously suggested people count the number of these odd behaviors they notice per city-street mile, to have a reference when experiencing 12.x. Perhaps my previous threshold was too lenient without including this skittishness aspect, but now, including these micro-adjustments, I could easily see even early wide deployment of end-to-end improving my oddness metric by an order of magnitude or more.

The lack of skittishness is certainly welcome
Remember when you had to count the number of jerks in a single turn?

Now that I've counted the 12 back-and-forth steering wheel jerks in a single turn in the 11.x video, I'm noticing it a lot more when driving around, especially when just going straight. I previously suggested people count the number of these odd behaviors they notice per city-street mile, to have a reference when experiencing 12.x. Perhaps my previous threshold was too lenient without including this skittishness aspect, but now, including these micro-adjustments, I could easily see even early wide deployment of end-to-end improving my oddness metric by an order of magnitude or more.
We’ve come a long way. Sometimes easy to lose sight of that when you look at the enormously long road ahead.

Remember when you had to count the number of jerks in a single turn?
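The per-mile "oddness metric" suggested earlier in the thread is simple arithmetic; here's a minimal sketch, with entirely made-up event counts and mileage just to illustrate what an order-of-magnitude improvement would look like:

```python
# Sketch of the "oddness metric": odd behaviors (wheel jerks,
# micro-adjustments) counted per city-street mile.
# The counts and mileage below are hypothetical example numbers.

def oddness_per_mile(odd_events, miles):
    """Odd behaviors observed per city-street mile driven."""
    return odd_events / miles

v11 = oddness_per_mile(odd_events=120, miles=10)  # hypothetical 11.x drive
v12 = oddness_per_mile(odd_events=9, miles=10)    # hypothetical 12.x drive
print(v11 / v12 >= 10)  # an order-of-magnitude improvement → True
```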
The neural network implementations, in almost all machine learning hardware applications as of 2024, are binary digital logic machines. They are not implemented with analog neurons with associated DACs, multipliers, and ADCs (or specialized nonlinear analog decision comparators, or whatever). Nor are they pulse-density analog machines.

Um. Individual cells in a Neural Network are analog computers.
Say that a cell has 100 connections to cells in the previous stage. Each input to a cell has a weight on it; think, digital value, digital to analog converter, and that analog value thus generated going into an analog multiplier.
So, for this hypothetical scenario, 100 inputs, some at zero, some at one, some at some intermediate value; each input gets multiplied by a weight from the DAC; then the sum of the resulting 100 products goes up against a comparator (whose other input probably comes from Yet Another DAC); if the summed value is bigger than yea, then the output of the cell goes to 1; else, it's 0.
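The cell computation described above is just a weighted sum pushed through a threshold; here's a sketch in plain Python, with made-up weights, inputs, and threshold (the hypothetical hardware would do the multiply-accumulate in analog, not in a loop):

```python
# Sketch of the weighted-sum-and-threshold cell described above.
# Weights, inputs, and threshold are illustrative made-up values.

def cell_output(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# 100 inputs: some at zero, some at one, some at an intermediate value.
inputs = [0.0, 1.0, 0.5] * 33 + [1.0]
weights = [0.02] * 100
print(cell_output(inputs, weights, threshold=0.9))  # → 1
```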
This is purpose built hardware. It doesn't resemble a digital cpu without a lot of squinting. Typically, one applies a zillion inputs on one clock; waits the propagation delay (which is short, this is all analog), samples the outputs, then loads in the next set of weights. The loading of ye weights is probably Extremely Pipelined.
I'm guessing, since I'm not really somebody who's ever worked on these things, but one can see how it goes: These NN's probably have huge numbers of inputs and outputs and can do multiple jobs at the same time: One looking for dogs, another for humans, another for horses, cars, and on and on.
Heh. Sort of reminds me of using ROM to do complicated logic functions. Say one has a ROM chip with, I dunno, 16 address bits and 8 output bits. Some of the data bits can be turned around and fed into some of the address bits, or all the address bits can be input data. One can write up a bunch of boolean equations that would max out a Xilinx device and simply use that to program the ROM, so long as the number of inputs is less than 16 (minus whatever feedback one has in mind). On one ROM read cycle, then, one gets 8 outputs, and those 8 outputs are any function of the 16 address bits. Handy for nasty-looking state machines, and it runs at ROM cycle speeds, which can be fast.
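The ROM-as-logic trick above amounts to precomputing a truth table: evaluate the boolean equations once for every possible address, then a single table lookup replaces all the gate logic. A sketch, scaled down to 4 address bits and 2 output bits (the boolean functions here are made up for illustration):

```python
# Sketch of using a ROM as combinational logic: program the ROM by
# evaluating arbitrary boolean equations at every address, then each
# "read cycle" is one table lookup. 4 address bits / 2 output bits here.

NUM_ADDR_BITS = 4

def logic(a, b, c, d):
    """Two arbitrary (made-up) boolean functions of the four address bits."""
    out0 = (a and b) or (c and not d)  # output bit 0
    out1 = a != d                       # output bit 1 (XOR)
    return (int(out1) << 1) | int(out0)

# "Program the ROM": evaluate the equations once per address.
rom = [logic((addr >> 3) & 1, (addr >> 2) & 1, (addr >> 1) & 1, addr & 1)
       for addr in range(2 ** NUM_ADDR_BITS)]

# One "ROM read cycle": a single lookup yields all output bits at once.
addr = 0b1010  # a=1, b=0, c=1, d=0
print(rom[addr])  # out0 = (1 and 0) or (1 and not 0) = 1; out1 = 1 != 0 = 1 → 3
```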
I had almost given up reading this thread because the signal-to-noise ratio is so bad. Then @JHCCAZ goes and makes this excellent post.

To finish this by throwing in the requisite v12 content: a big question that hangs over the v12 approach is whether the couple of million HW3 computers already out there, or even the faster HW4, have enough compute (inference) power to achieve the goal. People make a lot of pronouncements here in the forum, but I don't think the answer is completely known even inside Tesla, much less to the rest of us. There's a lot of talk these days about the ratio of training compute and data size to the inference compute assets. The field is moving rapidly and there are very encouraging reports that massive and properly targeted training effort can result in a very compact, efficient and capable inference implementation, i.e. could work very well on HW3. The counterpoint is that the training investment could be too high to achieve that goal, and that a more tractable training infrastructure (and training cycle time) could be enabled if the in-car hardware were better than HW3|4 by some factor. I'm far from being knowledgeable enough to make a prediction, but of course I have my hopes!