You started this exchange by quoting me on why I think additional sensors like Lidar and Radar are needed... I was discussing both. :rolleyes:

And I was giving real-life examples of your claims not holding up to the actual history of radar in this context.

I don't care what performance Tesla got from their use of the Conti radar. It was never meant to be used in the way Tesla was trying to use it.

But it was not just Tesla - everyone who used radar for this purpose in their active-cruise-type systems had the same issues.

Many car makers in fact had it worse - leading to numerous recalls and class-action lawsuits over phantom braking caused by this exact issue with radar use.


Now you can argue they just ALL didn't use the "right" radar, I guess... but given I'm unaware of any consumer-level vehicle that does NOT have these problems at all with radar, I'm not sure that isn't One True Scotsmanning at this point.
 
It's a rabbit hole possibly worthy of this v12 E2E thread? Otherwise, no worries with letting it pass.

I frequently wonder about NN limitations. Of course a real-world trained NxN network with W-bit weights will realize far less than the theoretical capacity (fully connected, something like 2^(WxNxN) possible weight configurations), which is a ridiculously big number even with HW3 specs. I'm sure the FSD NN design isn't fully connected, but HW4 went to 10-bit weights versus HW3's 8-bit weights, so HW3 likely fell well short of the expected promised land.

On one hand there are maybe ~10 million to ~100 million scenarios to train (pulled that out of my backside), and an NN design with at least a trillion theoretical trained output results (not counting generalizations). If so, NNs aren't receptive to complex training data (aka thick as a brick and not supple like a human brain).

Bard says a fully connected 10x10 network with 8-bit weights might max out at somewhere between 100 and 1000 trained output results.
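For scale, a minimal back-of-envelope sketch of that counting argument (my arithmetic only, reusing the fully connected 2^(WxNxN) framing and the 10x10 / 8-bit / 10-bit numbers already mentioned above; none of these are actual FSD specs):

```python
import math

# Raw count of distinct weight settings for a fully connected net where
# every weight is an independent W-bit value: 2^(W * number_of_weights).
# This counts configurations, not "useful" trained behaviors.
def weight_configurations(n_weights: int, bits_per_weight: int) -> int:
    return 2 ** (bits_per_weight * n_weights)

small = weight_configurations(10 * 10, 8)    # 10x10 net, 8-bit weights
print(f"2^800  ~ 10^{int(math.log10(small))}")   # ~10^240

bigger = weight_configurations(10 * 10, 10)  # same net, 10-bit weights
print(f"2^1000 ~ 10^{int(math.log10(bigger))}")  # ~10^301
```

By that raw count, "somewhere between 100 and 1000" can't be a configuration count (2^800 is astronomically larger), so if Bard's number means anything it's some much fuzzier notion of usefully distinct trained behaviors.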

Any NN expert input on this?
 
And I was giving real-life examples of your claims not holding up to the actual history of radar in this context.

But it was not just Tesla - everyone who used radar for this purpose in their active-cruise-type systems had the same issues.

Many car makers in fact had it worse - leading to numerous recalls and class-action lawsuits over phantom braking caused by this exact issue with radar use.


Now you can argue they just ALL didn't use the "right" radar, I guess... but given I'm unaware of any consumer-level vehicle that does NOT have these problems at all with radar, I'm not sure that isn't One True Scotsmanning at this point.
If the goal is active cruise control type systems, and if we're discussing the last five years and not the coming five years, you have a point.

This is the V12 thread though and I was discussing what I think is needed for better performance. I'm not sure what you're doing.

Have you all given up on autonomy from Tesla? If so, great. That's the most likely outcome by far.
 
Given the last 1-2 years of NN advancements stemming from large training clusters and datasets, it's difficult to predict what sort of "jump" in capability we'll see with V12. For example, there was an order-of-magnitude jump in performance and reliability between GPT-3 and GPT-4...

Just because we see some linear progress with V11 doesn't mean much wrt V12.

It's one of those things where V12 can be an immense jump in performance and reliability vs V11, but we won't know for sure until Tesla releases V12 or shows more of its performance.

What I think Tesla has done / is doing is skewing the V12 dataset towards Palo Alto or Austin or whatever to test out its performance before going "full-tilt" into curating data across the USA.
 
Given the last 1-2 years of NN advancements stemming from large training clusters and datasets, it's difficult to predict what sort of "jump" in capability we'll see with V12. For example, there was an order-of-magnitude jump in performance and reliability between GPT-3 and GPT-4...
And similarly with inference compute burden.

For that equivalent we'd need a much higher-rate bitstream, much more training, and a huge net with screaming-fast, hot compute on giant 4 nm chipsets.
 
Yes, it can and has. Nothing in my post contradicts that.

Can you point to a video showing five successful attempts in a row? I don't recall one existing, when judged against the established success criteria. This is an extremely low bar, of course. We can go to an example of ten in a row next!

I agree that it may well not be a sensor limitation for this particular maneuver (there seem to be a bunch of issues that contribute to the failures, and it is not clear why these issues exist). It will probably be easier to figure out the reason why it fails when it has a 99.9% success rate, but maybe not. Right now it is so bafflingly bad it is hard to narrow down any sort of root cause.
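To put rough numbers on how low a bar that is, here's a quick sketch assuming independent attempts at a fixed per-attempt success rate (illustrative rates, not measured data):

```python
# Chance of clearing an n-in-a-row test at a fixed per-attempt success
# rate, assuming attempts are independent: p ** n.
for p in (0.5, 0.9, 0.999):
    print(f"per-attempt {p:>5}: 5 in a row = {p**5:.1%}, 10 in a row = {p**10:.1%}")

# per-attempt   0.5: 5 in a row = 3.1%,  10 in a row = 0.1%
# per-attempt   0.9: 5 in a row = 59.0%, 10 in a row = 34.9%
# per-attempt 0.999: 5 in a row = 99.5%, 10 in a row = 99.0%
```

A system anywhere near 99.9% per attempt should sail through five in a row almost every time, so repeatedly failing that test says the per-attempt rate is nowhere close.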
 
Can you point to a video showing five successful attempts in a row? I don't recall one existing, when judged against the established success criteria. This is an extremely low bar, of course. We can go to an example of ten in a row next!

I agree that it may well not be a sensor limitation for this particular maneuver (there seem to be a bunch of issues that contribute to the failures, and it is not clear why these issues exist). It will probably be easier to figure out the reason why it fails when it has a 99.9% success rate, but maybe not. Right now it is so bafflingly bad it is hard to narrow down any sort of root cause.

 
…it fails when it has a 99.9% success rate, but maybe not. Right now it is so bafflingly bad it is hard to narrow down any sort of root cause.

Right now, it seems V11 is see-sawing in performance. One version may excel in one aspect while causing regressions in another. I've noticed it going back and forth in this manner for the last 6 months with minor linear improvements overall. This is a symptom of generalization and/or HW3 inference limitations wrt autolabeled training set model size.
 
Right now, it seems V11 is see-sawing in performance. One version may excel in one aspect while causing regressions in another. I've noticed it going back and forth in this manner for the last 6 months with minor linear improvements overall. This is a symptom of generalization and/or HW3 inference limitations wrt autolabeled training set model size.
I haven't seen any significant changes in performance except incremental improvements for the last 6 months or whatever of v11.
 
I don't see any failing there, as it relates to the sensors.
It's hard to say what the problem is. All I know is it failed. Determining why is a bit harder. As I said, to me it seems that policy or whatever is likely a major issue. Whether there are sensor issues that will limit performance once that layer of the onion is peeled, I do not know (I would assume so at some point, but I really don't know for sure).

I have no idea whether v12 will have any ability to address any of these issues. Seems doubtful to me but I guess we'll find out in a year or so.
 
True that. I think that since Tesla committed to E2E, they're just tending to V11 as a fallback. I doubt there is any significant capability work being performed on V11.
Yikes. If so, it’s going to be a very tough year or two for them with no ability to update the public with their progress on autonomy. Hopefully cooler heads will prevail or v12 is something not quite as advertised (most likely outcome I think, and a good one - out of the range of possible outcomes at least).
 
If the goal is active cruise control type systems, and if we're discussing the last five years and not the coming five years, you have a point.

Right. Which is why I made it.


This is the V12 thread though and I was discussing what I think is needed for better performance.

Here is LITERALLY what you said in the last post I replied to:

YOU said:
I don't care what performance Tesla got from their use of the Conti radar. It was never meant to be used in the way Tesla was trying to use it.

Was. Past Tense.

So I reiterated the point about how it went in PAST TENSE use. The thing I literally just quoted you mentioning.

If you wanna move goalposts now, well, that seems to be a popular pastime here, so go right ahead.

Have you all given up on autonomy from Tesla? If so, great. That's the most likely outcome by far.

I've been pretty clear and consistent on this for years.

I think HW3 could do L3 just about tomorrow, if they've got the stationary-object issue fixed using the HW3 sensors and computer, within the old EAP/AP ODD.

I think they MIGHT be able to do L4 within that ODD, depending on what fail-safe methodology they have in mind, and with a good way to deal with the fact they'd be setting up a system that technically wouldn't require a human (especially an awake one) on a divided freeway but WOULD require one to exit or enter it.

I think there's near-0 chance they EVER offer >L2 on city streets with HW3 sensors and HW3 driving computer.

I think there's probably a BETTER chance of them doing it with HW4, but it's still way too early to judge, and I'm especially still concerned with the blind spots to the left and right at occluded intersections - I would've loved to see side-facing front-fender cams for those.

This has not changed for me in a long time (other than adding the HW4 stuff when we got that info). Have you confused me with one of those L5 TOMORROW BRO people?

Not "selling" either then? That was the statement. "No one has been selling FSD as autonomy".

To be clear, from March 2019 to today they literally haven't.

You are told during the purchase that what you are buying is not autonomy.

All of Elon's not-personally-said-to-you-during-a-purchase aspirational hopes and targets don't change any of that legally.


Now for pre-3/19 buyers it's a different story: those folks were promised (at least) L4 with a wide ODD as part of their purchase.

Some folks prefer to believe this change was not made on advice of legal to limit Tesla's liability if they found they could never deliver L4 on existing cars, but I've yet to hear any rational alternative reason.
 
I think HW3 could do L3 just about tomorrow, if they've got the stationary-object issue fixed using the HW3 sensors and computer, within the old EAP/AP ODD.

I think they MIGHT be able to do L4 within that ODD, depending on what fail-safe methodology they have in mind, and with a good way to deal with the fact they'd be setting up a system that technically wouldn't require a human (especially an awake one) on a divided freeway but WOULD require one to exit or enter it.
Here we disagree. I think there is zero chance for HW3 and HW4 to get to unsupervised autonomy in any meaningful ODD: lack of redundancy, lack of sensing distance, and lack of compute. Plus all the unsolved research problems around CV/ML.
This has not changed for me in a long time (other than adding the HW4 stuff when we got that info). Have you confused me with one of those L5 TOMORROW BRO people?
No I think we’re on the same page.
To be clear, from March 2019 to today they literally haven't.

You are told during the purchase that what you are buying is not autonomy.
They are clear it's not autonomous NOW. I believe they have been selling the hope of an OTA upgrade to autonomy. Why would people pay 12-15k USD for an average highway L2 without cooperative steering? Or 8k in the EU, and get nothing?

They have clearly been saying, over and over again, that autonomy is 6-12 months away. That hasn't stopped.

I believe it’s fraud. Both investor fraud and consumer fraud. If it isn’t, the law needs to be changed. I know you don’t agree. That’s fine.

Elon can shove a snake charger up his butt.
 