"Advice for Robots" - Cameras should be used for driver attention, not steering wheel torque.
AP's driver attention system allows a driver to "fall asleep with their hands on the wheel". One could make a case...
Maybe AP encourages it?
I want to "Like" and "Dislike" the same post, how can I accomplish that on TMC? /s

But no Tesla ever built has the hardware to do this and they're never gonna retrofit a million cars (or even the 500k+ on AP2+ HW).... yeah the 3 (and Y I imagine) does have A camera inside- but it's the wrong type and location to do this task.
Really it's the result of Tesla expecting to get beyond L2 much more quickly than they actually have (which is not only years behind schedule but, so far, not at all).
We are collecting data from over 1 million intersections every month at this point. This number will grow exponentially as more people get the update and as more people start driving again. Soon, we will be collecting data from over 1 billion intersections per month. All of those confirmations are training the neural net; essentially, the driver, when driving and taking action, is effectively labeling reality as they drive, making the neural net better and better. I think this is an advantage that no one else has, and we're quite literally orders of magnitude more than everyone else combined. I think this is difficult to fully appreciate.
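The idea in the quote above, that a driver's real action at an intersection serves as the label for the network's prediction, can be sketched as a weak-labeling step. This is a minimal illustration only; all class names, fields, and the weighting rule are invented for this sketch and are not Tesla's actual pipeline.

```python
# Hypothetical sketch: treating the driver's real-world action as the
# training label for an intersection clip. Disagreements between the
# network's proposal and the driver are upweighted, since those are the
# most informative examples to mine. All names here are invented.
from dataclasses import dataclass


@dataclass
class IntersectionEvent:
    camera_clip_id: str
    predicted_action: str  # what the NN proposed, e.g. "proceed"
    driver_action: str     # what the driver actually did, e.g. "brake"


def label_from_driver(event: IntersectionEvent) -> tuple[str, str]:
    """The driver's action becomes the label; disagreements get extra weight."""
    label = event.driver_action
    weight = "high" if event.driver_action != event.predicted_action else "normal"
    return label, weight


event = IntersectionEvent("clip_001", predicted_action="proceed", driver_action="brake")
print(label_from_driver(event))  # ('brake', 'high')
```

The point of the sketch: no human annotator is needed; every confirmation or override the fleet produces is effectively a free label.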
I want to "Like" and "Dislike" the same post, how can I accomplish that on TMC? /s
I think Tesla has presented a unified front, with technical presentations from Karpathy and his team and the "salesy" side from Elon.
The data they are getting from the fleet is the holy grail and Tesla has it!
It's not just that they have "raw data"; they know what to do with it.

I agree they have a lot more real world data than anyone else.
It's entirely possible that L4/L5 is simply not possible with 1 front radar, 12 surround sonars, and 8 720p cameras of the type and location Tesla is using, even given infinite data sets.
It's not just that they have "raw data"; they know what to do with it.
They have clearly shown that they can perform campaigns/triggers for sub-NN validation in the wild.
But unlike cancer - they get real humans to validate the "correct" path via confirmation to proceed on current deployment.
And they get to specifically test against the scenarios that fail in real life (crashes, aborts, overrides).
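The campaign/trigger mechanism described above can be sketched as a simple server-pushed filter: cars only upload clips whose metadata matches an active trigger condition (crashes, aborts, overrides). This is a toy illustration under assumed names; the real trigger system and its metadata schema are not public.

```python
# Hypothetical sketch of a fleet data "campaign/trigger": cars upload only
# drive segments matching a server-pushed condition, e.g. AP aborts or
# driver overrides. All names and the metadata layout are invented.

TRIGGERS = {"crash", "abort", "override"}


def should_upload(clip_metadata: dict) -> bool:
    """Return True if this drive segment matches an active campaign trigger."""
    return clip_metadata.get("disengagement_type") in TRIGGERS


fleet_log = [
    {"clip": "a", "disengagement_type": None},        # uneventful drive, skipped
    {"clip": "b", "disengagement_type": "override"},  # driver took over
    {"clip": "c", "disengagement_type": "abort"},     # AP bailed out
]
selected = [m["clip"] for m in fleet_log if should_upload(m)]
print(selected)  # ['b', 'c']
```

The design point is bandwidth: instead of streaming everything, the fleet only ships back the rare failure cases the validation campaign is hunting for.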
You keep bringing that up as if that is somehow going to change the fact that Tesla has almost a million cars with a similar sensor suite and the next million has no indication of changes.

As I say- if the sensor suite turns out incapable of handling the job, no amount of data can fix that.
You keep bringing that up as if that is somehow going to change the fact that Tesla has almost a million cars with a similar sensor suite and the next million has no indication of changes.
They would have realized that in the past 2 years.
- We would have seen changes go into Model Y to the sensor suite.
- just like we see a heater on the radar in the Model Y.
But IFF the sensor suite is insufficient, then you are correct, no amount of data from the sensors would be enough.
Uh... that's not true at all.
They only broke 1 million total cars THIS quarter.
A couple hundred thousand are AP1 or no AP at all.
Another big chunk are AP2.0 (different sensor suite)
Some more are AP2.5 (different computer from what's now "needed")
My point of even including that sentence was that they -- Tesla -- would have changed the sensor suite dramatically within the past 2 years.

Right- so insisting they'll get FSD as originally promised working on this HW because "DATA" is assuming facts not in evidence.
In June it will be three (3) years since Karpathy joined Tesla.
Let's say he needed a few months to get his vision/approach set up and communicated internally.
For 2.5 years he could have talked directly to Elon to request changes to the sensor suite, but all we got was incremental updates to the sensors and massive leaps in processing.
The point was initially to port the AP1 capabilities (which is just hard coded logic) onto the AP2 hardware stack -- that is exactly what they initially did.

Apparently he was too busy spending the first 2 years working on AP code that wouldn't end up working out and needing to spend much of the third year doing a fundamental re-write of the code.
Elon did not allow them to give themselves crutches to keep leaning on... since the vision was always full self driving.

We know Elon rejected things like cameras for driver attention when other engineers suggested it, for example.