Tesla Autopilot HW3

By that logic, why even bother showing us the blue lane lines or cars around us? You are right that a FSD car would not need to give us that info but right now, since our cars are not FSD yet, I think showing us the information is useful to give us more confidence in AP.
You mean to give us LESS confidence in AP so we trust it less and pay more attention, right? ;)
 
You mean to give us LESS confidence in AP so we trust it less and pay more attention, right? ;)

HA HA. Maybe at first, but over time we would see it get better. I mean, let's say that with AP3 we were to see intersections on the screen, all the cars moving correctly with no "dancing cars", and traffic lights and stop signs detected correctly; I think we would have more confidence if that were to happen.
 
  • Like
Reactions: DiamondHands
It could, but why? As humans become less responsible for driving, there is less reason to provide them with any information at all.

This is super dangerous to even suggest that anything about Autopilot makes humans less responsible. It will be a long time before FSD allows humans to safely abdicate responsibility even for a second. It's misinformation and complacency like this that has gotten people killed.
 
  • Like
Reactions: OPRCE and verygreen
Humans do not gradually become "less responsible". Either they are responsible or they are not in a given circumstance.

Indeed this is the difference between SAE Levels 3 and up (car responsible), and below (driver responsible).

So unless there truly is a driving scenario where the car is responsible, the driver is fully responsible and there is no "less reason" to inform them...

Now, if and when Tesla becomes responsible for the drive, that is a different scenario of course.
 
  • Like
Reactions: OPRCE
I wonder if maybe one of the benefits of AP3 will be a better display. Right now, on AP2/2.5, we sometimes get the "dancing cars" on the display. I wonder if that issue will be solved with AP3. Also, since AP3 can process data faster and better, I also wonder if AP3 will be able to give us more info as well, where the display actually will show us intersections, traffic lights, stop signs, cross traffic, etc.

It's on the left-hand sidebar. The speed limit sign disappears and gets replaced with the stop sign or stop light, whichever it is.
 
By that logic, why even bother showing us the blue lane lines or cars around us? You are right that a FSD car would not need to give us that info but right now, since our cars are not FSD yet, I think showing us the information is useful to give us more confidence in AP.
The part of the logic you are not understanding is that everything displayed for humans takes programmer time and energy, and uses computer time when the car is operating. Tesla's tendency is to put those resources towards actual FSD instead.

So the logic dictates devoting few resources to providing nicely formatted information for people. The fact that we've been seeing dancing vehicle icons for quite a while says that Tesla isn't devoting much effort to cleaning up the display. I'm good with that. I like that it's there. I like that as it improves I can imagine that what's going on internally is improving as well. But I'm capable of separating what I like as a techie nerd from what is actually useful for progress towards FSD.
 
It could be a chunk of work, but if sales are up, HW3 is built into new cars, and FSD development is stable, then they will have crazy amounts of cash coming in from new purchases of AP/FSD that would more than cover upgrade costs.
Until we actually see some real FSD stuff, I don't think there will be much coming in from "new purchases". The find-me-in-a-parking-lot trick is neat, but would you pay $5k for it? Beyond that there isn't much FSD going on.
 
  • Like
Reactions: rnortman
Until we actually see some real FSD stuff, I don't think there will be much coming in from "new purchases". The find-me-in-a-parking-lot trick is neat, but would you pay $5k for it? Beyond that there isn't much FSD going on.
The $3k from AP plus a decent fraction of the $5k from the new definition of FSD times their new volumes should be sizable.
$3k * 300k cars/yr * 60% take rate = $540 million...
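Spelled out (a quick sketch; the price, volume, and take rate are this thread's assumptions, not published figures):

```python
# Back-of-the-envelope Autopilot revenue estimate. All inputs are the
# poster's assumptions above, not official Tesla figures.
ap_price = 3_000          # $ per car for basic AP
cars_per_year = 300_000   # assumed annual production volume
take_rate = 0.60          # assumed fraction of buyers adding AP

revenue = ap_price * cars_per_year * take_rate
print(f"${revenue:,.0f}/yr")  # -> $540,000,000/yr
```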
 
  • Like
Reactions: scottf200
HA HA. Maybe at first, but over time we would see it get better. I mean, let's say that with AP3 we were to see intersections on the screen, all the cars moving correctly with no "dancing cars", and traffic lights and stop signs detected correctly; I think we would have more confidence if that were to happen.

There should ideally be an option to show a full-detail plan-view 360° visualisation in a window on the CID, to educate the driver on exactly what the AP/FSD system detects at any point.

Also the option to record that 360° window plus IC feed with GPS and timestamp as video synced with the dashcam would be perfect for forensic analysis of situations encountered.
 
  • Like
Reactions: diplomat33
The part of the logic you are not understanding is that everything displayed for humans takes programmer time and energy, and uses computer time when the car is operating. Tesla's tendency is to put those resources towards actual FSD instead.

What you're missing is that the dancing cars don't represent a limitation in the visualization code -- they reflect the fact that internally, the AP system does not know where the neighboring cars are. You think AP knows exactly where they are and somehow the GUI can't get it straight? That's possible but Occam's Razor says otherwise. It is reasonable to hope that HW3 may improve AP's model accuracy and stability, which is what the original post was about, and that would be a meaningful achievement and would improve safety and performance of AP, independent of the fact that it would also make the GUI nicer -- that's just a nice side effect.

OTOH, it's also reasonable to worry that no amount of extra computing power or model refinement is going to help because fundamentally they are trying to do something essentially impossible -- determine the precise location of a vehicle in 3-D space given a partial view of it in one side camera. I can actually prove that this is impossible to do correctly in all circumstances, though that proof is not particularly relevant to practical real-world performance. In principle, they could make this reliable in typical conditions (setting aside non-ideal conditions or active attempts to confuse it) to some level of 9's. In practice it is a difficult problem and my guess is that dancing cars, especially when everybody is at a stop, are here to stay.
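To make the ambiguity concrete, here's a minimal pinhole-camera sketch (made-up numbers, nothing Tesla-specific) of why a single camera cannot recover range on its own:

```python
# Pinhole-camera sketch of monocular scale/depth ambiguity.
# Focal length and object sizes are illustrative, not calibration data.
f_px = 800.0  # focal length in pixels

def projected_height_px(real_height_m, depth_m):
    """Pixel height of an object under a simple pinhole model."""
    return f_px * real_height_m / depth_m

# A 1.5 m car at 20 m and a 3.0 m truck at 40 m look identical:
print(projected_height_px(1.5, 20.0))  # 60.0 px
print(projected_height_px(3.0, 40.0))  # 60.0 px
# With one camera and no size prior the two are indistinguishable,
# so any range estimate has to lean on an assumed object size.
```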

This might be a good time to point out that EAP scared the crap out of me this morning when it decided that a vehicle in the neighboring lane was suddenly in my lane and slammed on the brakes. I really hope they can fix this somehow.
 
What you're missing is that the dancing cars don't represent a limitation in the visualization code -- they reflect the fact that internally, the AP system does not know where the neighboring cars are. You think AP knows exactly where they are and somehow the GUI can't get it straight? That's possible but Occam's Razor says otherwise.

The computer knows the direction and approximate location of a car. What it cannot do is say with certainty that the red car whose front is seen by one camera and whose back is seen by a different camera is actually one car and not a car with a flat back and another car with a flat front.

Image fusion is hard.
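To see why, consider just the association step: the tracker has to decide whether detections from two cameras are one physical object or two. A toy sketch (the distance gate and the detection values are made up for illustration):

```python
from itertools import product

# Toy cross-camera association: decide whether detections seen by two
# cameras are one physical vehicle. The gate threshold is made up.
def same_vehicle(det_a, det_b, gate_m=2.0):
    """Gate on estimated ground position and a coarse color label."""
    dx = det_a["x"] - det_b["x"]
    dy = det_a["y"] - det_b["y"]
    close = (dx * dx + dy * dy) ** 0.5 < gate_m
    return close and det_a["color"] == det_b["color"]

cam_pillar = [{"x": 3.1, "y": -1.9, "color": "red"}]  # sees the front
cam_rear   = [{"x": 3.8, "y": -2.2, "color": "red"}]  # sees the back

for a, b in product(cam_pillar, cam_rear):
    print(same_vehicle(a, b))  # True -> merged into one track
# Noisy position estimates push a real pair outside the gate (two
# phantom cars) or pull two distinct cars inside it (one merged car):
# exactly the flat-front/flat-back confusion described above.
```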
 
  • Like
Reactions: J1mbo
The computer knows the direction and approximate location of a car. What it cannot do is say with certainty that the red car whose front is seen by one camera and whose back is seen by a different camera is actually one car and not a car with a flat back and another car with a flat front.

The key word here is "approximate". It has a bounding box on part of a vehicle in 2D, maybe on only one camera, maybe on two. To get from there to "these are the extents of the vehicle in 3D space" is a huge challenge.

And yet, being a vision-only system, they'd better solve that challenge somehow!
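For a sense of what "approximate" costs you: the usual monocular shortcut is to assume a typical vehicle size and invert the pinhole projection. A sketch with illustrative numbers only:

```python
# Rough monocular range from a 2D box height, using an assumed
# vehicle size. Focal length and heights are illustrative only.
f_px = 800.0
ASSUMED_HEIGHT_M = 1.5  # prior: "a typical car is about this tall"

def estimate_range_m(bbox_height_px):
    return f_px * ASSUMED_HEIGHT_M / bbox_height_px

print(estimate_range_m(60))  # 20.0 m, if the prior happens to be right
# If the real vehicle is 1.8 m tall, the same 60 px box is actually at
# 800 * 1.8 / 60 = 24 m: a 20% range error from the size prior alone.
```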
 
The key word here is "approximate". It has a bounding box on part of a vehicle in 2D, maybe only one one camera and maybe on two cameras. To get from there to "these are the extents of the vehicle in 3D space" is a huge challenge.

No question about that. But an approximation is almost guaranteed to be good enough. For the most part, all a car really needs to care about is "is there something between this car and the next white line". And for that, knowing the exact position is immaterial beyond knowing whether it is beside you or not (which is just an angle question).

Also, Tesla's cars have 360-degree ultrasonic sensors that can provide distance if you're close enough for the distance to really matter (in any direction but straight ahead).

So I'm pretty sure these issues really are purely a problem for visualization, and are not likely to ever be a safety issue, with the possible exception of parking in tight spaces.
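The "angle question" really can be answered from a single camera with no range estimate at all. A minimal sketch, with hypothetical calibration values:

```python
import math

# Bearing to a detection from its pixel column under a pinhole model.
# fx and cx are hypothetical calibration values.
fx, cx = 800.0, 640.0  # focal length and principal point, in pixels

def bearing_deg(pixel_x):
    """Angle off the camera axis; note it needs no range estimate."""
    return math.degrees(math.atan2(pixel_x - cx, fx))

print(bearing_deg(640))  # 0.0  -> dead ahead
print(bearing_deg(900))  # ~18.0 -> clearly off to one side
```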
 
  • Disagree
Reactions: rnortman
No question about that. But an approximation is almost guaranteed to be good enough. For the most part, all a car really needs to care about is "is there something between this car and the next white line". And for that, knowing the exact position is immaterial beyond knowing whether it is beside you or not (which is just an angle question).

Also, Tesla's cars have 360-degree ultrasonic sensors that can provide distance if you're close enough for the distance to really matter (in any direction but straight ahead).

So I'm pretty sure these issues really are purely a problem for visualization, and are not likely to ever be a safety issue, with the possible exception of parking in tight spaces.

The camera may not be able to see the line if another vehicle is close. The ultrasonics are mostly useless at highway speeds. And if this is only an issue for viz, why did my car slam on the brakes this morning when it decided the car in the next lane over was in my lane?

An approximation is not good enough if it can't tell the difference between one side of the line and the other. This stuff is hard to do with cameras, especially without stereo cameras.
 
Why would AP2 HW be better right now than HW3? The NN compute capability of HW3 is expected to be far superior to AP2, but the architecture also changes in ways that are not a superset of what AP2 does. Because of this the NN code has to be reimplemented on the new hardware, and they may currently be at a point where it doesn't make sense to create bit-for-bit identical implementations of the NN for both hardware platforms. This could be related to how optimizations differ for the two platforms, it could be related to having different numerical library implementations, or it could just be a matter of quantization variation within the dataflow of the hardware.

Another factor on the NN side is that regression testing in their datacenter can be done on Nvidia's enterprise-class hardware using the same libraries and frameworks as are run in the vehicle, which allows for precise evaluation of the downstream effects of things like network compression, which has to be done after training but before deployment. Tesla may not yet have datacenter implementations of their NN chip, in which case regression testing would require emulation of the final hardware config; or possibly they would ignore the differences in favor of evaluating at a different layer.

The overall result is that there would be minor differences which are currently hard to tune out for HW3 -- possibly resulting in slightly lower performance in the field. That would change as they extend their infrastructure and refine the development and deployment frameworks and processes.
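As a toy illustration of that quantization point (a NumPy sketch with made-up values, not Tesla's pipeline): the same weights evaluated in float32 and through an int8 path give slightly different outputs, so bit-for-bit agreement across platforms is not automatic.

```python
import numpy as np

# Toy demo of why two backends running "the same" network need not
# agree bit-for-bit: int8 weight quantization perturbs the output.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # "trained" weights
x = rng.normal(size=4).astype(np.float32)       # one input vector

# Symmetric int8 quantization, done after training, before deployment.
scale = np.abs(w).max() / 127.0
w_q = (np.clip(np.round(w / scale), -127, 127).astype(np.int8)
       .astype(np.float32) * scale)

print(w @ x)                             # float32 reference run
print(w_q @ x)                           # dequantized in-car-style run
print(np.max(np.abs(w @ x - w_q @ x)))   # small but nonzero drift
```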

Also, HW3 differs in ways that will require changes to software that isn't directly NN-related. The CPU architecture has changed slightly, and it's possible that things like RAM, flash, and IO interface devices have changed, resulting in some changes being needed to drivers and libraries. To the extent that those parts haven't been polished yet, there could be issues which impact performance or reliability of ADAS operation.
Wow @jimmy_d, that is like crack to my nerd brain... one last question: how much does changing the supplier of one radar or one camera cause a ripple effect in the FSD journey? A butterfly-flapping-its-wings-at-Hillary-Clinton's-wedding-causes-Donald-Trump-to-be-elected sort of thing? Or is it more like how we pronounce tomato or potato?
 
I'm not sure the u/s sensors cover all 360 degrees; at the least, they do not have comprehensive side coverage, even if they cover 360 degrees over the front and back.

See here:

[Image: diagram of the ultrasonic sensors' coverage zones]
 
What you're missing is that the dancing cars don't represent a limitation in the visualization code -- they reflect the fact that internally, the AP system does not know where the neighboring cars are. You think AP knows exactly where they are and somehow the GUI can't get it straight? That's possible but Occam's Razor says otherwise. It is reasonable to hope that HW3 may improve AP's model accuracy and stability, which is what the original post was about, and that would be a meaningful achievement and would improve safety and performance of AP, independent of the fact that it would also make the GUI nicer -- that's just a nice side effect.

OTOH, it's also reasonable to worry that no amount of extra computing power or model refinement is going to help because fundamentally they are trying to do something essentially impossible -- determine the precise location of a vehicle in 3-D space given a partial view of it in one side camera. I can actually prove that this is impossible to do correctly in all circumstances, though that proof is not particularly relevant to practical real-world performance. In principle, they could make this reliable in typical conditions (setting aside non-ideal conditions or active attempts to confuse it) to some level of 9's. In practice it is a difficult problem and my guess is that dancing cars, especially when everybody is at a stop, are here to stay.

This might be a good time to point out that EAP scared the crap out of me this morning when it decided that a vehicle in the neighboring lane was suddenly in my lane and slammed on the brakes. I really hope they can fix this somehow.
You're just making up stuff you can't possibly know. Sure, all sorts of things are possible. But insisting on pretty pictures for humans, and on their having a meaningful connection to the probability clouds the software operates on internally, is just your private fantasy.

If the pretty pictures for humans get nicer it will make me happy, but it's quite meaningless.
 
  • Like
  • Disagree
Reactions: ABC2D and rnortman