Why nearby cars jump around on the Tesla screen

DrGary · New Member · Nov 4, 2018 · USA
Is there any official information on why nearby cars jump around on the Tesla display?

As an engineer, I speculate that it's due to noisy sensors, primarily the noisy distance estimates of camera and ultrasonic sensors. I notice that the cars in front of my car are rendered much more stably, likely due to the accuracy of the forward-looking radar. But cameras have a much harder time estimating sizes and distances. And ultrasonic sensors are just noisy because of all of the signal reflections on multiple paths.

But that's not the complete story, because it doesn't explain why the nearby cars are rendered in a way that includes the sensor noise. They could use filtering techniques to make a best guess of each car's location despite the noise. Or they could add physics constraints that say, for example, that cars don't jump 3 feet in a fraction of a second, don't bounce out of their lanes, and don't overlap with other cars.
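To make that concrete, here's a toy sketch of the kind of filtering I have in mind (purely illustrative, nothing to do with Tesla's actual code; the function name and numbers are made up): a constant-velocity alpha-beta tracker, which is a simplified Kalman filter, smoothing a noisy lateral distance to a neighboring car.

```python
import random

def alpha_beta_track(measurements, dt=0.1, alpha=0.3, beta=0.05):
    """Smooth noisy 1D position readings with a constant-velocity
    alpha-beta tracker (a simplified Kalman filter)."""
    x_est, v_est = measurements[0], 0.0      # start at the first reading, zero velocity
    smoothed = []
    for z in measurements:
        x_pred = x_est + v_est * dt          # where the car should be if it kept its velocity
        residual = z - x_pred                # how far the new reading disagrees
        x_est = x_pred + alpha * residual    # nudge the position toward the reading
        v_est = v_est + (beta / dt) * residual
        smoothed.append(x_est)
    return smoothed

# A car sitting 3.0 m to our side, measured with roughly +/-0.5 m of sensor noise
raw = [3.0 + random.uniform(-0.5, 0.5) for _ in range(50)]
print(alpha_beta_track(raw)[-5:])            # estimates hover much closer to 3.0 than the raw data
```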

The problem with these techniques is that they'll introduce lag or other errors.

Lag: The noisy sensor data is still the input to the physics model; the model just updates more sensibly as new data arrives. But that means it takes more data around a given spot before the model moves to that location: that's lag. It would make the nearby cars appear sluggish. When the cars start accelerating from a stoplight, for example, the display would take some time to show them moving. That could be dangerous if a driver used the display to decide to change lanes.
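Here's the lag effect in the same toy style, again a hypothetical illustration with made-up numbers: run a simple exponential smoother over a car that sits at a stoplight for two seconds and then accelerates, and the filtered position trails the true one.

```python
def smooth(measurements, gain=0.2):
    """Exponentially weighted average: a small gain means smooth output but more lag."""
    est = measurements[0]
    out = []
    for z in measurements:
        est += gain * (z - est)              # move only part-way toward each new reading
        out.append(est)
    return out

# Car stopped for 2 s, then accelerating at 3 m/s^2, sampled every 0.1 s
positions = [0.0] * 20 + [0.5 * 3.0 * (0.1 * t) ** 2 for t in range(1, 21)]
est = smooth(positions)
print(f"true position after 2 s of acceleration: {positions[-1]:.1f} m")
print(f"filtered estimate at the same moment:    {est[-1]:.1f} m")   # trails well behind
```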

Other errors: Because of the noisy data, you can't be sure where to apply the constraints. If you say, for example, that two cars shouldn't overlap, which car should you move? One, the other, or both? Each choice adds information that isn't present in the sensor data, which can introduce additional errors, the opposite of the original intention.
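A toy sketch of that ambiguity (hypothetical numbers, not anything from the actual system): a de-overlap constraint has to pick how to split the correction between the two cars, and any split is information the sensors never gave us.

```python
def deoverlap(x_a, x_b, width=1.9, split=0.5):
    """Push two overlapping lateral estimates apart.
    split=0.0 moves only car A, split=1.0 moves only car B, 0.5 moves both;
    the sensor data doesn't tell us which choice is right."""
    gap = abs(x_b - x_a) - width
    if gap >= 0:
        return x_a, x_b                      # no overlap, nothing to do
    push = -gap                              # total extra separation needed
    direction = 1 if x_b >= x_a else -1
    return (x_a - direction * push * (1 - split),
            x_b + direction * push * split)

# Two noisy lateral estimates that overlap by 0.4 m:
print(deoverlap(0.0, 1.5, split=0.5))        # move both: (-0.2, 1.7)
print(deoverlap(0.0, 1.5, split=1.0))        # move only the second car: (0.0, 1.9)
```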

Tough problem. The above may provide some reasons why the cars dance around the Tesla screen. Other ideas?
 
Supposedly, the UI is using partially processed output from the neural net (i.e. from only the first X layers of processing, not the full set of layers), not even its finished output, so that is why the visualization is so jittery and buggy.
 
I suspect that the main culprit is the display routines. Since they are just showing it to you, not actually using it for safety, they aren't spending much time on it.
Also, keep in mind that I believe most of what you are seeing on the screen is coming from the camera algorithms, not the other sensors.

I'd also be surprised if the full object rendering isn't adding issues. Cars come in all sizes, but appear as only one size on the screen. For safety, you really care about the plane of the car next to you, but the algorithm has to calculate a centroid to place the icon. That's a lot of extra work that doesn't necessarily increase safety.
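Roughly what I mean, as a made-up sketch (the function names and numbers are mine, not Tesla's): safety mostly needs the gap to the nearest face of the neighboring car, which is measured fairly directly, while placing an icon needs a centroid that depends on a guessed vehicle size.

```python
def nearest_side_gap(my_lateral, other_near_edge):
    """All that collision avoidance really needs: the gap to the closest face."""
    return abs(other_near_edge - my_lateral)

def render_centroid(near_edge, far_edge_estimate):
    """What the display needs: a centroid, which depends on an estimated far edge,
    i.e. on guessing how big the vehicle is."""
    return (near_edge + far_edge_estimate) / 2.0

print(nearest_side_gap(0.0, 2.1))            # 2.1 m, fairly stable
print(render_centroid(2.1, 2.1 + 1.7))       # centroid if the car is 1.7 m wide
print(render_centroid(2.1, 2.1 + 2.4))       # centroid if it's 2.4 m wide: the icon shifts
```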
 
Sometimes the simplest explanation is the best... buggy software. There is no reason this can't be resolved with the techniques the OP mentioned. My best guess is that the driving system and the visualization are two somewhat independent subsystems, probably separated for safety reasons.
 
The new neural network is massively more complex than the old one. The direct effect is that the car doesn't get full updates as often as before. I expect Tesla will push people to upgrade to the new chip (hopefully) next year to alleviate that with a real solution.

In the meantime the user interface has to cope. The full, correct neural network output would give very choppy updates (imagine only seeing the cars move every 2-5 seconds), so Tesla did various things to give us much more interactive graphics.

First, most cameras are used at a lower resolution. This obviously has a slightly negative impact on the results, especially outside the radar's field of view.

Second, the user interface uses less processed data from the cameras. That again means lower accuracy, but the GUI updates much more often.

To remember:

Version 9 of the software is a foundational change, a change at the start of the data-processing pipeline. Now that it has started to use all of the cameras, there is a lot more data available to the driving software.
This is all foundational, not really visible to the user, but as essential as a good foundation under a house is before you add a level or two.

I expect that over the next year we'll see the actual user-visible changes come much faster than in the last two years: the foundation is mostly ready, so focus will start shifting to the more user-visible parts.
 
To me, the videos verygreen has been posting look more stable in the raw AP data than what we see on the screen. What's on the screen is just an approximation of that. It needs more work, but it probably hasn't been a priority since it's just informational.

Cars come in all sizes, but appear as only one size on the screen. For safety, you really care about the plane of the car next to you, but the algorithm has to calculate a centroid to place the icon. That's a lot of extra work that doesn't necessarily increase safety.

Should be easy to fix by just scaling the representations.
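Something like this toy sketch (the scale factor and clamp range are my own guesses, not anything from Tesla's UI): scale the icon by the estimated vehicle length, clamped so a noisy size estimate doesn't make the icon pulse.

```python
def icon_length_px(estimated_length_m, px_per_m=12.0, min_m=3.5, max_m=6.5):
    """Scale the rendered car icon by the estimated vehicle length,
    clamped to a plausible range so size noise doesn't make it pulse."""
    clamped = max(min_m, min(max_m, estimated_length_m))
    return round(clamped * px_per_m)

print(icon_length_px(4.7))   # typical sedan -> 56 px
print(icon_length_px(6.2))   # long pickup   -> 74 px
print(icon_length_px(9.0))   # noisy outlier -> clamped, 78 px
```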
 
I've been wondering whether the dancing, bumper-car behavior on my screen was due to a calibration issue or just early programming efforts still to be refined. Interesting to read the thread.

BTW, the most interesting screen traffic rendering I've had, the one that made me LOL and check the rear camera, was this one:

[Attachments: IMG_9434.JPG, IMG_9433.JPG]
As you can tell, it was a rainy day with rain drops apparently on the rear camera, and it was having trouble deciding what to show. If this had been on 10/31, I might have wondered whether I was being stalked by Jason or Freddie, who'd be there one second and gone the next.
 