Crazy/Jittery Cars on Blindspot Monitor?

Is it normal for the cars on the display to kind of "freak out" and spin around/flutter all over the place? Are my sensors OK?

It also thinks there are four motorcycles stacked on top of each other in my garage. It does some funky things.
 
Yeah, that's pretty normal, at least when you are stopped, although I've noticed that with more recent updates it's a bit more stable than it used to be.

In the garage it thinks my wife's Bolt is a semi-truck!
 
It’s the car’s current object detection capability. Should be fixed before it becomes a robotaxi as it won’t be able to drive by itself if it thinks the real world is full of frantically spinning and teleporting vehicles.
 
It's "normal," in that we all seem to suffer from that affliction... It's "not normal" in that it really shouldn't be happening. Tesla has yet to release an update to address this, but we continue to get regular updates with new games and other parlor tricks!
 
My M3 did not do this until it spent a couple of days at the repair facility a few weeks ago. The cars on the display jittered a little before, but I never saw them spin and dance around like they do now. To quote a passenger new to Tesla: "Jesus." It's not something Tesla should be proud of, and it needs fixing if they want to rely on word-of-mouth for advertising.
 
I think it's more of a visualization problem than an inherent problem with how the car decides to act upon its environment. Basically, the object recognition has a harder time determining the orientation of stationary vehicles, and since both you and the other cars are stationary, the neural net isn't going to decide to steer around a car that looks like it's pointed in your direction (but not moving). Even at slow speeds, when the cars are still jittery, the path planning doesn't take any adverse action. Actual closing-speed data showing the adjacent vehicles on a non-intersecting path would seem to carry more weight than the object recognition "seeing" a sideways car for a frame or two.

That said, it would be nice if the visualization software would "smooth" the movement somewhat so the cars looked more stable. And at times it seems like this has been done: when I first got my car, adjacent cars were always shaking; in later s/w updates they were rock solid, only for the shaking to come back in updates after that.
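For what it's worth, the smoothing I have in mind is nothing exotic, just a low-pass filter on each rendered car's pose. Here's a rough Python sketch of the idea; the class, factors, and units are all my own invention, not anything from Tesla's actual pipeline:

import math

class SmoothedObject:
    """Low-pass filter on a detected vehicle's pose before rendering it."""

    def __init__(self, alpha_pos=0.2, alpha_heading=0.1):
        self.alpha_pos = alpha_pos          # smoothing factor for x/y position
        self.alpha_heading = alpha_heading  # heavier smoothing for heading
        self.x = self.y = self.heading = None

    def update(self, x, y, heading):
        """Blend a new raw detection into the smoothed pose."""
        if self.x is None:                  # first detection: take it as-is
            self.x, self.y, self.heading = x, y, heading
        else:
            self.x += self.alpha_pos * (x - self.x)
            self.y += self.alpha_pos * (y - self.y)
            # wrap the heading error to (-pi, pi] so one noisy frame that
            # flips the car 180 degrees doesn't spin it on screen
            err = math.atan2(math.sin(heading - self.heading),
                             math.cos(heading - self.heading))
            self.heading += self.alpha_heading * err
        return self.x, self.y, self.heading

The heavier damping on heading is the point: one noisy frame that flips a parked car 180 degrees shouldn't make it pirouette on screen.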
 
From what I understand (based on NN demos), the NN assigns probabilities to what it detects and does so several times a second.

I’m less concerned about the random rotation from a safety/practical POV (though I do think it’s a massive mistake and lapse of judgment by the responsible people at Tesla to release something that looks this broken in their marquee feature).

I’m more concerned that there seems to be no SW layer that intelligently interprets the NN output.

The visualization suggests that there’s no plausibility filtering and the car simply renders (and likely uses) whatever the NN spits out. This is especially annoying when an object’s probability hovers around the visualization threshold (or the detection threshold, who knows). Then cars pop in and out of existence or morph from car to truck and back.

I’m most concerned that some of the decisions the car makes are based on spurious, badly filtered NN output: phantom braking, collision alerts for objects that don’t exist in reality (and vice versa), and poor continuation of broken lane markers. That last one is why I don’t even use AP anymore; it’s too dangerous/stressful on Dallas highways, which apparently aren’t as cleanly marked as the ones that people on this forum use with NOA.

So I don’t think it’s only a visualization problem. Right now my Model 3, at least, makes decisions based on flawed NN vision.
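To be concrete about what I mean by plausibility filtering: even a dumb hysteresis gate would stop objects flickering around the threshold. A toy Python sketch; the 0.6/0.4 thresholds are numbers I made up, not anything Tesla has published:

class HysteresisGate:
    """Render an object once confidence rises above SHOW; only stop
    rendering it when confidence falls below the lower HIDE threshold."""

    SHOW = 0.6   # confidence needed to start rendering an object
    HIDE = 0.4   # confidence below which a rendered object is dropped

    def __init__(self):
        self.visible = False

    def update(self, confidence):
        if not self.visible and confidence >= self.SHOW:
            self.visible = True
        elif self.visible and confidence < self.HIDE:
            self.visible = False
        return self.visible

With a single 0.5 cutoff, confidences of 0.51, 0.49, 0.52, 0.48 would flicker the object every frame; with the two-threshold gate it stays in one state the whole time.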
 
I'm 100% with you on this. Who cares what the car is doing/thinking in the background? The visualization of where the computer calculates the positions of the cars should employ a layer of basic logic, even if that makes it a little "late." On the road they could trade stability for timing, but when you're at a traffic light or in a parking lot, the jitter either needs to not be there, or the display should apply some logical assumptions about the positioning of objects/vehicles, along the lines of the sketch below.
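Gating the smoothing on ego speed would already go a long way. Another toy Python sketch; the speeds and factors are my guesses, and the returned alpha is meant to feed a pose smoother like the one posted earlier:

STOPPED_MPS = 0.5   # below this ego speed, treat the scene as "at a light"

def visualization_alpha(ego_speed_mps):
    """Pick a smoothing factor for rendered objects based on context."""
    if ego_speed_mps < STOPPED_MPS:
        return 0.02   # nearly frozen: nothing should dance at a red light
    return 0.3        # moving: favor responsiveness over stability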
 
I am relieved to know that this isn’t an event isolated to my M3. Sitting at a stop light, sometimes a car in the other lane is jittering like it’s going to ram into me. It is a bit unsettling.

I no longer trust the display’s images of what is there and just ignore it. As for staying observant of what’s around me, I went old school and bought some small, cheap octave mirrors. They really help me monitor my blind spots.

I am hoping that Tesla has a future fix for this.
 
Sorry, not octave, blind spot mirrors. Need more coffee before posting.
 
IMHO this is indicative of a major software architectural design problem. Tesla seems to just be using the sensor data directly rather than building a model of the world which is updated by the sensors and which the car uses to make driving decisions.

My concern is that without the model, it's subject to errors like this: the car sees a truck in the opposite lane, then, when the truck illegally turns left across the car's lane, the truck disappears because the camera is fooled by lighting, etc. In a model-based system, the car would know that trucks don't simply vanish; in a sensor-only system, it's out of sight, out of mind.
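Concretely, a model-based tracker coasts an object forward for a few frames after the sensors lose it instead of deleting it on the spot. A toy Python sketch of the idea; all the names and numbers here are illustrative, not Tesla's design:

class Track:
    """One object the car believes exists, even between detections."""
    def __init__(self, obj_id, x, y, vx, vy):
        self.obj_id, self.x, self.y = obj_id, x, y
        self.vx, self.vy = vx, vy
        self.missed = 0                     # frames since the last detection

class WorldModel:
    MAX_MISSED = 10                         # coast ~10 frames before dropping

    def __init__(self):
        self.tracks = {}

    def step(self, detections, dt):
        """detections: {obj_id: (x, y, vx, vy)} seen this frame."""
        # Predict every known object forward, even if the camera lost it.
        for t in self.tracks.values():
            t.x += t.vx * dt
            t.y += t.vy * dt
            t.missed += 1
        # Correct with whatever the sensors actually saw this frame.
        for obj_id, (x, y, vx, vy) in detections.items():
            self.tracks[obj_id] = Track(obj_id, x, y, vx, vy)
        # Drop only objects that have gone genuinely unobserved for a while.
        self.tracks = {k: t for k, t in self.tracks.items()
                       if t.missed <= self.MAX_MISSED}
        return list(self.tracks.values())

The prediction step is what gives you object permanence: the left-turning truck keeps existing (and keeps being avoided) through a few frames of bad lighting, rather than vanishing the instant the camera loses it.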