From what I understand (based on NN demos), the NN assigns probabilities to what it detects and does so several times a second.
I’m less concerned about the random rotation from a safety/practical POV (though I do think it’s a massive mistake and lapse of judgment by the responsible people at Tesla to ship something that looks so broken in their marquee feature).
I’m more concerned that there seems to be no SW layer that intelligently interprets the NN output.
The visualization suggests that there’s no plausibility filtering and that the car simply renders (and likely acts on) whatever the NN spits out. This is especially annoying when an object’s probability hovers around the visualization (or detection, who knows) threshold: cars pop in and out of existence or morph from car to truck and back.
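Just to illustrate what I mean by plausibility filtering, here’s a toy Python sketch of confidence smoothing plus hysteresis. All names and thresholds here are made up by me for the example; I’m not claiming this is anything like Tesla’s actual stack.

```python
class TrackedObject:
    """Hypothetical per-object state: smoothed confidence plus a visibility flag."""
    def __init__(self):
        self.conf = 0.0       # exponentially smoothed NN confidence
        self.visible = False  # whether we currently render/act on the object

def update(obj: TrackedObject, raw_conf: float,
           alpha: float = 0.3,
           appear_thresh: float = 0.6,
           disappear_thresh: float = 0.4) -> bool:
    """Smooth the raw per-frame confidence and apply hysteresis:
    an object has to climb above appear_thresh to show up, and fall
    below the lower disappear_thresh to vanish, so it can't flicker
    when the NN output hovers around a single cutoff."""
    obj.conf = alpha * raw_conf + (1 - alpha) * obj.conf
    if not obj.visible and obj.conf >= appear_thresh:
        obj.visible = True
    elif obj.visible and obj.conf <= disappear_thresh:
        obj.visible = False
    return obj.visible

# Example: raw confidences jittering around 0.5 no longer cause pop-in/pop-out
obj = TrackedObject()
for c in [0.55, 0.48, 0.52, 0.47, 0.65, 0.5, 0.45]:
    print(update(obj, c))
```

Even something this crude would stop the constant popping; a real filter would obviously also track position, class, and motion over time.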
What concerns me most is that some of the car’s decisions are based on spurious, badly filtered NN output: phantom braking, collision alerts for objects that don’t exist in reality (and vice versa), and poor continuation of broken lane markers. The last one is the reason I don’t even use AP anymore; it’s too dangerous/stressful on Dallas highways, which apparently aren’t as cleanly marked as the ones people on this forum use with NOA.
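By “continuation” I mean something as simple as fitting the marker segments you did detect and extrapolating across the gap, instead of letting the lane estimate collapse the moment the paint disappears. A toy sketch (again, my own made-up example, not how Tesla does it):

```python
import numpy as np

def continue_lane(xs: np.ndarray, ys: np.ndarray, ahead: np.ndarray) -> np.ndarray:
    """Fit a quadratic to the lane-marker points that were detected
    (xs = longitudinal distance, ys = lateral offset) and extrapolate it
    over the broken/missing stretch 'ahead'."""
    coeffs = np.polyfit(xs, ys, deg=2)   # least-squares quadratic fit
    return np.polyval(coeffs, ahead)     # predicted lateral offsets ahead

# e.g. markers detected for the first 30 m, extrapolated out to 50 m
xs = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
ys = np.array([1.8, 1.79, 1.77, 1.74, 1.70, 1.65, 1.59])
print(continue_lane(xs, ys, np.array([35.0, 40.0, 45.0, 50.0])))
```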
So I don’t think it’s only a visualization problem. Right now my Model 3, at least, makes decisions based on flawed NN vision.