When an adjacent car is moving, it is generally pretty easy for the neural net to figure out which direction it is heading, because the cameras can measure its relative speed and direction, and cars generally only move forward. So it assumes every moving car is pointed in its direction of travel, and everything looks normal. When you and the cars around you are stopped, it becomes very difficult to tell whether a car is moving very slowly or whether that's just error in its speed/velocity estimate. Without a speed or direction vector, the neural net has to guess which way the cars beside you are oriented based on history and visual cues, and the rendered cars jump around while it guesses.
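To make the ambiguity concrete, here's a rough sketch (not Tesla's actual code; the threshold and function names are made up for illustration) of heading-from-motion estimation. Above a noise floor, the displacement between two observations gives you a direction vector, and "cars only move forward" turns that into an orientation; below it, real creeping motion and measurement error look identical.

```python
import math

# Assumed noise floor, purely illustrative: displacements implying less
# than this speed can't be distinguished from measurement error.
SPEED_NOISE_FLOOR = 0.5  # m/s

def estimate_heading(prev_pos, curr_pos, dt):
    """Return heading in radians, or None if the car is effectively stopped.

    Positions are (x, y) in meters; dt is the time between observations.
    """
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    speed = math.hypot(dx, dy) / dt
    if speed < SPEED_NOISE_FLOOR:
        return None  # ambiguous: could be real motion or just estimation error
    # Cars generally only move forward, so velocity direction = orientation.
    return math.atan2(dy, dx)

# A car creeping forward looks the same as a stationary one:
print(estimate_heading((0.0, 0.0), (0.01, 0.0), 0.1))  # None
print(estimate_heading((0.0, 0.0), (1.0, 1.0), 0.1))   # ~0.785 rad (45 degrees)
```

The `None` branch is exactly the situation described above: with no usable direction vector, the system has to fall back on visual cues and history.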
The system used to pretty much assume everyone was going the same direction as you, since it was mostly tuned to highway driving and never needed to render a car at 90 degrees (so the cars moved around a bit but all stayed pointed the same way). I believe that since Tesla is now working hard on improving in-town AP, they are training the net to determine the orientation of other cars more strongly from visual cues. As of a couple of updates ago, you'll now see cars moving 90 degrees to you on the screen at intersections, and you can watch the car in front of you go around a turn. As a consequence, the cars beside you may now rotate 360 degrees on screen while the net guesses their orientation with no direction vector to fall back on. I would expect this to improve pretty rapidly with time.
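One likely reason for the on-screen spinning (this is my speculation, not anything Tesla has published) is that per-frame orientation guesses are noisy and angles wrap around at 360 degrees, so a naive average of guesses near 0/360 can swing wildly. A simple smoothing sketch that blends each new guess in via the shortest angular difference:

```python
def blend_angle(current, guess, alpha=0.2):
    """Move `current` toward `guess` by fraction `alpha`, in degrees,
    taking the shortest way around the circle (359 -> 1 is +2, not -358)."""
    diff = (guess - current + 180.0) % 360.0 - 180.0
    return (current + alpha * diff) % 360.0

# Noisy per-frame guesses for a car actually pointed near 0 degrees:
est = 350.0
for frame_guess in [355.0, 2.0, 358.0, 5.0]:
    est = blend_angle(est, frame_guess)
# est drifts smoothly toward 0/360 instead of spinning the long way around
```

The `alpha` value and degree convention are arbitrary choices for the sketch; the point is just that without wraparound handling, a rendered car hops between orientations instead of settling.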
All of this is baby steps toward figuring out the much more complex cityscape.
And safety-wise, when you aren't moving, all the car has to do is not hit the car in front of it, which is largely handled by the radar. It knows there are cars all around you as well, but isn't confident which way they are pointed and thus which way they will go next; until one of them moves, though, it doesn't have to worry about them. Once a car moves, it can do the math, recover its direction vector, and all is well.