After owning a Tesla for a couple of months and using Autopilot / Autosteer pretty much everywhere I can enable it, including some city streets, my overall impression is that the car is slow to react to its environment.
A simple example: when someone cuts you off, the car does nothing for a couple of seconds. Sometimes the cutting-in car has already moved on to the next lane by the time the Tesla finally reacts by braking, even though there is no longer any obstruction in front of it at all.
This is a very repeatable behavior of the car.
I wonder where the delay is and how Tesla could sort this out purely in software (with the existing camera / computer hardware).
Is it a delay in the cameras sending data to the FSD computer? Is it the FSD computer not processing and reacting to the data fast enough? Or do the control commands issued by the FSD computer take too long to execute?
Where is the delay in reaction time? And how could they fix this?
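To make my question concrete, here's a toy latency budget for a camera-based driving stack. All the stage names and millisecond figures are my own illustrative guesses, not measured Tesla numbers; the point is just that the total reaction time is the sum of several stages, and shaving one stage only helps if it's the dominant one:

```python
# Hypothetical end-to-end latency budget for a camera-based driving stack.
# Every stage and timing below is an illustrative assumption, NOT a
# measured Tesla figure.

STAGES_MS = {
    "camera_exposure_and_readout": 33,   # roughly one frame at ~30 fps
    "transfer_to_fsd_computer": 5,
    "neural_net_inference": 40,
    "object_tracking_and_planning": 50,
    "control_command_to_actuators": 20,
    "brake_actuation": 150,              # mechanical response of the brakes
}

def total_latency_ms(stages):
    """Sum the per-stage latencies into an end-to-end reaction time."""
    return sum(stages.values())

def dominant_stage(stages):
    """Return the stage that contributes the most latency."""
    return max(stages, key=stages.get)

if __name__ == "__main__":
    print(f"total: {total_latency_ms(STAGES_MS)} ms")
    print(f"dominant stage: {dominant_stage(STAGES_MS)}")
```

Even with made-up numbers like these, the whole pipeline adds up to a few hundred milliseconds, not multiple seconds. So if the delay I'm seeing is really 2+ seconds, my guess is it's not raw data transport at all, but something in the perception/planning logic, e.g. waiting for several consistent frames before committing to a reaction.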
I recall Elon recently discussing counting photons directly (instead of processing full camera frames) to shave a few milliseconds off data capture.
What are your thoughts on this?
For those with FSD Beta (which can be disabled on demand): which stack reacts faster to the environment, the Beta stack or non-Beta Autosteer? Is there a perceivable difference?