Tesla has a front radar for TACC, but the main sensors Tesla uses for autonomous driving are the 8 cameras. Tesla's entire autonomous driving approach is based on training computer vision to understand the world around the car. This contrasts with the "lidar approach," which is shorthand for the other autonomous driving strategy: it also uses cameras and radar, but relies on lidar as a primary sensor for vision.
Excellent short synthesis.
I don't think any user is concerned with which method is used to get the results, as long as the system delivers them.
But unfortunately, the very first step of automation, the TACC approach in the Tesla system quoted above, is in fact vastly inferior to the common, cheap systems fitted to other cars, because of the too-frequent phantom braking problem. That problem remains largely unsolved, as everyone on TMC knows, yet too many have decided to ignore what it implies.
And given that the adaptive behavior cannot be switched off, as it can in most cars with adaptive CC, if you don't want the stress of waiting for a phantom brake with your foot hovering over the accelerator, you end up with a car that lacks even a simple CC.
Is it unreasonable to wait until the Tesla system described so neatly by @diplomat33 works as safely as a $1,200 plain Level 2 suite with adaptive CC and simple lane keeping, available in other cars, before comparing the different approaches? How can we talk about Autopilot if we cannot rely on safe straight-line automation?
Because if this is not achievable with the current low camera resolution and the excessive computing power (and the battery drain that comes with it), as some in the field claim, then perhaps a different layout and different components will be required, and theoretical ambitions will be killed by the reality of those limitations.