beatle
Active Member
That's actually what Electrek said, in a way. But relying solely on vision makes the system dependent on a single type of sensor for every decision.
As we previously reported, the idea of moving to only cameras using computer vision is that the only known system that can drive right now is the human brain. It relies on input from human eyes, which are closer to cameras than anything else.
With cameras all around the vehicle covering different fields of view, Tesla can achieve better coverage than human vision, and the problem reduces to solving computer vision, which the automaker believes it is on the way to doing.
That's great if cameras CAN actually do everything well enough. They're good enough for humans because that's all we've got. It seems narrow-minded to assume vision alone is the way to do it. As people have already stated in this thread, other tools for "seeing" the road have their own unique advantages. My guess is Tesla is tired of trying to reconcile the conflicting data from the different sensors and is just punting to cameras only.
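For anyone curious what "reconciling" sensor data actually looks like, here's a toy sketch (not Tesla's actual stack, just the textbook idea): when radar and a camera disagree about the range to the car ahead, a classic approach is to average the two estimates weighted by how noisy each sensor is (inverse-variance weighting, the one-dimensional version of a Kalman update). All the numbers below are made up for illustration.

```python
# Toy example: fuse two noisy range estimates by inverse-variance weighting.
# The less noisy sensor (smaller variance) gets the larger weight.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Return the inverse-variance weighted average of two estimates
    and the variance of the fused result."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical readings: radar says 50 m (std dev 0.5 m, it's good at range),
# the camera's depth estimate says 55 m (std dev 2.0 m, much noisier).
dist, var = fuse(50.0, 0.5 ** 2, 55.0, 2.0 ** 2)
print(f"fused distance: {dist:.2f} m (variance {var:.3f})")
```

The catch, and maybe why Tesla is "punting": this only works when your noise models are honest. When one sensor isn't just noisy but plain wrong (radar ghost returns, camera blinded by glare), a weighted average confidently splits the difference between truth and garbage, and deciding which sensor to trust becomes its own hard problem.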
This short post has some good info on the different sensors and their pros/cons: What's Best for Autonomous Cars: LiDAR vs Radar vs Cameras