I've supported cameras for a long time, but now feel that cameras-only is the wrong direction. Radar can certainly be improved, and I myself have been saved from an accident by radar bouncing under the car ahead to detect the one in front of it.
Fully agree with this. I’m an engineer working in a different field, but with some experience in ML. The arguments I hear in favor of camera-only autonomy generally are as follows:
1. Humans only have eyes, so why shouldn’t cars be able to drive with just cameras?
2. This is now just a software problem; the sensors aren't the hard part.
3. Teslas have more cameras than people have eyes, so they should reach super-human performance at driving tasks.
I find none of these arguments convincing, especially the idea that AI is just a software problem that needs some more data.
I think Tesla is headed towards a more feature-rich Level 2 ADAS that will still require human supervision at all times. Personally, I think that's the worst of both worlds, if not dangerous.
To get to autonomy, you need a system that can drive without human interaction in some geographically or weather-constrained scenarios. I don’t see that happening without multiple redundant sensing modalities.
Mobileye, for example, is training independent systems that drive using cameras alone and using radar+lidar. This gives them the ability to hand over control if the cameras face a situation where they can't perceive well. Still a hard problem, but at least it seems plausible.
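To make the handover idea concrete, here's a toy arbiter between two independent perception channels. Everything below is hypothetical (the subsystem names, the confidence scores, the 0.7 threshold); it's a sketch of the redundancy concept, not Mobileye's actual design:

```python
# Toy sketch of redundant-perception arbitration (illustrative only).
# Two independent subsystems each report a detection plus a self-assessed
# confidence; an arbiter decides which one drives, or falls back to a
# minimal-risk maneuver if both channels degrade.

from dataclasses import dataclass

@dataclass
class Perception:
    source: str        # e.g. "camera" or "radar+lidar"
    obstacle_m: float  # distance to nearest obstacle, metres
    confidence: float  # 0.0 (blind) .. 1.0 (fully confident)

def arbitrate(camera: Perception, radar_lidar: Perception,
              threshold: float = 0.7) -> str:
    """Pick the subsystem allowed to control the vehicle."""
    if camera.confidence >= threshold:
        return camera.source
    if radar_lidar.confidence >= threshold:
        return radar_lidar.source       # hand over when cameras degrade
    return "minimal-risk-maneuver"      # neither channel is trustworthy

# Example: cameras blinded by glare, radar+lidar still confident.
cam = Perception("camera", obstacle_m=40.0, confidence=0.3)
rl = Perception("radar+lidar", obstacle_m=38.5, confidence=0.9)
print(arbitrate(cam, rl))  # -> radar+lidar
```

The point of the independent channels is that either one can run the vehicle on its own in the constrained scenarios mentioned above, so a single degraded modality doesn't force an immediate human takeover.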
I think Tesla is just pushing forward with the only path available to them that will keep the hype train moving.