Redundant subsystems can be used to improve overall system reliability.
I don’t disagree in general. But in a Waymo, if the lidar or camera system goes down, do you think the car would keep driving? I bet it immediately goes into an “I need to park” failure mode.
Tesla has 8 cameras primarily to be able to see all angles. However, with 2 forward-looking cameras (with different focal lengths) and a forward radar, that’s enough redundancy to go into a “pull over and park” mode if a camera or the radar fails.
Rumble strips only exist because of human inattention and sleepiness, a uniquely human problem that doesn’t affect FSD.
But does LiDAR potentially provide useful data that cannot reliably be discerned by vision alone? Even with a very good vision system, I think the answer is likely "yes", particularly at night. Therefore, it makes little sense, in my opinion, for anyone to be categorically opposed to integrating LiDAR and/or HD maps when and if they're available, economical, and can be packaged efficiently.
If it were economical and could be packaged efficiently, sure. However, in the longer term, these systems will be competing for customers based on cost. A lidar + vision system will always cost more than vision alone. Cameras are cheap. Even if lidar gets to be as cheap as a camera, it’s still twice the cost of a camera alone.
More systems also require more computing power, which means more powerful hardware and higher cooling and energy requirements. More cost.
Precision maps are great when they’re right, but if there’s construction and an unmapped area, you now have disagreement between the map and reality. What resolves that discrepancy? How do you shuttle data for precision maps from a data center to the car?
For someone driving from LA to New York, that’s a lot of data transfer. More cost, more energy usage. I think this is another reason why Waymo operates in limited regions. To be able to drive generally from location A in one state to location B in another, you’d have to transfer many gigabytes, perhaps terabytes of data to the car.
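A rough sketch of the arithmetic behind that claim. The per-kilometer map density here is purely an assumption for illustration; real HD-map sizes vary widely depending on compression and what layers (lane geometry, point clouds, semantics) are included:

```python
# Back-of-envelope estimate of HD-map data for a cross-country route.
# MB_PER_KM is an assumed figure for illustration only, not a published spec.

ROUTE_KM = 4500    # rough LA -> New York driving distance
MB_PER_KM = 50     # assumed HD-map density (illustrative)

total_mb = ROUTE_KM * MB_PER_KM
total_gb = total_mb / 1024

print(f"~{total_gb:.0f} GB of map data for a {ROUTE_KM} km route "
      f"at an assumed {MB_PER_KM} MB/km")
```

Even at this modest assumed density the total lands in the hundreds of gigabytes; at point-cloud-level densities it would plausibly reach terabytes, which is the scale the comment above is gesturing at.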
This is why Musk believes LIDAR is a crutch. Yes, it adds redundancy and “helps”. But ultimately, to handle the edge cases, you need vision to work on its own anyway. Nice to have, but it adds cost and complexity when vision is necessary regardless. Musk is successful because he’s able to boil complex problems down to the simplest approach.
His mindset was “attentive humans can drive pretty well with just 2 cameras that can only look in one direction at a time, and situated inside the car behind the steering wheel”. So 8 cameras looking in all directions, plus a radar, is a superhuman perception system.
All in all, the true level 5 systems of the future will be competing on cost.
My overall feeling is that Waymo has adopted a "kitchen sink" approach to solving autonomy - spare no expense on hardware, whatever it takes to achieve maximum reliability as early as possible. I think they've been very successful relative to their own goals, and if I were in a Waymo service area, I'd feel quite comfortable riding in a Waymo vehicle (except for the fact that they use ICE Pacificas - yuck!).
Tesla, on the other hand, has a "hardware lite" approach - keep the hardware as simple and unobtrusive as possible, and work wonders in software. This approach involves compromises, but it seems like the only reasonable approach for a mass-market, relatively affordable product in the not-so-distant future. It may not be perfect, but we hope and expect that it will be more than good enough.
Agree with you here on all points. No system will be perfect. But I believe that a pure vision-based system can still be several orders of magnitude safer than human drivers.