[Diagram that implies equal-vote participation of the Camera World Model and the Radar/Lidar World Model]
...In the ME system, I think the idea is that the system only fails if both parts fail at the same time. So it is not dependent on the weakest link. The car can still drive if the cameras fail. The car can still drive if the radar or lidar fails. ...
This is certainly interesting, but I believe it's an overstatement (by Mobileye) of their redundancy approach. I find it somewhat questionable that they would continue to drive the car for any period of time (beyond a safety pull-over or road-exit maneuver) without all sensors functioning, and highly questionable in the case of a camera-vision failure in particular.
First, Radar: There are indeed credible arguments for the benefits of radar as an adjunct to vision, but the uniquely helpful velocity information it provides is very poorly tied to specific objects unless fused with a much higher-resolution sensor. This is quite definitely the case with standard low-cost radar equipment such as Tesla employ(ed), and it remains true even with more advanced radar arrays: higher resolution, but not nearly sufficient to "see" well enough to drive. When successfully fused with camera or high-resolution lidar output, however, it makes sense.
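To make the fusion point concrete, here is a minimal sketch of the association step it implies: each radar return carries an excellent radial (Doppler) velocity but only a coarse bearing, so it must be attached to a camera detection before it says anything about a specific object. All class and field names here are illustrative assumptions, not any vendor's format; real systems associate in 3D with tracking and uncertainty, not a simple bearing test.

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    azimuth_deg: float          # coarse bearing of the return
    range_m: float
    radial_velocity_mps: float  # the uniquely useful Doppler measurement

@dataclass
class CameraDetection:
    label: str
    azimuth_min_deg: float      # horizontal extent of the bounding box, as bearings
    azimuth_max_deg: float

def fuse(returns, detections):
    """Attach each radar return to a camera detection whose bearing span
    contains it; returns with no matching detection stay unassociated,
    i.e. they carry velocity that belongs to no known object."""
    fused = {id(d): [] for d in detections}
    unmatched = []
    for r in returns:
        hits = [d for d in detections
                if d.azimuth_min_deg <= r.azimuth_deg <= d.azimuth_max_deg]
        if hits:
            # A coarse return can fall inside several boxes; a real tracker
            # would resolve this ambiguity, here we just take the first.
            fused[id(hits[0])].append(r)
        else:
            unmatched.append(r)
    return fused, unmatched
```

The `unmatched` list is the crux of the argument above: without a higher-resolution sensor supplying the detections, every return would land there.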
So next we consider the all-important Lidar foundation of the right-side World View. Lidar, in good weather, can produce those famously impressive 3D point clouds, and I'd grant that they'll be accurate with far higher precision and confidence than the emerging video-lidar proxy method (how necessary that precision is remains a key point of contention). But I think the false-color, photo-like representations we see in some Lidar-promoting literature are misleading. Lidar gets the correct 3D XYZ values for each surface point in its World View, but it knows little else about the surfaces. Imagine shrink-wrapping everything around you in matte grey plastic film: you don't see colors, or even a "Black & White TV" version of colors; you don't see lights; you can't read signs (except in special and unreliable-for-use conditions). It's better than nothing if the cameras fail, but you don't have a chance of driving safely for long this way. The reverse is not true: cameras alone, with the right software, provide an extremely good chance of success (how good, whether it's good enough in theory, and whether Tesla's cameras are sufficient in practice, this becomes the debate). Even Mobileye's website says:
"the camera subsystem is the backbone of the AV, while the radar-LiDAR subsystem is added to provide enhanced safety and a significantly higher mean time between failures (MTBF)."
IMO this is a critical footnote that belies the equal-capability implication of the diagram and of the claim
"An AV that can drive on radar/LiDAR alone"
(noting that, just above this, they say "a development AV").
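The "shrink-wrap" limitation can be stated concretely as a matter of what one sensor return actually contains. The sketch below is illustrative only (assumed field names, not any vendor's data format): a lidar point is geometry plus a single reflectivity scalar, while even one camera pixel carries the color information from which software recovers light states, sign text, and lane paint.

```python
from dataclasses import dataclass

# Roughly what one lidar return carries: position plus a single
# reflectivity scalar -- the "matte grey shrink-wrap" view.
@dataclass
class LidarPoint:
    x: float          # metres, sensor frame
    y: float
    z: float
    intensity: float  # one number of surface reflectivity

# Roughly what one camera pixel carries: color, from which software can
# recover traffic-light state, sign text, lane markings, brake lights, etc.
@dataclass
class CameraPixel:
    r: int
    g: int
    b: int

def lidar_observables(p: LidarPoint) -> dict:
    """Everything downstream software can know about this surface point
    from lidar alone."""
    return {"position": (p.x, p.y, p.z), "reflectivity": p.intensity}
    # Note what is absent: no color, no light state, no readable text.
```

However dense the point cloud, `lidar_observables` never gains a "color" or "light state" key, which is why driving on it alone for long is implausible.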
The fair conclusion is that a system (whether Tesla's, Wayve's, or Mobileye's own AV) might well become good enough on camera vision alone, and might be better, albeit more costly, with Lidar/Radar added, but it cannot operate solely on the latter equipment set as it exists today.