By the way, the paper "Safety First for Automated Driving" (2019) on page 47 says the following about sensor redundancy:
"As of today, a single sensor is not capable of simultaneously providing reliable and precise detection, classifications, measurements, and robustness to adverse conditions. Therefore, a multimodal approach is required to cover the detectability of relevant entities. In more detail, a combination of the following technologies shall provide suitable coverage for the given specific product:
CAMERA Sensor with the highest extractable information content as sensor captures visible cues similar to human perception. Main sensor for object/feature type classification. Limited precision in range determination, high sensitivity to weather conditions.
LIDAR High-precision measurement of structured and unstructured elements. Medium sensitivity to environment conditions.
RADAR High-precision detection and measurement of moving objects with appropriate reflectivity in radar operation range, high robustness against weather conditions.
ULTRASONIC Well-established near-field sensor capable of detecting closest distances to reflecting entities.
MICROPHONES Public traffic uses acoustic signals to prevent crashes and regulate traffic, e.g. on railway intersections. Thus, devices capturing acoustic signals are required for automation levels where the systems need to react to these."
So a collaboration between engineers from Aptiv, Audi, Baidu, BMW, FCA, Continental, HERE, Infineon, Volkswagen, Daimler, and Intel wrote that no single sensor (such as a camera-only setup) is sufficient, and that an autonomous vehicle needs multiple sensor modalities, including cameras, lidar, radar, ultrasonics, and microphones.
I believe this multimodal approach is basically an industry standard for autonomous driving.
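As a toy illustration of why redundancy helps (my own sketch, not from the paper, and the sensor names and confidence values are made up): if sensors fail roughly independently, a naive late-fusion scheme can combine per-sensor detection confidences so that the fused confidence stays high even when one modality degrades, e.g. the camera in heavy rain.

```python
# Toy sketch of naive late fusion over independent sensor modalities.
# Assumes independence of sensor misses, which real AV stacks do not;
# this is only to illustrate the redundancy argument.

def fuse(confidences):
    """Combine per-sensor detection confidences under an independence
    assumption: P(all miss) = product of (1 - c_i), so the fused
    confidence is 1 - that product and rises with each added modality."""
    p_all_miss = 1.0
    for c in confidences.values():
        p_all_miss *= (1.0 - c)
    return 1.0 - p_all_miss

# Hypothetical heavy-rain scenario: the camera degrades badly, lidar
# somewhat, but radar stays robust, so fused confidence remains high.
rain = {"camera": 0.3, "lidar": 0.6, "radar": 0.9}
print(round(fuse(rain), 3))  # 0.972 — far above any single sensor
```

The point of the sketch is just the qualitative one the paper makes: no single modality is reliable in all conditions, but their weaknesses are largely uncorrelated, so a multimodal suite covers each sensor's failure modes.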