Modern high-resolution lidars can see and detect everything a camera can, other than the state of a traffic light.
It's quite simple: lidar, cameras, and radar have different failure modes and different optimal operating conditions.
Lidar works in pitch darkness and in direct sunlight, and gives you direct 3D measurements of objects and their range, but it is sub-optimal in adverse weather (heavy rain, heavy snow, fog, mist).
Cameras have the best resolution and are great for semantics, but they fail in direct sunlight and low-light conditions and are sub-optimal in adverse weather.
Radar has no comparable hard failure mode: it can see in heavy rain, heavy snow, fog, mist, direct sunlight, and low-light conditions. Its only real weakness is resolution, which imaging radars address.
Here are the failure modes:
Lidar
- Adverse weather
- Object semantics
Camera
- Adverse weather
- Direct sunlight
- Low-light conditions
- Object detection and range accuracy
Radar
- Resolution (addressed by imaging radar)
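That complementarity can be sketched as a small lookup: for any set of conditions, check which sensors still have no degraded failure mode. A minimal sketch, where the condition names and the sensor-to-failure-mode table are illustrative, not from any real sensor spec:

```python
# Hypothetical failure-mode table: which conditions degrade each sensor.
FAILURE_MODES = {
    "lidar":  {"adverse_weather", "object_semantics"},
    "camera": {"adverse_weather", "direct_sunlight", "low_light"},
    "radar":  {"resolution"},  # largely addressed by imaging radar
}

def working_sensors(conditions):
    """Return the sensors whose failure modes don't overlap the current conditions."""
    return {s for s, fails in FAILURE_MODES.items() if not (fails & set(conditions))}

# Night with no street lights: the camera degrades, lidar and radar still work.
print(working_sensors({"low_light"}))                        # {'lidar', 'radar'}
# Night plus heavy rain: only radar remains.
print(working_sensors({"low_light", "adverse_weather"}))     # {'radar'}
```

The point of the table is that no single condition empties the set: some sensor always survives.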
Let's take a pedestrian at night, wearing all black, with no street lights. You need to detect not only the pedestrian but also their distance (range) and velocity.
The camera fails to see the pedestrian.
Lidar sees the pedestrian, classifies it as a pedestrian, and assigns it a distance and velocity.
Imaging radar sees the pedestrian, classifies it as a pedestrian, and assigns it a distance and velocity.
The driving policy navigates around the pedestrian.
Now take the same pedestrian at night, wearing all black, with no street lights, in adverse weather. Again you need the pedestrian's detection, distance (range), and velocity.
The camera fails to see the pedestrian.
Lidar fails to see the pedestrian.
Imaging radar sees the pedestrian, classifies it as a pedestrian, and assigns it a distance and velocity.
The driving policy navigates around the pedestrian.
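Both scenarios follow the same pattern: the driving policy acts on whichever detections survive. A minimal fusion sketch, where the names and the simple take-any-surviving-detection policy are illustrative assumptions (a real stack would associate and weight tracks across sensors):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str            # e.g. "pedestrian"
    range_m: float        # distance to the object, meters
    velocity_mps: float   # closing speed, meters per second

def fuse(camera: Optional[Detection],
         lidar: Optional[Detection],
         radar: Optional[Detection]) -> Optional[Detection]:
    """Redundant fusion: any single surviving sensor is enough to act on."""
    for det in (lidar, radar, camera):
        if det is not None:
            return det
    return None

# Night, adverse weather: camera and lidar fail, imaging radar still detects.
ped = fuse(camera=None, lidar=None,
           radar=Detection("pedestrian", range_m=35.0, velocity_mps=1.2))
print(ped.label)  # pedestrian -> the driving policy can navigate around it
```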
Because the failure modes are different, the chance that all three sensors miss that one person in the same instant is so low that it will essentially never happen; in probabilistic terms, the joint probability is vanishingly small.
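That claim is just independence of failure modes: if each sensor misses the pedestrian independently, the joint miss probability is the product of the individual ones. The per-sensor numbers below are made up purely for illustration:

```python
# Illustrative (made-up) per-sensor miss probabilities in the night + rain scenario.
p_miss_camera = 0.5   # nearly blind in the dark
p_miss_lidar  = 0.3   # degraded by heavy rain
p_miss_radar  = 0.01  # largely unaffected

# Assuming independent failure modes, all three miss together with probability:
p_all_miss = p_miss_camera * p_miss_lidar * p_miss_radar
print(p_all_miss)  # 0.0015 -- far lower than any single sensor alone
```

The caveat is the independence assumption: if some condition degraded all three sensors at once, the probabilities would no longer simply multiply.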