dgatwood
Active Member
I disagree. More information is always better, and lidar provides additional information that cameras do not.
Actually, LIDAR provides much, much less information than cameras, and potentially with much lower accuracy. First, you have to fuse the LIDAR data with the camera data, and that fusion can fail to map the measured distances onto the correct camera pixels.
Second, LIDAR doesn't sample a scene instantly; the distance measurement can be accurate, but if you've moved three feet between when you sampled two points that happen to be near one another because of the scanning pattern, you now have to apply math to guess at what the actual shape of the object is.
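To put a rough number on that skew, here's a back-of-envelope sketch (my own hypothetical scan timing and speed, not anything from a specific sensor): how far the vehicle travels between two LIDAR samples of nearby points.

```python
# Back-of-envelope sketch of motion skew in a scanning LIDAR.
# The scan rate and speed below are illustrative assumptions, not
# the specs of any particular sensor or vehicle.

def skew_meters(speed_mps: float, sample_gap_s: float) -> float:
    """Distance the sensor platform travels between two samples."""
    return speed_mps * sample_gap_s

speed = 65 * 0.44704   # 65 mph converted to m/s (~29 m/s)
gap = 0.030            # assume ~30 ms between revisiting nearby points

print(round(skew_meters(speed, gap), 2))  # ~0.87 m, i.e. roughly three feet
```

With those assumed numbers, two returns that land next to each other in the point cloud were taken from positions almost a meter apart, which is exactly the shape-reconstruction problem described above.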
By contrast, with two camera images, you get a point cloud in which all points were effectively sampled at the same time. Ignoring the timing skew between the cameras (which should be roughly constant from frame to frame), that approach should give you higher accuracy, not lower, and thus much lower rates of false object detection, all else being equal.
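For what it's worth, the stereo depth calculation itself is just the classic pinhole triangulation relation. A minimal sketch, with made-up camera parameters (the focal length, baseline, and disparity below are illustrative, not from any real rig):

```python
# Minimal rectified-stereo depth sketch: Z = f * B / d.
# Because both frames are exposed at (nearly) the same instant,
# every point in the resulting cloud shares one timestamp --
# there is no per-point motion skew as in a scanning LIDAR.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a matched point from its pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 30 cm baseline, 10 px disparity
print(depth_from_disparity(1000.0, 0.30, 10.0))  # 30.0 m
```

The accuracy argument then comes down to how well you can match pixels between the two frames, which is a per-frame problem rather than a motion-compensation problem.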