Correct me if I'm wrong, but your initial point was that lidar can "do" everything that cameras can with respect to FSD (read signs, markings, etc.), except recognizing traffic lights. To the point where you could use lidar alone to achieve FSD without HD maps, again excepting traffic lights? I always get confused when you say "do" or "has".
Here it is, folks: bladerskb claiming that lidar can currently replace cameras for FSD, as long as the car has some way to read traffic lights, lol. This, by the way, was what I initially disagreed with, and I stand by that disagreement. Bladerskb totally ignored lidar's poor vertical resolution, which means it can't read the text on road signs or markings at a distance. In the case of road signs, it probably can't read them until the car is within about 10 feet.
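To put rough numbers on the vertical-resolution point, here's a back-of-envelope sketch. The 0.1° figure is my assumption for a typical spinning lidar, not a spec for any particular unit:

```python
import math

def point_spacing_cm(distance_m: float, resolution_deg: float) -> float:
    """Vertical gap between adjacent lidar scan lines at a given range."""
    return distance_m * math.tan(math.radians(resolution_deg)) * 100

# Assumed 0.1 deg vertical resolution (hypothetical, varies widely by model).
for d in (10, 30, 60):
    print(f"{d} m: {point_spacing_cm(d, 0.1):.1f} cm between scan lines")
```

At 60 m that works out to roughly 10 cm between scan lines, while the letters on a typical sign are on the order of 25 cm tall, so you'd land only two or three returns per letter, far too few to read text.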
Redundancy is the main feature of a good FSD system, and this is why many companies use lidar, vision, radar, etc. Even if lidar could do absolutely everything, or vision could, it would still be safer to use multiple sensors and play to the strengths of each. Even if lidar could read signs (vertical resolution is improving, and angular resolution may be the more important spec anyway), it may still be a better idea to use vision to read them, perhaps after lidar finds them via reflectivity. Whichever gives the best outcome. Neither sensor is an all-encompassing solution, nor is that the expectation, as I see it anyway. It would be nice if one system did everything, but why should it? Use several for the widest range of weather, obstacle, and detection conditions. Lidar does many things much better than vision, and vision has some advantages over lidar. Use them both.
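The "lidar finds the sign, vision reads it" handoff could look something like this minimal sketch. Everything here is hypothetical: the linear pinhole-style projection, the FOV numbers, and the `Detection` shape are illustrative assumptions, not any vendor's pipeline:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Candidate retroreflective surface flagged by lidar (signs reflect strongly)."""
    azimuth_deg: float    # horizontal angle from camera boresight
    elevation_deg: float  # vertical angle from camera boresight

def project_to_pixel(det: Detection, img_w: int = 1920, img_h: int = 1080,
                     hfov_deg: float = 90.0, vfov_deg: float = 60.0) -> tuple:
    """Map a lidar detection into camera pixel coordinates so a vision model
    can crop that region and read the sign. Linear angle-to-pixel mapping is
    a simplification; a real system would use calibrated extrinsics."""
    u = img_w * (0.5 + det.azimuth_deg / hfov_deg)
    v = img_h * (0.5 - det.elevation_deg / vfov_deg)
    return int(u), int(v)

# A sign dead ahead lands in the image center; the vision stack crops there.
print(project_to_pixel(Detection(azimuth_deg=0.0, elevation_deg=0.0)))
```

The design point is simply that lidar's reflectivity channel narrows the search so the camera only has to OCR a small crop instead of the whole frame.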
I think we should be comparing Tesla vision to lidar, not vision in general to lidar. The reason is that Tesla vision still can't recognize, localize, and estimate the size of generic static objects on a consistent basis. It can't do it even in the forward view, despite having three cameras with overlapping FOVs. This is vastly important because not everything on the road is going to be recognized by a neural network, so there needs to be some kind of generic "blob" category: the car tries to avoid the object if possible, and stops if it's bigger than a pre-set size and can't be avoided. A front-facing solid-state lidar would solve a lot of Tesla's false-braking and late-braking issues.

That alone doesn't solve FSD, though, because FSD needs redundancy. It would force Tesla vision to improve to the point where the two systems combined are immune to a single-point failure from a safety perspective, and frontal lidar would give Tesla the ability to cross-check the two systems continuously. On the sides and rear it might not need lidar: radar could augment the visual data well enough that, after a single fault, the car could still use the remaining sensors to safely exit the roadway in some limp mode.
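The generic "blob" fallback described above can be sketched as a simple decision rule. The 0.3 m threshold and the avoid-before-stop priority are my illustrative assumptions, not anyone's actual planner logic:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical pre-set size: stop for unclassified objects bigger than this.
MIN_STOP_SIZE_M = 0.3

class Action(Enum):
    IGNORE = "ignore"  # small debris, safe to drive over
    AVOID = "avoid"    # steer around it
    STOP = "stop"      # brake, can't go around

@dataclass
class Blob:
    """Unclassified static object: the network gave no label, only geometry."""
    width_m: float
    height_m: float
    lane_clear_beside: bool  # is there room to steer around it?

def plan_for_blob(blob: Blob) -> Action:
    # Below the size threshold (e.g. a crushed can): proceed.
    if max(blob.width_m, blob.height_m) < MIN_STOP_SIZE_M:
        return Action.IGNORE
    # Big enough to matter: avoid if an adjacent gap exists...
    if blob.lane_clear_beside:
        return Action.AVOID
    # ...otherwise stop, since we can't identify or clear it.
    return Action.STOP
```

The point of the rule is that safety never depends on the classifier naming the object; geometry alone is enough to decide.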