As a lay person, the first thing you need to do is familiarize yourself with the strengths and weaknesses of each sensor type. You mentioned a few situations where visible-light cameras might struggle. But do you know how those situations affect the other sensors? And can you rely on those sensors when your primary sensors (cameras) are not working? In most cases, the answer is no.
Radar is the only sensor that reliably sees through weather obstruction, but its resolution is so low that it cannot be relied upon as a backup when camera vision is down.
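To get a feel for how coarse radar is, here is a back-of-the-envelope sketch using the standard small-angle beamwidth approximation (beamwidth ≈ wavelength / aperture). The 77 GHz carrier is typical for automotive radar; the 10 cm aperture is an illustrative assumption, not a figure from any specific product.

```python
# Rough cross-range resolution of an automotive radar, assuming a
# 77 GHz unit (common in cars) with an assumed 10 cm antenna aperture.
c = 3.0e8                  # speed of light, m/s
freq = 77e9                # radar carrier frequency, Hz
aperture = 0.10            # antenna width, m (assumed)

wavelength = c / freq                # ~3.9 mm
beamwidth = wavelength / aperture    # small-angle approximation, radians

for rng in (50, 100, 200):           # target range, m
    cross_range = beamwidth * rng    # minimum separation to resolve two objects
    print(f"{rng:>4} m: ~{cross_range:.1f} m cross-range resolution")
```

At 100 m this works out to roughly 4 m of cross-range blur: two cars side by side can merge into one return, which is why radar alone cannot substitute for camera vision.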
Lidar, like a camera, relies on light bouncing off objects, but it uses lasers to actively bounce light off objects, whereas cameras passively collect ambient light. Lidar can detect objects in pitch darkness where cameras fail. But lidar also degrades in fog and snow, because the weather attenuates the laser beam on its round trip, requiring a higher-power laser. Not reliable as a backup for primary vision. Lower-cost lidar also has a hard time reading signage and determining object density, since it only sees things as solid or not there.
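The round-trip attenuation can be sketched with the Beer-Lambert law: received power scales as exp(-2·α·R), where α is the atmospheric extinction coefficient and the factor of 2 accounts for the out-and-back path. The α values below are illustrative assumptions, and the model ignores 1/R² spreading and target reflectivity, so it only shows the weather term.

```python
import math

def round_trip_fraction(alpha_per_m: float, range_m: float) -> float:
    """Fraction of emitted laser power surviving the out-and-back path
    (Beer-Lambert law; factor of 2 for the round trip)."""
    return math.exp(-2.0 * alpha_per_m * range_m)

# Extinction coefficients, 1/m -- assumed example values, not measurements
conditions = {"clear air": 0.0001, "moderate fog": 0.02}

for label, alpha in conditions.items():
    frac = round_trip_fraction(alpha, 100.0)  # target at 100 m
    print(f"{label}: {frac:.2e} of emitted power returns")
```

With these assumed numbers, fog returns roughly 2% of the power that clear air does at 100 m, which is the "requires a higher-power laser" problem in concrete terms.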
Now, for both radar and lidar, there are different levels of performance. To overcome environmental challenges, you can go with more expensive radar/lidar to get better performance, but that may not be viable from a business perspective.
Sensor fusion also increases the chances of false positives and false negatives: with more sensors, the system must decide what to do whenever they disagree.
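A toy probability model shows the tradeoff. If each sensor has some false-positive rate and some miss rate, and the errors are independent (an assumption, and the rates below are made-up examples), then alerting when ANY sensor fires compounds false positives, while requiring ALL sensors to agree compounds misses:

```python
# Toy model: why fusing sensors trades false positives against misses.
# Per-sensor error rates are illustrative assumptions; independence is assumed.
def fuse_or(false_positive_rates):
    """Alert if ANY sensor fires: false-positive rate compounds."""
    p_no_alert = 1.0
    for fp in false_positive_rates:
        p_no_alert *= (1.0 - fp)
    return 1.0 - p_no_alert

def fuse_and(miss_rates):
    """Alert only if ALL sensors fire: miss rate compounds."""
    p_all_detect = 1.0
    for fn in miss_rates:
        p_all_detect *= (1.0 - fn)
    return 1.0 - p_all_detect

sensor_fps = [0.01, 0.02, 0.03]   # per-sensor false-positive rates (assumed)
sensor_fns = [0.05, 0.05, 0.05]   # per-sensor miss rates (assumed)

print(f"OR-fusion false-positive rate: {fuse_or(sensor_fps):.4f}")   # ~0.059
print(f"AND-fusion miss rate:          {fuse_and(sensor_fns):.4f}")  # ~0.143
```

Either voting rule makes one failure mode worse than any single sensor's, which is why naively adding sensors does not automatically add safety.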
The primary challenge of autonomy is not perception of the environment. I'm not suggesting Tesla has solved perception, just pointing out that it's the easy part. The harder part is getting the car to do the right thing once it has a picture of the environment. Throwing more sensors at the problem addresses only perception, not the decision-making.