No one outside the company knows the full inner workings of Waymo's self-driving software.
What we do know from various presentations and articles is that data from each sensor in their configuration plays a major role.
The radar and cameras don't exist merely to contextualize the lidar data; each sensor has an independent function.
Radar exists to detect and continuously track approaching vehicles, pedestrians, and cyclists from all directions, even in inclement weather. These radars can see "underneath and around vehicles, tracking moving objects usually hidden from the human eye."
Cameras exist to detect and classify objects, especially objects defined by their color, such as "traffic lights, construction zones, school buses, and the flashing lights of emergency vehicles." These high-resolution cameras allow Waymo to "detect small objects like construction cones far away even when we’re cruising down a road at high speed. And with a wide dynamic range we can see in a dark parking lot, or out in the blazing sun — or any condition in between."
Lidar exists to see and continuously track "shapes in three dimensions, detect stationary objects, and measure distance precisely."
The camera and lidar systems each run their own separate neural network model.
One network performs object detection and classification on camera images.
The other performs object detection and classification on 3D lidar point clouds.
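To make the "two independent models, fused afterward" idea concrete, here is a minimal runnable sketch. Everything in it is hypothetical: the `camera_detector` and `lidar_detector` functions are placeholder stand-ins for trained neural networks, and the `Detection` type and `fuse` function are invented for illustration, not Waymo's actual architecture. The point is only the data flow: neither model depends on the other's output, and fusion happens after both have run.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "traffic_light_red", "vehicle"
    confidence: float  # model score in [0, 1]
    source: str        # which sensor modality produced it

# Hypothetical stand-ins for the two independent models described above.
def camera_detector(image) -> list:
    """2D object detection/classification on a camera image."""
    return [Detection("traffic_light_red", 0.97, "camera")]

def lidar_detector(point_cloud) -> list:
    """3D object detection/classification on a lidar point cloud."""
    return [Detection("vehicle", 0.92, "lidar")]

def fuse(image, point_cloud) -> list:
    # Each modality runs its own model independently; their outputs
    # are only combined after both detectors have finished.
    return camera_detector(image) + lidar_detector(point_cloud)

if __name__ == "__main__":
    for d in fuse(image=None, point_cloud=None):
        print(f"{d.source}: {d.label} ({d.confidence:.2f})")
```

Because the models are independent, a failure or degradation in one modality (e.g. a camera blinded by sun glare) doesn't prevent the other from producing detections.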
You can find the relevant clip at 10 minutes 20 seconds.
TLDR: The Waymo system is a fully complementary system — each sensor covers the others' weaknesses.