Not at all. It has nothing to do with mapping, and much more to do with situations where the cameras' physical limitations become problematic (fog/rain/dust/dirt, visual ambiguity / optical illusions, or low light).

I've had plenty of forward-collision warnings and phantom-braking events where the car sees, e.g., the shadow of a tree on the road in front of me and thinks it's an obstacle. This is less prevalent now, but it still happens from time to time. Radar+Lidar would add enough information to the network to let it disambiguate these situations far more reliably.

Likewise for, e.g., the 2016 fatality involving a tractor-trailer that was the same color as the sky. The cameras couldn't see it, and radar interpreted it as an overhead sign, but Lidar would have accurately identified it as an obstacle.

An E2E neural network would be able to properly synthesize all this information in a coherent and accurate way, and do the right thing in these cases.
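To make the fusion idea concrete, here's a toy sketch of early sensor fusion: per-sensor feature vectors are concatenated and scored by a single linear layer. Every number here (feature values, weights, threshold) is invented for illustration; a real network would learn these weights, and the point is only that a strong lidar return can outvote ambiguous camera and radar features:

```python
# Toy early-fusion sketch. All values are illustrative, not from any
# real driving system.

def fuse_and_score(camera, radar, lidar, weights, bias):
    """Concatenate sensor features and apply one linear obstacle score."""
    features = camera + radar + lidar  # list concatenation = early fusion
    return sum(w * f for w, f in zip(weights, features)) + bias

# Hypothetical sky-colored-trailer scene: cameras see an ambiguous blob,
# radar returns a weak, overhead-sign-like signature, lidar reports a
# solid geometric return at trailer height.
camera = [0.2, 0.1]   # low-confidence "obstacle-like" visual features
radar  = [0.3]        # weak return, could be an overhead sign
lidar  = [0.9]        # strong, geometrically consistent return

# Invented weights that lean on lidar when vision is ambiguous.
weights = [0.5, 0.5, 0.4, 2.0]
bias = -1.0

score = fuse_and_score(camera, radar, lidar, weights, bias)
print("obstacle" if score > 0.0 else "clear")  # prints "obstacle"
```

With camera-only input (zeroing the radar and lidar terms) the same weights would score the scene as "clear", which is the disambiguation argument in miniature.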