This is the difficulty with trying to understand the true status of self-driving car development, and the rate of progress. Research and development is a pipeline that ends with deployment in production systems. If our approach is simply to wait until full autonomy reaches production, then we will have no idea how close or far away it is until suddenly one day it arrives. So, to anticipate the future, we have to look at the earlier stages of the R&D pipeline. But there are many reasons why something earlier on in the pipeline might never make it to a production system, or why it might take a long time to do so.
For example, Nvidia's end-to-end neural network might never be workable for a fully autonomous production car because of combinatorial explosion. If you have multiple modular neural networks, you can train each one independently. For example, you can have a perception network that is trained to deal with bright, direct sunlight shining into the car's front-facing cameras — something that tends to happen when the sun is low in the sky, after sunrise or before sunset. You can also have a motion planning/control network that is trained to deal with a wet road, and adjust for the reduced traction.
With the modular approach, you can train the perception network on sunrises and sunsets, and you can train the motion planning/control network on wet roads. If you encounter a situation where it just finished raining and now there is blinding sunlight, the car will be able to deal with this situation because the networks will deal with those two factors independently. But with an end-to-end neural network, the car won't know how to handle the situation unless it has already been trained to deal with wet roads and bright, direct sunlight simultaneously. This is what leads to combinatorial explosion. To train an end-to-end neural network, you have to cross every relevant variable with every other relevant variable and generate training scenarios for each combination.
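To make the scaling argument concrete, here's a toy Python sketch. The factor names and counts are entirely hypothetical; the point is only that modular training cost grows additively with the number of driving factors, while end-to-end training cost grows multiplicatively.

```python
# Illustrative sketch of the combinatorial-explosion argument.
# The factors and counts below are hypothetical examples, not real
# training taxonomies from any self-driving program.

factors = {
    "lighting": 5,      # e.g. noon sun, low sun, dusk, night, tunnel
    "road_surface": 4,  # e.g. dry, wet, snow, gravel
    "weather": 4,       # e.g. clear, rain, fog, falling snow
    "traffic": 3,       # e.g. light, dense, stop-and-go
}

# Modular approach: each network trains on its own factor
# independently, so coverage cost grows additively.
modular_scenarios = sum(factors.values())

# End-to-end approach: one network must see the combinations,
# so coverage cost grows multiplicatively.
end_to_end_scenarios = 1
for n in factors.values():
    end_to_end_scenarios *= n

print(modular_scenarios)     # 16 scenario categories to cover
print(end_to_end_scenarios)  # 240 scenario categories to cover
```

With only four factors the gap is 16 versus 240; every additional factor multiplies the end-to-end number again, which is the explosion the paragraph above describes.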
This is why I'm inclined to write off Nvidia's demo as a cool science project, and not a sign of technology that is moving down the pipeline toward production systems. There is a good reason it might never make it out of an experimental setting to a working commercial product. Tesla isn't using an end-to-end neural network; it is simply using Nvidia's GPUs, not any of Nvidia's software.
Wayve is the only company I'm aware of that is taking the end-to-end approach.
The big question with regard to self-driving car technologies generally is whether there is a reason why what's working decently well in prototypes can't make it to commercialization, or at least can't do so for a long time. The nightmare scenario is that some capability like pedestrian or vehicle detection, or semantic segmentation for driveable roadways, hits a ceiling that engineers can't find a way beyond. If the ceiling falls below human performance, then self-driving car development is just stuck.
The stuff Mobileye presented on localization is encouraging because now both a commercial R&D project and an academic experiment have converged on the same result: localization to within 10 cm with just cameras and camera-based HD maps (no lidar) is possible for self-driving cars. The only caveat I can think of is that camera-based localization might break down under certain conditions. Maybe in tunnels, where there are long stretches of flat, featureless concrete? But as long as you continue to do lane keeping and object detection in the tunnel, it doesn't seem like you need HD maps to tell you what to anticipate next.
Assuming the error rate Mobileye gave for pedestrian detection is accurate, that's genuinely encouraging. If there's a false positive every 500,000 km/310,000 miles leading to unnecessary sudden braking (and possible rear-ending), and a false negative (leading to potentially hitting a pedestrian) even less often, then it's within 10x of beating human performance — if it isn't there already.
Consumers will never be able to test these claims themselves, because that would require millions of miles of driving. A consumer could drive 20,000 miles (more than an average year's worth of driving) and never encounter an error, and falsely conclude that errors never occur. The best evidence is large-scale statistical validation.
A self-driving car could kill someone on average every 50,000 miles (compared to roughly one fatality per 92 million miles for human drivers), and lull you into a false sense of security by driving safely for thousands of miles first. So production deployment isn't the final arbiter of whether a fully autonomous car has actually achieved superhuman performance.
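The intuition behind both of these points can be sketched with a simple Poisson model. The figures come from the text (the 310,000-mile false-positive interval and the hypothetical one-fatality-per-50,000-miles car); the assumption that errors arrive independently at a constant rate is mine, made for illustration.

```python
import math

def p_zero_events(miles_driven: float, miles_per_event: float) -> float:
    """Probability of observing zero events over miles_driven,
    assuming events arrive independently at a constant average rate
    (a Poisson process)."""
    expected_events = miles_driven / miles_per_event
    return math.exp(-expected_events)

# A consumer trying to check the false-positive claim over a
# year-plus of driving (20,000 miles) will usually see nothing:
print(round(p_zero_events(20_000, 310_000), 3))  # ~0.938

# Even the hypothetical car that kills every 50,000 miles on average
# will likely get through its first 5,000 miles without incident:
print(round(p_zero_events(5_000, 50_000), 3))    # ~0.905
```

In both cases a long stretch of uneventful personal experience is the expected outcome whether the claimed rate is true or not, which is why only fleet-scale statistics can settle the question.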