Based on what was said in the demo, it seems like there's no explicit control logic for lanes, signs, or traffic lights, yet the car is able to follow the navigation route, getting into the appropriate lanes to make turns and responding to traffic lights. Do they just input the route to be taken and the NN figures out everything else? What about traffic lights?
Did anybody see the V12 visualizations show any traffic lights or signs? I'm pretty sure 11.x would have shown the red lights and the stop sign when first in line in these situations, but maybe they're small enough to be hidden by video compression.
If those visualizations really are gone in the V12 demo, that could mean part of the 300k+ lines of control code was in charge of determining when signals/signs were relevant to control and visualization. Presumably perception has been predicting all visible traffic lights and stop signs, including those for cross traffic, and the traditional control code has made mistakes, especially in oddly angled situations, so neural networks could learn to do better.
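To illustrate why hand-written relevance logic is brittle, here's a purely hypothetical sketch (not Tesla's actual code; the class, thresholds, and geometry are all my own guesses) of the kind of heuristic a traditional control stack might use to decide whether a detected light applies to the ego car:

```python
from dataclasses import dataclass

@dataclass
class TrafficLight:
    heading_deg: float   # which way the light faces, relative to the ego car's heading
    lateral_m: float     # sideways offset from the ego lane's center
    distance_m: float    # distance ahead of the ego car

def is_relevant(light: TrafficLight,
                max_angle_deg: float = 30.0,
                max_lateral_m: float = 4.0) -> bool:
    """Hand-written heuristic: a light governs us if it roughly faces us
    and sits near our lane. The thresholds are arbitrary, and that's the
    problem: a correctly placed but oddly angled light at a skewed
    intersection can fail the angle check and get wrongly ignored."""
    faces_us = abs(light.heading_deg) <= max_angle_deg
    near_lane = abs(light.lateral_m) <= max_lateral_m
    return faces_us and near_lane

# A cross-traffic light angled ~90 degrees away: correctly ignored.
print(is_relevant(TrafficLight(heading_deg=90, lateral_m=1.0, distance_m=40)))
# A light at a skewed intersection facing us at 40 degrees: wrongly ignored.
print(is_relevant(TrafficLight(heading_deg=40, lateral_m=1.0, distance_m=40)))
```

Every edge case like that skewed intersection means another threshold or special case in C++, whereas a network trained on human driving just learns the association from examples.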
Presumably, if the control network has learned how to behave at stop signs, it should also be able to learn how to handle school zones, no turn on red, and even upcoming lane use control signs, based on how they look and how people drive when seeing them.