That will potentially improve how the car reacts to what the cameras see around it, but it will not address most of the points that
@SomeJoe7777 brought up (which mostly have to do with contextual knowledge based on past experience, such as knowing the exact locations where congestion typically occurs and which lane to take to avoid it). These things need to be solved in the routing engine and the HD maps that NoA uses. For example, Tesla could learn from uploaded route segments which lane human drivers heading in a certain direction typically use at an interchange, and add the result as semantic information to their HD maps. There is a lot more to "full self driving" than just improved computer vision.
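To make the idea concrete, here is a minimal sketch of that kind of aggregation: count crowd-sourced lane choices per (interchange, direction) pair and keep the majority lane as a map annotation. The segment format, field names, and interchange IDs are made up for illustration, not anything Tesla actually uses.

```python
from collections import Counter, defaultdict

def preferred_lanes(route_segments):
    """Aggregate uploaded route segments into a lane preference per
    (interchange, direction). Each segment is assumed to look like
    {"interchange": "I-80/CA-13", "direction": "eastbound", "lane": 2}.
    """
    counts = defaultdict(Counter)
    for seg in route_segments:
        counts[(seg["interchange"], seg["direction"])][seg["lane"]] += 1
    # For each interchange/direction, pick the lane most drivers actually used.
    return {key: lane_counts.most_common(1)[0][0]
            for key, lane_counts in counts.items()}

segments = [
    {"interchange": "I-80/CA-13", "direction": "eastbound", "lane": 2},
    {"interchange": "I-80/CA-13", "direction": "eastbound", "lane": 2},
    {"interchange": "I-80/CA-13", "direction": "eastbound", "lane": 1},
]
print(preferred_lanes(segments))  # {('I-80/CA-13', 'eastbound'): 2}
```

The real problem is obviously harder (lane identification from sensor data, filtering out drivers who were exiting, time-of-day effects), but the point stands: this is a fleet-data/mapping problem, not a vision problem.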