No, it is nothing like that.
A NN trained on billions of miles of real-world traffic data to recognize road traffic is still a NN that has that knowledge.
It is like putting a pro driver into a new Indy car. They have the experience, but they will still need to get accustomed to the feel of the new car.
By your logic, the vNew FSD would lead to the same behaviors as the vOld FSD. An existing, tested model executes an existing, tested action. If either of those statements is false, meaning either the model or the action is untested, then by default the chain is untested.
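To make that concrete, here's a toy sketch (hypothetical names, nothing from Tesla's actual architecture): a perception-plus-control chain only inherits validation when every link in it is the validated version.

```python
# Toy illustration (hypothetical names, not Tesla's actual architecture):
# a perception/control chain is only "tested" if BOTH links are the
# tested versions. Swap either one and the chain reverts to untested.

def chain_is_tested(model_tested: bool, action_tested: bool) -> bool:
    """A chain inherits validation only when every link is validated."""
    return model_tested and action_tested

# vOld FSD: tested model driving tested actions -> tested chain
print(chain_is_tested(model_tested=True, action_tested=True))   # True

# vNew FSD: rewritten model and/or rewritten actions -> untested chain
print(chain_is_tested(model_tested=False, action_tested=True))  # False
print(chain_is_tested(model_tested=True, action_tested=False))  # False
```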
You're absolutely right that the model will benefit in object recognition from all those billions of miles of real-world data. That model will be further enhanced by the ability to recognize and track objects across multiple camera views, as Elon has said publicly. That improved model has not been tested by billions of real-world miles.
Similarly, you have logic executed based on those models. Today, you will see phantom braking due to any number of interpreted signals from that model. Elon has suggested that no longer happens. That new, changed behavior has not been tested by billions of real-world miles.
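A toy sketch of why that matters (hypothetical names and numbers, nothing from Tesla's actual stack): even if the braking rule itself were left untouched, a new model feeding it different signals produces different on-road behavior, so the old model's validation miles say nothing about the new pairing.

```python
# Toy sketch (hypothetical numbers): the same braking rule behaves
# differently when the model feeding it changes, so validation miles
# accrued against the old model don't transfer to the new one.

BRAKE_THRESHOLD = 0.8  # assumed confidence cutoff, purely illustrative

def should_brake(obstacle_confidence: float) -> bool:
    """Unchanged decision logic: brake when the model is confident enough."""
    return obstacle_confidence >= BRAKE_THRESHOLD

# vOld model: occasionally over-confident on shadows -> phantom braking
v_old_confidence = 0.85   # misreads an overpass shadow as an obstacle
print(should_brake(v_old_confidence))  # True (the phantom brake)

# vNew model: the same scene scores differently -> different behavior,
# even though should_brake() itself was never changed.
v_new_confidence = 0.30
print(should_brake(v_new_confidence))  # False
```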
This is in contrast to today, where adding one additional piece of functionality doesn't invalidate the entirety of AP, because the change is supplemental and iterative rather than a fundamental replacement.
If you work in software, which it sounds like you do, you know the downside of a rewrite: it has the potential to introduce far more bugs than an incremental update to an existing codebase.