Most likely because the jump from AP1 to AP2 involved an almost total rewrite of their Autopilot code. AP1 used Mobileye's EyeQ3 chip, which Tesla claimed was slowing down FSD progress. When AP2 first came out, it didn't even have auto high beams or auto wipers; things progressed quickly from there.

Although I do think they should switch from radar to lidar-based sensors, they are basically operating on the premise that every camera on the car is like a human eye: if two human eyes can catch something, an artificially intelligent neural network should be able to do the same or better. There is some skepticism here, but the benefit of the new HW3 chip is that they can continue developing from the existing platform. HW2-based features will clearly work on HW3, but as HW3 features become more demanding and HW2-based cars cannot keep up, they could end up forking the code and leaving deprecated features in place for HW2. Over time, new code will be introduced that takes advantage of the enhanced computing power that HW2's GPU simply cannot deliver in a timely fashion. It will reach a point where HW3 surpasses HW2/HW2.5 on the pure computing power of the unit.

To put the hardware in perspective: HW2.0 has one GPU, HW2.5 has two GPUs with redundancy, and HW3 is a faster chip with room for three-way computational redundancy (speculated). That matters because any truly fully autonomous system would be considered a safety-critical system that cannot be allowed to fail. Satellites and similar systems are often built with triple-redundant hardware in case of failure.

When you think about it, it's quite a brilliant idea. What may not have been so brilliant (though it was obviously done to keep the company afloat) was selling a bunch of Model 3s before the computer was available. But that is why they sold FSD as a pre-order option to begin with: to continue funding that project/objective.
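To illustrate the triple-redundancy idea: in a classic triple modular redundancy (TMR) setup, the same computation runs on three independent units and a voter takes the majority result, so a single failed unit can't corrupt the output. This is just a minimal sketch of the general concept; the function names and decisions here are hypothetical and nothing below reflects Tesla's actual implementation.

```python
from collections import Counter

def majority_vote(results):
    """Return the value at least two of the three redundant units
    agree on; raise if all three disagree (an unrecoverable fault)."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all redundant units disagree")
    return value

# Example: unit B suffers a transient fault, but the voter still
# produces the correct decision because the other two units agree.
outputs = ["brake", "accelerate", "brake"]  # from units A, B, C
print(majority_vote(outputs))  # → brake
```

The point of the third unit is exactly this voting step: with only two units (as on HW2.5) you can detect a disagreement but can't tell which unit is wrong, whereas three units let the system keep operating through a single fault.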