I really doubt it. I actually think this time is very different. And I say that as a bitter observer of FSD non-progress since I first paid for it in 2015.
All the local maxima that have stalled progress in the past were due to algorithmic changes. They kept changing the way they approached the problem, and thought that if they really worked at it, the new code paradigm would get them there. There was the whole 'let's swap to using video, not still images' 4D change, the 'occupancy network' stuff, the 'let's use raw photon counts' stuff, etc.
But what Tesla is doing now is nothing like any of that. TBH, it's what I thought they were going to do from the start, which is a programmatically very simple approach: train a MASSIVE neural network on huge amounts of driving data, and let it control the car.
This is totally different, because improvements to NN outputs depend almost entirely on the volume and quality of training data. That's it. Not hundreds of C++ coders like me writing millions of lines of complex spaghetti code and hoping it all works. They still need SOME code in there, to handle things like obeying local laws that real-world drivers may ignore, but nothing like what was required before.
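To make that split concrete, here is a minimal sketch of the hybrid idea: a learned policy proposes the controls, and a thin rule layer overrides them where local law demands it. Every name here (`policy_net`, `Controls`, the speed values) is a hypothetical placeholder, not Tesla's actual code.

```python
from dataclasses import dataclass

@dataclass
class Controls:
    steer: float         # steering angle, radians
    target_speed: float  # m/s

def policy_net(camera_frames) -> Controls:
    # Stand-in for the big end-to-end network: video in, controls out.
    # A real model would be billions of parameters, not a stub.
    return Controls(steer=0.01, target_speed=31.0)

def apply_rules(ctrl: Controls, speed_limit: float) -> Controls:
    # The small hand-written layer: clamp the learned output to the
    # posted limit, even if most drivers in the training data exceed it.
    return Controls(ctrl.steer, min(ctrl.target_speed, speed_limit))

raw = policy_net(camera_frames=None)
safe = apply_rules(raw, speed_limit=29.0)  # roughly a 65 mph zone
```

The point of the sketch is the asymmetry: the NN is where all the capability lives and where all the data goes, while the rule layer stays tiny and stable.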
I was an FSD skeptic, converted to true believer by this version. And as an investor it's trebly good because:
1) It's big-data dependent, and nobody has even 1% of the data Tesla has, so they cannot compete
2) It's big-data dependent, so it's not some source code you can steal. Even in China. You would need semi trucks full of disk drives to steal it.
3) It scales very fast, very easily, and very predictably.
I actually think we might be at the SLOW point of true FSD: the bit where they merge the NN stuff into the rest of the codebase. From here on, expect a lot of updates, each one improving *everything* a little bit, based purely on adding more video data from detected edge cases. Things could get scarily good, scarily fast.
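The update loop described above can be sketched in a few lines: deploy, detect edge cases, fold that video back into the training set, retrain, redeploy. This is a toy model of the data flywheel, assuming a fleet that uploads clips the current model gets wrong; every function name is hypothetical.

```python
def retrain(dataset):
    # Toy stand-in for training: the "model" just memorizes labeled clips.
    labels = {clip["id"]: clip["label"] for clip in dataset}
    return lambda clip: labels.get(clip["id"], "unknown")

def flywheel_iteration(model, fleet_clips, dataset):
    # 1) The fleet flags clips the current model handles badly...
    edge_cases = [c for c in fleet_clips if model(c) != c["label"]]
    # 2) ...those clips grow the training set...
    dataset.extend(edge_cases)
    # 3) ...and the next model is trained on the bigger set.
    return retrain(dataset)

# One turn of the loop: the model fails on a new edge case,
# the clip is ingested, and the retrained model handles it.
dataset = [{"id": 1, "label": "brake"}]
model = retrain(dataset)
fleet_clips = [{"id": 2, "label": "yield"}]
model = flywheel_iteration(model, fleet_clips, dataset)
```

The toy makes the scaling claim legible: progress per release is bounded by how many labeled edge cases you can ingest, not by how many lines of hand-written logic you can maintain.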