FSD is not coming. The government has no plans to allow it, and only big companies can test on public roads. Which means only big companies can operate it, if it's allowed at all. The next step is the gov't opening up testing for personal vehicles, which could be 10+ years out, and probably with limited permits handed out first come, first served. Also, getting all 50 states aligned will take a long time. Maybe it can go FSD from CA to AZ, but once you get into TX, you have to drive manually. I don't see how cameras alone will get gov't approval for FSD either. It is just so limited and can't see fast motorcycles lane splitting or 3 lanes down that a car is going to slam into you if you keep the same vector. Just like the v9 dashcam footage where the red Model 3 t-bones the white Honda. This happens all the time in LA, and as a driver I can predict and slow down at each gap, knowing there is a high percentage someone will shoot the gap. AP can't scan and predict. It will just go the speed limit and slam on the brakes. This is why most FSD cars like Google and Cruise gets rear ended.
I agree with your conclusion (FSD will take a long time to develop), but not for the reasons you state.
You say: "It is just so limited and can't see fast motorcycles lane splitting or 3 lanes down that a car is going to slam into you if you keep the same vector."
It absolutely can *see* the motorcycle and the car 3 lanes down; it just doesn't "understand" those situations (I use the term loosely, because AI is not really "intelligent" per se). This is a matter of training and compute capacity.
"AP can't scan and predict." - absolutely untrue. The AP can predict. The problem with using AI to predict is that it can also generate false positives (i.e., it's easy for it to think a given car is going a certain place when it isn't). Tesla's current implementation is (properly, IMO) very conservative - by primarily being reactive it avoids a lot of situations where it may mistakenly take action on something it thinks might happen, but doesn't.
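The conservative-vs-aggressive tradeoff above boils down to where you set a confidence threshold. A toy sketch (my own illustration, not Tesla's actual logic; `should_brake` and the probability values are invented for the example):

```python
# Hypothetical planner that only intervenes when its predicted collision
# probability clears a confidence threshold. A higher threshold means fewer
# false-positive interventions (phantom braking) at the cost of reacting
# later; a lower threshold acts on more predictions, including wrong ones.

def should_brake(p_collision: float, threshold: float = 0.9) -> bool:
    """Intervene only when the predictor is confident enough."""
    return p_collision >= threshold

# Made-up per-object collision estimates from some perception stack:
predictions = [0.2, 0.55, 0.95]

conservative = [should_brake(p, threshold=0.9) for p in predictions]
aggressive   = [should_brake(p, threshold=0.5) for p in predictions]
# conservative -> [False, False, True]: reacts only to the near-certain case
# aggressive   -> [False, True, True]:  also acts on the maybe, risking a
#                                       false positive if that car wasn't
#                                       actually going where it seemed to be
```

The "primarily reactive" behavior described above is essentially the high-threshold setting: it waits for near-certainty rather than acting on every guess.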
A bit of the problem of prediction is evident in the false braking events people report when approaching overpasses - it's easy for the model to misclassify an overhead structure as an obstacle in its path, a failure to generalize beyond its training data (related to what's called "overfitting" in AI training terms).
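For what overfitting looks like in the abstract, here's a deliberately silly sketch (my own, nothing to do with any real autopilot code): a "model" that memorizes its training examples perfectly but falls apart on anything it hasn't seen, versus a simpler model that captures the underlying trend.

```python
# Noisy training samples of an underlying trend y ~= x:
train = {0.0: 0.1, 1.0: 0.9, 2.0: 2.1}

def overfit_predict(x: float) -> float:
    # Memorizes the training data exactly: zero training error...
    # ...but returns a default (wrong) answer for any unseen input.
    return train.get(x, 0.0)

def simple_predict(x: float) -> float:
    # Cruder fit to the trend, but it generalizes to new inputs.
    return x

overfit_predict(1.0)   # 0.9 - perfect on a training point
overfit_predict(1.5)   # 0.0 - nonsense on an input it never saw
simple_predict(1.5)    # 1.5 - roughly right, even off the training set
```

The overpass case is analogous: the model latched onto cues that happened to work on the data it was trained on, then misfired on an input outside that distribution.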
"This is why most FSD cars like Google and Cruise gets rear ended." - FSD cars get rear-ended because this is how the most classic human error exhibits itself - inattentiveness. They get rear-ended because they get hit by people texting, fiddling with the radio, doing their makeup, etc. This is also why non-FSD cars constantly get rear-ended. I've seen no evidence to suggest that FSD cars get into accidents different from the accidents human drivers get into. Note that by most reports FSD cars *cause* very few accidents - the human element is still the weak link here.
IMO, the reality here is that true FSD requires not only a "good" solution - it requires a "great" solution. FSD will be given little benefit of the doubt, so it will be held to a much higher standard than humans are held to. This means there are a ton of situations FSD will have to handle cleanly before gaining adoption.
It's one thing to come up with a technology that solves 99% of the problems - it's much, much harder to come up with the technology that solves 100% of them. I think there are several vendors at the 99% stage - but not many approaching 100%.