...Looking at the current state of AI and CV, where it's not even trusted in radiology yet (which is not time critical) should tell you something about CV and safety critical applications.
I think this is a fair point, though personally I'm not familiar enough with the state of the art across AI sub-disciplines to draw too many conclusions. Also, there are a number of dimensions that make it less than a direct comparison.
I remember a small comment, I believe from John Gibbs (aka Dr. Know-it-all on YouTube), who attended AI Day 2. As the event was specifically intended to recruit top AI engineers (though a fair number of YouTube influencers like himself were invited as well), he was interested in the reaction of the potential employees who came to see the presentations.
As I recall, he related that some of these people were indeed super-impressed by the depth of the Tesla AI work. Specifically, at least one or two Stanford grad students in the medical image interpretation field seemed blown away by the level and sophistication of FSD AI engineering.
Again, I'm not claiming a huge conclusion from that tidbit of information, but it suggests that we may not be able to measure or predict FSD's near-term potential by looking at medical image analysis.
Looking at another AI sub-discipline, I've just become aware of the recent breakthroughs in cartoon/art synthesis with the Stable Diffusion AI. Perhaps it's just an interesting party trick, but when you first see it working, it sure feels like some kind of milestone software event, and not something that we could have predicted would appear so abruptly from following prior examples of AI-driven graphics tools. The entire field still feels quite open to disruptive breakthroughs. The challenge for Tesla's AI team is whether they can stay agile enough to consider and implement disruptive improvements.
It's also human nature that we very quickly adapt to milestone developments and look for the next thing, evoking the "just works, boring" response you mentioned. I always think of the moon landings, which seemingly went from a historic triumph of mankind to "Seen it already, what else is on?" in a matter of months. Likewise, mankind just got off the ground with powered flight in 1903; less than a century later, mass air travel was not just common but an unpleasantly low-end commodity - and no one even looks out the window anymore.
Predictions are risky, but I think good FSD is much closer than 10 years away. I think that a year ago, the plentiful supply of skeptics on TMC would have given you the derisive laughing response (an antisocial cop-out, BTW) if you'd predicted the amount of progress that came in 2022. Now it has come, yet we know there's so much more needed.
Of course, one of the easiest ways to dismiss and denigrate the progress is to compare it to Elon's hopeful predictions, even allowing for some goalpost-moving on both sides. (Witness the helpful and original reminder just above, in case we'd forgotten.) I try always to remember that in the decades of my engineering career, significant development projects very rarely came in ahead of the set schedule, yet I very rarely came away with the impression that things would have gone better if we'd set the original milestone deadlines to what they actually turned out to be. A reasonable amount of unreasonableness is a key ingredient for engineering achievement. And people calling out from the sidelines "that's not going to happen", though often correct in the specific, don't contribute to the result.
Regarding the Tesla vs. comma.ai comparison: engineering team size vs. accomplishment is interesting while we're in the 80% phase; a lean team becomes less of an asset as the product gets closer to a major commercial offering that serves a huge customer base and requires ongoing support and the fleshing out of use cases.

I note that Tesla is heavily criticized around here (including by me, to some degree) for having an overly and even dangerously lightweight sensor suite, and also for being only an L2 supervised system at this point, even though robotaxi autonomy is the eventual goal. In a fair comparison, then, comma.ai hardware would have to be considered criminally deficient, and Hotz has (or had) pointedly distanced their goals from autonomous unsupervised operation.

Anyway, I gather that Elon's project-management goals have less to do with a predetermined staffing level that fills holes in a chart, and more to do with a cultural imperative that each new engineering-development hire is there to make a significant forward contribution. Everyone expresses principles like this, but few organizations achieve it as they mature. There may well be bloat in Tesla's now quite large organization, but evidently not so much on the AI team.