Yeah, I have no idea what everyone has been talking about.
FSD Beta has been running on Tesla Vision for, what, over two years now, with tons of incremental updates along the way. The V11.3 release notes alone have multiple examples of this ("Improved detection of rare objects by 18% and reduced the depth error to large trucks by 9%", for instance).
There seems to be a lot of misunderstanding about what Tesla Vision is and isn't.
It's not one thing that drives the car (like one giant neural network). It's a whole bunch of separate machine learning models with different outputs (everything from where other vehicles are, to predicted pedestrian behavior, to lane topology). It's also only one piece of Autopilot: there's a whole pile of driving-behavior logic built on top of the perception software (and IMO the cars have more problems with making bad decisions than with perception, although that could be driven by low-confidence perception).
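To make the "many models, one planner" idea concrete, here's a minimal sketch. To be clear, every name in it is made up for illustration; this is just the general shape of a stack where several perception models feed a separate decision layer, not anything resembling Tesla's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class PerceptionOutput:
    # Hypothetical outputs from several independent vision models:
    vehicles: list = field(default_factory=list)          # (x, y, confidence) tuples
    pedestrian_paths: list = field(default_factory=list)  # predicted paths
    lane_topology: dict = field(default_factory=dict)     # lane connectivity

def run_perception(camera_frames):
    """Stand-in for a bunch of separate models, each producing a
    different output (detections, behavior prediction, lane topology)."""
    return PerceptionOutput(
        vehicles=[(12.0, 3.1, 0.95)],
        pedestrian_paths=[[(5.0, 1.0), (5.5, 1.2)]],
        lane_topology={"ego_lane": "straight", "left_lane": "merging"},
    )

def plan(perception: PerceptionOutput) -> str:
    """The driving-behavior layer sits on top of perception; a
    low-confidence detection here can still produce a bad decision."""
    if any(conf < 0.5 for (_x, _y, conf) in perception.vehicles):
        return "slow_down"  # hedge against uncertain perception
    return "maintain_speed"

decision = plan(run_perception(camera_frames=[]))
print(decision)  # -> maintain_speed
```

The point of the split: you can have perfect perception and still make a bad driving decision in `plan()`, or a reasonable planner fed garbage by an uncertain detector, which is why "perception vs. planning" is a useful distinction when people argue about where the failures are.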
HW3 cars with production AP moved to vision-only quite a while ago, too.