Exactly, I left it open so that people can see why USS/radar are not the way forward. The tech is moving in a different direction. When halogen headlights first came in they weren't bright enough, but some of the bolder car makers still dropped the headlight wipers and washers because of the complex integration those wipers and washers needed. Then LEDs arrived in the last 10-13 years and made those wipers redundant, yet some car makers like Volvo still can't innovate, or aren't bold enough, to drop the washers.
Tesla is one of the early adopters of modern tech in cars, so you expect them to ditch some of the old tech, even if it is tried and tested, in favour of new tech, because that is their USP. At the moment going without USS/radar is like halogen bulbs without washers and wipers: not many car companies would do it, but Tesla will/can, to get to the equivalent of the LED/matrix light systems modern cars have. Unfortunately, with the Tesla Vision system you need more 'uncluttered data' that can be processed into something more coherent. USS/radar were interfering with collecting that uncluttered data, so Tesla had no choice other than to remove them and collect it.
That's where I don't agree with your conclusion. I'm a tech enthusiast too, but as a piece of safety equipment, a radar or lidar has the ability to detect obstacles. A camera does not. There is an over-reliance on AI interpretation of a sensor that can only capture flat, 2D images and can only conclude, with an accuracy necessarily lower than 100%, that 'this cluster of pixels' is an object in the way.
Why have Pro iPhones been equipped with a LiDAR for photography for the last couple of generations? Because that is the only way to sense depth. Otherwise, Apple would gladly do without an expensive sensor part in their COGS. And yet they are leaders in computational photography and couldn't get to a better solution with pure software.
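For the sake of argument, here is a back-of-the-envelope sketch of why depth inferred from camera geometry degrades with distance while a direct ranging sensor stays roughly constant. All the numbers (0.3 m baseline, 1000 px focal length, half-pixel disparity error, 5 cm time-of-flight error) are illustrative assumptions, not specs of any iPhone or car sensor:

```python
# Rough comparison (not any real pipeline): error of depth triangulated from
# camera views vs. error of a direct time-of-flight measurement (lidar/radar).

def camera_depth_error(distance_m, baseline_m=0.3, focal_px=1000.0, disparity_err_px=0.5):
    """Triangulated depth error grows roughly with distance squared."""
    return (distance_m ** 2) * disparity_err_px / (focal_px * baseline_m)

def tof_range_error(distance_m, base_err_m=0.05):
    """Direct time-of-flight range error is roughly constant over distance."""
    return base_err_m

for d in (5, 20, 50, 100):
    print(f"{d:>4} m   camera-derived ±{camera_depth_error(d):5.2f} m   "
          f"direct ranging ±{tof_range_error(d):4.2f} m")
```

At parking distances both look fine; at highway distances the camera-derived figure is off by metres while the ranging sensor is still within centimetres, which is exactly the regime where you care about an obstacle ahead.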
When it was just about replacing a $2 rain sensor, it was a minor inconvenience, but now, putting your faith in FSD with that?
As the person in the car, with my life on the line, I'd rather have multiple sensors that cross-check each other's information than a single one saying 'well, I'm not sure, because I don't have a certain way of knowing if there is an obstacle ahead, but it's probably fine based on my training data'...
Keeping USS and radar would have been the safest choice. The removal was not driven by the ability to do better with Vision, but simply by short-term cost reduction, at the expense of end-user safety. Otherwise, they would have kept the hardware as a failsafe and developed Vision in parallel.
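To make the 'failsafe' point concrete, here is a toy sketch of how a second sensor could sit alongside vision: not replacing it, just bounding how optimistic the system is allowed to be. All names, fields and thresholds are hypothetical, purely for illustration, and nothing here is Tesla's actual code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Estimate:
    distance_m: float   # estimated distance to the nearest obstacle
    confidence: float   # 0..1, how sure this source is

def fused_clearance(vision: Estimate, radar: Optional[Estimate],
                    min_confidence: float = 0.9) -> float:
    """Conservative fusion: trust vision only when it is confident, and never
    report more clearance than an independent ranging sensor measured."""
    if radar is None:
        # Vision-only: if it is unsure, assume the worst (no clearance).
        return vision.distance_m if vision.confidence >= min_confidence else 0.0
    if vision.confidence < min_confidence:
        return radar.distance_m                       # unsure vision defers to radar
    return min(vision.distance_m, radar.distance_m)   # never more optimistic than the failsafe

print(fused_clearance(Estimate(40.0, 0.6), Estimate(12.0, 0.99)))  # -> 12.0
print(fused_clearance(Estimate(40.0, 0.6), None))                  # -> 0.0 (assume the worst)
```

The point of the sketch is that the second sensor costs you nothing when vision is right and saves you when it is wrong, which is why dropping it reads as a cost decision rather than an engineering one.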
The reintroduction of an HD Radar in HW4 is an admission of fault. We'll see how long it takes before USS goes the same way.