It’s plausible that this, like the USS replacement stuff, is blocked on single-stack TV.
Speaking as an IT architect: IF you have the capability for the car to model the 3D space around itself, the natural way to build these features would be on top of that foundation.
How do you provide parking sensors? You query your 3D model for nearby obstacles and give feedback based on the distances it reports. How do you do matrix lighting? The model already knows where the other entities around the car are, so you consult it and decide where to shine (and where not to shine) your lights.
What I would not do is build completely independent subsystems for each of these features on top of the fancy 3D model capability you still need anyway for self-driving. That would mean greater overall complexity and worse long-term efficiency, both in the processing running on the car and in the development effort to support and enhance it.
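To make that concrete, here’s a minimal sketch of the pattern I’m describing: one shared model, with each feature implemented as a thin query on top. This is Python with entirely made-up names (`WorldModel`, `Entity`, and so on), not anything from Tesla’s actual stack:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Entity:
    kind: str           # "car", "pedestrian", "cyclist", "wall", ...
    position: tuple     # (x, y, z) in metres, ego-vehicle frame
    heading_deg: float  # 0 = travelling same direction as ego, +/-180 = oncoming

@dataclass
class WorldModel:
    """The vision stack's 3D reconstruction: one source of truth."""
    entities: List[Entity]

    def nearest_obstacle_m(self, max_range_m: float = 5.0) -> float:
        """'Parking sensor' feature: just a distance query over the shared model."""
        dists = [(e.position[0] ** 2 + e.position[1] ** 2) ** 0.5
                 for e in self.entities]
        in_range = [d for d in dists if d <= max_range_m]
        return min(in_range) if in_range else float("inf")

    def beam_mask_targets(self) -> List[Entity]:
        """Matrix lighting feature: another query over the same model."""
        return [e for e in self.entities
                if e.kind in ("car", "pedestrian", "cyclist")]
```

The expensive part, building and updating the model, is paid once; every new feature is a cheap consumer of it.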
I think there are exciting possibilities here if you consider what the TV model knows versus what a traditional headlight-spotting camera knows. It would be pretty simple, for example, for Tesla’s matrix lighting to also mask out pedestrians and cyclists, not just oncoming cars. It would even be simple to mask out only oncoming traffic and leave the lights on for vehicles facing away from you. That would be really hard with a system built to spot headlights, but trivial with a system that already models every entity and predicts its motion.
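Here’s why that masking behaviour would be nearly free once the model exists. Reusing the made-up `Entity` type from the sketch above, the whole policy is a handful of lines, because per-entity type and heading are exactly the facts a headlight-spotting camera never has:

```python
def should_dim_for(entity: Entity) -> bool:
    """Illustrative masking policy, not Tesla's actual logic."""
    # Always shield vulnerable road users.
    if entity.kind in ("pedestrian", "cyclist"):
        return True
    # Dim for oncoming cars (roughly opposing heading), but leave
    # the high beams on for cars facing away from us.
    if entity.kind == "car":
        return abs(entity.heading_deg) > 135
    return False
```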
I don’t know that they’re planning to do this (and obviously I don’t know how good single stack will prove to be in the real world), but I do know that if I were on that software team, this is exactly how I would advise they approach it. Doing it the other way would be poor architecture because of all the capability it would duplicate.
There is a chance that ‘23 is going to be an exciting year for Tesla software updates. They could be about to leapfrog their way back in front of the competition again.