I’m just thinking: if they are SO confident in Vision for this, why not just situationally NOT take the USS into the stream context for the 3D parking assist/view/sky view, if all vision is the same?
Well, because they are not so confident at this point. And it's no secret; Ashok is quoted (via notateslaapp, I think from his X posts):
“This is the v1 release of this technology, and will have follow up releases that have even better geometric consistency with the cameras, better persistence of occluded obstacles, etc.”
Now, I'm personally less convinced that later versions will become as good as we all wish. Not because vision couldn't do it in principle with the increasingly sophisticated local world modeling, but because of the simple and long-standing limitations of the existing camera viewpoints.
Although I don't yet have the latest 3D visualizations to compare, I find that the existing pseudo-top-down 2D view is reasonably accurate when I'm backing into my charging space at night, next to a couple of my other cars. It visualizes the position of the adjacent car pretty well, but not really the block walls and gate that I'm angling toward.
I think there's a good chance that the wall and gate, and the trash cans, will be reasonably well modeled in the reverse-parking case, because there are cameras that can actually see most of what's going on to the rear. I expect them to be rendered less well if I pull in forward, with only the windshield cameras to draw from.
Yes, it will get better. But though I try to keep an open mind, I'm not dropping my opinion that they should have placed the cameras better - back in 2016, if not earlier.