Logically, if/when vision parking is solved, some kind of 360 parking view could be achieved, since the logic involved in the car "remembering" what it has seen in front as you drive towards or away from it would let it build a 3D map of your surroundings. This is tangential to the whole "occupancy network" thing that gets touted.
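For what it's worth, here's a minimal sketch of the kind of "remembering" I mean: fusing each frame's estimated 3D points into a persistent occupancy grid using the car's known motion. Everything here is hypothetical - the function name, and the assumption that you already have per-frame depth points and ego-motion poses from somewhere - so treat it as an illustration of the idea, not a claim about how Tesla actually does it.

```python
import numpy as np

def integrate_frame(grid, points_cam, cam_to_world,
                    voxel_size=0.1, grid_origin=(-10.0, -10.0, 0.0)):
    """Fuse one frame's estimated 3D points into a world-frame occupancy grid.

    points_cam:   (N, 3) points in the camera frame (e.g. from monocular depth).
    cam_to_world: 4x4 camera pose from ego-motion (wheel odometry / IMU).
    """
    # Transform the points into a fixed world frame using the known pose,
    # so observations from different moments land in the same grid cells.
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    points_world = (cam_to_world @ homog.T).T[:, :3]

    # Quantise into voxel indices and mark those cells as seen/occupied.
    idx = np.floor((points_world - np.asarray(grid_origin)) / voxel_size).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
    grid[tuple(idx[ok].T)] += 1  # evidence accumulates across frames
    return grid

# 20m x 20m x 3m around the car at 10cm resolution; call integrate_frame()
# for each new camera frame as the car moves, and earlier observations
# persist - that's the "memory" that would let it render a 360 view.
grid = np.zeros((200, 200, 30), dtype=np.int32)
```

The hard part, of course, isn't this bookkeeping - it's getting trustworthy 3D points out of 2D cameras in the first place, which is exactly what I'm sceptical about below.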
The problem (and here is where I get to earn my flyby disagrees) is that a) I don't have any confidence that the systems involved - cameras, AI, etc. - can discern all of these details in the sorts of conditions we experience (some Stanford guys have done work in this area, but that was in lab conditions, and it's unknown what processing power is involved), b) I don't have confidence that Tesla will deliver it in a reasonable timeframe, because it's not a simple problem domain, and c) I don't think it will be reliable when it is released.
If you think about what the cameras can "see" even just at night - forget about dirt or rain contaminating the cameras, etc. - I can't conceive of how the car is supposed to know to a certainty, to the extent that it can with ultrasonics, what obstacles are around it. I also don't think this is one of those first-iPhone "you just don't get it, man, you can't conceive of the magic you're about to see" moments either. You can't make a camera with no night vision/IR capability suddenly see things clearly in the dark.
All of that is to say that I wouldn't trust a Tesla 360 view based on vision - rather than cameras installed and pointing in the right places (bumpers, mirrors, etc.) as is done on other cars - or vision parking, except in absolutely ideal conditions. Much like the wipers, basically (I drove home last night at about 10pm with the car resolutely failing to wipe the screen often enough).