The main problem with Smart Summon is that it's so legacy, and its Vision stack is very limited in what it can identify (no occupancy network). For instance, it can likely only identify things like stop signs and maybe pedestrians. It seems to rely heavily on USS to identify cars (see the OP's accident), obstacles, and curbs, so things like trees and lane/parking markings are invisible to the legacy Vision software.

I don't. The visualization is just that: a rendering, and it can be scarily wrong. I do think the actual information received from the sensors and the software in the car is far better, just not rendered correctly. I don't know this for a fact, but… see below.
Yes, one would think. On my 2018 M3, the system uses the cameras, USS, and radar for Smart Summon… at least theoretically. That should be enough to understand the environment, and it seems to do so rather well.