For example, if you are at a Michigan U-turn with cars going 70 MPH and you are making a LEFT at the STOP sign (green/red path).
Your completely separate Vision system would then, at that moment, accept information from the right corner radar and maybe the right lidar to detect fast-approaching cars.
Sounds great in theory, but I'm not sure how it improves the system. Human drivers do exactly this because they have limited "sensors", and this means they are sometimes caught out by unpredictable scenarios. But with a full, multi-spectrum 360 sensor suite and plenty of processing power, where is the benefit?
It might seem counter-intuitive, but taking this approach becomes more of a cost than a benefit. The cost arises because someone has to pre-define where the car should "focus its attention" in any given scenario. Further, the car has to be able to recognise that it is in a particular scenario in the first place.
It is likely that this knowledge would be stored in the HD map, which limits the benefit to mapped areas. A true L5 car would need a backup policy for when an HD map is not available, and that backup is likely to be using the full 360 sensor suite all the time... and if you have to do that anyway, why take on the additional cost of manual scenario planning?
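To make the cost argument concrete, here is a minimal sketch of what scenario-based attention plus a fallback policy might look like. All names here (SENSOR_FOCUS, active_sensors, the scenario key) are hypothetical illustrations, not any vendor's actual API:

```python
# Hypothetical sketch: scenario-conditioned sensor attention with a
# full-suite fallback for unmapped areas. Names are illustrative only.

FULL_SUITE = frozenset({
    "front_camera", "rear_camera", "left_camera", "right_camera",
    "front_radar", "left_corner_radar", "right_corner_radar", "lidar_360",
})

# Every scenario must be hand-defined in advance -- this table is the
# "manual scenario planning" cost described above.
SENSOR_FOCUS = {
    "michigan_left_from_stop": frozenset(
        {"front_camera", "right_corner_radar", "lidar_360"}
    ),
}

def active_sensors(hd_map_scenario):
    """Return the sensors to prioritise right now.

    hd_map_scenario is None when there is no HD map coverage; the only
    safe backup policy is the full 360 suite, which is also the sensible
    default for any scenario the table doesn't cover.
    """
    if hd_map_scenario is None:
        return FULL_SUITE  # backup policy: use everything, all the time
    return SENSOR_FOCUS.get(hd_map_scenario, FULL_SUITE)
```

Notice that the fallback path must already handle every situation safely using the full suite, so the hand-maintained table only ever *narrows* attention in mapped areas; it never adds capability the fallback lacks.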
Maybe this exposes a weakness of their system: they find it hard to define the trust relationships between the various sensors in a way that is flexible enough to handle real-world scenarios.