Cars don't learn unless they get a software update. Shadow mode is there to capture real-world scenarios.
Ever since my 'bug report' bubble burst with the realisation that, in effect, it did nothing, I have seriously doubted that there is much, if any, 'learning' going on, especially in the UK.
At one time it did appear that, when I returned home from a journey with many AP disengagements, my home internet ground to a halt, presumably while the car sent data back to Tesla. I am not aware of even that happening any more.
In any case, given Tesla's ongoing issues with FSD Beta, as well as long-standing problems like auto headlights, is there any evidence of a slick (or even mediocre) data-driven OTA update process keeping the cars at an elevated level of technical performance?
If AP runs into trouble, it disengages with a bong.
I play around with music software, and for many years that software has included an indication of how heavily you are loading the processor, to help avoid sudden, unexpected dropouts. If any of Tesla's AP or processor-driven aids depend on available processing resources, it would be good to know how stretched the processing system is.
Similarly, I would expect the car to have a measure of confidence in what it thinks it sees. Regardless of intermediate steps, ultimately it has to decide: 'brake or not brake', 'wipe or not wipe', 'high beam or not high beam'. Once it makes a decision either way, you'd think there should be a high level of confidence behind it, based on consistent observation over time, multiple sensor feedback, or both. The glitchy, lurching, unreassuring feel of the auto features suggests to me that the algorithms are happy to ditch what they saw and knew a fraction of a second ago in favour of what they see right now, regardless of how plausible that is. That, and the potential for throwing its hands up at any moment with an 'over to you, driver', both suggest that decisions are (still) made on individual frames rather than within a temporal confidence bubble.
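For what it's worth, the 'temporal confidence bubble' idea is a well-known pattern in control software: smooth per-frame confidence over time and only flip the decision when the smoothed value crosses hysteresis thresholds. The sketch below is purely illustrative (it is not Tesla's actual algorithm; the class name, thresholds, and smoothing factor are all my own assumptions), using the wipe / don't wipe decision as the example:

```python
# Hypothetical illustration of temporal smoothing with hysteresis.
# NOT Tesla's actual algorithm -- all names and numbers are invented.

class TemporalDecision:
    def __init__(self, alpha=0.2, on_threshold=0.7, off_threshold=0.3):
        self.alpha = alpha                  # smoothing factor: lower = longer memory
        self.on_threshold = on_threshold    # smoothed confidence needed to switch on
        self.off_threshold = off_threshold  # smoothed confidence needed to switch off
        self.smoothed = 0.0
        self.active = False                 # current decision, e.g. wiping or not

    def update(self, frame_confidence):
        # Exponential moving average over recent frames, so one
        # contradictory frame cannot discard what the system "knew"
        # a fraction of a second ago.
        self.smoothed = (self.alpha * frame_confidence
                         + (1 - self.alpha) * self.smoothed)
        # Hysteresis: require sustained evidence before flipping either way.
        if not self.active and self.smoothed > self.on_threshold:
            self.active = True
        elif self.active and self.smoothed < self.off_threshold:
            self.active = False
        return self.active

wipers = TemporalDecision()
print(wipers.update(1.0))   # one noisy "rain" frame: stays off
for _ in range(10):
    wipers.update(1.0)      # sustained rain over many frames
print(wipers.active)        # now on, and it won't flicker off on one dry frame
```

The point is simply that the on and off thresholds differ, so the decision cannot lurch back and forth on single-frame noise, which is exactly the behaviour the auto wipers and headlights seem to lack.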
At least for FSD, but likely for everything else too, I feel the car should share an indication that it believes itself to be well within its confidence zone, and that all auto systems should therefore work dependably.
Especially with driver control interface changes requiring longer periods of driver distraction (hunting for controls), a human can only drive the car with a notional confidence bubble of their own, used to gauge when it might be safe to attempt a more demanding task ... like turning on the heater or AC!