I'm not sure whether shadow mode is something Tesla has already implemented or something it plans to implement at some point in the future. In either case, I have a theory about how it might work, based on a process described by another autonomous driving company, Aurora, that seems to achieve the goal of shadow mode as Elon has described it. Here it is:
“The ICML [International Conference on Machine Learning] talk from Aurora pointed out how human driving is a valuable source of learning when it comes to planning and decision making. Aurora particularly emphasized the importance of human interventions for imitation learning. Additionally, the Aurora speaker talked about flagging interesting human demonstrations without interventions. When a human driver takes a trajectory, Aurora’s software can determine how likely it is that this trajectory would be produced by Aurora’s planner. If the probability is low, this suggests a disagreement between the human driver and the planner. Aurora discussed this in the context of recorded data stored on Aurora’s servers (“offline data”), but I don’t see why you couldn’t run this live (“online”) in a car as well.”
(Source: my article “Why Tesla’s Fleet Miles Matter for Autonomous Driving”.)
So, on this theory, if the Tesla autonomous planner is “surprised by” or “disagrees with” the trajectory a human driver takes, it can trigger a sensor snapshot to be uploaded.
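The trigger logic this theory implies can be sketched in a few lines. Everything below is a hypothetical illustration, not Tesla's or Aurora's actual software: the `ToyPlanner` class, the `log_likelihood` scoring model, and the `SNAPSHOT_THRESHOLD` value are all assumptions I'm making for the sake of example. The core idea is just that the planner assigns a probability to the trajectory the human actually drove, and a sufficiently low probability counts as a "disagreement" that flags a sensor snapshot for upload.

```python
import math

# Hypothetical sketch of online "surprise" detection. All names here
# (ToyPlanner, log_likelihood, SNAPSHOT_THRESHOLD) are assumptions for
# illustration only.

# Flag trajectories the planner would assign less than ~1% probability.
SNAPSHOT_THRESHOLD = math.log(0.01)

class ToyPlanner:
    """Stand-in planner: scores how likely it is that the planner itself
    would have produced a given trajectory."""

    def __init__(self, preferred_trajectory):
        self.preferred = preferred_trajectory

    def log_likelihood(self, trajectory):
        # Toy model: log-probability falls off with mean squared deviation
        # from the planner's own preferred trajectory (0 = perfect agreement).
        mse = sum((a - b) ** 2 for a, b in zip(trajectory, self.preferred)) / len(trajectory)
        return -mse

def should_upload_snapshot(planner, human_trajectory):
    """Trigger a sensor snapshot when the planner 'disagrees with' or is
    'surprised by' the trajectory the human driver took."""
    return planner.log_likelihood(human_trajectory) < SNAPSHOT_THRESHOLD

planner = ToyPlanner(preferred_trajectory=[0.0, 0.0, 0.0, 0.0])
agreeing = [0.1, 0.0, -0.1, 0.05]   # close to what the planner would do
diverging = [3.0, 4.0, 5.0, 6.0]    # a sharply different human maneuver

print(should_upload_snapshot(planner, agreeing))   # low surprise: no upload
print(should_upload_snapshot(planner, diverging))  # high surprise: upload
```

In a real system the trajectory-likelihood model would of course be the planner itself rather than a distance heuristic, but the control flow — score the human demonstration online, compare against a threshold, upload on disagreement — is the part this theory of shadow mode turns on.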