
A theory on how shadow mode works (or will work)

I'm not sure whether shadow mode is something Tesla has already implemented or something Tesla plans to implement at some point in the future. In either case, I have a theory about how it might work, based on a process described by another autonomous driving company, Aurora, that seems to achieve the goal Elon has laid out for shadow mode. Here it is:

“The ICML [International Conference on Machine Learning] talk from Aurora pointed out how human driving is a valuable source of learning when it comes to planning and decision making. Aurora particularly emphasized the importance of human interventions for imitation learning. Additionally, the Aurora speaker talked about flagging interesting human demonstrations without interventions. When a human driver takes a trajectory, Aurora’s software can determine how likely it is that this trajectory would be produced by Aurora’s planner. If the probability is low, this suggests a disagreement between the human driver and the planner. Aurora discussed this in the context of recorded data stored on Aurora’s servers (“offline data”), but I don’t see why you couldn’t run this live (“online”) in a car as well.”
(Source: my article “Why Tesla’s Fleet Miles Matter for Autonomous Driving”.)

So, on this theory, if the Tesla autonomous planner is “surprised by” or “disagrees with” the trajectory a human driver takes, it can trigger a sensor snapshot to be uploaded.
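
To make that concrete, here's a rough sketch of what such a trigger could look like. Everything in it is my own guess: the toy Gaussian scoring, the threshold, and the function names are placeholders, not anything Tesla or Aurora has published.

```python
import numpy as np

def planner_log_prob(planned_traj, human_traj, sigma=0.5):
    """Score how likely the human trajectory is under a toy Gaussian
    centred on the planner's own trajectory ([T, 2] arrays of x/y waypoints)."""
    sq_err = np.sum((planned_traj - human_traj) ** 2)
    return -sq_err / (2 * sigma ** 2)

def should_upload_snapshot(planned_traj, human_traj, threshold=-10.0):
    """Flag a sensor snapshot for upload when the human-driven trajectory is
    improbable under the planner, i.e. the planner is 'surprised'."""
    return planner_log_prob(planned_traj, human_traj) < threshold

# Example: the planner would have kept the lane, but the human drifted left.
planned = np.column_stack([np.linspace(0, 20, 10), np.zeros(10)])
human = np.column_stack([np.linspace(0, 20, 10), np.linspace(0, 2.5, 10)])
print(should_upload_snapshot(planned, human))  # True -> trigger an upload
```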
 
Shadow mode could be used to evaluate whether the software is a safer driver than the average human. (That's the purpose Elon originally described.) It could also be used as a way to catch software errors. For example, if the neural networks fail to detect an object that a human driver swerves around, that would probably constitute a “disagreement” or a “surprise” that could trigger an upload. Human annotators could review the video clip and label the undetected object. Then the labelled video clip could be added to the training dataset.
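
As a rough illustration of that pipeline (the class and function names here are made up for the example, not anything Tesla has described):

```python
from dataclasses import dataclass, field

@dataclass
class FlaggedClip:
    """A sensor snapshot uploaded after a 'surprise' event (hypothetical)."""
    clip_id: str
    reason: str                       # e.g. "missed_object", "planner_surprise"
    labels: list = field(default_factory=list)

def annotate(clip: FlaggedClip, label: str) -> FlaggedClip:
    """A human reviewer adds the label the neural networks missed."""
    clip.labels.append(label)
    return clip

def add_to_training_set(training_set: list, clip: FlaggedClip) -> None:
    """Only clips that have been labelled go back into the training data."""
    if clip.labels:
        training_set.append(clip)

# Example: the networks missed a tire on the road; the driver swerved around it.
training_set = []
clip = FlaggedClip(clip_id="clip-0042", reason="missed_object")
add_to_training_set(training_set, annotate(clip, "road_debris:tire"))
print(len(training_set))  # 1
```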

The same is true for errors caused by the planner. If the human-driven trajectory is improbable because the planner would make a mistake in that situation, the “surprise” or “disagreement” flags a planner error. In an imitation learning framework, the human-driven trajectory could be added to the training dataset. For a manually designed planner, the engineers could use mistakes as feedback on how to design the planner better.
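
In an imitation learning framework, that just means the flagged human trajectory becomes the supervision target. A minimal behaviour-cloning-style sketch, with a placeholder mean-squared-error loss standing in for whatever Tesla actually uses:

```python
import numpy as np

def imitation_loss(predicted_traj, human_traj):
    """Mean squared error between the planner's trajectory and the
    human-driven one; minimising this pulls the planner toward the
    human demonstration."""
    return float(np.mean((predicted_traj - human_traj) ** 2))

# The flagged clip supplies (state, human_trajectory) pairs; a training step
# would adjust planner parameters to reduce this loss on those pairs.
predicted = np.array([[0.0, 0.0], [5.0, 0.1], [10.0, 0.3]])
human     = np.array([[0.0, 0.0], [5.0, 0.8], [10.0, 1.6]])
print(imitation_loss(predicted, human))
```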

As of Autonomy Day, Tesla's planner seemed to involve some imitation learning elements and some manually designed elements. So, how shadow mode data would be used to improve planning probably depends on the kinds of mistakes that shadow mode catches. For example, an incorrect steering angle through a highway cloverleaf would probably be thrown on the pile for imitation learning. A badly executed lane change might lead to manually written code getting tweaked. (As long as lane changes remain hand-coded.)

Prediction errors are the easiest to deal with because you don't even need shadow mode or human interventions. The predictor can determine whether its prediction for 5 seconds in the future is accurate or inaccurate simply by waiting 5 seconds and seeing what happens.
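
A toy sketch of that self-supervised bookkeeping (the horizon, threshold, and data structures are all invented for the example):

```python
from collections import deque

import numpy as np

HORIZON_STEPS = 50       # e.g. 5 seconds at 10 Hz
ERROR_THRESHOLD = 2.0    # metres of position error before we flag it

pending = deque()        # (timestep_made, predicted_position) pairs

def record_prediction(t, predicted_pos):
    """Store where we think a tracked vehicle will be 5 seconds from now."""
    pending.append((t, np.asarray(predicted_pos, dtype=float)))

def check_matured_predictions(t_now, observed_pos):
    """Compare predictions that have reached their horizon against the
    position the tracked vehicle actually ended up at (no human needed)."""
    flagged = []
    while pending and pending[0][0] + HORIZON_STEPS <= t_now:
        t_made, pred = pending.popleft()
        error = float(np.linalg.norm(pred - np.asarray(observed_pos, dtype=float)))
        if error > ERROR_THRESHOLD:
            flagged.append((t_made, error))  # candidate for upload / retraining
    return flagged

# Example: at t=0 we predicted the lead car would be at x=70 m by t=50,
# but it braked and only reached x=55 m.
record_prediction(0, [70.0, 0.0])
print(check_matured_predictions(50, [55.0, 0.0]))  # [(0, 15.0)] -> flagged
```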
 
I just assumed that shadow mode meant that the software calculates what it would do if it were in charge and compares it to what the driver does. I cannot imagine that with so many cars on the road, a human could be involved in the process at all. All the data would be sent back, maybe once a day, and aggregated. The learning program would then try to make its responses more like the human ones in aggregate. That's how I imagined the concept. How well today's computers and deep learning algorithms can actually do that, I have no idea. I'm a bit more pessimistic than I was a couple of years ago.
 
Pretty sure they've already covered all this on Autonomy Day.
It seemed pretty clear then that shadow mode has been out for several years.

It does seem odd to use an article you wrote yourself as a source reference; it's not really a reference at all.
 
Pretty sure they've already covered all this on Autonomy Day.

Here's what Stuart Bowers said about shadow mode on Autonomy Day:


He describes it at a high level, but the Aurora talk gives more detail about how to determine when the software and the human “disagree”. The relevant part of the talk is slides 15-17 from 14:09 to 16:27:

[Slide images: Aurora ICML talk, slides 15-17]


It does seem odd to use an article you wrote yourself as a source reference; it's not really a reference at all.

I'm not citing myself as a reference! Lol!
 