u/topper3418 on Reddit pointed out this excerpt from Tesla's Q2 update letter, which was published on July 24, 2019:
“We are making progress towards stopping at stop signs and traffic lights. This feature is currently operating in “shadow mode” in the fleet, which compares our software algorithm to real-world driver behavior across tens of millions of instances around the world.”
Now, this doesn't necessarily imply that the software running passively (in “shadow mode”) included the bigger, more computationally intensive neural networks developed for HW3. But it would make sense for Tesla to deploy these passively as soon as possible to test and train them, especially since Karpathy said in October 2018 (nine months before this update letter) that he was excited to deploy the new NNs.
Prior to the Q2 update letter in July, the last update I'm aware of was from Elon on Twitter in April:

“The Tesla Full Self-Driving Computer now in production is at about 5% compute load for these tasks [i.e. Navigate on Autopilot] or 10% with full fail-over redundancy”

Elon also tweeted that the compute load on HW2.5 was “~80%”.
I only just realized that Elon said “for these tasks”, i.e. the features that were active in customers' cars at the time. That doesn't include anything that might have been running in shadow mode.
So — pending further evidence — my hunch is that the HW3-gen NNs have been running passively in HW3 cars since at least July. I would guess since HW3 began entering production cars in March/April. So, ~6-9 months already, rather than just the last week in which the FSD Visualization Preview got pushed.
u/keco185 on Reddit suggested that Tesla has been using human driving behaviour to train red light and stop sign detection. I think this makes sense, since if a human Tesla driver stops when the Autopilot planner isn't expecting it, that could be used to signal a false negative for a red light or stop sign. Conversely, if the human goes when the planner isn't expecting it, that could signal a false positive for a red light or stop sign. These “surprises” or “disagreements” could be used to curate examples to be hand-labelled for training (and also for testing). Aurora has described using a similar approach.
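The disagreement-mining idea above can be sketched in a few lines. This is purely illustrative: the data shapes, field names, and labels are my assumptions, not anything Tesla has confirmed.

```python
# Hypothetical sketch of "disagreement mining": flag frames where the human
# driver's action contradicts what the planner expected, so those frames can
# be curated for hand-labelling. All field names here are assumptions.

def flag_disagreements(frames):
    """Return (frame_id, reason) pairs worth sending for human labelling."""
    flagged = []
    for frame in frames:
        human_stopped = frame["human_action"] == "stop"
        planner_expected_stop = frame["planner_action"] == "stop"
        if human_stopped and not planner_expected_stop:
            # Driver stopped where the planner saw no reason to: a possible
            # false negative (missed stop sign or red light).
            flagged.append((frame["id"], "possible_false_negative"))
        elif not human_stopped and planner_expected_stop:
            # Driver kept going where the planner would have stopped: a
            # possible false positive detection.
            flagged.append((frame["id"], "possible_false_positive"))
    return flagged

frames = [
    {"id": 1, "human_action": "stop", "planner_action": "go"},
    {"id": 2, "human_action": "go", "planner_action": "go"},
    {"id": 3, "human_action": "go", "planner_action": "stop"},
]
print(flag_disagreements(frames))
# [(1, 'possible_false_negative'), (3, 'possible_false_positive')]
```

Only the disagreement cases get queued, which is the point: the fleet's "agreements" are cheap and plentiful, while the surprises are the rare examples worth paying a human to label.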
On r/SelfDrivingCars, u/brandonlive also speculated that Tesla may be using maps to detect false negatives for stop signs and traffic lights. I think that's a brilliant idea.
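The map cross-check could work along these lines: if the map says an intersection has a stop sign but the vision stack reported no detection nearby, queue that drive segment for review. Again a sketch under assumed data shapes, not a known implementation.

```python
# Hypothetical map-based false-negative mining: compare map-annotated stop
# sign positions against the car's detections. Coordinates and the matching
# tolerance are illustrative assumptions.

def find_missed_signs(map_signs, detections, tolerance=10.0):
    """map_signs/detections: lists of (x, y) positions in metres.
    Returns map signs with no detection within `tolerance` metres."""
    missed = []
    for sx, sy in map_signs:
        seen = any((sx - dx) ** 2 + (sy - dy) ** 2 <= tolerance ** 2
                   for dx, dy in detections)
        if not seen:
            missed.append((sx, sy))  # candidate false negative
    return missed

print(find_missed_signs([(0, 0), (100, 50)], [(1, 1)]))
# [(100, 50)]
```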
Other active learning techniques — like NN ensemble disagreement — could be used as well.
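For completeness, ensemble disagreement might look something like this: run several independently trained models on the same image and prioritise for labelling the examples where their scores diverge most. The scores below are made up for illustration.

```python
# Hypothetical ensemble-disagreement active learning: rank examples by the
# spread of "stop sign" confidence scores across ensemble members. High
# spread means the models are uncertain, so label those examples first.

from statistics import pstdev

def rank_by_disagreement(examples):
    """examples: {image_id: [score from each ensemble member]}.
    Returns image ids, most-disagreed-upon first."""
    return sorted(examples,
                  key=lambda k: pstdev(examples[k]),
                  reverse=True)

scores = {
    "img_a": [0.95, 0.93, 0.96],   # ensemble agrees: easy example
    "img_b": [0.10, 0.85, 0.40],   # ensemble disagrees: label this first
}
print(rank_by_disagreement(scores))  # ['img_b', 'img_a']
```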
Karpathy's most recent talk about the new, HW3-gen NNs for anyone who missed it: