
Were HW3-gen NNs running passively since July?

u/topper3418 on Reddit pointed out this excerpt from Tesla's Q2 update letter, which was published on July 24, 2019:

“We are making progress towards stopping at stop signs and traffic lights. This feature is currently operating in “shadow mode” in the fleet, which compares our software algorithm to real-world driver behavior across tens of millions of instances around the world.”
Now, this doesn't necessarily imply that the software running passively (in “shadow mode”) included the bigger, more computationally intensive neural networks developed for HW3. But it would make sense for Tesla to deploy these passively as soon as possible to test and train them, especially since Karpathy said in October 2018 (nine months before this update letter) that he was excited to deploy the new NNs.

Prior to the Q2 update letter in July, the last update I'm aware of was from Elon on Twitter in April:

“The Tesla Full Self-Driving Computer now in production is at about 5% compute load for these tasks [i.e. Navigate on Autopilot] or 10% with full fail-over redundancy”​

Elon also tweeted that the compute load on HW2.5 was “~80%”.

I only just realized that Elon said “for these tasks”, i.e. the features that were active in customers' cars at the time. That doesn't include anything that might have been running in shadow mode.

So — pending further evidence — my hunch is that the HW3-gen NNs have been running passively in HW3 cars since at least July. I would guess since HW3 began entering production cars in March/April. So, ~6-9 months already, rather than just the last week in which the FSD Visualization Preview got pushed.

u/keco185 on Reddit suggested that Tesla has been using human driving behaviour to train red light and stop sign detection. I think this makes sense, since if a human Tesla driver stops when the Autopilot planner isn't expecting it, that could be used to signal a false negative for a red light or stop sign. Conversely, if the human goes when the planner isn't expecting it, that could signal a false positive for a red light or stop sign. These “surprises” or “disagreements” could be used to curate examples to be hand-labelled for training (and also for testing). Aurora has described using a similar approach.
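
To make that concrete, here's a toy sketch of the curation logic I'm imagining. All the names and interfaces here are mine, not Tesla's:

```python
# Toy sketch of disagreement-based curation (my guess at the shape of the
# logic, not Tesla's actual pipeline). Compare what the planner expected
# with what the human actually did; keep only the mismatches for labelling.

def curate_disagreements(events):
    """events: iterable of (planner_expects_stop, human_stopped, snapshot)."""
    candidates = []
    for planner_expects_stop, human_stopped, snapshot in events:
        if human_stopped and not planner_expects_stop:
            # Human stopped when the planner didn't expect it: possible
            # missed stop sign or red light (false negative).
            candidates.append(("possible_false_negative", snapshot))
        elif planner_expects_stop and not human_stopped:
            # Planner expected a stop the human never made: possible
            # false positive.
            candidates.append(("possible_false_positive", snapshot))
        # Agreements (the overwhelming majority) are simply discarded.
    return candidates
```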

On r/SelfDrivingCars, u/brandonlive also speculated that Tesla may be using maps to detect false negatives for stop signs and traffic lights. I think that's a brilliant idea.
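
A toy version of that map check might look something like this (the interfaces are invented for illustration):

```python
import math

# Toy sketch of map-assisted false-negative detection: if the map says a
# stop sign or traffic light should be nearby but the perception NN reported
# nothing of that kind this frame, flag it for review.

def flag_map_misses(position, map_signals, detected_kinds, radius_m=50.0):
    """position: (x, y) in metres; map_signals: [(kind, (x, y)), ...];
    detected_kinds: set of signal kinds the NN reported this frame."""
    missed = []
    for kind, (sx, sy) in map_signals:
        dx, dy = sx - position[0], sy - position[1]
        if math.hypot(dx, dy) <= radius_m and kind not in detected_kinds:
            missed.append((kind, (sx, sy)))  # candidate false negative
    return missed
```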

Other active learning techniques — like NN ensemble disagreement — could be used as well.
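
For example, an ensemble-disagreement filter could be as simple as flagging frames where independently trained models diverge. The numbers and threshold here are made up:

```python
# Toy example of ensemble disagreement as an active-learning signal: run
# several independently trained models on the same frame and treat high
# variance in their confidences as "this frame is hard, label it".

def disagreement(confidences):
    mean = sum(confidences) / len(confidences)
    return sum((c - mean) ** 2 for c in confidences) / len(confidences)

frames = {"frame_a": [0.95, 0.97, 0.96],   # models agree: skip
          "frame_b": [0.10, 0.85, 0.55]}   # models disagree: label
hard_frames = [f for f, c in frames.items() if disagreement(c) > 0.05]
print(hard_frames)  # ['frame_b']
```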

Karpathy's most recent talk about the new HW3-gen NNs, for anyone who missed it:

 
Could be. He has deep access to the Autopilot computer, so if it's in production code he will see it, and he usually reports it if it's noteworthy. Lots of good info on there about "shadow mode" too.
 
Yeah, not arguing whether or not he has root. Merely pointing out that he is one of the only people in the real world with the level of access needed to answer the questions you're asking, and that he is a good source of information.
 
“The Tesla Full Self-Driving Computer now in production is at about 5% compute load for these tasks [i.e. Navigate on Autopilot] or 10% with full fail-over redundancy”
Elon also tweeted that the compute load on HW2.5 was “~80%”.
I believe this is just running the same HW2/2.5 behavior "as-is" on the new hardware. At autonomy day, Bannon showed a slide with HW2.5 capable of handling 110 frames per second, while the FSD computer has a 21x increase to 2300 frames per second.

Doing some frame counting on @verygreen's YouTube videos, the video updates at 30 frames per second while the overlays for lines and boxes are less frequent for the non-main cameras. Notably, it looks like the main camera overlay can update every frame, while the other overlays update every 4th frame (~8 per second). 30 main camera frames + 7 cameras * 8 frames = 86 frames per second. And 86 / 110 = 78% compute load for HW2.5, while 86 / 2300 = 4% -- both numbers pretty close to the 80% and 5% figures.

One guess is that Autopilot will soon use the full frame rate from all cameras on the FSD computer, as that would be 30 * 8 = 240 fps, just barely over 10% load (and 20% with redundancy). This will be especially important for reacting to quickly moving cross traffic, as handling only 8 frames per second leaves a 125ms gap between frames, while 30fps leaves a 33ms gap. Converting that to distance for cross traffic moving at 60mph: a 33ms gap = 3 feet, while a 125ms gap = 11 feet.
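
Putting the arithmetic in this post in one place (the numbers are this thread's estimates, not Tesla's):

```python
# Frame budgets from the autonomy day slide: 110 fps on HW2.5, 2300 fps on
# the FSD computer. Overlay rates are as observed in verygreen's videos.

HW25_BUDGET, HW3_BUDGET = 110, 2300

# Main camera at 30 fps, the other 7 cameras at roughly every 4th frame (~8 fps).
observed_fps = 30 + 7 * 8                      # 86 fps
print(observed_fps / HW25_BUDGET)              # ~0.78 -> Elon's "~80%"
print(observed_fps / HW3_BUDGET)               # ~0.04 -> Elon's "5%"

# If all 8 cameras ran at the full 30 fps on the FSD computer:
full_fps = 8 * 30                              # 240 fps
print(full_fps / HW3_BUDGET)                   # ~0.10 (20% with redundancy)

# Reaction-gap distance for 60 mph (88 ft/s) cross traffic:
for fps in (8, 30):
    print(fps, "fps ->", round(88 / fps, 1), "feet between frames")
# 8 fps -> 11.0 feet; 30 fps -> 2.9 feet
```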
 
Doing some frame counting on @verygreen's YouTube videos

1) Isn't verygreen using HW2 (or HW2.5) in the videos?

2) What I'm suggesting is that both generations of NNs are running on HW3. If verygreen is using HW3, he may be capturing visualizations from the HW2-gen NNs.

There is no real shadow mode.

I think shadow mode is most likely real. The reasons I think so are: 1) Tesla says so, 2) other companies like Aurora also use shadow mode by another name, and 3) I've never seen any strong evidence to the contrary.

As far as I can tell, green's argument that shadow mode doesn't exist is simply that Teslas don't record and upload sensor data or perception NN data on 100% of the miles they drive. That seems like a non-sequitur to me. That's not how Elon or anyone else at Tesla has described shadow mode, as far as I'm aware. Who said shadow mode means Teslas are recording video or perception NN outputs all the time and uploading all of them at the end of each day?

In my understanding, the term "shadow mode" is used in two senses: 1) a broad sense, in which software (including NNs) runs passively on cars for testing and/or training purposes, and 2) a narrow sense, in which a human driver taking a trajectory that the Autopilot planner assigned a low probability is a signal that can be used to trigger a data upload. (Or something along these lines; I'm just describing how Aurora does it and assuming Tesla does the same thing.)

If ~99.9% of the time the Autopilot planner and the human driver agree, there is no need to upload that data. The only data that should be uploaded is some carefully curated subset of the ~0.1% of the time that they disagree. And that is consistent with how Elon has described shadow mode.
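
Here's a toy sketch of the narrow sense I described above. The threshold and interfaces are invented; this is my guess at the shape of the logic, not Tesla's actual code:

```python
# Hypothetical trajectory-disagreement trigger: score the human-driven path
# under the planner's distribution and upload a snapshot only when the
# planner considered that path very unlikely.

UPLOAD_THRESHOLD = 0.001  # made-up cutoff: only the rarest trajectories qualify

def maybe_upload(planner_probability, snapshot, upload_queue):
    """planner_probability: how likely the planner considered the path the
    human actually drove. Agreements (the common case) leave no trace."""
    if planner_probability < UPLOAD_THRESHOLD:
        upload_queue.append(snapshot)  # rare disagreement: worth uploading
    # else: nothing is recorded or uploaded for this segment
```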
 
What I'm suggesting is that both generations of NNs are running on HW3.
I know, and I'm suggesting the 80% and 5% load numbers match up with the same NN getting the same inputs on both old and new hardware. The frame math is actually more nuanced, with various optimizations to crop or preferentially process certain cameras, but overall 80% of HW2.5's 110 fps budget is close to 5% of the FSD computer's 2300 fps budget.

It would probably be safer to load a different NN onto the B node than to have it compete with driving-critical operation on the same node anyway, and the recent finding from verygreen might be a test run of this. Similar to adding cone visualizations as a test deployment in preparation for showing more visualizations like traffic lights, starting to use node B with even the same NN could be a test before running a different NN. Although, I believe verygreen hasn't noticed an additional NN in the software so far, and I guess average owners could also notice by monitoring software update sizes increasing significantly from roughly 500MB.
 
1) Isn't verygreen using HW2 (or HW2.5) in the videos? […]

I think shadow mode is most likely real. […]

You are just making stuff up without a real basis in reality. Tesla doesn't do what you suppose.
 
The code is reality.
And you have the code?

Seems dangerous to assume one hacker knows everything and misses nothing (especially since Soylent has said clearly that verygreen misses a lot).

I have a clear idea of what we know for a fact, what we can infer, and what is speculation. You seem to have difficulty distinguishing among those.

Fact: Tesla says they run some things in shadow mode.

Fact: Verygreen says he hasn't seen any "shadow mode" but has seen evidence of various triggers.

What is your inference, what is your speculation?

PS: I've spent enough years of my life explaining software architecture to people who still don't get it! So hacking is more like the parable of the blind men trying to figure out what they are touching, while touching an elephant.
 
There is no evidence for shadow mode. As everyone says, it's just Tesla's word, and they are proven liars, particularly Elon.

Green isn't God, but I'm not the one operating on faith. His track record speaks better than Tesla's, and he does see the code and has the smarts and skills to properly interpret it. He also has ethics and morals, unlike Tesla.

But ultimately I'm not the one making assertions without basis. Shadow mode has yet to be proven, and it would have to be clear to anyone with root. All who have root access, particularly to the APE, are in agreement, except the true believers who don't need something as pesky as evidence to push their unfounded beliefs.
 
I believe green's claim is nothing more than that HW2/3 Teslas aren't recording video or object detections all the time and uploading all of them.

I think continuous recording is what green interprets as “shadow mode”, but I don't interpret it that way.

I interpret shadow mode as just another set of event-based triggers of the sort that green has confirmed exist. In the context of shadow mode, the events that trigger snapshots are likely disagreements between the trajectory output by the Autopilot planner and the car's actual trajectory while it is being fully human-driven.
 