
Seeing the world the wrong way in v9


verygreen

So I was wondering how AP v9 works in right-hand-drive countries, and an opportunity presented itself, so I jumped on it. Thanks to everybody involved in making this happen!

Here's some footage from 18.40 in Japan.

Sadly there was a bit of a setback: when too many objects appear in the frame, the buffer we use to collect them overflows, and it shows as if nothing is being detected for some time. You'll see this at the start of the second video - don't worry, the AP still sees everything; it's just an artifact of our data collection that I noticed too late to do anything about.
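
For the curious, here is a minimal Python sketch of the kind of failure described, with an assumed buffer capacity (the real collector and its buffer size are different; this is just an illustration of the artifact, not the actual code):

# Hypothetical illustration of the overflow artifact above: detections
# are copied into a fixed-size buffer, and a frame with too many
# objects comes out looking empty even though AP saw everything.
MAX_OBJECTS = 32  # assumed capacity, not the real value

def collect_frame(detections):
    if len(detections) > MAX_OBJECTS:
        # Overflow: the capture records nothing for this frame.
        return []
    return list(detections)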

Highway->surface streets transition:

Busy surface streets->highway. It's interesting that the toll gate is not perceived as an obstacle even when closed.

Airport parking garage (really drives autopilot nuts) and a parking guard encounter; also there's a super-low flying plane at ~1:05 that AP does not see (I sooo wanted it to see and classify it! ;) ):

While the core NN seems to be the same, there are some processing differences; in particular, non-adjacent lanes are detected a bit differently, so I needed to update the display logic to show them correctly.
 
Is there any evidence that autopilot is detecting obstacles at the moment? I thought it wasn't.

One thing I like about @verygreen is that he tells it as it is.
He doesn't hype stuff up, or sugar-coat and gloss over any negative and water (*cough kool-aid*) board himself with the positives.
While we may not always agree or come to the same conclusion, he's an honest stand-up guy doing the lawd's work.

With that said, Tesla currently lacks detection for regular obstacles, debris, traffic lights, traffic signs, overhead road signs, road markings, barriers/guardrails, curbs, cones (as far as I can tell), etc.
 
@verygreen Do you know how often snapshots occur? For example, could you count how many snapshots you've had in your car(s) and divide the miles you've driven by that number? I'm wondering about the miles of driving per image captured. Is it 100, 1,000, or 10,000?
If by snapshot you mean data captures for Tesla, I noticed that they are not at a fixed distance; they seem to correlate with the number of issues you have in a drive.
If you never cancel a lane change or use autopilot features, it seems not to upload more than 10 MB on a 41-mile drive. I am working on getting more concrete numbers in the next weeks. Anyway, I use it on my 44 miles of daily driving and every day it uploads about 200 MB.
 
@verygreen Do you know how often snapshots occur? For example, could you count how many snapshots you've had in your car(s) and divide the miles you've driven by that number? I'm wondering about the miles of driving per image captured. Is it 100, 1,000, or 10,000?
The mandatory copy of calibrations occurs about every 5 minutes, but these are just that - calibrations.
The autopilot trip logs happen every time you hit park after driving for a bit. Also, when something crashes or stops working there's a log snapshot; I get some of those, e.g. every time the backup camera thread stops (almost once per drive).
Also, if you disengage AP, a very small snapshot is generated with just the coordinates of where it happened (no pictures).

That's about it. My car does not get any trigger requests from Tesla, so the number of images per mile driven is zero for me. Of the old "campaigns" I saw (which they seem to run anywhere from once per day to once per week), it could be up to a dozen or two images/snapshots requested per campaign in total, sometimes more, sometimes less, and the triggers are typically very precise, so you might easily not trigger any pictures for a particular snapshot (e.g. I saw an "accelerate towards an obstacle" trigger that I have never seen triggered). A common trigger I saw was for overriding autopilot (e.g. accelerating when it wants to brake, actively steering, and so on); those get like 1-2 per campaign.
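
To make the different snapshot types concrete, here is a rough Python sketch of the trigger logic as I understand it from the description above (the event names, table structure, and fields are my assumptions for illustration, not Tesla's actual code):

import time

# Assumed trigger table based on the behaviours described above.
TRIGGERS = {
    "ap_disengage":    {"images": False},  # tiny snapshot, coordinates only
    "driver_override": {"images": True},   # e.g. accelerating when AP brakes
    "thread_crash":    {"images": True},   # e.g. backup camera thread dying
}

def make_snapshot(event, gps, frames):
    rule = TRIGGERS.get(event)
    if rule is None:
        return None  # not an event anyone asked for
    snap = {"event": event, "time": time.time(), "gps": gps}
    if rule["images"]:
        snap["frames"] = frames  # camera frames attached for upload
    return snap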
 
My car does not get any trigger requests from Tesla, so the number of images per mile driven is zero for me.

Because it’s hacked?

Of the old "campaigns" I saw (which they seem to run anywhere from once per day to once per week), it could be up to a dozen or two images/snapshots requested per campaign in total, sometimes more, sometimes less

Woah.

(e.g. I saw an "accelerate towards an obstacle" trigger that I have never seen triggered). A common trigger I saw was for overriding autopilot (e.g. accelerating when it wants to brake, actively steering, and so on); those get like 1-2 per campaign.

Fascinating, thank you. How often were these campaigns?

I am working on getting more concrete numbers in the next weeks. Anyway, I use it on my 44 miles of daily driving and every day it uploads about 200 MB

Your car uploads 200 MB per day? :eek:

The reason I ask is I was trying to work out how many still images Tesla might have in its training library. An average of 1 image per car per month / per 1,000 miles would mean around 2.5 million images, and an average of 1 image per car per week / per 250 miles would mean around 10 million images.

10 million images * 10 minutes to label each image = ~1.7 million labour hours
~1.7 million labour hours * $25/hour = ~$43 million

So labour cost would be no problem. I was then trying to see what the cost of GPU hours would be to do Efficient Neural Architecture Search (ENAS) to optimize against a library of 10 million 1280x964 images. It took 16 GPU hours to do ENAS on the 50,000 training images in the CIFAR-10 dataset, which are 32x32. I converted CIFAR-10 and the hypothetical Tesla dataset into pixels: ENAS optimized against 3.2 million pixels per GPU hour, and I just assumed the same rate would hold for the hypothetical Tesla dataset. I then looked at AWS pricing, which is $3.06 per GPU hour, and found that it would cost around $12 million for the roughly 3.9 million GPU hours needed.
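
Putting the whole back-of-envelope calculation in one place, here is the arithmetic as a short Python script (all inputs are the assumptions above, not actual Tesla figures):

# Labelling cost for the hypothetical 10M-image library.
images = 10_000_000
label_hours = images * 10 / 60        # 10 minutes per image
label_cost = label_hours * 25         # $25/hour
print(f"labelling: {label_hours/1e6:.1f}M hours, ${label_cost/1e6:.0f}M")

# ENAS throughput inferred from CIFAR-10: 50,000 32x32 images in 16 GPU hours.
pixels_per_gpu_hour = 50_000 * 32 * 32 / 16   # ~3.2M pixels per GPU hour

# Scale to 10M images at 1280x964, priced at AWS's $3.06 per GPU hour.
gpu_hours = images * 1280 * 964 / pixels_per_gpu_hour
gpu_cost = gpu_hours * 3.06
print(f"ENAS: {gpu_hours/1e6:.2f}M GPU hours, ${gpu_cost/1e6:.0f}M")

This prints roughly 1.7M labour hours / $42M and 3.86M GPU hours / $12M, matching the figures above within rounding.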

The high-level theoretical question is whether the ceiling/bottleneck is data or compute. Does data run out before compute gets too expensive, or does compute get too expensive before data runs out? Assuming that Waymo has unlimited free compute, and knowing that Tesla has a limited R&D budget, does Tesla have an advantage in AutoML because of its larger fleet of cars?
 
One thing I like about @verygreen is that he tells it as it is.
He doesn't hype stuff up, or sugar-coat and gloss over any negative and water (*cough kool-aid*) board himself with the positives.
While we may not always agree or come to the same conclusion, he's an honest stand-up guy doing the lawd's work.

With that said, Tesla currently lacks detection for regular obstacles, debris, traffic lights, traffic signs, overhead road signs, road markings, barriers/guardrails, curbs, cones (as far as I can tell), etc.

One thing I don't like about his videos is that there is no disclaimer that all of them are only an approximation of how AP is processing the information.
 
So I was wondering how AP v9 works in right-hand-drive countries, and an opportunity presented itself, so I jumped on it. Thanks to everybody involved in making this happen!

Here's some footage from 18.40 in Japan.

Sadly there was a bit of a setback: when too many objects appear in the frame, the buffer we use to collect them overflows, and it shows as if nothing is being detected for some time. You'll see this at the start of the second video - don't worry, the AP still sees everything; it's just an artifact of our data collection that I noticed too late to do anything about.

Highway->surface streets transition:

Busy surface streets->highway. It's interesting that the toll gate is not perceived as an obstacle even when closed.

Airport parking garage (really drives autopilot nuts) and a parking guard encounter; also there's a super-low flying plane at ~1:05 that AP does not see (I sooo wanted it to see and classify it! ;) ):

While the core NN seems to be the same, there are some processing differences; in particular, non-adjacent lanes are detected a bit differently, so I needed to update the display logic to show them correctly.
Cool!!