The «Full» in Full Self-Driving Capability

Ars Technica has done a very good piece on the recent changes. Link.

The page's headline has changed from "Full Self-Driving Hardware on All Cars" to "Future of Driving." A sentence about Tesla's ride-sharing network has been deleted. The "Full Self-Driving" section now includes a disclaimer that "future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience."

In other words, despite Musk's bluster over the years, Autopilot is still just a driver-assistance system. And it will continue to be just a driver-assistance system for some time to come.

They also talk about why this is a problem - the issue of maintaining driver contextual awareness becomes more critical as the system becomes smarter:

...this problem will only get worse as Autopilot begins to navigate freeway interchanges, take turns, and stop for stop lights. If your car safely drives you home from work for 100 days in a row, it's natural to stop paying close attention. If the car makes a serious mistake during the 101st trip, you might not be paying enough attention to intervene and prevent a crash.
 
This is exactly Waymo's point.
Requiring the driver to remain alert for exceptional events is quite simply unreliable and therefore unwise.

Had Tesla been able to demonstrate all but flawless performance with the systems as specified to date, I think we could all be a lot more confident. But when the thing still struggles to even stay in lane sometimes ...
 
What part of that are you upset about? Frankly, I thought Congress took regulatory dominion over self-driving precisely to prevent individual states from creating a patchwork of OK here, not OK there. I could be wrong. It's not just Tesla that wants it; GM, UPS, and FedEx do too. There is a LOT of lobbying money behind its passage. If the worst case is that you have to be in the driver's seat and exert torque on the steering wheel... for me that is darn close to self-driving. They are field testing it now. It's not vaporware.

From 3 days ago...
Elon Musk says Teslas will have full self-driving ability by end of the year

Yeah, which description of FSD will eventually become available? The original version promised in 10/2018 when I bought it, or the new, watered-down version?
 
"Fleet data will solve it eventually"

Possibly!

The analogy between DeepMind’s AlphaStar and Tesla’s Full Self-Driving

OK, guess it doesn't work that way. Just the layman in me trying to get my bearings. Enjoy reading the discussions.

Almost everyone here is a layperson, and I don’t think you’re necessarily wrong about fleet learning. We’re all just trying to learn and speculate.

With fleet-scale machine learning, I surmise that there are two main kinds of data that could be collected: raw sensor data and state-action pairs.

Raw sensor data can, in theory, go through two pipelines:
  • the images or video are uploaded and painstakingly labelled by a human annotator (it’s known for sure Tesla is doing this, e.g. Andrej Karpathy has given talks about it)
  • the images or video are uploaded and only weakly labelled with some driving input, such as an image of a yellow light being weakly labelled with a human driver’s deceleration (it’s just my speculation that Tesla could be doing this)
Raw sensor data is cheap to collect and expensive to label. Paying someone to drive a mile costs in the ballpark of $1, whereas paying someone to label a mile of data might cost 10x or 100x that. So labelled sensor data is where fleet learning is the least useful and offers Tesla the least advantage, if any advantage at all.

Weakly labelled sensor data is something Tesla is currently in a unique position to collect (unless Mobileye or one of the car companies is doing stuff I don’t know about). A hand labelled image is much more valuable than a weakly labelled image, but Facebook has shown you can get pretty far with weakly labelled images. Caveat: Facebook used Instagram hashtags as the weak label, which might be more reliable than driver input like accelerating, steering, and braking. However, a lot of hashtags mislabel images.
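
To make the weak-labelling idea concrete, here's a rough sketch of pairing a camera frame with driver input. To be clear, the field names and thresholds are entirely my own invention, not anything Tesla has described:

```python
# Hypothetical sketch of pairing a camera frame with driver control input as
# a weak label. All names, fields, and thresholds here are my own invention.
from dataclasses import dataclass

@dataclass
class FrameWithDriverInput:
    frame_id: str              # reference to the uploaded camera frame
    speed_mps: float           # vehicle speed when the frame was captured
    steering_angle_rad: float  # driver's steering input
    accel_mps2: float          # longitudinal acceleration; braking is negative

def weak_label(frame: FrameWithDriverInput) -> str:
    """Derive a coarse label from what the driver did, e.g. treat a hard
    deceleration as weak evidence that something ahead required slowing."""
    if frame.accel_mps2 < -2.0:
        return "slowing_event"   # could be a yellow light, stopped traffic, etc.
    if abs(frame.steering_angle_rad) > 0.3:
        return "turning_event"
    return "cruising"
```

The point is that the label comes for free from the driver rather than from a paid annotator, which is why it's cheap but noisy.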

State-action pairs are even cheaper to collect than raw sensor data, since they use less bandwidth and storage. State-action pairs come already labelled because the action is the label. A state-action pair is analogous to an image-label pair.
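
To illustrate what I mean (with made-up field names, since we don't know Tesla's actual format), a single state-action pair might look something like this:

```python
# Hypothetical state-action pair for imitation learning; field names are
# illustrative only. The "state" is what the car perceives; the "action" is
# what the human driver did, which serves as the label.
state = {
    "lead_vehicle_distance_m": 23.5,
    "lead_vehicle_speed_mps": 12.0,
    "lane_offset_m": 0.1,
    "speed_limit_mps": 29.0,
}
action = {
    "steering_angle_rad": 0.02,
    "throttle": 0.15,
    "brake": 0.0,
}
# (state, action) plays the same role as (image, label) in supervised
# learning: a policy network is trained to map states to the actions a
# human driver took in those states.
training_example = (state, action)
```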

Outside of raw sensor data and state-action pairs, I suppose in theory data could be collected in the form of a training signal for reinforcement learning. For instance, every time a human takes over from Autopilot or FSD, that could deduct points from the reward. Points could be gained by miles between disengagements. Whereas we know for certain Tesla is collecting raw sensor data, and it’s been reported (but not confirmed by Tesla) that Tesla is collecting state-action pairs, there is no direct evidence I’m aware of that Tesla is doing fleet reinforcement learning.
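
Just to illustrate what such a training signal could look like (purely hypothetical, and again there's no evidence Tesla does this), the reward for one drive might be computed something like:

```python
# Purely hypothetical reward for fleet reinforcement learning: penalise each
# human takeover, reward miles driven. Constants are arbitrary, for illustration only.
DISENGAGEMENT_PENALTY = 100.0
REWARD_PER_MILE = 1.0

def episode_reward(miles_driven: float, disengagements: int) -> float:
    """Reward accumulated over one drive ("episode")."""
    return REWARD_PER_MILE * miles_driven - DISENGAGEMENT_PENALTY * disengagements

print(episode_reward(30.0, 1))   # a 30-mile drive with one takeover -> -70.0
```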
 
Facebook has shown you can get pretty far with weakly labelled images. Caveat: Facebook used Instagram hashtags as the weak label, which might be more reliable than driver input like accelerating, steering, and braking. However, a lot of hashtags mislabel images.

Training on 1 billion hashtag-labelled images, Facebook classified images with about 2 percentage points higher accuracy than others have achieved training on 1 million hand-labelled images.

I don’t know the specifics of how you would try to use both kinds of images. I guess you give weakly labelled images 1/1000th as much weight? And then see how you do on your test set.
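
If anyone wants a concrete picture of what I mean by weighting, here's a rough PyTorch-style sketch; the 0.001 factor is just the 1/1000th guess from above, not a number from the Facebook paper:

```python
import torch
import torch.nn.functional as F

# Sketch of per-example loss weighting: hand-labelled images get weight 1.0,
# weakly labelled images get weight 0.001 (just my guess at "1/1000th").
def weighted_loss(logits, targets, is_weak):
    per_example = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(is_weak, torch.tensor(0.001), torch.tensor(1.0))
    return (weights * per_example).sum() / weights.sum()

# Toy batch of 4 images: two hand-labelled, two weakly labelled
logits = torch.randn(4, 10)                        # 10 classes
targets = torch.tensor([3, 7, 1, 0])
is_weak = torch.tensor([False, False, True, True])
loss = weighted_loss(logits, targets, is_weak)
```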
 
Sorry if I missed any of this upthread, but wasn't there a clue about "shadow mode" when Elon said (paraphrased) "every time there is a disengagement we upload the data ..."?
So basically they don't "shadow" anything (which I believe has been confirmed elsewhere) but just review disengagements, which is an entirely valid methodology provided you accept it does not contain all relevant data for AP driving.

So if you have a regular place where AP messes up, you disengage AP to force that snapshot to Tesla. Enough disengagements presumably force a fix for that location. I am thinking this works much like the whitelisting of bridges etc. for false braking.

I may actually have evidence of exactly this: there is one bit of road where AP regularly tried to drive you into the hedge, and I regularly disengaged (duh!), but in recent builds it is mysteriously fixed. It is really cool how well it works, yet the fix is not explained by general AP improvements.

How this translates to "full" self-driving I am not sure, but it suggests a strategy for managing edge cases that quite simply cannot be coded for directly at this stage of development.

In turn, though, it also confirms the (obvious) point that the software, AP3 hardware or not, is well short of holistically understanding the "full" environment at this stage of development.
 
So when Tesla says stuff like "NOA is complete on highway," I take that as it's done and works... sorta well. The only way it will improve going forward is from fleet data. To a degree it's out of their hands, but it will keep getting more effective at the task indefinitely. But the heavy lifting on their end is done. Time to move on to the next...

NOA on street level would follow the same idea... get the ball moving and the fleet will continue the refining push.

My interpretation is that marketing (as in Elon) is glossing over the short-term engineering objectives and focusing on the art of selling. You sell by focusing on something tangible.

The reality of the situation, though, is that they're going to have to make significant improvements to NoA, auto lane changes, etc. in order to convince people to pay $8K for AP+FSD versus $3K for AP.

As it is right now everything under the AP package works pretty solidly, and everything under FSD either barely works or isn't there.

I'm eager for the update to NoA that does unconfirmed lane changes. I'll know at that point where NoA stands in terms of usability. Right now it's really hampered by not having that. It's hard for me to tell what the road situation is like when I see a request from NoA, because I don't immediately see the requests and they're not in my visual path. By the time I've seen them, the situation has changed.

It needs to have unconfirmed lane changes where it turns on the turn signal for a bit so I can double-check before letting it continue. If it sucks, I'll just turn off NoA and wait for the next update to try again.

This won't happen, but I really wish Tesla had a development blog along with a bug tracker. If they're going to have customers beta test, then why not make it official with actual agile sprints?
 
What do you mean by the "mid-level"?

They used to upload the "location, heading, speed, type of disengagement" kind of data, but that seems to have stopped now. Instead, the aggregated statistics from autopilot-trip-log seem to be the only thing uploaded now.

The mid-level representation is the predictions the perception neural network makes about what it sees. What's represented by bounding boxes, "green carpet", and text labels in your videos. I think you've called it metadata but the technical term in machine learning is mid-level representation.
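
Roughly, for one camera frame the mid-level representation might look something like this (field names are mine, just to illustrate the idea, not whatever format the car actually uses):

```python
# Illustrative example of a mid-level representation: the structured output
# of the perception network for one frame, rather than raw pixels.
frame_prediction = {
    "objects": [
        {"class": "car",        "bbox": [412, 230, 520, 310], "confidence": 0.97},
        {"class": "pedestrian", "bbox": [105, 245, 140, 330], "confidence": 0.88},
    ],
    "lane_lines": [
        {"side": "left",  "poly_coeffs": [0.0002, -0.01, 1.7]},
        {"side": "right", "poly_coeffs": [0.0001, 0.02, -1.8]},
    ],
    "drivable_space": "polygon_42",   # the "green carpet" in the videos
}
```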
 
I'm surprised they haven't fully blocked your car at this point.
oh, don't think they are not trying!

The mid-level representation is the predictions the perception neural network makes about what it sees. What's represented by bounding boxes, "green carpet", and text labels in your videos. I think you've called it metadata but the technical term in machine learning is mid-level representation.
Usually none of that is captured, unless you have an active trigger that happened to match the conditions. But nowadays those triggers have a 0.1% probability of capturing anything even if the conditions DO match (I guess they don't want every one of those for whatever reason).
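
In other words, something like this (my simplification of how a probabilistic capture gate could work):

```python
import random

# Simplified illustration of trigger-gated capture: even when a trigger's
# conditions match, only ~0.1% of events actually result in an upload.
CAPTURE_PROBABILITY = 0.001

def should_capture(trigger_conditions_match: bool) -> bool:
    if not trigger_conditions_match:
        return False
    return random.random() < CAPTURE_PROBABILITY
```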
 
  • Informative
Reactions: electronblue
I have little visibility into this. But it's pretty clear they have dedicated testing cars to collect detailed info about performance and whatnot.

I am just saying that the widely touted "billions of miles on AP2+ cars" actually don't mean as much when you consider the kind of data you get back from those.

This seems to happen a lot. The State-action-pairs of the Union has been postponed.
 
I have little visibility into this. But it's pretty clear they have dedicated testing cars to collect detailed info about performance and whatnot.

I am just saying that the widely touted "billions of miles on AP2+ cars" actually don't mean as much when you consider the kind of data you get back from those.

Probably speaks to their brute-force attempt at NoA then.