
Driver input is a label


shrineofchance

she/her, they/them
Feb 10, 2021
247
278
Canada
Tesla is going to see an explosion of training signal as the FSD Beta gets pushed fleet-wide.

First, this will enhance data curation: when a disengagement occurs, there is a good chance that the sensor data or vector-space representations contain a novel training example that can be added to a training set.
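Here is a rough sketch of how such a disengagement-triggered curation filter could work. Everything in it (the DisengagementEvent class, novelty_score, the embedding-distance threshold) is my own illustration of the idea, not Tesla's actual pipeline:

```python
from dataclasses import dataclass

# Hypothetical sketch: a clip around a disengagement is kept for training
# only if it looks novel relative to what the training set already covers.

@dataclass
class DisengagementEvent:
    clip_id: str            # identifier of the short sensor/video clip around takeover
    embedding: list[float]  # vector-space summary of the scene

def novelty_score(event: DisengagementEvent,
                  training_embeddings: list[list[float]]) -> float:
    """Distance to the nearest existing training example (higher = more novel)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(event.embedding, e) for e in training_embeddings)

def curate(events, training_embeddings, threshold=1.0):
    """Keep only the disengagement clips the training set hasn't seen many of before."""
    return [e.clip_id for e in events
            if novelty_score(e, training_embeddings) > threshold]
```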

Second, this will increase data labelling: when a disengagement occurs, the driver's subsequent behaviour gives a strong signal about what the neural networks' correct predictions should have been. This applies to both perception and planning.
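On the planning side, one way to picture "driver input is a label" is an imitation-style loss where the trajectory the human actually drove after taking over is treated as the target the planner should have produced. A minimal PyTorch sketch, with made-up tensor shapes and a toy TrajectoryPlanner standing in for whatever network Tesla actually uses:

```python
import torch
import torch.nn as nn

# Hypothetical imitation-style objective: the path the human driver actually
# took after disengaging becomes the label for what the planner should have
# predicted from the same scene.

class TrajectoryPlanner(nn.Module):
    def __init__(self, feature_dim=256, horizon=50):
        super().__init__()
        self.head = nn.Linear(feature_dim, horizon * 2)  # (x, y) per future step
        self.horizon = horizon

    def forward(self, scene_features):                   # (batch, feature_dim)
        out = self.head(scene_features)
        return out.view(-1, self.horizon, 2)             # (batch, horizon, 2)

planner = TrajectoryPlanner()
scene_features = torch.randn(8, 256)       # scene summary at takeover (random placeholder)
driver_trajectory = torch.randn(8, 50, 2)  # what the human drove next (random placeholder)

predicted = planner(scene_features)
loss = nn.functional.smooth_l1_loss(predicted, driver_trajectory)
loss.backward()   # gradient signal supplied "for free" by the driver's behaviour
```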

There are a few ways (at least in theory) to collect useful data passively, without a driving automation system being engaged. But nothing beats a driving robot actually acting in the world, under human guidance.

The work falls on the AI team to build the data curation and labelling pipelines that make efficient and scalable use of this firehose of novel training examples and the accompanying training signal from human drivers. I believe doing so is rocket science (it is by no means a trivial task), but rocket science is something the Tesla AI team is capable of.
 
I couldn't be more excited for the FSD Beta to go fleet-wide. But I also know that there is a lot of software development work involved in building the machine learning pipelines needed to make good use of fleet-scale data, e.g. 3D labelling, video labelling (which brings it up to 4D), 360-degree video stitching, multi-task learning, auto-labelling, self-supervised learning, and so on.
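To make the multi-task learning item a bit more concrete, here is a toy PyTorch sketch of one shared backbone feeding several perception heads, so that every labelled clip can supervise more than one output at once. The head names, dimensions, and outputs are all illustrative assumptions on my part, not Tesla's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical multi-task setup: one shared backbone over camera features,
# with separate heads for several perception tasks.

class MultiTaskPerception(nn.Module):
    def __init__(self, feature_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU())
        self.lane_head = nn.Linear(256, 10)     # e.g. lane-geometry parameters
        self.object_head = nn.Linear(256, 20)   # e.g. object-detection logits
        self.depth_head = nn.Linear(256, 1)     # e.g. coarse depth estimate

    def forward(self, features):
        shared = self.backbone(features)
        return {
            "lanes": self.lane_head(shared),
            "objects": self.object_head(shared),
            "depth": self.depth_head(shared),
        }

model = MultiTaskPerception()
features = torch.randn(4, 512)  # random placeholder for camera features
outputs = model(features)       # in training, each head would get its own loss, summed
```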

Things will only be where we want them to be once both the software development work is completed and the FSD Beta is pushed fleet-wide, and those two milestones may not arrive at the same time.