Fact Checking
About FSD: it was essentially a Kickstarter-style campaign. I wasn't following Tesla at the time, but I expect they needed customer money to cushion their finances for the initial research.
Given Musk's history with the mass media, I would first dig up actual Musk quotes about FSD, in context.
Agreed.
On the other hand, it would be obvious to anybody with a basic understanding of data analysis that the crucial last 5% would take 95% of the research and training time.
"The first 90% of a large software project takes about as much time and money as the second 90%" is the conventional wisdom among people with software and high-tech development experience, and there's a lot of truth and many hard lessons behind it. But neural networks have radically different emergent properties:
- A NN training dataset, once collected and labeled, defines the capabilities of the network - and a training dataset grows monotonically, so capabilities are never 'lost'. With conventional software the speed and cost of development scales with project size: the larger the project, the harder it is to expand. This is not true of neural networks - they depend only on the training dataset and the available (off-site) training compute capacity.
- Tesla is expanding their training dataset by collecting AP disengagement events from beta testers and from the wider owner fleet as well. This means their ability to expand the training dataset scales with fleet size, which is currently growing super-linearly. Also, the 'experience' from past disengagement events never goes away: it will be part of every single future neural network trained.
- If the training dataset is large enough, neural networks can be 'tested' off site (on large clusters and in simulation environments) with a high degree of confidence. I estimate Tesla's labeled training dataset to be in the 'hundreds of millions of images' order of magnitude, maybe even a billion images - a scale unheard of in academic NN circles.
- Tesla's current bottleneck appears to be not NN quality but HW performance - see this comment of mine.
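A toy back-of-the-envelope sketch of the first two bullets above. All parameters (fleet size, growth rate, events per car per year) are made-up illustrative numbers, not Tesla figures - the point is only the shape of the curve: a fleet-collected dataset can only grow, and while the fleet itself is still expanding, it grows super-linearly year over year.

```python
def cumulative_dataset(years, initial_fleet=100_000,
                       fleet_growth=1.5, events_per_car_year=20):
    """Return the cumulative labeled-event count at the end of each year.

    initial_fleet, fleet_growth and events_per_car_year are hypothetical
    illustrative parameters, not real-world figures.
    """
    total = 0
    history = []
    fleet = initial_fleet
    for _ in range(years):
        total += fleet * events_per_car_year   # this year's new events
        history.append(total)
        fleet = int(fleet * fleet_growth)      # the fleet keeps growing
    return history

sizes = cumulative_dataset(5)
# Monotone non-decreasing: past 'experience' is never lost between retrains.
assert all(b >= a for a, b in zip(sizes, sizes[1:]))
```

Contrast this with conventional software, where each added feature makes the next one more expensive: here the per-year increments get *larger*, not smaller, as long as the fleet grows.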
While the "90% rule" of software/hardware engineering still applies to Tesla's NN training, deployment, in-car platform and infrastructure, it doesn't apply to the neural networks themselves. This is how Tesla was able to surpass AP1 quality early last year, introduce Navigate on Autopilot last fall, and why Elon's Model 3 is running a development version that can handle stop signs, traffic lights, and left and right turns as well - an almost full Level 4 autonomy feature set.