Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Neural Networks

And there's the answer to that!



Elon Musk on Twitter

So I know that we should take any dates that Elon tweets out with a couple pounds of salt, but it does raise an interesting question in my mind. He says it'll be approximately 6 months before the new hardware "is in all new production cars". That's all well and good, but when does it end up in the cars of those of us who paid for FSD zero to two years ago? Sadly, I'm expecting all initial stock of the hardware to go towards the new cars while we current owners are told to be patient and we'll get ours in 3 months maybe, 6 months definitely. I sincerely hope that I'm wrong and that they allocate inventory for current-owner retrofits before it goes into the assembly line.
 
We all anticipate that to be the case. Basically, until there actually IS an FSD feature, they're under no obligation to provide us with the hardware.
 
The biggest public examples with good-quality data and implementations are on the order of single-digit millions of labeled images. If you needed a thousand times that much labeled data, you'd be talking billions of images; if you needed a million times that much, you'd be talking trillions of labeled images. That's a big range.

Thanks for your answer, jimmy. Based on my rough calculations, I don’t think billions of frames of labelled video is outside the realm of possibility, but it’s the sort of thing that might take 18 months and $1 billion. Maybe the plan is to start with 99% simulation data and gradually retrain on more and more real, labelled data over time.
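
The "18 months and $1 billion" ballpark is easy to sanity-check with a back-of-envelope calculation. Every input below is an invented assumption for illustration, not a real figure from Tesla or anyone else:

```python
# Back-of-envelope estimate of labelling billions of video frames.
# All inputs are assumptions chosen only to show the arithmetic.

frames = 5_000_000_000        # assumed target: 5B labelled frames
seconds_per_frame = 30        # assumed human labelling time per frame
cost_per_hour = 25.0          # assumed fully-loaded labeller cost, USD/hour
workers = 15_000              # assumed labelling workforce

label_hours = frames * seconds_per_frame / 3600
cost_usd = label_hours * cost_per_hour
months = label_hours / (workers * 160)   # ~160 work-hours per month

print(f"labelling cost: ${cost_usd / 1e9:.2f}B over {months:.0f} months")
```

With these made-up rates it works out to roughly $1B over about 17 months, i.e. the same order of magnitude as the guess above; halving the per-frame time or the wage halves the cost.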

As for 30 fps being fast enough - that seems to be about the speed that the cameras in HW2 cars are capable of. It works out to a frame spacing of about 33 ms. When you consider that our roads are designed for human reaction time, which is never less than 100 ms and generally closer to 500 ms, 33 ms seems like it's probably OK.

This is a great point. The fastest recorded reaction time I was able to find is 101 ms for a sprinter, although it could have been a lucky guess. You can get a better time by jumping the gun. I also found this fun reaction time game. My best score so far is 275 ms (an average of 5 tries). The game says median reaction time is 215 ms and to get on the top 100 leaderboard you need a score of 184 ms or better — but I think people might just be guessing to get below 200 ms, and the top 2 people with exactly 100 ms (the max score the game will allow) probably cheated.

Bosch has an AEB system that it claims has a reaction time of 190 ms. I think it’s safe to say this is better than average human reaction time, and it may be possible for Tesla et al. to improve on this time.
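
Putting the numbers from these posts side by side makes the margin concrete: even the fastest recorded human reaction spans several camera frames at 30 fps.

```python
# Compare 30 fps camera frame spacing with the reaction times quoted
# above (sprinter, reaction-time game median, Bosch AEB claim).

fps = 30
frame_spacing_ms = 1000 / fps              # ~33.3 ms between frames

reaction_times_ms = {
    "fastest recorded human (sprinter)": 101,
    "median human (reaction-time game)": 215,
    "Bosch AEB (claimed)": 190,
}

for name, rt in reaction_times_ms.items():
    frames_elapsed = rt / frame_spacing_ms
    print(f"{name}: {rt} ms = {frames_elapsed:.1f} frames at {fps} fps")
```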
 
New chip, new hardware, new software - Tesla release dates on custom APv3 are bound to slip.

Mobileye people should know this best, having waited years for EyeQ4 to be released...

And EyeQ3 is only capable of Level 3 self-driving...

Bladerskb loves derailing any and every thread by bringing up Mobileye, even when it’s completely off-topic. That’s why I created this thread:

Mobileye vs. Tesla megathread
 
I'm not the one deciding what data should be collected, so I can't know what they need. Often it's that they're missing data on some specific scenario, like snow + night + police car. Or it could be something simple: V9 has low confidence, so check whether FSD also has low confidence and upload all low-confidence data. If you haven't already, listen to this video:
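
A hypothetical sketch of the kind of trigger described above - upload a clip when the network's confidence is low, or when a "campaign" scenario like snow + night + police car matches. All names and thresholds here are invented:

```python
# Invented sketch of campaign-style data-collection triggers.
# Nothing here is Tesla's actual implementation.

def should_upload(frame_meta, confidence, campaigns, threshold=0.5):
    """Return True if this clip is worth sending back for labelling."""
    if confidence < threshold:          # low-confidence trigger
        return True
    for tags in campaigns:              # e.g. {"snow", "night", "police_car"}
        if tags <= frame_meta["tags"]:  # all campaign tags present in frame
            return True
    return False

campaigns = [{"snow", "night", "police_car"}]
meta = {"tags": {"snow", "night", "police_car", "highway"}}
print(should_upload(meta, confidence=0.9, campaigns=campaigns))  # True
```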

Seeing Andrej's presentation gave me a lot of renewed hope for autopilot and FSD. That man has a brilliant mind.
 
I found this interesting lil' tidbit on OpenAI's blog:

"We believe the largest training runs today employ hardware that cost in the single digit millions of dollars to purchase (although the amortized cost is much lower)."

AlphaGo is the most computationally intensive network that OpenAI lists. So, how much training compute would the new Tesla neural network require, relative to AlphaGo, to fully utilize all its parameters? 10x more? 100x more?

I think it's pretty reasonable to guess that Tesla would spend $100 million on compute hardware if it materially helped the development of Autopilot and higher levels of autonomy. So 10x would be doable.

Depending on various factors, 100x could also be doable: the acceptable length of training time for Tesla, the cost of owning hardware vs. renting cloud cycles, and whether "single digit millions" is closer to $2 million or $9 million.
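
Taking the quoted "single digit millions" as a $2M-$9M anchor, the 10x and 100x cases discussed above work out as follows:

```python
# Scale OpenAI's "single digit millions" hardware-cost remark by the
# 10x and 100x multipliers discussed above. The anchor range is from
# the quote; everything else is simple arithmetic.

anchor_low, anchor_high = 2e6, 9e6   # "single digit millions", USD

for mult in (10, 100):
    low, high = anchor_low * mult, anchor_high * mult
    print(f"{mult}x: ${low / 1e6:.0f}M - ${high / 1e6:.0f}M")
```

At 10x the whole range sits under the $100M budget guess; at 100x only the low end does, which is why the $2M-vs-$9M question matters.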
 
Thanks a ton @jimmy_d for all your sleuthing. Couple of questions.

Any thoughts on what Elon is saying here about different NNs for various cameras? Was it just misdirection?

Twitter

Secondly, there were some tweets by Karpathy a few months ago about distributed training of NNs. With a quickly growing fleet of this size, most of it plugged in and on Wi-Fi at night, is it possible they are (or could be) tapping into that idle compute?

I don't see Elon as a misdirection kind of guy. Wacky and over-optimistic, yes. Intentionally misleading, no. Of course, I don't know the guy except from Twitter and YouTube... YMMV.

I commented on the distributed training tweets a while back. That stuff has to do with distributed training over a cluster of machines in a data center. Distributing training over a bunch of cars via cellular/wifi is a very different challenge. Not totally unrelated in theory, but in a practical sense it's not connected to the research he was commenting on.
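
For what it's worth, the simplest form of "training on the fleet" would be federated averaging, where each car trains locally and only ships weight updates home. This is purely illustrative - as noted above, nothing connects it to what Tesla or Karpathy are actually doing:

```python
# Minimal federated-averaging sketch: each "car" holds locally updated
# model weights; the server averages them. Illustrative only.

def federated_average(client_weights):
    """Average model weights (lists of floats) from many clients."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# three hypothetical cars, each with local weights for a 4-param model
cars = [
    [0.10, 0.20, 0.30, 0.40],
    [0.12, 0.18, 0.33, 0.41],
    [0.08, 0.22, 0.27, 0.39],
]
print([round(w, 3) for w in federated_average(cars)])  # [0.1, 0.2, 0.3, 0.4]
```

Even this trivial version hints at the practical problem: every round needs every participant online with a usable uplink, which is exactly what a fleet on cellular/Wi-Fi can't guarantee.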

Elon's recent comments lead me to believe that the AKNET_V9 network that I wrote about recently is probably not the one that's driving the car in V9. There's another set of networks, with separate nets for each type of camera, also present in V9 and he's likely talking about those. I have some other comments in this thread with more details on that.
 

I found that report from OpenAI really interesting. It's about time to update it and I'm expecting that the next point on the chart will follow the 10x per year trendline, probably for at least the next few years.

It's hard to compare against AlphaGo because the computational character of RL is dominated by simulation, not by gradient descent calculations. AlphaGo used a lot of TPUs for gradient descent, but many times more CPUs for game simulation. That said, I wouldn't be surprised if the training infrastructure for the net I wrote about (AKNET_V9) is beyond AlphaGo scale. Tesla's network training is very likely dominated by GPU (or TPU) requirements, like almost all other CNN vision applications.
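
The "10x per year" trendline mentioned above is equivalent to a compute doubling time of about 3.6 months (OpenAI's report measured roughly 3.4 months for the largest training runs):

```python
import math

# Convert a 10x-per-year growth rate into a doubling time in months.
growth_per_year = 10
doubling_months = 12 * math.log(2) / math.log(growth_per_year)
print(f"doubling time: {doubling_months:.1f} months")  # ~3.6
```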
 

To confirm: are you saying the single net processing all cameras is NOT the one you now believe is running the car, and rather it's individual NNs processing each camera independently, like V8?

Just trying to understand.
 
If I didn't respond to anyone, I apologize. My TM inbox blew up after that Electrek article, and I probably missed some stuff. I try to respond to sincere questions, but some may have slipped through recently.

Ok, so it's clear that you can't run both networks at the same time due to computational resource limits, so my theory is that Tesla is actually using the cars that don't have EAP to test the new network, since they can't run V9 anyway.

Am I crazy? Those cars should already have the hardware and have plenty of free compute time!
 

But they have to "run EAP" to provide the safety aspects of AP to those cars. So while they might not need to run the whole shebang, I don't see how it's worthwhile to parse it out. I think everyone runs whatever is there if their hardware allows, but the system only acts at a very, very high threshold when it is safety-related.
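
The "acts only at a very high threshold" idea can be sketched as a tiny decision rule: a candidate net runs in shadow alongside the shipping one, the car only acts on very confident safety calls, and confusing frames get flagged for upload. All names and thresholds here are invented:

```python
# Invented sketch of shadow-mode evaluation with a high action
# threshold. Not Tesla's actual logic.

ACT_THRESHOLD = 0.99      # assumed bar for safety-critical action
LOG_THRESHOLD = 0.50      # assumed bar below which a clip is uploaded

def handle_frame(shadow_confidence, shipping_confidence):
    if shipping_confidence >= ACT_THRESHOLD:
        return "act"                      # safety feature fires
    if shadow_confidence < LOG_THRESHOLD:
        return "upload"                   # shadow net confused: collect data
    return "observe"                      # keep comparing silently

print(handle_frame(shadow_confidence=0.3, shipping_confidence=0.7))  # upload
```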