Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Karpathy Talk on using PyTorch to develop FSD and Smart Summon

I get the impression from the video that Tesla definitely has more advanced NNs designed for the AP3 computer than what we have in the AP2 and AP2.5 cars. I just look at the videos that Karpathy showed for Smart Summon, for example, and it seems like he is showing something better than what we have.
And even said without hesitation that the NNs were being built specifically for HW3...so absolutely YES.
 
The top-down view he shows answered a major question of mine. We know that Smart Summon uses maps, most likely OpenStreetMap, to assist in path planning. I was wondering if the car has the ability to make an on-the-spot "mental map" of a parking lot like a human does, and it looks like that's what it is doing. You can imagine those overhead representations getting more sophisticated over time, reflecting more than just drivable space.

The car should eventually be able to remember where the entrances and exits to a parking lot are in real time, and not depend exclusively on downloaded maps. It should also be able to map where the entrance to the store is, for example, so that it knows where to position the car to drop you off and pick you up, unlike the current Smart Summon, which sort of leaves the car in the middle of traffic.
 
I get the impression from the video that Tesla definitely has more advanced NNs designed for the AP3 computer than what we have in the AP2 and AP2.5 cars. I just look at the videos that Karpathy showed for Smart Summon, for example, and it seems like he is showing something better than what we have.
It looks pretty modular. They run multiple (eight?) ResNet-50-style "backbones" which pass their output to a bunch of subtasks. They train each subtask to perform a certain function, e.g. traffic light recognition, pedestrian recognition, depth estimation, etc. They can retrain one subtask without affecting the others. They can also add new features by adding backbones and/or subtasks, up to the limit of the FSD chip.

Since HW 2.5 has a much lower limit, it runs fewer backbones, or maybe just fewer subtasks per backbone. Either way, it is "missing" features. But the backbones and subtasks it does run should perform the same as on HW 3.0.
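The shared-backbone, per-subtask-head layout described above is easy to sketch in PyTorch. This is just my illustration of the general pattern, not Tesla's actual architecture; the head names and layer sizes are made up, and Tesla presumably uses full ResNet-50 backbones rather than this toy feature extractor.

```python
import torch
import torch.nn as nn

# Toy stand-in for a ResNet-50-style backbone (sizes are illustrative).
class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.features(x)

class MultiTaskNet(nn.Module):
    """One shared backbone feeding several small subtask heads."""
    def __init__(self, tasks):
        super().__init__()
        self.backbone = Backbone()
        # One head per subtask (hypothetical names); each can be swapped
        # or retrained without touching the others.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(32, n_out) for name, n_out in tasks.items()}
        )

    def forward(self, x):
        feats = self.backbone(x)
        return {name: head(feats) for name, head in self.heads.items()}

net = MultiTaskNet({"traffic_lights": 4, "pedestrians": 2, "depth": 1})

# "Retrain one subtask without affecting the others": freeze everything,
# then unfreeze only the head you want to update.
for p in net.parameters():
    p.requires_grad = False
for p in net.heads["pedestrians"].parameters():
    p.requires_grad = True

out = net(torch.randn(1, 3, 64, 64))
```

Adding a new feature is then just another entry in the `tasks` dict (or another backbone), which matches the "up to the limit of the chip" point: each head adds compute, so a smaller budget means fewer heads.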
 
Another important takeaway from this talk is the great face gains that Karpathy has accomplished over the last year. Compare him to the video from last year:

How did he accomplish this? He explains it here:
Andrej Karpathy on Twitter

I wonder if he has managed to convince the rest of the team to follow him and if we can expect similar face gains from Mr Musk.
 
Karpathy says NoA has done 200,000 automatic lane changes. That must be a typo? Seems like it would easily do 200,000 per day. Unless he means they have "no confirm" NoA lane changes in dev build and they've already done 200,000 of them...
 
In layman's terms, will Dojo basically speed up the NN training process?
According to Karpathy, the goal of Dojo is to bring down the cost (which could mean energy, dollars, or time) of training large-scale NNs for Tesla by a factor of 10.

My guess is that they are referring to energy, which will correlate with dollars. Many things in training NNs can be sped up by parallelization, and maybe some custom hardware will also speed up the other aspects. From Pete's last talk, he seemed to be very well aware of these things, so my guess is that Tesla will likely have some solution that trains faster than just buying more TPUs from Google.
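To make the parallelization point concrete, here is a toy data-parallel training sketch (my illustration, nothing to do with Dojo's actual design): each worker computes gradients on its own shard of the data, the gradients are averaged, and one step effectively processes the whole batch at once.

```python
from concurrent.futures import ThreadPoolExecutor

def grad_mse(w, xs, ys):
    """Gradient of mean squared error for the model y = w*x on one data shard."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def parallel_step(w, shards, lr):
    """One data-parallel SGD step: per-shard gradients computed
    concurrently, then averaged into a single update."""
    with ThreadPoolExecutor() as pool:
        grads = list(pool.map(lambda s: grad_mse(w, *s), shards))
    return w - lr * sum(grads) / len(grads)

# Data generated from y = 3x, split across two workers.
shard_a = ([1.0, 2.0], [3.0, 6.0])
shard_b = ([3.0, 4.0], [9.0, 12.0])

w = 0.0
for _ in range(200):
    w = parallel_step(w, [shard_a, shard_b], lr=0.02)
# w converges to 3.0, the true slope
```

Real large-scale training adds communication costs between workers, which is exactly the kind of thing custom interconnect hardware could attack.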
 
According to the PyTorch video, Andrej says they have 200,000 NOA lane changes on the books. My gut reaction is: that number seems too low. They had 500,000 uses of Smart Summon the first weekend after it was released. NOA lane change has been out since May. I certainly use NOA lane change a lot more than I summon. But assuming his number is accurate, I've documented 3217 NOA lane changes in my long-term test, which means I've personally witnessed 1.6% of Tesla's NOA lane changes? Suddenly I feel like I have a not insignificant role in Tesla's field testing! Either Andrej flubbed a figure, or other Tesla drivers really don't like NOA?
 
I think that 200k NOA lane-change figure is missing units. Back on Autonomy Day in April, they said they were seeing 9 million total automated lane changes and 100k per day, so maybe this new number is a daily rate, double the old one? Although the new number might be related to no-confirmation lane changes and not just any lane change.
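The mismatch the last two posts point out is easy to see with a quick back-of-the-envelope check, using only the figures quoted in the thread:

```python
# All figures come from the posts above.
total_noa_lane_changes = 200_000   # number quoted in Karpathy's talk
my_lane_changes = 3_217            # one poster's personally logged count
print(f"{my_lane_changes / total_noa_lane_changes:.1%}")  # -> 1.6%

autonomy_day_daily_rate = 100_000  # lane changes per day, April figure
# If 200k is really a daily rate, it is exactly double the April rate.
print(total_noa_lane_changes / autonomy_day_daily_rate)   # -> 2.0
```

One driver accounting for 1.6% of the fleet-wide total is what makes the 200k figure look like it must be per-day (or per some other unit) rather than cumulative.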
 