Welcome to Tesla Motors Club

Tesla AI day: Q3

Why is new software still being written in C++?
He did say C++, but I don't know much about the relative advantages of other languages. C++ seemed like a perfectly acceptable language as of 18 years ago, when I last used it.

I'm less focused on the language than on the concept of writing all the corner cases (in whatever language desired) into the path planning. That seems like a bear.
 
Beat me to it.

With HW4 they will have better cameras but Elon claims they have not reached the limits of the current cameras, whatever that means.
Yes, one thing I noticed is that when Elon answered the "is HW3 running out of compute power for FSD" question and asked Andrej/Ashok if they wanted to add anything, they just seemed uneasy and looked down. It was not confidence-inspiring for me. I don't know if I'm overthinking it, but I'm going to watch the archived video again tomorrow. I really hope HW4 is not required haha.
 
I really hope HW4 is not required haha.
Fortunately for Tesla, HW4 probably is not required for Level 2 FSD (even that isn't certain, though the FSD beta makes it seem likely). So they only have a small subset of people, those they sold something more to, to worry about. They don't need to get better than human for FSD in its current form; that is not the product they are selling.
 
Yes, one thing I noticed is that when Elon answered the "is HW3 running out of compute power for FSD" question and asked Andrej/Ashok if they wanted to add anything, they just seemed uneasy and looked down. It was not confidence-inspiring for me. I don't know if I'm overthinking it, but I'm going to watch the archived video again tomorrow. I really hope HW4 is not required haha.


For anything above L2 it's almost certainly going to be required, because HW3 can't run redundantly.
 
All it needs to be able to do is operate safely to hand over to the driver

For L4 or L5 there is no driver required; it can drive without any human.

or pull over if there's a serious hardware failure. It can do that as-is.

Really? Can you cite any cases of the car pulling itself over when the non-redundant driving computer fails?



The plan was never to run two identical systems in parallel on both chips; that's silly.


This is outright false: at Autonomy Day they cited HW3's two REDUNDANT compute nodes as an explicit design feature for failover safety.
 
All it needs to be able to do is operate safely to hand over to the driver or pull over if there's a serious hardware failure. It can do that as-is. The plan was never to run two identical systems in parallel on both chips; that's silly.
When Elon originally introduced HW3 in 2019, he stated that each chip would independently arrive at a proposed solution and then, provided the solutions matched, the result would be translated into steering and acceleration output. That approach has clearly changed, and the speculation is that it's due to lack of compute power per chip. In tonight's presentation, one of the speakers mentioned that Chip A and Chip B can both interchangeably act as master while the other co-processes.
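For what it's worth, the compare-and-commit scheme Elon described in 2019 can be sketched in a few lines. This is just my own illustration of the concept, not anything from Tesla's actual stack; the `Plan`, `matches`, and `commit` names and the tolerance are all my assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <optional>

// Hypothetical sketch of the 2019 lockstep idea: each chip independently
// computes a proposed control output, and the output is committed to the
// actuators only if the two proposals match.
struct Plan {
    double steer;  // steering command
    double accel;  // acceleration command
};

// Two independently computed plans "match" if they agree within a tolerance.
bool matches(const Plan& a, const Plan& b, double tol = 1e-3) {
    return std::fabs(a.steer - b.steer) < tol &&
           std::fabs(a.accel - b.accel) < tol;
}

// Commit the plan only when both chips agree; otherwise return nothing,
// signalling a fault (hand over to the driver or pull over).
std::optional<Plan> commit(const Plan& chipA, const Plan& chipB) {
    if (matches(chipA, chipB)) return chipA;
    return std::nullopt;
}
```

The point of the scheme is that a single failing chip produces a mismatch rather than a bad actuation, which is exactly the property you lose if the workload no longer fits on one chip.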
 
When Elon originally introduced HW3 in 2019, he stated that each chip would independently arrive at a proposed solution and then, provided the solutions matched, the result would be translated into steering and acceleration output. That approach has clearly changed, and the speculation is that it's due to lack of compute power per chip. In tonight's presentation, one of the speakers mentioned that Chip A and Chip B can both interchangeably act as master while the other co-processes.
Yes, that still doesn't mean that one chip on its own couldn't handle the other dropping out. You obviously can't carry on with just one CPU, because then there is no failsafe option left.

How is this so hard to understand?

Both CPUs are not running the same code, that would be wasteful; either can be making the primary decision based on results from either or both chips.

What's the point in running the same code on both to get the same results? That is certainly NOT what Elon meant.
 
Yes, they did, to handle hardware issues; both chips are running parts of the system as a whole, and either can make decisions well enough to handle the other dropping out.


This is again outright false.

If parts of the system are running on each chip, it obviously cannot handle one dropping out.

Otherwise they wouldn't need to split the work across both chips.

Doing so is highly inefficient, since the hardware wasn't designed to pass large amounts of data between the chips like that.

They're only doing it because a single node cannot run enough by itself to handle everything.


What's the point in running the same code on both to get the same results? That is certainly NOT what Elon meant.

Redundancy is the point.

Either chip can fail and the FULL driving stack keeps working.

It's literally what Elon said; go argue with him.


The system can no longer do that, though, because they now need more compute than fits on one node. Thus there is no redundancy, and the system CANNOT survive one node failing because, again, one node no longer has enough compute to do everything.


I'll put your own question back to you: what part of this is so hard to understand?
 
My take on this "HW3 already ran out of compute, so forget redundancy" issue is that our observer community may be jumping to an incorrectly extrapolated conclusion based on a temporary situation (as reported by greentheonly and now widely accepted here as proof of HW3's compute inadequacy).

While it's certainly possible and not entirely surprising that the current FSD stack is turning out to be bigger than what HW3 was intended to support, its architecture is also evolving in ways that could dramatically shrink compute requirements for certain task handler modules. In today's presentation, Ashok gave an example of a dramatic reduction of complexity in the parking-lot path problem. The initial "classic" geometry search approach (admittedly a brute-force straw man in today's world, but not clearly so just a few years ago) was more than 1000x more compute-intensive than the Monte Carlo Tree Search NN solution. Perhaps more realistic as a comparison, his intermediate method, taking advantage of Navigation support, was a great improvement but still 80x more resource-intensive than the MCTS solution.
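The gap Ashok described is easy to reproduce in miniature. Here's a toy illustration of my own (not his planner, and using a plain Manhattan-distance heuristic as a stand-in for the learned guidance): count the cells expanded to cross an empty 100x100 grid with uninformed breadth-first search versus a guided best-first search.

```cpp
#include <functional>
#include <queue>
#include <set>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;
static const int N = 100;  // grid is N x N; start (0,0), goal (N-1,N-1)

static std::vector<Cell> neighbors(Cell c) {
    std::vector<Cell> out;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int i = 0; i < 4; ++i) {
        int x = c.first + dx[i], y = c.second + dy[i];
        if (x >= 0 && x < N && y >= 0 && y < N) out.push_back({x, y});
    }
    return out;
}

// Uninformed search: expands essentially every cell before reaching the far
// corner, because nothing tells it which direction is promising.
int expandedBFS() {
    std::queue<Cell> q;
    std::set<Cell> seen{{0, 0}};
    q.push({0, 0});
    int expanded = 0;
    while (!q.empty()) {
        Cell c = q.front(); q.pop();
        ++expanded;
        if (c == Cell{N - 1, N - 1}) break;
        for (Cell n : neighbors(c))
            if (seen.insert(n).second) q.push(n);
    }
    return expanded;
}

// Guided search: always expands the frontier cell the heuristic rates closest
// to the goal, so it walks nearly straight there.
int expandedGuided() {
    auto h = [](Cell c) { return (N - 1 - c.first) + (N - 1 - c.second); };
    using Item = std::pair<int, Cell>;  // (heuristic value, cell), min-heap
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> q;
    std::set<Cell> seen{{0, 0}};
    q.push({h({0, 0}), {0, 0}});
    int expanded = 0;
    while (!q.empty()) {
        Cell c = q.top().second; q.pop();
        ++expanded;
        if (c == Cell{N - 1, N - 1}) break;
        for (Cell n : neighbors(c))
            if (seen.insert(n).second) q.push({h(n), n});
    }
    return expanded;
}
```

Even on this trivial empty grid the guided search expands roughly 50x fewer cells (about 200 versus all 10,000); with real obstacles and a richer learned heuristic, gaps like the 80x and 1000x figures from the talk are plausible.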

I'm reminded also of the startling reduction of the Google NN-based speech-recognition kernel. Starting with a task no one could reliably do around 20 years ago (perhaps outside of NSA and their supercomputers), they got it to a reasonably effective module at 2GB by 2012, then to an astonishingly efficient, portable stand-alone and better-performing kernel in just 80MB by 2019. Considering that the current learning-curve progress of the Driving problem is probably something like a 2005 analogue of the Speech Recognition problem, I'm cautiously optimistic that far better FSD solutions can be made to operate well within the HW3 compute resource. I don't think it's at all clear that we need to assume an inexorably increasing hardware requirement for the in-car execution of a future, more evolved FSD NN kernel.
 