Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla AI day: Q3

My take on this issue that "HW3 already ran out of compute, so forget redundancy" is that our observer community may be jumping to an incorrectly-extrapolated conclusion based on a temporary situation (as reported by greentheonly and now widely accepted here as proof of HW3's compute inadequacy).

While it's certainly possible and not entirely surprising that the current FSD stack is turning out to be bigger than what HW3 was intended to support, its architecture is also evolving in ways that could dramatically shrink compute requirements for certain task handler modules. In today's presentation, Ashok gave an example of dramatic reduction of complexity in the parking-lot path problem. The initial "classic" geometry search approach (admittedly a brute-force straw-man in today's world, but not clearly so just a few years ago) was more than 1000x more compute-intensive than the Monte-Carlo Tree Search NN solution. Perhaps more realistic as a comparison, his intermediate method, taking advantage of Navigation support, was a great improvement but still 80x more resource-intensive than the MCTS solution.
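For anyone curious what MCTS actually looks like for a path problem: here is a minimal toy sketch on a small grid. This is purely illustrative and assumes plain UCB1 selection with random rollouts, not the NN-guided search Tesla described; all names and the grid itself are made up.

```python
import math
import random

random.seed(0)

# Toy parking-lot grid: 0 = free, 1 = obstacle. Start top-left, goal bottom-right.
GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
START, GOAL = (0, 0), (3, 3)
MOVES = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def neighbors(cell):
    r, c = cell
    for dr, dc in MOVES:
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
            yield (nr, nc)

class Node:
    def __init__(self, cell, parent=None):
        self.cell, self.parent = cell, parent
        self.children = []
        self.untried = list(neighbors(cell))   # moves not yet expanded
        self.visits, self.value = 0, 0.0

def rollout(cell, limit=20):
    """Random walk; reward shrinks with final Manhattan distance to goal."""
    for _ in range(limit):
        if cell == GOAL:
            return 1.0
        cell = random.choice(list(neighbors(cell)))
    return 1.0 / (1 + abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1]))

def mcts(iterations=2000, c=1.4):
    root = Node(START)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ch.value / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one unexplored child.
        if node.untried:
            child = Node(node.untried.pop(), parent=node)
            node.children.append(child)
            node = child
        # 3. Simulation from the new node.
        reward = rollout(node.cell)
        # 4. Backpropagation of the result up the tree.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Extract a path by following the most-visited child at each level.
    path, node = [START], root
    while node.cell != GOAL and node.children:
        node = max(node.children, key=lambda ch: ch.visits)
        path.append(node.cell)
    return path
```

The compute win comes from the fact that the search is guided by accumulated statistics (and, in Tesla's case, a learned policy) instead of exhaustively enumerating geometric candidates.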

I'm reminded also of the startling reduction of the Google NN-based speech-recognition kernel. Starting with a task no one could reliably do around 20 years ago (perhaps outside of NSA and their supercomputers), they got it to a reasonably effective module at 2GB by 2012, then to an astonishingly efficient, portable stand-alone and better-performing kernel in just 80MB by 2019. Considering that the current learning-curve progress of the Driving problem is probably something like a 2005 analogue of the Speech Recognition problem, I'm cautiously optimistic that far better FSD solutions can be made to operate well within the HW3 compute resource. I don't think it's at all clear that we need to assume an inexorably increasing hardware requirement for the in-car execution of a future, more evolved FSD NN kernel.


The "redundancy guy" is probably the one everyone puts on mute during the teams meeting as they have more pressing things to do.

When they're allowed to talk, they're probably going to want more than just compute redundancy.
 
Ashok gave an example of dramatic reduction of complexity in the parking-lot path problem
Later during Q&A, he and Karpathy answered a question about the current split between vision/perception and planning pushing HW3 compute/latency budgets, and both talked about the current intermediate/in-development steps in getting to end-to-end that can be simpler and more efficient. Some people think progress requires things to become even more complex, so there's no going back if single-node compute is exceeded already. But similar to training wheels on bikes or scaffolding during construction, assistance/additional resources can be temporary and removed when the "final" thing has progressed enough to continue on its own.
 
Starting to wonder if Elon wants that humanoid robot to provide labor on Mars. Let the robots build the habitats while the humans chill on Earth and wait. Maybe even lets SpaceX simulate Starship living conditions as they monitor how humanoid robots move through the space and operate the Starship on their way to Mars without putting human lives at risk.
 
Starting to wonder if Elon wants that humanoid robot to provide labor on Mars. Let the robots build the habitats while the humans chill on Earth and wait. Maybe even lets SpaceX simulate Starship living conditions as they monitor how humanoid robots move through the space and operate the Starship on their way to Mars without putting human lives at risk.
In the future ...

Elon Musk: From today, Tesla will recognize the UAW.
Journalist: Isn't this just posturing? You don't have any auto workers. You replaced them with robots.
Elon Musk: Well, before we fired the last few of them, they voted to join the UAW. I always said that we'd accept the result of a vote.
 
Yes. They provided 6M miles of detailed safety data from Chandler, AZ which they were not legally required to do.


Do you have a link? A quick Google didn't turn it up, but it did find a story from March saying they had not done that...


the company hasn't been totally transparent with metro Phoenix residents, refusing to turn over data showing how many times the vehicles' autonomous function has failed while driving around Chandler, Tempe, and other Valley areas


Nearest I can find to your claim is this report:

Which says they reported on crashes, but does not cite reporting on disengagements.
 
Which says they reported on crashes, but does not cite reporting on disengagements.
I think the only thing they say in the paper is that 99.9% of disengagements would not have resulted in a crash had they not occurred. Interestingly, this puts the disengagement rate at about 1 per 6,000 miles, which suggests that they are gaming their CA disengagement numbers in some way. Anyway, disengagement numbers are pretty worthless once they get to that level.
The disengagement problem is solved by getting rid of the safety driver, but they only did 65,000 miles of that (with one collision; the Waymo vehicle got rear-ended).
 
Do you have a link? A quick Google didn't turn it up, but it did find a story from March saying they had not done that...


Nearest I can find to your claim is this report:

Which says they reported on crashes, but does not cite reporting on disengagements.

Yes, I am referring to the 6M safety report that Waymo published.

Here is the actual report:

The report cites all accidents (real and simulated). It includes results on simulation testing of disengagements. It also mentions that 99.9% of disengagements would not have caused an accident.
 
Yes, I am referring to the 6M safety report that Waymo published.

Here is the actual report:

The report cites all accidents (real and simulated). It includes results on simulation testing of disengagements. It also mentions that 99.9% of disengagements would not have caused an accident.



So... it does not report their actual disengagement rate in real life.

Wonder why.
 
But Tesla reports their accident rates, on AP or not, every quarter :)

Yeah but AP is different from FSD Beta. I want to see Tesla do a full safety report on FSD Beta where Tesla lists all safety disengagements, total miles, shows the scenario for each disengagement, includes simulation results on every disengagement, and shows what the accident rate would be for FSD beta.

I think the only thing they say in the paper is that 99.9% of disengagements would not have resulted in a crash had they not occurred. Interestingly, this puts the disengagement rate at about 1 per 6,000 miles, which suggests that they are gaming their CA disengagement numbers in some way. Anyway, disengagement numbers are pretty worthless once they get to that level.

I think the discrepancy can be explained: The 1 per 6000 miles is the estimated total disengagement rate for Chandler. But the 1 per 30,000 miles disengagement rate in the CA DMV report is the safety disengagement rate. Waymo does not report all disengagements to the CA DMV since some disengagements might be non-safety related or might not have resulted in a safety issue. They only report safety disengagements:

Waymo reported around 30,000 miles per disengagement with Cruise at roughly the same level. Only Waymo has given details on their methodology, where they run simulations of what would have happened had there not been an intervention, and they disregard disengagements where nothing bad would have happened. (Safety drivers are told to disengage if they have any doubts, to assure safety, and you don’t want them afraid to do that because of some government report.)

Source: California Robocar Disengagement Reports Reveal Tidbits About Tesla, AutoX, Apple, Others

We can't compare the CA DMV report to the Chandler Safety Report since they are using different metrics. But also driving is different in SF than in Chandler so the safety disengagement rate in the two cities would probably be different as well.
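To make the reconciliation above concrete, here is the back-of-envelope arithmetic in code form. All of the inputs (6.1M report miles, roughly 1 total disengagement per 6,000 miles, 1 safety disengagement per 30,000 miles, 99.9% of disengagements harmless) are the thread's assumed figures, not official Waymo statistics:

```python
# Back-of-envelope reconciliation of the two disengagement rates discussed
# above. All figures are the thread's assumptions, not official numbers.

total_miles = 6_100_000      # miles covered by the Chandler safety report
total_rate = 1 / 6_000       # assumed total disengagements per mile
safety_rate = 1 / 30_000     # CA DMV "safety" disengagements per mile

total_disengagements = total_miles * total_rate     # roughly 1,017
safety_disengagements = total_miles * safety_rate   # roughly 203

# Share of all disengagements that would be classified as safety-related:
safety_share = safety_rate / total_rate             # 0.2, i.e. 20%

# If 99.9% of disengagements would NOT have led to a crash, the number of
# genuinely crash-preventing disengagements over the whole dataset is tiny:
would_have_crashed = total_disengagements * 0.001   # about 1 event in 6.1M miles

print(round(total_disengagements), round(safety_disengagements),
      safety_share, round(would_have_crashed))
```

Under these assumptions, only about one in five disengagements is safety-related, and only about one disengagement in the entire 6.1M-mile dataset would actually have ended in a collision, which is consistent with the "99.9% harmless" figure cited from the report.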
 
But Tesla reports their accident rates, on AP or not, every quarter :)

Yeah, in the context of the original quote Elon was definitely referring to FSD Beta; here is the question and his follow-up. So that's what people here (I guess?) are asking to have reported?


I don't see much point in diving into the pedantic discussion about what Tesla's statistics do and do not show.

I'd love to see as much disclosure as possible from all the companies involved. But then that would be giving up information on each competitor's positioning, I suppose? I bet it would speed development along though...
 
FSD beta is not a public product.

Waymo's RT service is.

And yet Waymo doesn't publish actual disengagements, just apparently the simulated ones, or the ones so bad they hit something.
I'm not sure what you're saying. All the disengagements that they studied were actual disengagements. What would be the purpose of publishing details for all disengagements? If you're trying to measure safety you only care about the disengagements that prevented a collision.

This is all a bit silly since Tesla isn't even close to trying to deploy robotaxis. I'm more interested in how safe FSD Beta is with a safety driver since it looks like it will be in that state for a long time.
 
FSD beta is not a public product.

Waymo's RT service is.

It's about comparing autonomous to autonomous. Tesla's public AP is not autonomous. FSD Beta is Tesla's autonomous driving product. And Waymo's service in Chandler is autonomous driving. So it makes sense to compare those two.

And yet Waymo doesn't publish actual disengagements, just apparently the simulated ones, or the ones so bad they hit something.

No. Waymo does publish actual disengagements. But they only publish actual safety disengagements because they are the most meaningful in terms of safety. It makes no sense to publish disengagements that are not safety related because they tell you nothing about safety. The whole point of disengagement data is to inform you about safety.

Yeah, in the context of the original quote Elon was definitely referring to FSD Beta; here is the question and his follow-up. So that's what people here (I guess?) are asking to have reported?


I don't see much point in diving into the pedantic discussion about what Tesla's statistics do and do not show.

I'd love to see as much disclosure as possible from all the companies involved. But then that would be giving up information on each competitor's positioning, I suppose? I bet it would speed development along though...

Elon has mentioned that the safety goal for FSD Beta is 100-200% safer than the average human. And a Tesla rep told the CA DMV that the goal is 1 safety-critical disengagement per 1-2M miles. So we know Tesla's safety goals. That is why it would be nice to get actual safety data on FSD Beta, to see how far or close FSD Beta is to those stated safety goals.