If these are HW4 cameras, it looks like the number of outside cameras drops from 8 to 7 (the triple cam in front becomes binocular), at higher resolution with a wider field of view, and seemingly retrofittable to the existing camera placement.

But it seems unlikely that FSD Beta 11 will need the new hardware if it's planned for wide release later next week.
 
Q4 call - HW3 cannot sensibly be retrofitted with HW4. The Cybertruck will have HW4.

I posted on this here: Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

I think the scenario where FSD works on HW4 but doesn't work on HW3 is fairly unlikely.

But if it did happen, Tesla would initially try to solve the problem via software and NN training if that was possible.

Assume that, for some hardware reason, HW4 works and HW3 doesn't. IMO Tesla is almost certain to know exactly why it doesn't.

With a known hardware problem there is at least a 50% chance that a "hardware patch" can be applied.

So instead of replacing HW3 with HW4, Tesla would replace HW3 with a patched version of HW3.

Should this be necessary, Tesla will probably give existing HW3 owners with FSD an FSD licence on the purchase of a new Tesla with HW4, taking the old HW3 car as a trade-in. At least, that is what I would hope they would do.

The traded-in HW3 cars could simply be resold with EAP only, or have the "hardware patch" applied. If the whole process is "breakeven" for Tesla, that is better than applying "hardware patches" to existing cars. Note: FSD on the new HW4 cars doesn't actually cost Tesla anything.

Should this scenario play out, existing HW3 owners without FSD would not be able to upgrade to FSD.

Bottom line - we are talking about a problem that is most unlikely to occur, but if it does happen there are good ways of handling it.
 
I think the scenario where FSD works on HW4 but doesn't work on HW3 is fairly unlikely.
That is not consistent with what the earnings call said:

HW3: 200%-300% safer
HW4: 500%-600%
HW5: beyond the above.

It means HW3 is safer than humans but not as safe as HW4. HW4 is safer than humans but not as safe as HW5...
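
For a rough sense of what those multiples would mean in miles-per-accident terms, here is a back-of-the-envelope sketch. It assumes "200% safer" simply means 2x a human's miles per accident, and the 500,000-mile human baseline is a made-up round number, not a figure from the call:

# Back-of-the-envelope only: treat "X00% safer" as an X-times multiple of a
# human driver's miles per accident. The baseline below is an assumed round
# number, not anything from the earnings call.
human_miles_per_accident = 500_000

safety_multiple = {"HW3": (2, 3), "HW4": (5, 6)}  # 200-300% and 500-600%

for hw, (lo, hi) in safety_multiple.items():
    print(f"{hw}: roughly {lo * human_miles_per_accident:,} to "
          f"{hi * human_miles_per_accident:,} miles per accident")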

The sense in which HW3 "doesn't work", as mentioned on the call, is that it is not as safe as HW4 and HW5.

But in order to state how much safer each generation is, from HW1 to HW5, the system needs to be competent at collision avoidance. And as the trend from HW1 to HW3 shows right now, it still requires more and more human interventions to keep the system out of trouble: choosing the wrong lane, slowing down at the wrong time, not braking in time...

The monitoring system gets stricter, and owners can't even use the Autopilot Buddy anymore.
 
Yes, in context "not working" meant not being safer than a human by a big enough margin.

The need to upgrade HW3 cars to HW4 is something Tesla will be very keen to avoid.

If vision and NN training essentially solve the problem, then HW3 will work almost as well as HW4.

If HW3 currently picks the wrong lane, I bet HW4 makes the same mistake. Most probably this lane selection logic is migrating to the NN.
 
The difference will mainly be some combination of:
1. size of the neural network -> fewer errors
2. frame rate -> a few milliseconds' faster response
3. resolution of the cameras -> can detect objects further ahead -> fewer errors (see the sketch below)
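
On point 3, a minimal sketch of how detection range scales with camera resolution, assuming a simple pinhole-camera model and made-up resolutions, fields of view and pixel thresholds (not actual Tesla camera specs):

import math

def max_detection_range_m(obj_width_m, min_pixels, h_res_px, fov_deg):
    # Small-angle approximation: pixels spanned ~= (pixels per radian) * width / distance,
    # so the range at which an object still spans min_pixels grows roughly linearly with
    # resolution at a fixed field of view. A wider FOV on the new cameras claws some of
    # that back.
    px_per_rad = h_res_px / math.radians(fov_deg)
    return px_per_rad * obj_width_m / min_pixels

car_width = 1.8   # metres
needed_px = 20    # guess at what the NN needs to classify a vehicle reliably

print(max_detection_range_m(car_width, needed_px, 1280, 90))   # older-style camera: ~73 m
print(max_detection_range_m(car_width, needed_px, 2896, 90))   # higher-res camera: ~166 m
print(max_detection_range_m(car_width, needed_px, 2896, 120))  # same sensor, wider FOV: ~125 m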

It will be a pretty small difference: anything very useful that can be extracted from the images will already have been extracted. Something that only matters for a few edge cases may be learnt by the larger network.

My guess is that the difference in miles/accident will be something like
Human 1x
HW3 10x
HW4 11x

At some point almost all the remaining accidents will be caused by other vehicles, and HW4 is just as vulnerable to that as HW3.
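
A toy calculation of that saturation effect, with made-up numbers (the 90% "avoidable" share and the 20x/100x avoidance factors are assumptions, not data): once the avoidable accidents are mostly gone, even a much better computer barely moves the overall miles-per-accident figure.

# Assume 90% of a human driver's accidents are ones the car's computer could
# in principle avoid, and 10% are caused by other vehicles and are unavoidable
# no matter how good the hardware is. All numbers are illustrative.
avoidable_share = 0.9

def overall_gain(avoidance_factor):
    # Overall miles-per-accident multiple vs. a human, if the system cuts the
    # avoidable accidents by avoidance_factor and can do nothing about the rest.
    residual = avoidable_share / avoidance_factor + (1 - avoidable_share)
    return 1 / residual

for name, factor in [("Human", 1), ("HW3", 20), ("HW4", 100)]:
    print(f"{name}: ~{overall_gain(factor):.1f}x a human's miles per accident")

# Prints roughly: Human 1.0x, HW3 ~6.9x, HW4 ~9.2x -- capped at 10x no matter the hardware.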
 
Those safety improvements must be loaded with large assumptions of major HW3 FSDb progress, as the software currently running on HW3 might be as safe as a teenager with a learner's permit - best case. Add to that, the team needs to find a way to use FSDb on HW3 in light rain, seamlessly make >90-degree turns, remove crutches like stopping short of stop signs and crawling up to them, improve path planning, play well with other vehicles, and in general find a way for the current NNs to retain more 'edge' cases.
 
At this stage it seems that the problems can probably be fixed with software, better training data and more NN training.

So throwing more hardware at the problem might not make a big difference: either it is solvable with vision and software or it isn't.

The one piece of hardware that might make a big difference is the HD radar. If Tesla expects it to make a big difference, they may have a path where it can be retrofitted to HW3 cars.

Or perhaps HD radar just improves HW4 and better data from HW4 helps improve HW3.

I don't know if "single stack" is live in the fleet, or whether it is so far making a big difference, but my hunch is that it may be a better platform for faster improvement.

Are you running FSD beta? If yes, are you in California or somewhere else?

My tentative impression is that it seems to do better in California.
 
I'm running the latest FSDb outside the state of California.

I've experienced what feels like frequent system latency and an inability to process the scene effectively in a timely way, so I have to assume the current hardware processing is being overwhelmed. Of course, that's aside from the current performance of sensors like the cameras etc.

It's not always the case, but some California drives seem to behave better, which makes me wonder if vehicle data and roadway paths from a handful of the more vocal California "pied piper" drivers are prioritized when training the NNs, etc. If so, it's just another arguably underhanded design crutch.