Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Thanks for posting. So maybe not necessarily for HW4? Or perhaps Tesla reused the HW3 user manual? The "User manual" uploaded along with those documents contains phrases I wouldn't expect to see for HW4.

"Your Model X includes the following Autopilot components that actively monitor the surrounding roadway:

- Ultrasonic sensors are located in the front and rear bumpers
- A camera is mounted in each door pillar
- Three cameras are mounted to the windshield above the rear view mirror"

Edit: We can intuit that this device runs with Automotive Ethernet, at least. In the "Support Equipment View" of "External photos", there's this item:

[Screenshot of the support equipment item]
The label reads "Technica", which led me to find this physical-layer converter between Automotive Ethernet and a PC system.
 
That is old text from when they submitted the unit for testing. The important part is the radar-specific specs, ID, and power.
 
On the topic of whether or not HW4 and HW3 can integrate, here's my perspective. I've been coding for 40 years, C++ for 30+ of them.

If Tesla have any idea how to do software engineering (and clearly they do), then there is likely a very definite modular break between the neural-net code that does object recognition and image-based distance approximation (basically working out what is where), and the NN for deciding what actions to take: which lane to be in, what speed to set, where to point the vehicle, etc.

In other words, some code creates a 'world view', likely saying 'objects X, Y and Z are at these positions, with this percentage of confidence', and then the decision-making code decides how to handle the vehicle given this data.

The second set (decision and driving) can be totally independent of how the first set of data is determined. In fact we KNOW this is how Tesla do it, because they used to be pure NN for object recognition and pure C++ for decisions. Now it's a bit of NN in the decision code too.

I assume HW3 will be able to say 'objects X, Y, Z at these positions, 90% confidence', while HW4 will say 'X, Y, Z with 98% confidence'. As far as the decision code is concerned, it doesn't even have to know whether it has HW3 or HW4 installed. They MUST work this way, because the number of usable cameras may decrease in real time due to hardware failure, or a camera being blinded by sunlight or obscured by dirt.

So I don't think there is much concern that HW3 will not be able to use a lot of HW4 code. I don't see this as an issue at all. What IS an issue is whether or not the confidence level from HW3 sensors is sufficient to enable hands-free FSD. That's the only worry, from an investor POV.

In general I think it's worth thinking of HW4 and HW3 more as 'sensor suite 4' and 'sensor suite 3', as that's likely the biggest real difference.
IANAL.
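The modular split described above can be sketched in a few lines of Python. Everything here is hypothetical for illustration (the class, the 20 m / 2 m thresholds, the confidence cutoff), not Tesla's actual code; the point is only that the planner never asks which hardware produced its input.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    x: float           # longitudinal position ahead of the car, metres
    y: float           # lateral offset, metres
    confidence: float  # perception confidence, 0.0-1.0

def plan_speed(objects, current_speed, min_confidence=0.5):
    """Decision code: cap speed if any sufficiently confident object is
    close ahead. It never inspects which sensor suite produced the data."""
    for obj in objects:
        if obj.confidence >= min_confidence and 0 < obj.x < 20 and abs(obj.y) < 2:
            return min(current_speed, 10.0)
    return current_speed

# HW3-style (lower confidence) and HW4-style detections feed the same planner.
hw3_view = [DetectedObject(x=15.0, y=0.5, confidence=0.90)]
hw4_view = [DetectedObject(x=15.0, y=0.5, confidence=0.98)]
```

As long as both suites emit the same world-view structure, the decision code needs no per-hardware branches.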
 
I assume HW3 will be able to say 'objects X,Y,Z at these positions, 90% confidence'. HW4 will say 'X,Y,Z with 98% confidence'.

This comment from @cliff harris seems right to me.

The most likely implication is that HW4 often detects a situation or object before HW3 does, giving it more time to respond, but that HW3 eventually gets there, e.g. as the car moves closer to the object it becomes easier to identify.

Situations where HW3 just can't see an object, or continues to recognise it incorrectly, can still in theory happen, but would be a lot less frequent.

Assuming FSD works on HW4 but doesn't work well enough on HW3, Tesla knows what they need to fix.
If the issue is compute bandwidth, Tesla can optimise the NNs to squeeze them into HW3, or upgrade the processor hardware if really necessary.
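One generic way to "squeeze" a network into tighter memory and compute is post-training quantisation. This is a minimal NumPy sketch of the idea, not a claim about Tesla's actual optimisation pipeline:

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights onto int8 with a single symmetric scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
# 4x smaller, with per-weight error bounded by half the scale step.
max_error = np.abs(dequantize(q, scale) - w).max()
```

Real deployments use finer-grained (per-channel) scales and quantisation-aware training, but the memory arithmetic is the same.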

If the issue is object recognition, the relevant question is what problems arise from the reduced recognition confidence, and whether HW3 is still better than a human.

If the issue is redundancy, then the question is what level of redundancy the regulators require.

Most probably, the vast bulk of the problems that still need to be solved require improvements to software and NN training that are common to HW3 and HW4.

HW4 being safer than HW3 provides one small additional reason for Tesla owners to upgrade to a newer car. It doesn't matter how small that safety improvement is; safer is always better.
 
The neural networks are basically input → something happens → output. With HW4 the input changes dimensions somewhat, and they might be slightly different neural networks, but it's mostly the same.

In the short run it will mainly be HW4 that is lacking in data, so they will want to augment the HW4 data with HW3 data. There are many ways to do this: for example, they could just upsample the HW3 data and let the neural network figure out how to use the different data. Or they could try to simulate HW4 data using their normal simulation pipeline, but with real-world scenarios from HW3. As they explained in 2021, they can convert simulation data to look more real using neural rendering:



My guess is that they will simulate tons of data from the HW3 dataset and mix in some HW4 data from employee vehicles and that this will probably be enough to drive with some restrictions while they run the data engine to get failure cases until they have enough real world HW4 data.
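The "just upsample the HW3 data" option might look like the sketch below. The frame size is an assumption for illustration, and a real pipeline would presumably use better interpolation than nearest-neighbour:

```python
import numpy as np

def upsample_nearest(frame, factor=2):
    """Nearest-neighbour upsampling: repeat each pixel factor x factor times."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# Placeholder HW3-like frame; the resolution is assumed, not a confirmed spec.
hw3_frame = np.zeros((960, 1280, 3), dtype=np.uint8)
hw4_like = upsample_nearest(hw3_frame)  # doubled in each dimension
```

The network then sees HW3 footage at HW4-shaped dimensions, even though no new detail has been added.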
 
Does anyone think that they can stitch together a 360 view of the car environment in a common format for HW3 or HW4, that can use a common set of training data?

The alternative is different training data for HW3 and HW4 resolving to a common occupancy network?

My issue with 2 separate sets of training data is that there would also be 2 sets of training and 2 lots of testing, and once the platforms diverged they would be hard to merge.

Perhaps Dojo allows 2 sets of training to be easily accommodated?

One parallel is the 286, 386, 486 and Pentium PC chips: the hardware was increasingly capable, but the ability to run software written for older machines was retained. It was not necessary to compile and test each program for each generation of chip.

I understand that separate training data and training for HW4 will eventually produce a better result. But initially it might be 5 steps backwards, meaning that HW3 outperforms HW4 for some time?

Seems to me that HW4 cameras could be downgraded to the HW3 pixel count as an interim solution until better HW4 data is available.

I understand the point on using simulation, but that also seems like something that might take time, and a lot of compute.

The hold-up with HW4 and the Cybertruck could have been Dojo? If so, that implies 2 lots of training even if only Tesla staff are testing a HW4-specific version.
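Conversely, the interim "downgrade HW4 frames to a HW3 pixel count" idea could be as simple as block averaging. Again, the sizes below are placeholders, not confirmed camera specs:

```python
import numpy as np

def downsample_block_mean(frame, factor=2):
    """Average each factor x factor block of pixels into one output pixel."""
    h, w = frame.shape[:2]
    cropped = frame[:h - h % factor, :w - w % factor]  # drop ragged edges
    blocks = cropped.reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).astype(frame.dtype)

# Placeholder "HW4" frame reduced to a quarter of the pixel count.
hw4_frame = np.full((1920, 2560, 3), 128, dtype=np.uint8)
hw3_like = downsample_block_mean(hw4_frame)  # (960, 1280, 3)
```

This would let HW4 cars run HW3-trained networks immediately, at the cost of throwing away the extra resolution.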
 
They talked about it at AI Day 2022. Ashok believes that some form of occupancy networks or NeRFs (not exactly sure what he was implying) will be the foundational model of computer vision, like LLMs are the foundational models of language.

Basically you could train a neural network to generate an occupancy network from the input video data and then use this to extract all the outputs you want. And you could build the occupancy network from either HW3 or HW4.

But I doubt that's what they are planning right now.
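The "occupancy network as a common intermediate" idea can be sketched as a voxel grid that perception (from either camera suite) fills in and downstream code only queries. Grid dimensions and cell size below are arbitrary assumptions:

```python
import numpy as np

GRID = (200, 200, 16)  # x (ahead), y (lateral), z (height) cells -- assumed
CELL_M = 0.33          # metres per cell -- assumed

def make_grid():
    """Empty occupancy grid; True means 'something solid is in this cell'."""
    return np.zeros(GRID, dtype=bool)

def mark_occupied(grid, x_m, y_m, z_m):
    """Perception (HW3 or HW4, the grid doesn't care) marks a cell occupied."""
    grid[int(x_m / CELL_M), int(y_m / CELL_M), int(z_m / CELL_M)] = True

def region_free(grid, x_cells):
    """Downstream query: is anything occupied within the first x_cells ahead?"""
    return not grid[:x_cells].any()

g = make_grid()
mark_occupied(g, 5.0, 5.0, 0.5)  # obstacle roughly 5 m ahead
```

Anything downstream of the grid (planning, routing, visualisation) is then automatically shared across hardware generations.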
 
I keep checking the EV-CPO website looking to see if a new inventory S/X shows up with HW4 listed...nothing yet. In that twitter post above, there is also conjecture about the Tesla emblem in the nose having a camera "hole". We'll know soon enough I suppose. Until then, I'm unlikely to take delivery of my MYP (scheduled for sometime between 02/23 and 03/04) - may end up canceling - and losing my nice discount - if HW4 is definitely coming to Model Y this year.
 
I assume these are the HW4 cameras:

An article:
[Image: new-cameras-model-sx.jpg]
So I think we can assume that at least some HW4 Model S/X are in production right now.
 
Today's speculation is that the current cars will get updated cameras (higher res, different viewing angles, etc.), but not any ADDITIONAL cameras; the latter will be for Semi and Cybertruck.
All cars could, theoretically, get the radar and faster processor.
If true, this doesn't require any huge body changes for S/X/3/Y and means they could all get it pretty quickly. It also aligns more with HW4 being an evolution of HW3.
 
Regarding the upgrade path: the HW4 form factor is different, but potentially adaptable with jumper hoses and harnesses, at least for non-air-cooled HW2/2.5 vehicles.
 


Speculation from whom?

Elon seemed pretty clear that HW4 upgrades for existing cars were cost-prohibitive and thus won't be happening.