On the topic of whether HW4 and HW3 can integrate, here's my perspective. I've been coding for 40 years, C++ for 30+ of them.
If Tesla have any idea how to do software engineering (and clearly they do), then there is almost certainly a clean modular break between the neural-net code that does object recognition and image-based distance estimation (basically working out what is where), and the NN for deciding what actions to take: which lane to be in, what speed to set, where to point the vehicle, and so on.
In other words, some code builds a 'world view', effectively saying 'objects X, Y and Z are at these positions, with this much confidence', and then the decision-making code works out how to handle the vehicle given that data.
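To make that concrete, here's a minimal sketch of the kind of boundary I mean. Every name in it is my own invention for illustration; this is not Tesla's actual code or API, just the shape of the split:

```cpp
#include <cstdio>
#include <vector>

struct DetectedObject {
    int id;            // track identifier
    double x, y, z;    // estimated position in the vehicle frame (metres)
    double confidence; // 0.0 - 1.0: how sure perception is this is real
};

struct WorldView {
    std::vector<DetectedObject> objects;
};

// The decision layer sees only the WorldView, never raw camera frames,
// so it doesn't care how (or how well) the objects were detected.
void planManoeuvre(const WorldView& view) {
    for (const auto& obj : view.objects) {
        if (obj.confidence > 0.5)
            std::printf("object %d at (%.1f, %.1f, %.1f) -> plan around it\n",
                        obj.id, obj.x, obj.y, obj.z);
    }
}

int main() {
    WorldView view{ { {1, 12.0, -1.5, 0.0, 0.90},
                      {2, 30.0,  3.2, 0.0, 0.40} } };
    planManoeuvre(view); // only the high-confidence object is acted on
}
```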
The second part (decision and driving) can be totally independent of how the first part's data is produced. In fact we KNOW this is how Tesla do it, because they used to be pure NN for object recognition and pure C++ for decisions; now there's a bit of NN in the decision code too.
I assume HW3 will be able to say 'objects X, Y and Z at these positions, 90% confidence', while HW4 will say the same 'with 98% confidence'. As far as the decision code is concerned, it doesn't even have to know whether HW3 or HW4 is installed. It MUST work this way, because the number of usable cameras can drop in real time due to hardware failure, or a camera being blinded or obscured by sunlight, dirt or dust.
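Continuing the sketch (same caveat: all names here are my own, not Tesla's), this is why the planner can't tell HW3 from HW4: both hand it the same data shape, differing only in the confidence values. I've repeated the types so it compiles on its own:

```cpp
#include <cstdio>
#include <memory>
#include <vector>

struct DetectedObject { int id; double x, y, z; double confidence; };
struct WorldView { std::vector<DetectedObject> objects; };

// Abstract sensor front-end: the planner only ever sees this interface.
struct PerceptionSource {
    virtual ~PerceptionSource() = default;
    virtual WorldView currentWorldView() = 0;
};

// HW3-class sensors: assume ~90% confidence on a clear day.
struct Hw3Perception : PerceptionSource {
    WorldView currentWorldView() override {
        return { { {1, 12.0, -1.5, 0.0, 0.90} } };
    }
};

// HW4-class sensors: assume ~98%. A blinded or dirty camera would simply
// push this number down at runtime rather than changing the interface.
struct Hw4Perception : PerceptionSource {
    WorldView currentWorldView() override {
        return { { {1, 12.0, -1.5, 0.0, 0.98} } };
    }
};

int main() {
    // Swap Hw4Perception for Hw3Perception and nothing downstream changes.
    std::unique_ptr<PerceptionSource> sensors = std::make_unique<Hw4Perception>();
    for (const auto& obj : sensors->currentWorldView().objects)
        std::printf("object %d, confidence %.2f\n", obj.id, obj.confidence);
}
```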
So I don't think there is much concern that HW3 won't be able to use a lot of HW4 code; I don't see this as an issue at all. What IS an issue is whether the confidence level achievable from HW3 sensors is sufficient to enable hands-free FSD. That's the only worry, from an investor's POV.
In general I think it's worth thinking of HW3 and HW4 more as 'sensor suite 3' and 'sensor suite 4', as that's likely the biggest real difference.
IANAL.