@Stratman it's a good discussion, and at the end of the day we all are expressing opinions formed from our individual perceptions of car behaviour and our quite varied technical backgrounds. None of us are intimately involved with Tesla software development (not that I know of anyway).
@doggy1 that experiment isn't going to work because the video saved to USB is not HDR - it's a flattened representation of what the car's systems see, h.264 encoded at 8 bits per pixel. The HDR cameras the car uses form images of up to 20 bits per pixel, which is several orders of magnitude more tonal information than what ends up on the USB stick.
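To put rough numbers on that gap, here's a quick back-of-the-envelope sketch (purely illustrative - real pipelines involve tone mapping and compression, not plain linear quantisation):

```python
# Rough comparison of distinct brightness levels at each bit depth.
# Illustrative only: assumes simple linear quantisation.
usb_bits = 8    # h.264 video saved to USB, 8 bits per pixel
hdr_bits = 20   # claimed bit depth of the in-car HDR image

usb_levels = 2 ** usb_bits   # 256 distinct levels
hdr_levels = 2 ** hdr_bits   # 1,048,576 distinct levels

ratio = hdr_levels / usb_levels
print(f"{usb_levels} vs {hdr_levels} levels -> {ratio:.0f}x finer")
# ratio is 4096, i.e. three to four orders of magnitude
```

So even before compression artifacts, the USB clip has thrown away the vast majority of the tonal detail the car actually works with.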
As @moa999 says, these cameras don't give accurate colour reproduction because that's not needed for the feature extraction, which happens on the full dynamic range data. The reason they don't do it (beyond it being unnecessary) is that the colour filters on the chips are chosen to optimise sensitivity over colour correctness. A normal camera typically uses a chip with a Bayer filter that selectively lets certain wavelengths through to each pixel and reconstructs the image from that information. But when you filter wavelengths, you lose the photons that don't pass the filter, which lowers sensitivity. HW2.0 cameras employed gray-gray-gray-red pixels, with only one pixel in four carrying a (red) filter. That was improved somewhat for HW2.5 and HW3.0, but they still optimise for luminance data over colour-correct data.
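A toy model makes the sensitivity trade-off concrete. The numbers here are assumptions for illustration only: suppose each colour filter passes roughly a third of broadband light, while an unfiltered "gray" pixel passes nearly all of it:

```python
# Toy model of average per-pixel light throughput for two filter mosaics.
# Hypothetical numbers: a single colour filter passes ~1/3 of broadband
# light; an unfiltered gray/clear pixel passes ~all of it.
band_pass = 1 / 3    # fraction passed by one colour filter
clear_pass = 1.0     # unfiltered pixel

# Classic Bayer (e.g. RGGB): every pixel sits behind a colour filter.
bayer = (band_pass * 4) / 4

# HW2.0-style gray-gray-gray-red: three clear pixels, one red-filtered.
gggr = (clear_pass * 3 + band_pass) / 4

print(f"Bayer throughput ~{bayer:.2f}")
print(f"GGGR throughput  ~{gggr:.2f} ({gggr / bayer:.1f}x more light)")
```

Under those assumed numbers the gray-heavy mosaic collects about 2.5x the light of a full Bayer array, which is exactly the low-light sensitivity you'd want for feature extraction, at the cost of colour information.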