Has Tesla ever mentioned adding cameras to the windshield frame on the driver's side, or even near the driver's side front turn signal looking out the side of the car? I think the car needs some way to get the same or better view than a driver leaning forward. Maybe they could make this an add-on if need be?
This has been a much-discussed issue within the user community. I don't think Tesla has ever publicly acknowledged any deficiency in the present camera coverage. "Overlapping 360-degree surround" coverage sounds good on the surface, but it is not at all the same as optimal-viewpoint placement when fixed infrastructure or other vehicles obstruct the view.
Some of us feel strongly that Tesla needs cameras at least on the front corners. I've proposed that a new combined-instrument design for the headlight modules could incorporate such cameras, and could even open up consideration of a retrofittable HW4 or HW3+ configuration that includes them. Local video pre-processing and merging could work around the currently limited number of camera input ports on the FSD computer.
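To make the pre-merging idea concrete, here's a minimal sketch of combining two corner-camera frames into one composite so a single input port carries both. This is purely illustrative (real hardware would do this in an ISP or FPGA, not NumPy), and the camera names and resolutions are assumptions, not anything Tesla has described:

```python
import numpy as np

def merge_side_by_side(left, right):
    """Pre-merge two camera frames into one composite frame so that a
    single video input port can carry both streams (hypothetical sketch)."""
    assert left.shape[0] == right.shape[0], "frames must share a height"
    # Stack the frames horizontally; the downstream stack would crop
    # each half back out before (or during) neural-net processing.
    return np.concatenate([left, right], axis=1)

# Hypothetical 640x480 mono frames from two front-corner cameras
left = np.full((480, 640), 10, dtype=np.uint8)
right = np.full((480, 640), 20, dtype=np.uint8)
composite = merge_side_by_side(left, right)  # shape (480, 1280)
```

The trade-off is that the merged frame either needs a wider input format or a reduced per-camera resolution; which is acceptable depends on port bandwidth, which is exactly the kind of detail only Tesla's engineers would know.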
That's not the same as saying that I think Tesla agrees or will do any such thing, only that it's feasible.
Similar arguments can be made for rear corner cameras, improved near-vision and curb-view/parking cameras, and so on. But I think the most important benefits would be at the front corners, along with higher-resolution cameras in general.
(Camera cleaning, anti-fouling coatings and so on are also design considerations for all of the cameras not already covered by the windshield cleaning hardware.)
Note that more and/or higher-resolution cameras don't necessarily mean the video bitrate must be far higher throughout the processing stack, which is a typical objection whenever this comes up. I believe it should be handled as an adaptive capability, in which a number of freely-assignable regions can bring high-resolution imaging on demand. Think of the central fovea of human vision: a very limited region of high resolution. In the machine-vision case there could be a superhuman number of these virtual-fovea regions, reassignable on demand, while the general surround view is kept to a modest but sufficient resolution and bitrate.
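A back-of-the-envelope sketch of the virtual-fovea idea, to show the bandwidth arithmetic. Everything here (frame size, fovea count, scale factor, the stride-based downscale) is an assumption for illustration, not a description of any real pipeline:

```python
import numpy as np

def foveated_frames(frame, fovea_boxes, surround_scale=4):
    """Split one high-res frame into a downscaled surround view plus
    full-resolution 'virtual fovea' crops (hypothetical sketch)."""
    # Surround: crude stride-based downscale (a stand-in for real resampling)
    surround = frame[::surround_scale, ::surround_scale]
    # Foveas: full-resolution crops at the requested regions of interest
    foveas = [frame[y:y + fh, x:x + fw] for (x, y, fw, fh) in fovea_boxes]
    return surround, foveas

def pixel_budget(surround, foveas):
    """Total pixels actually carried per frame after foveation."""
    return surround.size + sum(f.size for f in foveas)

# Example: a 1920x1080 mono frame with two 256x256 foveas
frame = np.zeros((1080, 1920), dtype=np.uint8)
surround, foveas = foveated_frames(
    frame, [(100, 100, 256, 256), (1500, 600, 256, 256)])
full = frame.size                        # 2,073,600 pixels at full resolution
budget = pixel_budget(surround, foveas)  # 129,600 surround + 131,072 foveas
```

In this toy case the per-frame pixel count drops by roughly 8x versus shipping the whole frame at full resolution, even with two active foveas; adding more foveas costs only their own small crops.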
Discussion of non-vision sensors like lidar, radar, infrared etc. is a whole other dimension; here I'm just commenting on achieving human-equivalent and superhuman capabilities in the standard visual spectrum.
I think this is a very promising area for improvement, but again I'm not very optimistic that the Tesla engineering team sees it that way even for new hardware, much less for any significant retrofit. It will be very interesting to find out what the future HW4 platform looks like.
Finally, I think that most of the present FSD beta performance limitations are due to recognition, labeling, and decision processing. Camera improvements would indeed help with creep behavior and other issues involving high-speed cross-traffic recognition, but they would certainly not "solve" FSD in general.