So, latecomer to this thread. Have a 2018 Model 3, so you guys probably know where this is going. Got the software update and noted that there was a tremendous amount of glare in the blind-spot camera view in low light (i.e., it's dark out!) when signalling left or right.
As it happens, the FSD package has been paid for, the computer upgraded to Gen 3, and all that. And, looking at those side views, it crossed my mind that, when FSD finally shows up on the car, well, if I can't see out those cameras at night, the car won't be able to either.
The SO's Model Y, a 2021 model, doesn't have the glare.
Saw that people were getting their cameras replaced under warranty, so put in a service request.
It's not going to be cheap. Tesla's response:
"The blind spot camera performance on older vehicles is neither a defect of materials nor workmanship. It is a characteristic of the product in low light settings. And it's a characteristic that has been design enhanced in newer vehicle production for 2022 and the Palladium Models. You have the option to purchase the upgraded cameras as a retrofit and have them installed to your vehicle. The camera upgrades are about $230 (before tax) each. Please respond with receipt of this message and how you would like to proceed. Thank you."
I did point out that this was probably going to be a problem with FSD, and agreed to the $460 charge anyway. They're coming out on Mobile Service on the 2nd of February to do the work.
Geez. Paid thousands for FSD, and now this.
So: $460 for replacement cameras/turn signals. Now you know.
I strongly suspect they will have to replace these cameras at some point for FSD. I have been told at a service centre that if needed, Tesla will replace cameras for FSD to function (and have evidence of such).
In a couple of discussions, people defending Tesla on this issue claim this won't affect FSD performance. I strongly disagree. I work in VFX / video production and do motion tracking fairly regularly. My process is somewhat different to Tesla's on-the-fly solution, but the underlying principle is the same:
1. Load the data (i.e - the input from the camera, or in my case a video file from a camera)
2. Analyse and identify points of high contrast
3. Track movement over time of those points
4. Build a solve (output a virtual camera that attempts to mimic the movements of the original perspective)
You can then go on to use that solve. For me, CGI placed into a video will appear to be 'in' the environment thanks to an accurate solve.
For Tesla, the virtual car in vector space should be accurately placed relative to the virtual environment around it. The virtual environment and virtual car should closely align to the real environment and real car, and therefore the solve needs to be as accurate as possible.
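The four steps above can be sketched as a toy program. This is a deliberately simplified illustration in pure Python (made-up frames and function names, nothing from Tesla's or any VFX tool's actual pipeline):

```python
# Toy sketch of the four tracking steps, in pure Python.
# Frames are tiny grayscale grids; the "camera" pans right 2 px
# between frames, so the bright feature shifts 2 px left in the image.

def contrast_points(frame, threshold=100):
    """Step 2: find pixels whose brightness differs sharply from a neighbour."""
    points = []
    for y in range(len(frame)):
        for x in range(len(frame[0]) - 1):
            if abs(frame[y][x] - frame[y][x + 1]) >= threshold:
                points.append((y, x))
    return points

def track(points_a, points_b):
    """Step 3: match each point to its nearest neighbour in the next frame."""
    matches = []
    for (y, x) in points_a:
        nearest = min(points_b, key=lambda p: abs(p[0] - y) + abs(p[1] - x))
        matches.append(((y, x), nearest))
    return matches

def solve(matches):
    """Step 4: average the per-point motion into one camera translation."""
    dys = [b[0] - a[0] for a, b in matches]
    dxs = [b[1] - a[1] for a, b in matches]
    return (sum(dys) / len(dys), sum(dxs) / len(dxs))

# Step 1: load the data -- two synthetic frames with one bright bar.
frame1 = [[255 if 4 <= x < 8 else 0 for x in range(16)] for _ in range(4)]
frame2 = [[255 if 2 <= x < 6 else 0 for x in range(16)] for _ in range(4)]

motion = solve(track(contrast_points(frame1), contrast_points(frame2)))
print(motion)  # (0.0, -2.0): the bar moved 2 px left, i.e. the camera panned right
```

A real tracker matches hundreds of points across full-resolution frames and solves for rotation and lens distortion too, but the shape of the pipeline is the same.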
Going back to the blinded camera: that's breaking step 2 in the above process, analysing and identifying points of high contrast. If an indicator light is blinding a camera 50% of the time during a lane change or junction turn, you're corrupting 50% of the data you base decisions on. Everything in the visual area of the glare has its perceived colours, contrast and brightness dramatically shifted; as far as the computer is concerned, everything in that area is now completely different.

You can either develop some kind of advanced algorithm to try and account for the glare in every scenario (which isn't going to work well, because depending on what's behind the glare the affected area differs in colour and contrast, AND it costs additional processing power), or you discard the glared frames entirely and extrapolate what's happening using object permanence. Either way, you're spending a ton of development time compensating for a manufacturing defect, and both approaches are going to affect the decisions FSD makes in strange ways.
Let's not forget a repeater will likely be the only camera in many scenarios to base decisions off of during manoeuvres/lane changes/turnings, particularly when there's another vehicle directly behind you blocking the rear camera.
Over time there'll be more vehicles with corrected cameras - and that's a problem. FSD is supposed to be trained on data from, and deployed on, cars using the same sensor suite. There's now a proportionally shrinking number of cars using the defective cameras. Will it make sense to spend more dev resources compensating for this shrinking group? Someone's eventually going to say 'screw it' and argue to upper management that these rubbish cameras need to be replaced because they're straining FSD development.
Long story short: at some point, blinded cameras will become one of the lower-hanging fruits for improving FSD performance compared to, say, chasing diminishing returns in the march of 9's. I think if you pay the fee now, you might find yourself fighting Tesla for a refund when this replacement programme eventually happens.