Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Suspected repeater camera defect that affects FSD performance

The only speculation being made in this thread is the assertion that the light leakage is a problem for FSD. This puts the burden of proof clearly on those making that speculation, but no reasonable evidence has yet been presented. I don't see how this has anything to do with being or not being an apologist.

At this point what we have is a hypothesis, but we lack the tools to demonstrate that there is a reduction in detection accuracy, or in time to detect, due to this defect. That isn't something we can test, because we don't have access to the neural network, or to test data to run through it.

We can only look at the processed images to see the differences in obstruction, and make our guess.

My guess is based on my experience working with optical systems and neural networks. Something as bad as that would cause serious issues with detection.

I'm convinced of that potential to the point where I'll get the cameras changed out even if I have to pay for it. The rationale is my strong conviction that at some point down the road Tesla will refund me.

It doesn't have to be the FSD argument that wins; it could be some other argument about this being a design defect. It certainly isn't the first time Tesla tried to avoid replacing something that was a design defect and then had to replace it under warranty.

Sometimes in life you have to make an educated guess and just take the gamble.
 
It wouldn't surprise me if Tesla's solution to the complainers is simply to remove the blindspot camera feature from the car. Seriously: your car still works just like it did the day you bought it; there is nothing wrong. They introduced a new feature, and to take full advantage of it you'll need to either fix your existing cameras, buy new ones, or live with the light bleed. As for me, I'm going to attempt the fix.

If your car came with V11 and has this problem, you'd have a case, but I don't think anyone has reported that.
The blindspot indicator popup is completely irrelevant to this issue beyond making it more obvious that the problem exists. We were all seeing this in our dashcam footage, and the defect existed before we were even able to inspect the footage coming from these cameras. They can remove the pop-up if they want, but that'd be the equivalent of sweeping a rotting corpse under the carpet to try and make it go away.

The only speculation being made in this thread is the assertion that the light leakage is a problem for FSD. This puts the burden of proof clearly on those making that speculation, but no reasonable evidence has yet been presented. I don't see how this has anything to do with being or not being an apologist.
If you were to take one frame's worth of data from the repeater cameras where there's no glare, and another a few moments later where there is glare, I think it's a fair assumption that the data - whether that's raw data for the NN or a rendered image - is substantially different. To exaggerate the point: given the choice, you would feed the NN the image without glare rather than the one with it. To get the best possible outcome from the NN, you want to minimise noise in its input. I think we can all agree on that baseline.
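To make the "substantially different data" point concrete, here's a rough sketch - nothing to do with Tesla's actual pipeline, and the glare here is simulated - of how you could quantify the difference between a clean frame and a glared one:

```python
import numpy as np

def frame_difference(clean: np.ndarray, glare: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two 8-bit frames,
    normalised to [0, 1]. Higher means more of the image changed."""
    a = clean.astype(np.float64) / 255.0
    b = glare.astype(np.float64) / 255.0
    return float(np.mean(np.abs(a - b)))

# Simulated 8-bit grayscale frame, plus the same frame with one corner
# blown out - a crude stand-in for light leakage washing out the image.
rng = np.random.default_rng(0)
clean = rng.integers(0, 120, size=(64, 64), dtype=np.uint8)
glare = clean.astype(np.int32)
glare[:24, :24] += 200                       # saturate a corner region
glare = np.clip(glare, 0, 255).astype(np.uint8)

print(frame_difference(clean, clean))   # identical frames -> 0.0
print(frame_difference(clean, glare))   # the glare region dominates the metric
```

A single scalar like this obviously says nothing about detection accuracy, but it does show the input data genuinely differs, which is the baseline being argued here.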

I think we're going in circles here, so we should contact someone with lower-level access, like @greentheonly, to see if we can get better insight into what the FSD computer receives than a dashcam / post-processed live view gives us.
 
update: @rice_fry has affected repeater cams & the ability to pull raw imagery from the car. We'll wait and see if he decides to compare images with & without glare.
I don't think raw imagery will tell us much, as you still need to process it. Then there will be arguments over how to process it, because if you process it for human consumption, as most raw images are processed, you run into the same issue. (It reminds me of the processing done on the early cameras, and the trouble getting the color right for human consumption, because back then they used an RCCC filter.)

What would be telling is the effect on the NN data of a car with the glare versus one without. It would be interesting to see whether they have any mitigations already built in (for this and for other things, like the wiper for the front cameras, or water beading on the rear camera).
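For anyone curious about the RCCC point: on an RCCC sensor, one pixel in each 2x2 block has a red filter and the other three are clear (panchromatic), which is why rendering it "correctly" for human eyes takes work. A toy sketch of splitting such a mosaic - the red pixel's position in the block is an assumption here; real sensors may lay it out differently:

```python
import numpy as np

def split_rccc(raw: np.ndarray):
    """Split a raw RCCC mosaic into red samples and clear (panchromatic)
    samples, one value per 2x2 block. Assumes the red pixel sits at the
    top-left of each block."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    red = raw[0::2, 0::2].astype(np.float64)
    # The other three pixels of each block are clear; average them to get
    # a luminance estimate for the block.
    clear = (raw[0::2, 1::2].astype(np.float64)
             + raw[1::2, 0::2]
             + raw[1::2, 1::2]) / 3.0
    return red, clear

raw = np.arange(16, dtype=np.uint8).reshape(4, 4)  # tiny 4x4 test mosaic
red, clear = split_rccc(raw)
print(red.shape, clear.shape)   # each is (2, 2), one sample per block
```

The clear channel gives good low-light luminance, but reconstructing full color from a single red sample per block is exactly the kind of guesswork that made early human-viewable renderings look off.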
 
I don't think raw imagery will tell us much, as you still need to process it. Then there will be arguments over how to process it, because if you process it for human consumption, as most raw images are processed, you run into the same issue. (It reminds me of the processing done on the early cameras, and the trouble getting the color right for human consumption, because back then they used an RCCC filter.)

What would be telling is the effect on the NN data of a car with the glare versus one without. It would be interesting to see whether they have any mitigations already built in (for this and for other things, like the wiper for the front cameras, or water beading on the rear camera).
The goalposts move again, from identifying what the NN sees to how the NN processes it.

The NN isn't a magical entity which expends zero computational effort to completely mitigate all forms of noise and visual artefacts.
 
Has anyone taken two cars running FSDBeta - one with older cameras and one with newer ones - and done a drive together to compare FSDBeta between them? That would tell us if there is any impact. If you're near Snohomish and have a car with new cameras, let me know and we'll test this out.
I think that would be the step required from us to get blogosphere attention onto the issue.

The official Tesla Owners UK group has picked this up and is now tweeting about it:

I hate to suggest it, but Tesla only seems to respond to outcry once it hits regulators or the media. Doing a test as Ruffles mentions and taking it to the media with a spin like 'Tesla promises FSD using cameras that blind themselves' might be the only way we can force their hand. Every attempt at going through official channels gets us shut down with awful and inaccurate excuses.
 
The goalposts move again, from identifying what the NN sees to how the NN processes it.

The NN isn't a magical entity which expends zero computational effort to completely mitigate all forms of noise and visual artefacts.
The question was always about the effect on FSD; that's not moving the goalposts. If there is little to no effect, then even if there is less information available in the glare case, it's not relevant. It would only mean that fixing it could lead to a possible improvement; it's not affecting FSD operation.

The people who have hacked into Tesla's system have access to all the processed data (I've seen illustrations generated from it in terms of voxels, point clouds, and labeled 2D images), so why would we not be interested in those instead of trying to argue over how to process the raw data from the sensors?

NNs and algorithms aren't magic, but they can do things that humans can't through manual processing. I've tried some of this for people on the forum, for example frame averaging to extract a license plate number from low-res footage. There is software out there with NNs that can do it with high reliability, but good luck trying to replicate that manually by processing the raw footage yourself.
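The frame-averaging trick mentioned above is simple to sketch. Assuming the frames are already aligned (real footage would need registration first), uncorrelated sensor noise drops by roughly the square root of the frame count:

```python
import numpy as np

def average_frames(frames: np.ndarray) -> np.ndarray:
    """Temporal mean over a stack of aligned frames (N, H, W).
    With N frames of the same static scene, independent noise
    shrinks by roughly sqrt(N)."""
    return frames.mean(axis=0)

# A flat static "scene" plus independent Gaussian noise per frame.
rng = np.random.default_rng(1)
scene = np.full((32, 32), 100.0)
frames = scene + rng.normal(0.0, 20.0, size=(25, 32, 32))

single_err = np.abs(frames[0] - scene).mean()       # error of one noisy frame
avg_err = np.abs(average_frames(frames) - scene).mean()  # error after averaging 25

print(avg_err < single_err)  # averaging 25 frames cuts noise roughly 5x
```

This is the easy, linear part; the license-plate tools go well beyond it with learned super-resolution, which is the part you can't replicate by hand.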
 
I've seen no one give any 'proof' either way. You've made some arguments about the abilities of the cameras which may or may not be true, but you certainly haven't provided proof. I've made counter-arguments about them, but I will also admit that I don't have definitive proof.

It is no large stretch to say that an obscured image can cause issues with FSD. It may be that the noise caused by the glare is a non-issue, but without more in-depth knowledge of the technical specs of the sensors, only someone from Tesla can say for sure.
Well, you are conflating two issues.

First and foremost, my posts were pointing out that the argument "I cannot see X on screen, therefore FSD cannot see X" rests on an invalid assumption. Since this logic is the basis of the argument that FSD is impacted by light-leakage, the argument is invalidated. That's not speculation, it's just logic.

Of course, invalidating one argument doesn't mean there are not other valid arguments about the possible impact of light leakage. As you note, without true knowledge of how the cameras/NN handle this, we cannot know. However, it's worth noting that FSD is a primary project for Tesla, and has been for several years, with massive resources assigned to it. The FSD vision stack has been running in-house for 2-3 years now (in some form or other). If light leakage were an issue for FSD, do you think it would have gone unnoticed within Tesla all that time? And, given the importance of the FSD project, do you not think it would have been addressed long ago? Tesla makes production changes to the cars all the time; can you imagine a scenario in which a relatively trivial change was refused if the FSD team felt the defect was impacting the capabilities of FSD?

Yes, this is speculation too, but it seems more logical to me than assuming Tesla "covered up" an issue and potentially crippled one of their most important and visible development efforts. That seems to me to be drifting into the realm of conspiracy theory. So why are they fixing it now? Because it only became significant when they added the user-visible view to the UI as a convenience.
 
Since this logic is the basis of the argument that FSD is impacted by light-leakage
Ah, but that's the question, isn't it? You've assumed that the light leakage does not impact any data needed by the car for FSD, but that argument is also based on assumptions. If it does not impact the data then you are correct, but a logical conclusion premised on assumptions is only as good as the assumptions. If the assumptions are incorrect, then the conclusion is invalid.

If light leakage were an issue for FSD, do you think it would have gone unnoticed within Tesla all that time? And, given the importance of the FSD project, do you not think it would have been addressed long ago?
Agreed, but I would also note that they actually did make a change to the camera design to prevent light leakage. That would imply (but not prove) that it was a problem, or at least a potential problem. Tesla (like all companies) is also loath to retrofit hardware after the fact, for obvious financial reasons. It is no great leap to hypothesize that the engineers said 'this is a problem' and management said 'we can't afford to swap out every camera; figure out a way to deal with it.' (That's basically what management does to engineering all the time.)

So why are they fixing it now? Because it only became significant when they added the user-visible view to the UI as a convenience.
and this is speculation, too. Did they make the change because they added the BSM camera view, or had they already made the change and the BSM just made evident the problem that existed before? We don’t really know.
 
Ah, but that's the question, isn't it? You've assumed that the light leakage does not impact any data needed by the car for FSD, but that argument is also based on assumptions. If it does not impact the data then you are correct, but a logical conclusion premised on assumptions is only as good as the assumptions. If the assumptions are incorrect, then the conclusion is invalid.
Er, no, you don't seem to understand how this works. A claim was made (FSD is impacted by light-bleed). The argument to support this claim was that it could be seen on screen that light-bleed was blinding the camera.

I pointed out that the argument was invalid, since you cannot assert that what you see on the screen is what the NN sees without much deeper knowledge of the entire video processing chains (FSD and screen). This invalidates the argument (but not the claim, since there may be other arguments for the claim that are valid). But at no point did I make any assumptions; I merely pointed out that the argument contains an invalid assumption.

You seem to think that invalidating an assumption is itself implicitly an assumption, which is incorrect. I repeat: at no point did I assert that light leakage does not impact FSD. I simply said the argument that the screen view "proves" the claim was invalid, and therefore of no use in arguing for (or against) the claim.
 
Does anyone know if this issue is related to the mysterious error message "Auto park unavailable" that I get in my MX from time to time? It only seems to happen when there is bright sun on the side of the car.

I am guessing the error is due to the repeater being blinded. And yes, the message should be inhibited if the car is in 'Drive'.
Interesting. I did get this message in my MX when I was travelling long distance with Autopilot engaged. I came back from my trip, put in a service request, and set up an appointment. Tesla called back and said that since the error message event, multiple software updates have come into my car, so it may have been resolved; if I observe it again, I should put in another service request. I have attached the pic (4:43 PM PST, Jan 1st, 2022). The pic info states it was near Coalinga, CA, and I was driving north on I-5, which generally runs NNW near Coalinga. Sunset was at 4:59 PM that day at Coalinga, so my guess is I was not driving facing the sun around sunset time, as the sun should have been well to my left. Not sure why it happened. I will go on a longer drive this weekend and see if it repeats on software release 2022.4 (current version).
 

Attachments

  • IMG_0568.jpg
Er, no, you don't seem to understand how this works. A claim was made (FSD is impacted by light-bleed). The argument to support this claim was that it could be seen on screen that light-bleed was blinding the camera.

I pointed out that the argument was invalid, since you cannot assert that what you see on the screen is what the NN sees without much deeper knowledge of the entire video processing chains (FSD and screen). This invalidates the argument (but not the claim, since there may be other arguments for the claim that are valid). But at no point did I make any assumptions; I merely pointed out that the argument contains an invalid assumption.

You seem to think that invalidating an assumption is itself implicitly an assumption, which is incorrect. I repeat: at no point did I assert that light leakage does not impact FSD. I simply said the argument that the screen view "proves" the claim was invalid, and therefore of no use in arguing for (or against) the claim.
Yes, I understand, but you also (at least implicitly) assumed that the cameras and NN have additional data that makes the light bleed irrelevant, and we don't know that either, since we don't actually know the true capabilities of the cameras. Both arguments make assumptions, just in different respects.
 
Sorry for the aside, but I'm wondering if the cameras see infrared. I live in a rural area and, so far, my Y has not reacted to deer during the day or at night, either on or alongside the road. Any thoughts on that? Strangely, last year it slowed, and I realized that a family of coyotes was crossing far down the road.
 
Sorry for the aside, but I'm wondering if the cameras see infrared. I live in a rural area and, so far, my Y has not reacted to deer during the day or at night, either on or alongside the road. Any thoughts on that? Strangely, last year it slowed, and I realized that a family of coyotes was crossing far down the road.
I would throw it out there that, judging from the dashcam footage, IR (infrared) is filtered out, so the cameras can't see IR. A camera that is sensitive to IR looks very different from one with an IR filter on it (most "normal" cameras come with IR filters because we humans are usually more interested in visible light), and Tesla's cameras definitely don't have active IR filters that can switch on and off.

Just an example from Raspberry Pi cams (I happened to be looking at them recently). The top picture has the IR filter removed; the bottom is the normal camera with the IR filter still in place. You will notice foliage looks completely different in IR (the colors of other objects look different too).
pinoir-thumbnail.jpg


However, the latest Plaid models have infrared LEDs for the cabin camera, so that camera likely has IR capabilities.
Tesla Model S Plaid Has In-Cabin Infrared LEDs, Possibly for Driver Monitoring - TeslaNorth.com
 
Yes, I understand, but you also (at least implicitly) assumed that the cameras and NN have additional data that makes the light bleed irrelevant, and we don't know that either, since we don't actually know the true capabilities of the cameras. Both arguments make assumptions, just in different respects.
Nope, you don't get it, and you clearly never will.

I didn't say ANYTHING about the NN getting better or worse or staying the same. I simply said WE DON'T KNOW. Since we don't know, any claim made based on KNOWING is invalid. End of discussion.

If I flip a coin and you say, without looking, that it's heads, I can say "you don't know that" without knowing whether it's heads or tails. You keep claiming that my saying "you don't know that" is the same as my claiming it's really tails, which is nonsense.
 
I would throw it out there that, judging from the dashcam footage, IR (infrared) is filtered out, so the cameras can't see IR. A camera that is sensitive to IR looks very different from one with an IR filter on it (most "normal" cameras come with IR filters because we humans are usually more interested in visible light), and Tesla's cameras definitely don't have active IR filters that can switch on and off.

Just an example from Raspberry Pi cams (I happened to be looking at them recently). The top picture has the IR filter removed; the bottom is the normal camera with the IR filter still in place. You will notice foliage looks completely different in IR (the colors of other objects look different too).
pinoir-thumbnail.jpg


However, the latest Plaid models have infrared LEDs for the cabin camera, so that camera likely has IR capabilities.
Tesla Model S Plaid Has In-Cabin Infrared LEDs, Possibly for Driver Monitoring - TeslaNorth.com
Thanks! Maybe someday... Crashes with deer are very common in my neck of the woods, and many of them happen at night.
 
I would throw it out there that, judging from the dashcam footage, IR (infrared) is filtered out, so the cameras can't see IR. A camera that is sensitive to IR looks very different from one with an IR filter on it (most "normal" cameras come with IR filters because we humans are usually more interested in visible light), and Tesla's cameras definitely don't have active IR filters that can switch on and off.

Just an example from Raspberry Pi cams (I happened to be looking at them recently). The top picture has the IR filter removed; the bottom is the normal camera with the IR filter still in place. You will notice foliage looks completely different in IR (the colors of other objects look different too).
pinoir-thumbnail.jpg


However, the latest Plaid models have infrared LEDs for the cabin camera, so that camera likely has IR capabilities.
Tesla Model S Plaid Has In-Cabin Infrared LEDs, Possibly for Driver Monitoring - TeslaNorth.com
Interesting comparison, but be careful. After all, what you are showing is the result of mapping IR into the visible spectrum... if it weren't, the images would be the same. A camera that can truly see IR as well as visible light can see everything we can see PLUS additional "colors" we cannot. It's actually impossible for us to see what the camera truly sees.
 
Interesting comparison, but be careful. After all, what you are showing is the result of mapping IR into the visible spectrum... if it weren't, the images would be the same. A camera that can truly see IR as well as visible light can see everything we can see PLUS additional "colors" we cannot. It's actually impossible for us to see what the camera truly sees.
That's not exactly what's happening. Without the IR filter, the IR returns overwhelm the visible-light returns, especially in things like foliage, which reflects IR strongly (while in visible light it only weakly reflects green and absorbs most of the rest of the spectrum).

This article has the reflectance curve of a leaf. You will notice a small bump at around 550 nm; that's green.
Then look at the response past 700 nm; that's the IR. Without the filter that cuts it off, the IR overwhelms the image sensor, so it's impossible for the sensor to tell that foliage is green.
f516460a-9bc5-4ed9-a7bb-d78bb38b4467.jpg

Eerie and Beautiful Infrared Time-Lapse Video

You can remap the colors to get back something that looks closer to a camera with an IR filter, but it introduces a lot of color noise even in the parts you can recover. For things like foliage, you can't recover the color at all, because of the problem mentioned above.

Examples below:
IR Filter:
1-Camera-with-IR-filter.png

Without IR filter:
2-Camera-without-IR-filter.png

Without IR filter but remapped colors and white balance:
8-Fully-optimized-color-processing-for-camera-without-IR-cut-filter.png

The color checker can be remapped because its IR reflectance (it's plastic) still makes that possible, but you will notice it's a lot noisier and the colors are muted. The foliage, however, still can't be recovered (which makes complete sense if you look at the reflectance graph). The glass also looks different (because the IR reflectance of glass differs from its visible-light reflectance).
This article has more details:
Implementing a visible camera for both daylight and lowlight vision - Adimec

That's why security cameras with an IR mode still use a switchable physical IR filter and can't just rely on software to remap the colors. Security camera makers would very much like to eliminate that moving filter if it were possible (some partially do, with "color night vision," which is just a sensor more sensitive to visible light with wide/high-dynamic-range features, but it still keeps the IR filter).
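The "IR overwhelms the sensor" effect is easy to show with toy numbers. The reflectance curve below is a crude caricature of a leaf (small green bump near 550 nm, strong plateau past 700 nm), and the sensor is idealised as equally sensitive everywhere; none of these numbers come from a real datasheet:

```python
import numpy as np

# Wavelength grid from 400 nm (violet) to 1000 nm (near-IR), 10 nm steps.
wl = np.arange(400.0, 1001.0, 10.0)

# Toy leaf reflectance: a small Gaussian bump at ~550 nm (green) on a low
# baseline, then a strong flat plateau beyond 700 nm (near-IR).
leaf = 0.05 + 0.10 * np.exp(-((wl - 550.0) / 30.0) ** 2)
leaf[wl > 700.0] = 0.50

sensitivity = np.ones_like(wl)          # idealised flat sensor response
ir_cut = (wl <= 700.0).astype(float)    # IR-cut filter: blocks everything past 700 nm

# Total signal the sensor collects, with and without the IR-cut filter.
signal_filtered = np.sum(leaf * sensitivity * ir_cut)
signal_unfiltered = np.sum(leaf * sensitivity)
nir_fraction = 1.0 - signal_filtered / signal_unfiltered

print(f"NIR share of the unfiltered signal: {nir_fraction:.0%}")
```

Even with these cartoon numbers, the near-IR plateau dwarfs the little green bump, which is exactly why foliage color can't be recovered once the filter is gone.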
 
That's not exactly what's happening. Without the IR filter, the IR returns overwhelm the visible-light returns, especially in things like foliage, which reflects IR strongly (while in visible light it only weakly reflects green and absorbs most of the rest of the spectrum).

This article has the reflectance curve of a leaf. You will notice a small bump at around 550 nm; that's green.
Then look at the response past 700 nm; that's the IR. Without the filter that cuts it off, the IR overwhelms the image sensor, so it's impossible for the sensor to tell that foliage is green.
f516460a-9bc5-4ed9-a7bb-d78bb38b4467.jpg

https://slate.com/technology/2013/02/infrared-time-lapse-video-of-trees-in-the-infrared-is-eerie-and-beautiful.html

You can remap the colors to get back something that looks closer to a camera with an IR filter, but it introduces a lot of color noise even in the parts you can recover. For things like foliage, you can't recover the color at all, because of the problem mentioned above.

Examples below:
IR Filter:
1-Camera-with-IR-filter.png

Without IR filter:
2-Camera-without-IR-filter.png

Without IR filter but remapped colors and white balance:
8-Fully-optimized-color-processing-for-camera-without-IR-cut-filter.png

The color checker can be remapped because its IR reflectance (it's plastic) still makes that possible, but you will notice it's a lot noisier and the colors are muted. The foliage, however, still can't be recovered (which makes complete sense if you look at the reflectance graph). The glass also looks different (because the IR reflectance of glass differs from its visible-light reflectance).
This article has more details:
Implementing a visible camera for both daylight and lowlight vision - Adimec

That's why security cameras with an IR mode still use a switchable physical IR filter and can't just rely on software to remap the colors. Security camera makers would very much like to eliminate that moving filter if it were possible (some partially do, with "color night vision," which is just a sensor more sensitive to visible light with wide/high-dynamic-range features, but it still keeps the IR filter).
I think we're kind of saying the same thing. I agree the response curve is highly non-linear, but that's a distinct problem from remapping IR into the visible spectrum (e.g. with security cameras). With correct filtering, you can build a camera that sees the full spectrum from visible to IR (though I'd be the first to admit this is non-trivial). Or, equivalently, you simply have multiple cameras and combine the images in post-processing. My point was just that if you want to present such a system to a human, you HAVE to remap everything into the visible spectrum, which inevitably eliminates information, whereas for a car vision system and NN you can train directly against the full spectrum. So we need to be careful when drawing comparisons using remapped IR images that a human can see.
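To illustrate the "train directly against the full spectrum" point: a hypothetical vision stack could feed an aligned near-IR band to the network as a fourth input channel alongside RGB, with no remapping into the visible range at all. This is purely a sketch of the idea, not anything Tesla is known to do:

```python
import numpy as np

def stack_full_spectrum(rgb: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Stack a visible RGB image (H, W, 3) and an aligned near-IR image
    (H, W) into a single (H, W, 4) tensor. A network trained on 4-channel
    input consumes the NIR band directly; nothing is squeezed into the
    visible range for a human to look at, so nothing is lost."""
    assert rgb.shape[:2] == nir.shape, "images must be co-registered"
    return np.concatenate([rgb, nir[..., None]], axis=-1)

# Tiny dummy images standing in for co-registered sensor outputs.
rgb = np.zeros((8, 8, 3), dtype=np.float32)
nir = np.ones((8, 8), dtype=np.float32)

x = stack_full_spectrum(rgb, nir)
print(x.shape)  # (8, 8, 4) - four channels, NIR preserved as-is
```

The human-viewable render would still need a lossy remap, but the NN's input wouldn't, which is the asymmetry being pointed out above.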