Suspected repeater camera defect that affects FSD performance

Honestly, it wouldn't surprise me if Tesla's solution to the complainers is just to remove the blind-spot camera feature from the car. Seriously, your car still works just like it did the day you bought it. There is nothing wrong. They introduced a new feature, and to take full advantage you'll need to either fix your existing cameras, buy new ones, or live with the light bleed. As for me, I'm going to attempt the fix.

If your car CAME with V11 and has this problem, you'd have a case, but I don't think anyone has reported that.
 
Is this really all you people have to complain about? Tesla's doing a great job then.
How do you know this is the only problem people are having?
There's no doubt in my mind that this is a flaw that needs fixing - as Tesla has done in later builds. It's still wild speculation to suggest that this will cause an issue with FSD - especially as spatial and temporal permanence are things they're working hard on in order to improve FSD.
Among other things, FSD depends on camera input. You have significant obfuscation of the side images. How can you call it 'wild speculation' that it causes an issue with FSD? I would say it's the reverse: it's wild speculation to assume that it doesn't cause an issue. Stop being an apologist.
 
It's evident Tesla thought it was worth applying tape to the PCB of the repeater cameras to mitigate the issue. The new PCBs don't show this effect either, but it's impossible to know whether that was intentional or simply a byproduct of the redesign / a new manufacturer.
Don't know about the taped model, but the C model with the new PCB in the video does not fix the issue. A few of us with C models (early 2021 cars) chimed in and reported similar glare, and the original video creator retested and determined that the new PCB does not actually fix the issue. There is a later D version that does appear to fix it (but it's unknown what is different).
At least with the front cameras you have an array of three cams with different specs to deal with things like direct sunlight, and to add a layer of redundancy.
Note the front cameras also have to deal with a repeating obstruction: the windshield wipers. So in theory they should be able to have an NN that can deal with this kind of issue. Of course that doesn't solve the problem for people using the feed as a blind-spot cam. I guess the argument behind the warranty denials is that this was extra functionality not included with the car. If, however, you have the glare on one camera but not the other, you have a stronger case (I think, with one exception, all the free fixes I have read about fall into this camp). The same would go for a car made after the feature was released, but I don't think there are any examples of that.
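
As a rough illustration of that idea (made-up inputs; nothing to do with Tesla's actual pipeline), a front end could simply drop the frames flagged as occluded before they reach the NN:

```python
import numpy as np

def gate_occluded_frames(frames: np.ndarray, occluded: np.ndarray) -> np.ndarray:
    """Keep only the frames where the obstruction flag is False.

    frames:   (N, H, W, C) uint8 video clip
    occluded: (N,) bool, True while the wiper/blinker covers the view
    """
    return frames[~occluded]

# Example: a blinker occluding 4 of every 12 frames at 36 fps
frames = np.zeros((36, 72, 128, 3), dtype=np.uint8)  # small dummy clip
occluded = (np.arange(36) % 12) < 4
clean = gate_occluded_frames(frames, occluded)
print(f"{clean.shape[0]} of 36 frames usable")       # 24 of 36 frames usable
```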
 
Honestly, it wouldn't surprise me if Tesla's solution to the complainers is just to remove the blind-spot camera feature from the car. Seriously, your car still works just like it did the day you bought it. There is nothing wrong. They introduced a new feature, and to take full advantage you'll need to either fix your existing cameras, buy new ones, or live with the light bleed. As for me, I'm going to attempt the fix.

If your car CAME with V11 and has this problem, you'd have a case, but I don't think anyone has reported that.
We've had the ability to view the side repeater cameras for a while.

The only thing that changed in V11 is that the view automatically comes on when hitting the turn signal.
 
We've had the ability to view the side repeater cameras for a while.

The only thing that changed in V11 is that the view automatically comes on when hitting the turn signal.
And that's the critical difference: in the blind-spot cam the turn signal is always on, which is what makes it a problem. For the other uses it didn't really matter, because the turn signal is very rarely on and the side view isn't the primary view (dashcam or the camera screen). I'm one of the rare people who always has the camera view up whenever possible while driving, but my impression is most people don't (hence the calls for a blind-spot cam).
 
Stop being an apologist.

Oohhhh, that’ll scare me - Junior-high name calling. I’m shaking, please don’t stuff me in a locker.

Unless you have a background in image processing and understand the terms “temporal permanence” and “spatial permanence”, you are completely ignorant on this subject, and should quit making a fool of yourself. Yes, whether this will be a problem for FSD is indeed wild speculation.

Let the engineers do their jobs.
 
And that's the critical difference: in the blind-spot cam the turn signal is always on, which is what makes it a problem. For the other uses it didn't really matter, because the turn signal is very rarely on and the side view isn't the primary view (dashcam or the camera screen). I'm one of the rare people who always has the camera view up whenever possible while driving, but my impression is most people don't (hence the calls for a blind-spot cam).
I totally would have used it if it were bigger.

It was too small to really be useful, which is why I never used it.
 
Let the engineers do their jobs.

I think you mean "Let the bean counters do their jobs". :)

In all seriousness, as an engineer I'd prefer to remove as much variability as possible.

Sure, it can be designed around, but not without trade-offs.

For example, let's say you decided to wait and capture frames only between blinks, but this introduces latency (rough numbers in the sketch below).

Okay, so let's deal with the issue in the neural network code instead, but that introduces another problem: different levels of obstruction between cameras across the fleet.
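
To put rough numbers on the latency option (illustrative assumptions only: a ~1.3 Hz blinker with a 50% duty cycle and a 36 fps camera; these are not Tesla specs):

```python
# Back-of-the-envelope sketch of the "wait for the blink gap" trade-off.
BLINK_HZ = 1.3        # assumed blink rate
DUTY_ON = 0.5         # assumed fraction of each cycle the LED is lit
FPS = 36              # assumed camera frame rate

period_s = 1.0 / BLINK_HZ            # ~0.77 s per blink cycle
on_time_s = period_s * DUTY_ON       # ~0.38 s of glare per cycle
worst_wait_s = on_time_s             # frame wanted just as the LED lights up
frames_lost = int(worst_wait_s * FPS)

print(f"worst-case added latency: {worst_wait_s * 1000:.0f} ms "
      f"({frames_lost} frames at {FPS} fps)")
# ~385 ms -- roughly a dozen metres of travel at highway speed, which is
# why simply skipping lit frames is not a free fix.
```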

As an engineer, I wouldn't want to deal with it.

Now, being told to deal with something for cost reasons isn't uncommon, and I've had numerous cases of non-optimal stuff that I never felt good about.

But nothing anywhere close to something that could cause someone to get hurt. With FSD, these cameras are used for auto lane change, the single most popular feature within FSD. This issue has what I'd consider a non-zero chance of interfering with the detection of a vehicle.

It's also going to make validation and verification of autonomous capabilities a difficult task, precisely because of this variability.

For FSD owners I think Tesla at some point will be forced to replace them.
 
From a non-technical perspective I would highly advocate for complaining.

Why?

As consumers we have power.
We can push back when we think something isn't fair.

Now I'm not saying to demand things when there is no argument to be made, but we have enough to advocate in our favor.

We know without a doubt that this is a design issue.
We know from the very beginning that the repeater cameras were intended for self-driving, whether someone bought the FSD feature or not.
We know that in some regions with better consumer protection laws, Tesla is changing them out for free.
We know that fixing it ourselves is simply not possible without taking a huge risk of damaging the entire thing.

The other reason is that I question whether we have power anymore, so this would be a good test case to see if Tesla even cares about the customer. We used to have advocates like Fred at Electrek, but Elon no longer listens to him.

Every Tesla influencer Elon listens to these days has shown total loyalty to Tesla, not to the customer.
 
We know that in some regions with better consumer protection laws, Tesla is changing them out for free.
Citation required for this point. The only consistent thing we have seen that gives you a higher probability of getting it changed for free is having the issue on only one side; I haven't seen anything that suggests jurisdiction has anything to do with it.
 
Citation required for this point. The only consistent thing we have seen that gives you a higher probability of getting it changed for free is having the issue on only one side; I haven't seen anything that suggests jurisdiction has anything to do with it.

This was addressed in a different thread, where it was mentioned that warranty matters are quite regulated and customers in the NL have a lot of power.

Now, what isn't said is what percentage of customers are able to get them replaced under warranty.

It was an attempt to explain why NL customers seemed to have an easier time. It wasn't 100%, though, as some customers were told "it's normal".

Maybe we need a poll? It would be useful, as we don't really know how many people are able to get them replaced under warranty.

Both of mine are bad so I'm going to see what Tesla service says.

I was going to give it a bit more time, though, as I think it's inevitable that they'll do free replacements for FSD owners.
 
I've no idea... the car is showing a junction ahead and the road behind to the left of the car. What specifically is the problem here?

There is enough obstruction that you can't easily see if there is a car coming.

From a UI functionality standpoint, it's a quality issue.
From an FSD standpoint, the general rule is that if a human can't tell, you can't expect a neural network to.

This defect impacts both aspects, and the cameras need to be replaced.

Sure, someone who only uses it from a UI aspect can live with it. It's a bit embarrassing if anyone else sees it, but it can be lived with. Lots of Model 3/Y owners live with little body-alignment issues and other eyesores. Newer owners have fewer issues, as Tesla has improved things.

From an FSD standpoint I simply don't see it being viable.
 
There is enough obstruction that you can't easily see if there is a car coming.

From a UI functionality standpoint, it's a quality issue.
From an FSD standpoint, the general rule is that if a human can't tell, you can't expect a neural network to.
Did you not read my earlier post?

I'm sorry, but arguing that if a human cannot see something then a camera also cannot is utter nonsense. Of course you can expect cameras to see things humans cannot. For a start, there are more of them! Can you see all around you at the same time? How about the time it takes your eye to adjust to different lighting conditions? Or that your eye can only really see details clearly at the fovea, which has a very narrow FOV, while a camera sees as clearly throughout its entire visual range?

And, quite apart from anything else, you are seeing a rendering on an LCD panel with all that implies, while the NN is seeing the camera output directly (albeit, at the moment, with some significant video processing). And what about that processing? You do understand that the dynamic range of the cameras is far greater than the LCD panel, right? That allows the NN to see details in light and dark areas at the same time without the tricks needed to get a compromised image for you to look at.
 
Oohhhh, that’ll scare me - Junior-high name calling. I’m shaking, please don’t stuff me in a locker.

Unless you have a background in image processing and understand the terms “temporal permanence” and “spatial permanence”, you are completely ignorant on this subject, and should quit making a fool of yourself. Yes, whether this will be a problem for FSD is indeed wild speculation.

Let the engineers do their jobs.
I'm well aware of what temporal permanence and spatial permanence are (they're both mechanisms used to compensate for errors/noise in the input, made necessary by imperfect sensors, which goes back to my original point), but whatever - you call it name-calling, I call it an adjective. Either way it seemed to touch a nerve (and elicited a junior-high response, ironically enough).
 
I'm well aware of what temporal permanence and spatial permanence are (they're both mechanisms used to compensate for errors/noise in the input, made necessary by imperfect sensors, which goes back to my original point), but whatever - you call it name-calling, I call it an adjective. Either way it seemed to touch a nerve (and elicited a junior-high response, ironically enough).
Er, no. They are primarily the means by which the NN deduces the presence of occluded objects (e.g. a pedestrian walking behind a car). Nothing whatsoever to do with the sensors. As a side effect they can be used to handle temporary sensor loss (which is just a more extreme form of occlusion).
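
For anyone curious, a toy version of that behaviour looks like this: a tracker that coasts on a constant-velocity prediction while measurements are missing. This is a generic alpha-beta filter for illustration, not anything from Tesla's stack:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    x: float          # estimated position (m)
    vx: float         # estimated velocity (m/s)
    missed: int = 0   # consecutive frames with no measurement

def update(track: Track, z: Optional[float], dt: float,
           max_missed: int = 10) -> Optional[Track]:
    """One frame: predict forward, then correct if a measurement arrived.

    z=None means the object is occluded this frame; the track coasts on
    its prediction instead of being dropped immediately.
    """
    track.x += track.vx * dt          # constant-velocity prediction
    if z is None:
        track.missed += 1             # coast through the occlusion
        return None if track.missed > max_missed else track
    r = z - track.x                   # innovation (measurement residual)
    track.x += 0.5 * r                # alpha correction
    track.vx += 0.1 * r / dt          # beta correction
    track.missed = 0
    return track
```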
 
Among other things, FSD depends on camera input. You have significant obfuscation of the side images. How can you call it 'wild speculation' that it causes an issue with FSD? I would say it's the reverse: it's wild speculation to assume that it doesn't cause an issue. Stop being an apologist.
The only speculation being made in this thread is the assertion that the light leakage is a problem for FSD. This puts the burden of proof clearly on those making that speculation, but no reasonable evidence has yet been presented. I don't see how this is anything to do with being or not being an apologist.
 
Did you not read my earlier post?

I'm sorry, but arguing that if a human cannot see something then a camera also cannot is utter nonsense. Of course you can expect cameras to see things humans cannot. For a start, there are more of them! Can you see all around you at the same time? How about the time it takes your eye to adjust to different lighting conditions? Or that your eye can only really see details clearly at the fovea, which has a very narrow FOV, while a camera sees as clearly throughout its entire visual range?

And, quite apart from anything else, you are seeing a rendering on an LCD panel with all that implies, while the NN is seeing the camera output directly (albeit, at the moment, with some significant video processing). And what about that processing? You do understand that the dynamic range of the cameras is far greater than the LCD panel, right? That allows the NN to see details in light and dark areas at the same time without the tricks needed to get a compromised image for you to look at.
Sorry, but you have several flaws in your logic.

Several cameras - that only matters if the views overlap and provide redundant data.

As for whether cameras can see things humans cannot, that's an assumption that depends on the camera having 'features' the eye doesn't have (expanded spectrum, better focusing/accommodation, resolution, etc.). There are some cameras that do and plenty that don't. If you turn the blinker on during the day there is no glare, so the camera can adapt to higher light intensity, but there's more to it than that. The adaptation typically occurs in the camera itself, not in post-device processing, so the image the car receives may or may not be better. Additionally, in low-light situations the amount of light coming from the object obscured by the glare is far lower, meaning you have a much lower signal-to-noise ratio. Beyond all that, the human eye (and brain) is still better at pattern recognition, interpretation, and extrapolation.
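
To illustrate the signal-to-noise point with made-up numbers (a simple photon shot-noise model, nothing specific to these cameras): the glare carries no information about the obscured object, but its shot noise still lands on top of the weak signal.

```python
import math

def snr(signal_e: float, glare_e: float, read_noise_e: float = 5.0) -> float:
    """SNR for one pixel, with counts in electrons and shot noise ~ sqrt(N)."""
    noise = math.sqrt(signal_e + glare_e + read_noise_e ** 2)
    return signal_e / noise

dim_object = 200.0   # electrons from a dim object at night (made up)
print(f"no glare:   SNR = {snr(dim_object, 0.0):.1f}")     # ~13.3
print(f"with glare: SNR = {snr(dim_object, 5000.0):.1f}")  # ~2.8
```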

It's very possible that the car is getting a better image than we're seeing, but it's also very possible that it's not.

Er, no. They are primarily the means by which the NN deduces the presence of occluded objects (e.g. a pedestrian walking behind a car). Nothing whatsoever to do with the sensors. As a side effect they can be used to handle temporary sensor loss (which is just a more extreme form of occlusion).
Yes, that's basically what I said (or meant to say): I was including occlusion as a form of input noise/error. You just said they're used to handle sensor loss, but then said they have nothing to do with the sensors. They very much have to do with the sensors (or with compensating for their limitations).

The only speculation being made in this thread is the assertion that the light leakage is a problem for FSD. This puts the burden of proof clearly on those making that speculation, but no reasonable evidence has yet been presented. I don't see how this is anything to do with being or not being an apologist.
I've seen no one give any 'proof' either way. You've made some arguments about the abilities of the cameras which may or may not be true, but you certainly haven't provided proof. I've made counter-arguments, and I'll admit I don't have definitive proof either.

It is no large stretch to say that an obscured image can cause issues with FSD. It may be that the noise caused by the glare is a non-issue, but without more in-depth knowledge of the technical specs of the sensors, only someone from Tesla can say for sure.
 
Did you not read my earlier post?

I'm sorry, but arguing that if a human cannot see something then a camera also cannot is utter nonsense. Of course you can expect cameras to see things humans cannot. For a start, there are more of them! Can you see all around you at the same time? How about the time it takes your eye to adjust to different lighting conditions? Or that your eye can only really see details clearly at the fovea, which has a very narrow FOV, while a camera sees as clearly throughout its entire visual range?

And, quite apart from anything else, you are seeing a rendering on an LCD panel with all that implies, while the NN is seeing the camera output directly (albeit, at the moment, with some significant video processing). And what about that processing? You do understand that the dynamic range of the cameras is far greater than the LCD panel, right? That allows the NN to see details in light and dark areas at the same time without the tricks needed to get a compromised image for you to look at.

That would be utter nonsense if I were comparing human vision with camera vision. But how could I be doing that? There is no human to plug in and compare against; the technology for that hasn't been invented yet.

From the camera image shown on the LCD, a human can clearly see the image is obstructed. So only the second part of what you said is relevant: that we might not be seeing all the data present.

You claim the limited dynamic range of the LCD panel isn't letting us see everything that's there. That's true to an extent, but we're not looking for tiny differences in shading; we're looking for something that is going to be REALLY obvious. And it will be obvious on the LCD panel, because the camera image goes through a lookup table to translate it from 10 bits to 8 bits.
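
For anyone unfamiliar, here's a toy version of such a lookup table (a plain gamma curve; the actual curve Tesla uses is unknown). Note how the bright end is compressed, which is exactly why a big, bright artifact still shows up clearly on the screen:

```python
import numpy as np

def build_lut(gamma: float = 2.2) -> np.ndarray:
    """Map 10-bit camera values (0..1023) to 8-bit display values (0..255)."""
    x = np.arange(1024) / 1023.0
    return np.round(255 * x ** (1.0 / gamma)).astype(np.uint8)

lut = build_lut()
raw = np.array([8, 64, 512, 1023])   # hypothetical 10-bit pixel values
print(lut[raw])                       # [ 28  72 186 255]
```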

I agree that we don't know to what degree the image is being impacted, but I think it's ridiculous to suggest that it's not being impacted.
 