> Is this really all you people have to complain about? Tesla's doing a great job then.

How do you know this is the only problem people are having?
> There's no doubt in my mind that this is a flaw that needs fixing - as Tesla has done in later builds. It's still wild speculation to suggest that this will cause an issue with FSD - especially as spatial and temporal permanence is something they're working hard on in order to improve FSD.

Among other things, FSD depends on camera input. You have significant obscuring of the side images. How can you call it 'wild speculation' that it causes an issue with FSD? I would say it's the reverse - it's wild speculation to assume that it doesn't cause an issue. Stop being an apologist.
> It's evident Tesla thought it was worth applying tape onto the PCB of the repeater cameras to mitigate the issue. The new PCBs don't have this effect either, but it's impossible to know if that was intentional or simply a byproduct of the redesign / a new manufacturer.

Don't know about the taped model, but the C model with the new PCB in the video does not fix the issue. A few of us with C models (on early 2021 cars) chimed in and reported similar glare, and the original video creator retested and determined that the new PCB does not fix the issue. There is a later D version that does appear to fix it (but it's unknown what is different).
> At least with the front cameras you have an array of 3 cams equipped with different specs to deal with things like direct sunlight and to add some layer of redundancy.

Note the front cameras also have to deal with a repeating obstruction: the windshield wipers. So theoretically, they should be able to have a NN that can deal with the issue. Of course that doesn't solve the issue for people using it as a blind spot cam. I guess the argument is that it was extra functionality not included with the car, which is why the warranty denials. If however you have one camera that has it and one that doesn't, you have a stronger case (I think, other than one example, all of the ones I have read about that got free fixes fall under this camp). Also if your car was made after the feature was released (but I don't think there are any examples of this).
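That intuition about repeating obstructions can be sketched with a classical baseline (no neural network involved, and `temporal_median` is a made-up helper, not anything from Tesla's actual pipeline): a per-pixel median over a short stack of frames votes out an obstruction, like a wiper blade, that covers any given pixel in only a minority of frames.

```python
from statistics import median

def temporal_median(frames):
    """Per-pixel median across a stack of equally sized grayscale frames.

    A transient obstruction (e.g. a wiper blade) that covers a pixel in
    only a minority of the frames is voted out by the median.
    """
    height, width = len(frames[0]), len(frames[0][0])
    return [
        [median(f[y][x] for f in frames) for x in range(width)]
        for y in range(height)
    ]

# Three 2x2 frames of a static scene; in the middle frame a dark
# "wiper" stripe (value 0) covers the top row.
scene = [[100, 100], [50, 50]]
occluded = [[0, 0], [50, 50]]
frames = [scene, occluded, scene]

print(temporal_median(frames))  # the stripe is gone: [[100, 100], [50, 50]]
```

A real pipeline would of course work on moving scenes, which is why learned temporal models rather than a plain median are needed, but the principle of exploiting frames over time is the same.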
> In my opinion, it wouldn't surprise me if Tesla's solution to the complainers is just to remove the blindspot camera feature from their car. Seriously, your car still works just like it did the day you bought it. There is nothing wrong. They introduced a new feature and to take full advantage, you'll need to either fix your existing cameras, buy new ones, or live with the light bleed. As for me, I'm going to attempt the fix.

We've had the ability to view the side repeater cameras for a while.
If your car CAME with V11 and has this problem, you'd have a case but I don't think anyone has reported that.
> We've had the ability to view the side repeater cameras for a while.

And that's the critical difference: the turn signal is always on in the blind spot cam, which makes it a problem. For the other usages it didn't really matter, because the turn signal is very rarely on, nor is the side view the primary view (dashcam or the camera screen). I'm one of the rare people who always has the camera view up whenever possible when driving, but the gist I get is that most people aren't doing that (hence the call for the blind spot cam).
The only thing that changed in V11 is that view automatically comes on when hitting the turn signal.
> And that's the critical difference: the turn signal is always on in the blind spot cam, which makes it a problem. For the other usages it didn't really matter, because the turn signal is very rarely on, nor is the side view the primary view (dashcam or the camera screen). I'm one of the rare people who always has the camera view up whenever possible when driving, but the gist I get is that most people aren't doing that (hence the call for the blind spot cam).

I totally would have used it if it was bigger.
> We know in some regions that have better consumer protection laws that Tesla is changing them out for free.

Citation required for this point. The only consistent thing we have seen that gives you a higher probability of getting it changed for free is having it on only one side; I haven't seen anything that suggests jurisdiction has anything to do with it.
I've no idea ... the car is showing a junction ahead and the road behind to the left of the car. What specifically is the problem here?
Did you not read my earlier post? There is enough obstruction that you can't easily see if there is a car coming.
From a UI functionality standpoint, it's a quality issue.
From an FSD standpoint, the general rule is: if a human can't tell, then you can't expect a neural network to.
> Oohhhh, that'll scare me - junior-high name calling. I'm shaking, please don't stuff me in a locker.

I'm well aware of what temporal permanence and spatial permanence are (they're both mechanisms used to compensate for errors/noise in the input, made necessary by imperfect sensors, which goes back to my original point), but whatever - you call it name-calling, I call it an adjective. Either way it seemed to touch a nerve (and elicited a junior-high response, ironically enough).
Unless you have a background in image processing and understand the terms “temporal permanence” and “spatial permanence”, you are completely ignorant on this subject, and should quit making a fool of yourself. Yes, whether this will be a problem for FSD is indeed wild speculation.
Let the engineers do their jobs.
> I'm well aware of what temporal permanence and spatial permanence are (they're both mechanisms used to compensate for errors/noise in the input, made necessary by imperfect sensors, which goes back to my original point), but whatever - you call it name-calling, I call it an adjective. Either way it seemed to touch a nerve (and elicited a junior-high response, ironically enough).

Er, no. They are primarily the means to allow the NN to deduce the presence of occluded objects (e.g. a pedestrian walking behind a car). Nothing whatsoever to do with the sensors. As a side effect they can be used to handle temporary sensor loss (which is just a more extreme form of occlusion).
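To make the occlusion point concrete, here is a hand-coded 1-D toy sketch of what object permanence buys a tracker (the `Track`/`update` names are made up for illustration; in an actual FSD stack this behavior is learned end to end, not hand-coded): an occluded object coasts on its last velocity estimate instead of being forgotten.

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float      # last known position (1-D for simplicity)
    vx: float     # estimated velocity per frame
    age: int = 0  # frames since the last detection

def update(track, detection=None, max_coast=10):
    """Advance a track one frame.

    If the detection is missing (object occluded), coast on the last
    velocity estimate instead of dropping the object, for up to
    max_coast frames. Returns None once the track has coasted too long.
    """
    if detection is None:
        track.age += 1
        if track.age > max_coast:
            return None          # give up: the object is truly gone
        track.x += track.vx      # predicted position while occluded
    else:
        track.vx = detection - track.x
        track.x = detection
        track.age = 0
    return track

# A pedestrian walks at 1 unit/frame, then disappears behind a car
# for 3 frames (detection = None).
t = Track(x=0.0, vx=1.0)
for obs in [1.0, 2.0, None, None, None]:
    t = update(t, obs)
print(t.x)  # 5.0: still predicted to be moving while unseen
```

The same mechanism handles a briefly blinded sensor: a dropped detection is indistinguishable from an occluded one, which is the "side effect" described above.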
> Among other things, FSD depends on camera input. You have significant obscuring of the side images. How can you call it 'wild speculation' that it causes an issue with FSD? I would say it's the reverse - it's wild speculation to assume that it doesn't cause an issue. Stop being an apologist.

The only speculation being made in this thread is the assertion that the light leakage is a problem for FSD. This puts the burden of proof clearly on those making that speculation, but no reasonable evidence has yet been presented. I don't see how this is anything to do with being or not being an apologist.
> Did you not read my earlier post?

Sorry, but you have several flaws in your logic.
I'm sorry, but arguing that if a human cannot see something then a camera also cannot is utter nonsense. Of course you can expect cameras to see things humans cannot. For a start, there are more of them! Can you see all around you at the same time? How about the time it takes your eye to adjust to different lighting conditions? Or that your eye can only really see details clearly at the fovea, which has a very narrow FOV, while a camera sees as clearly throughout its entire visual range?
And, quite apart from anything else, you are seeing a rendering on an LCD panel with all that implies, while the NN is seeing the camera output directly (albeit, at the moment, with some significant video processing). And what about that processing? You do understand that the dynamic range of the cameras is far greater than the LCD panel, right? That allows the NN to see details in light and dark areas at the same time without the tricks needed to get a compromised image for you to look at.
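A toy numerical illustration of that dynamic-range point (the bit depths and window values here are assumptions for the sketch, not the actual camera spec): two raw sensor readings that a naive 8-bit display mapping clips to pure black and pure white are still thousands of counts apart in the raw signal the network sees.

```python
def to_display(raw, black=800, white=1200):
    """Naive 8-bit display mapping: window a 12-bit raw value and clip.

    Anything below `black` renders as 0 and anything above `white`
    renders as 255, so detail outside the window is lost on screen.
    """
    level = (raw - black) / (white - black)
    return round(255 * min(1.0, max(0.0, level)))

# Two 12-bit sensor readings: a deep shadow and bright glare.
shadow, glare = 700, 3900
print(to_display(shadow), to_display(glare))  # 0 255: both clipped on screen
print(glare - shadow)  # 3200 raw counts of detail the display threw away
```

Real display pipelines use smarter tone mapping than a hard window, but the underlying trade-off is the same: the panel has to compress or discard range that is still present in the raw data.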
> Er, no. They are primarily the means to allow the NN to deduce the presence of occluded objects (e.g. a pedestrian walking behind a car). Nothing whatsoever to do with the sensors. As a side effect they can be used to handle temporary sensor loss (which is just a more extreme form of occlusion).

Yes, that's basically what I said (or meant to say) - I was including occlusion as a form of input noise/error. You just said they're used to handle sensor loss, but then said they have nothing to do with the sensors. They very much have to do with the sensors (or with compensating for their limitations).
> The only speculation being made in this thread is the assertion that the light leakage is a problem for FSD. This puts the burden of proof clearly on those making that speculation, but no reasonable evidence has yet been presented. I don't see how this is anything to do with being or not being an apologist.

I've seen no one give any 'proof' either way. You've made some arguments about the abilities of the cameras which may or may not be true, but you certainly haven't provided proof. I've made counter-arguments about them, but I will also admit that I don't have definitive proof.