
2019.40.1.1

I don't think this is a regression, or that the problem is related to this update at all. I've seen it a few times before, in much older releases than this. It could be triggered by a GPS module restart or something during the update.

Occasional inaccuracies are assumed, you can get that with reflections and weak signal, I've seen that before. But this was basically offset an exact amount in one direction, and didn't fix itself over the course of a 30 mile drive over 45 minutes, including a reboot. This is just broken. It should have fixed itself within seconds of getting a good GPS fix.
 
There are a lot of complaints about Tesla not using a few-dollar part for rain sensing. The easiest thing to point a finger at is the idea of Tesla cutting costs, but I think it has to go deeper than that.

Can anyone here who works in the AI/NN field give another guess as to why Tesla chose this route? From a layman's perspective, if in some sense we are trying to imitate a human driver, it makes sense to me that the system should be able to recognize a bad image and take some sort of action based on that, just like we as humans do, whether it's heavy rain, frost, or fog outside or inside the glass. Action based on this perception can vary from running the wipers, to cleaning with wiper fluid, to turning on the defrost, to slowing down, handing back control, or stopping until conditions improve. While it's incredibly frustrating now to deal with the infancy of this ability, I see the function of wiping the windows as just the tip of the iceberg of a greater perception solution. Any thoughts would be appreciated.

Also, not trying to argue against legitimate complaints. Just trying to understand.
 
It's a test bed for their software and development methods. If you can't get a neural net for the wipers working, you'll never get robotaxis. The scope of what you need the system to do is much simpler than zero intervention city driving, and if they make a mistake in the neural net tuning they don't have the same kind of risk to human life.
 
Occasional inaccuracies are assumed, you can get that with reflections and weak signal, I've seen that before. But this was basically offset an exact amount in one direction, and didn't fix itself over the course of a 30 mile drive over 45 minutes, including a reboot. This is just broken. It should have fixed itself within seconds of getting a good GPS fix.
The times I've seen this before, a few years ago in a Tesla, it was the same: a consistent offset of about 50 meters in one direction. A few days later the issue had resolved itself.
 
I've got 40.2, and I can say that it is predicting I'm not on the highway much more than ever before. It's beyond frustrating at this point, and getting worse as time goes by. Add that to phantom braking still being an issue, high speed approaches to offramps, not obeying a lane edge defined by a pile of snow rather than a lane line, and excessive weaving inside a lane, and I'll be honest- 40.2 is a total dud in my mind. I have no idea what they're doing to produce regressions this bad and this obvious, or how they miss them in their regression tests, but they're quickly ruining regular AP for me after ruining NoAP earlier in the year.
 
I've got 40.2, and I can say that it is predicting I'm not on the highway much more than ever before. It's beyond frustrating at this point, and getting worse as time goes by. Add that to phantom braking still being an issue, high speed approaches to offramps, not obeying a lane edge defined by a pile of snow rather than a lane line, and excessive weaving inside a lane, and I'll be honest- 40.2 is a total dud in my mind. I have no idea what they're doing to produce regressions this bad and this obvious, or how they miss them in their regression tests, but they're quickly ruining regular AP for me after ruining NoAP earlier in the year.

Based on you receiving 40, I assume you have your update settings set to advanced. Based on your desire for fewer bugs, I would suggest standard, like myself. Still get the major update once or twice per month, but I skip the first couple of “alpha test” waves. Seems to work fairly well.
 
Still get the major update once or twice per month, but I skip the first couple of “alpha test” waves. Seems to work fairly well.
Yeah that is what I do these days, been burnt too many times now.
Mine is still on 2019.32.2.2 because, for what I want working, that release seems to be stable. And given mine is an older model (yeah, 2017 is old in Tesla world), the new fancy things that come out often aren't applicable for me anyway.
Yes, it would be nice if the software update nag didn't consume the screen every time I drive, but that is a small price to pay vs. unstable functions.
 
Can anyone here who works in the AI/NN field give another guess as to why Tesla chose this route? From a layman's perspective, if in some sense we are trying to imitate a human driver, it makes sense to me that the system should be able to recognize a bad image and take some sort of action based on that, just like we as humans do.

There are several problems with this. One is that the AP cameras are right up against the glass, but their lenses are focused beyond the glass, so they can't see what's directly on the glass in the same way that our eyes do. Another is that those cameras only see a tiny part of the windshield, and it's a heated part of the windshield, so that it doesn't fog up like the rest of the windshield does.

But I would really strongly take issue with the central thesis here -- that the AP deep net can or should do things "just like we as humans do". The human brain is orders of magnitude more powerful than any deep net so far produced, and the AP hardware in Teslas is orders of magnitude less powerful than the most capable deep net so far produced. If their plan is to do things "just like we humans do" then they are doomed. If they are going to make this work, they're going to make it work by using clever engineering to balance out the limitations of their deep net when compared to humans.

Clever engineering such as: Don't waste your precious and expensive inference (and software engineering) resources -- which should be 100% dedicated to solving the hard problems -- solving a problem that's easily solved by a $5 purpose-built sensor.
 
One is that the AP cameras are right up against the glass, but their lenses are focused beyond the glass, so they can't see what's directly on the glass in the same way that our eyes do. Another is that those cameras only see a tiny part of the windshield, and it's a heated part of the windshield, so that it doesn't fog up like the rest of the windshield does.

I've noticed that the slope of the glass where the cameras are mounted is a lot more horizontal than the majority of the windshield. Whenever I'm frustrated that the autowipers aren't wiping, I glance up at the amount of rain at the top of the windshield and notice that it's distinctly less than in the center of my vision.
 
There are several problems with this. One is that the AP cameras are right up against the glass, but their lenses are focused beyond the glass, so they can't see what's directly on the glass in the same way that our eyes do. Another is that those cameras only see a tiny part of the windshield, and it's a heated part of the windshield, so that it doesn't fog up like the rest of the windshield does.

But I would really strongly take issue with the central thesis here -- that the AP deep net can or should do things "just like we as humans do". The human brain is orders of magnitude more powerful than any deep net so far produced, and the AP hardware in Teslas is orders of magnitude less powerful than the most capable deep net so far produced. If their plan is to do things "just like we humans do" then they are doomed. If they are going to make this work, they're going to make it work by using clever engineering to balance out the limitations of their deep net when compared to humans.

Clever engineering such as: Don't waste your precious and expensive inference (and software engineering) resources -- which should be 100% dedicated to solving the hard problems -- solving a problem that's easily solved by a $5 purpose-built sensor.

Thanks for the thoughts. But I did specify that we are trying to imitate, not do exactly as humans do, as you quoted. I would say other applications of NNs are never an exact replication of the human thought process. But looking at AlphaGo, which is a much simpler problem to solve: while the machine started out imitating a human, it ultimately took non-human actions, but the end result justified the means, which was winning the game. In self-driving, though, we still have to have the system act like a human driver so it doesn't throw other drivers off from their expectations.

But interesting thoughts on the windshield. I would also say, though, that it doesn't matter what the rest of the windshield shows. As long as it can detect a poor image through recognition of blur patterns, it doesn't need a camera focused on the windshield itself.
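To make that idea concrete, here's a rough sketch of how a "blur pattern" check could work using nothing but the camera frame itself. This is purely illustrative, not anything Tesla has published: the threshold, function name, and file name are all made up for the example, and a crude variance-of-Laplacian sharpness metric stands in for whatever learned detector a real system would use.

```python
# Illustrative sketch only -- not Tesla's actual approach.
# Flags a camera frame as "degraded" when overall sharpness drops
# (rain, fog, or smear on the glass), using the variance of the
# Laplacian as a crude blur metric.
import cv2

BLUR_THRESHOLD = 100.0  # arbitrary; would need tuning per camera/exposure


def frame_looks_degraded(frame_bgr) -> bool:
    """Return True if the frame is likely blurred by rain/fog/smear."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < BLUR_THRESHOLD


if __name__ == "__main__":
    # Hypothetical frame grabbed from a forward camera.
    frame = cv2.imread("sample_frame.jpg")
    if frame is not None and frame_looks_degraded(frame):
        print("Image quality poor -- run wipers / defrost / alert driver")
```

A production system would presumably replace the hand-tuned threshold with a learned classifier, but the signal being exploited is the same: the image itself degrades, so no dedicated view of the glass is strictly required.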
 
As long as it can detect a poor image through recognition of blur patterns, it doesn’t need a camera focused on the windshield itself.

This may also fall along the lines of Tesla purposefully putting off work on features that will eventually not be needed once they solve greater autonomy.

A lot of people are annoyed by the steering wheel nags, but I feel like Tesla doesn't put too much energy into solving those woes because they're hoping that eventually holding the steering wheel won't be needed.

Likewise, they might be hoping that soon it won't matter whether the driver has a clear view of the road, only the cameras.
 
But looking at AlphaGo, which is a much simpler problem to solve: while the machine started out imitating a human, it ultimately took non-human actions, but the end result justified the means, which was winning the game.

As you point out, AlphaGo solved a vastly simpler problem. And it was solved with vastly more powerful hardware. So vastly better hardware applied to a vastly simpler problem just manages to beat the humans.

Just let that sink in for a bit.
 
As you point out, AlphaGo solved a vastly simpler problem. And it was solved with vastly more powerful hardware. So vastly better hardware applied to a vastly simpler problem just manages to beat the humans.

Just let that sink in for a bit.


Totally understand that. That's why it was stated.

Whether our current hardware can fully solve self-driving is debated on these forums. We also debate what the end state of full self-driving really looks like... will it be mere imitation, or some sort of machine-driven evolution? I brought up that point not to debate hardware and what problems we can solve with it, but to point out that even in a much simpler scenario, the machine will begin to act in ways we never imagined in order to come up with a solution.

If we want a solution for self-driving that is vastly more complex than Go, why cripple it with a system it has to depend on to know whether there is rain or not? Again, I totally understand that the system sucks now; I find it a pain in the butt at times as well. But might Tesla see the cheap piece of hardware as a crutch in the overall process of getting to self-driving and to better perception not only of the road, but of vehicle status/cameras etc.?
 
I've got 40.2, and I can say that it is predicting I'm not on the highway much more than ever before. It's beyond frustrating at this point, and getting worse as time goes by. Add that to phantom braking still being an issue, high speed approaches to offramps, not obeying a lane edge defined by a pile of snow rather than a lane line, and excessive weaving inside a lane, and I'll be honest- 40.2 is a total dud in my mind. I have no idea what they're doing to produce regressions this bad and this obvious, or how they miss them in their regression tests, but they're quickly ruining regular AP for me after ruining NoAP earlier in the year.

You reminded me of a couple things:
One is about a month ago, I asked the car to find a couple of local places like Home Depot, and Best Buy. It kept showing places in San Diego (where I was the week earlier), even though the map showed me to be in the proper location 600 miles away from San Diego. It required an update to straighten out that issue.
Second, I am also annoyed by the weaving inside of a lane. Yeah, I know a lot of people like that feature of moving a little to the side when big vehicles go by, but I don't like it. In my car it is too slow doing it anyway. A fast pickup truck goes by and is at least 100 feet in front of me by the time my car decides it wants to move a couple feet away from that lane. I wish I could disable that seasick feature and let the other people who like it use it :)
 
Based on you receiving 40, I assume you have your update settings set to advanced. Based on your desire for fewer bugs, I would suggest standard, like myself. Still get the major update once or twice per month, but I skip the first couple of “alpha test” waves. Seems to work fairly well.

Hmmmm, I am not on the advanced setting because I don't want every little update the minute it comes out, yet I kept being annoyed by Tesla wanting to install that version literally within the hour that I started seeing it roll out to other cars. I keep ignoring it after it failed to install the first time and I needed service to pull it out. Now it wants to install again, and I have to ignore it every time I get in the car. I wish there was a "Go away and shut up" option to have it leave me alone until a new version pops up. Yes, I know they are not going to offer such a feature, for a couple of decent reasons I can think of, but I want it anyway :)