2017.50.3.f3425a1 is out!

My theory:

1. Using vision instead of a dedicated sensor reduces the amount of hardware required, which means lower cost (slightly higher margins) and less hardware to break. It also means simpler manufacturing/assembly. It might save $10 in hardware. That doesn't sound like much, but when Tesla is making >1M cars a year, that's $10M in additional profit. That's like adding 100 very well-paid employees to the payroll for zero additional cost, which, when put in those terms, is a lot. I doubt this is a primary reason, but it's a benefit. A penny saved is a penny earned.

2. The neural net likely needs to know when it's raining or when the roadways are wet anyway (for autonomous driving, knowing this would be useful to adjust maximum speed in curves, etc.; a toy sketch of that curve-speed math follows below).

Since (2) is a bit of a necessity, (1) follows.
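To put rough numbers on the curve-speed part of (2): the standard friction-limited cornering speed for a flat curve is v_max = sqrt(mu * g * r). Here's a minimal Python sketch; the dry/wet friction coefficients are rough textbook assumptions of mine, not anything Tesla publishes.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

# Assumed, approximate tire-road friction coefficients (illustrative only).
FRICTION = {
    "dry": 0.9,
    "wet": 0.5,
}

def max_curve_speed(radius_m: float, surface: str) -> float:
    """Friction-limited speed (m/s) for an unbanked curve: v = sqrt(mu*g*r)."""
    return math.sqrt(FRICTION[surface] * G * radius_m)

for surface in ("dry", "wet"):
    v = max_curve_speed(radius_m=100.0, surface=surface)
    print(f"{surface}: {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

On a 100 m radius curve this drops the limit from roughly 107 km/h dry to about 80 km/h wet, which is exactly the kind of adjustment a wetness-aware NN could feed into path planning.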
 
Makes the most sense. So basically it sounds like a simple neural net has done all the processing and then spits the answers back to the car. Probably learns more as we go along, I take it?

My question was really the technicality of why it seems like the whole system has been over-engineered. It has nothing to do with a rain sensor or anything like that... You guys are entitled to your own opinions, but I am trying to figure out how/why they did it this way.
It seems reasonable to me to abandon old technology like a rain sensor that has limited functionality (just rain detection).
Full self-driving may require detection of many potential conditions which may warrant evasive action or a full stop...
Like the ability to detect snow, ash, sand, dust, mud, feathers, nuclear fallout, etc. It seems to me that cameras would be a logical choice as a multipurpose sensor that could be adapted to detect many obstructive materials and conditions.
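As a purely illustrative sketch of that "one camera, many conditions" idea: a single vision model can output a probability per obstruction class. Everything below (class list, architecture, names) is my own toy assumption, not Tesla's network.

```python
import torch
import torch.nn as nn

# Hypothetical obstruction classes a camera-based detector might output.
CLASSES = ["clear", "rain", "snow", "dust", "mud"]

class ObstructionNet(nn.Module):
    """Toy CNN: camera frame in, logits over obstruction classes out."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frame).flatten(1))

# One fake 3x240x320 camera frame -> class probabilities.
net = ObstructionNet()
probs = torch.softmax(net(torch.rand(1, 3, 240, 320)), dim=1)
print(dict(zip(CLASSES, probs[0].tolist())))
```

The point is just that adding "ash" or "snow" is a new output class and more training data, not a new piece of hardware.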
 
Makes the most sense. So basically it sounds like a simple neural net has done all the processing and then spits the answers back to the car. Probably learns more as we go along, I take it?

My question was really the technicality of why it seems like the whole system has been over-engineered. It has nothing to do with a rain sensor or anything like that... You guys are entitled to your own opinions, but I am trying to figure out how/why they did it this way.

My guess is that Tesla will need AP to recognize precipitation, so why add the component and assembly cost of multiple rain sensors?
 
Just as a means of managing expectations: the rain-sensing, automatically adjusting wipers on my AP1 MS are the worst of any car I've owned, basically unusable. They speed up or slow down for no discernible reason, mostly wiping about twice as fast as they should. There have been a couple of software revisions to fix this over the last 3 years, but they have made no difference AFAIK.
 
It seems reasonable to me to abandon old technology like a rain sensor that has limited functionality (just rain detection).
Full self-driving may require detection of many potential conditions which may warrant evasive action or a full stop...
Like the ability to detect snow, ash, sand, dust, mud, feathers, nuclear fallout, etc. It seems to me that cameras would be a logical choice as a multipurpose sensor that could be adapted to detect many obstructive materials and conditions.
Here's an example of a mysterious gas cloud obstructing my view that I encountered while driving. It forced me to stop and proceed very slowly. I would hope that FSD, or Autopilot for that matter, would exercise the same amount of caution.

 
My guess is that Tesla will need AP to recognize precipitation, so why add the component and assembly cost of multiple rain sensors?

Why not have AP use a trusted rain sensor that has already been debugged? They can experiment with software rain detection once self-driving is a reality. In this world, there is only one rain sensor, from a reliable manufacturer.

Detecting water, snow and ice on the road itself is a different story. These can exist whether or not it is actively raining or snowing. I suspect data from the traction control system could assist machine vision here.
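A hedged sketch of what that fusion might look like: blend the camera's wetness estimate with wheel-slip evidence from traction control. The signal names, weights, and threshold below are all invented for illustration; I have no idea how Tesla actually combines these.

```python
def road_is_slippery(vision_wet_prob: float,
                     wheel_slip_ratio: float,
                     traction_events_per_km: float) -> bool:
    """Blend camera and traction-control evidence into one judgment."""
    # Normalize slip evidence to roughly [0, 1] (assumed scaling).
    slip_evidence = min(1.0, wheel_slip_ratio * 10.0)
    event_evidence = min(1.0, traction_events_per_km / 5.0)

    # Weighted vote; the weights and cutoff are arbitrary placeholders.
    score = (0.5 * vision_wet_prob
             + 0.3 * slip_evidence
             + 0.2 * event_evidence)
    return score > 0.6

# Camera fairly sure it's wet, plus noticeable wheel slip -> slippery.
print(road_is_slippery(vision_wet_prob=0.7,
                       wheel_slip_ratio=0.08,
                       traction_events_per_km=2.0))  # True
```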
 
Why not have AP use a trusted rain sensor that has already been debugged? They can experiment with software rain detection once self-driving is a reality. In this world, there is only one rain sensor, from a reliable manufacturer.

Detecting water, snow and ice on the road itself is a different story. These can exist whether or not it is actively raining or snowing. I suspect data from the traction control system could assist machine vision here.
It's water under the bridge (pun intended). No rain sensors since AP2. And now the problem is apparently solved (fingers crossed).
 
Having an external sensor that does one particular thing doesn’t seem very ‘Tesla’.

When I am indoors and I look out of a window, I can tell if it's raining, snowing, icy, dry, etc., so it makes sense that a camera can do the same.

It was always a combo sensor, which they still have for e.g. humidity and light; they just took one part of the combo sensor out (i.e., replaced it with a slightly cheaper combo sensor).

And then the reality is that AP2 cars were without rain sensing for a full year, some even longer than that...

Including a rain sensor at least for the transition would have been very nice IMO. And given the sensor is there anyway (just without the rain function), it would have offered nice redundancy too.
 
My guess is that Tesla will need AP to recognize precipitation, so why add the component and assembly cost of multiple rain sensors?

Well, the biggest answer would have been: because they lacked the software for a year. Including a rain sensor in early AP2/2.5 cars, at least, would have made sense.

The second answer is: the sensor is still there anyway, it just lacks the rain function (it does humidity and redundant light detection, etc.). All this is described in detail in the service manuals and much discussed on TMC.

They saved some pennies by not including rain sensing. The myth that there are fewer sensors is simply not true; they just included a less capable sensor.
 
The myth that there are fewer sensors is simply not true; they just included a less capable sensor.

Not sure, but aren't there fewer sensors on AP2 than AP1?

Look at this picture of the setup for AP1
[Image: AP1 sensor setup]
 
Why not have AP use a trusted rain sensor that has already been debugged? They can experiment with software rain detection one self driving is a reality. In this world, there is only one rain sensor, from a reliable manufacturer.

Using an existing sensor as the reference data source is interesting, but I don't think it meets the real requirements of auto-wipe.

Autopilot relies on the cameras for most of its interaction with the world. This creates two intertwined requirements:
1) have a clear view of the world via the camera
2) know when the camera image is obscured
The detection parameters of a standard sensor likely do not align with the thresholds of optical performance needed by AP.
Further, as part of its base operation, AP must know if its view is obscured. A separate rain sensor does not do that and cannot train for it. Only the AP cameras know if they are blocked/smudged.
So training from a rain sensor could get you to auto-wipe faster, but the training and resultant NN would be an evolutionary dead end with respect to the end goal of a camera self-health check. Along with that, I expect auto wash/spray to be added down the road.
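For what it's worth, here is one classic proxy for "is the camera's view obscured?": a smeared or water-covered lens blurs the image, which lowers the variance of the Laplacian. This is just a textbook sharpness metric with a made-up threshold, not Tesla's method, and a real system would calibrate it per camera.

```python
import cv2
import numpy as np

BLUR_THRESHOLD = 100.0  # placeholder; would need tuning on known-clear frames

def view_is_obscured(frame_bgr: np.ndarray) -> bool:
    """Low variance of the Laplacian = low sharpness = likely obscured."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < BLUR_THRESHOLD

# Synthetic demo: random noise is "sharp"; a heavily blurred copy is not.
frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
blurred = cv2.GaussianBlur(frame, (21, 21), 10)
print("clear frame obscured? ", view_is_obscured(frame))    # False
print("blurred frame obscured?", view_is_obscured(blurred))  # True
```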
 
Just as a means of managing expectations: the rain-sensing, automatically adjusting wipers on my AP1 MS are the worst of any car I've owned, basically unusable. They speed up or slow down for no discernible reason, mostly wiping about twice as fast as they should. There have been a couple of software revisions to fix this over the last 3 years, but they have made no difference AFAIK.

In my opinion it's dramatically better than it used to be, and I have no significant qualms with it. Some people seem more sensitive to it than I am, though.
 
I see Tesla as a SW company. One of my mirrors wouldn't fold out all the way. I called, and they said they had a new motor vendor and that it needed a bit more current to drive it. They sent a patch and all was well. The buttons don't control the mirrors; they signal the computer. That allows updates and new features. Same with other systems like, say, rain sensors. I don't think it's about money. It's about control. Once it's in SW, there is more control, future features, etc. Even the glove box is SW controlled. It's a philosophy.
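A toy sketch of that "buttons signal the computer" idea, with every name invented by me: the button publishes an event, and the behavior lives in a handler that a software update can replace, e.g. raising the motor current for a new vendor's part.

```python
class MirrorController:
    """Hypothetical software-defined mirror control."""
    def __init__(self, drive_current_ma: int = 400):
        self.drive_current_ma = drive_current_ma  # tunable in software

    def fold_out(self) -> str:
        return f"driving mirror motor at {self.drive_current_ma} mA"

# The button knows nothing about mirrors; it just signals the computer.
handlers = {"mirror_button": MirrorController().fold_out}

def on_button_press(button_id: str) -> str:
    return handlers[button_id]()

print(on_button_press("mirror_button"))  # 400 mA

# An OTA "patch" is just new software state: a new motor vendor
# needing a bit more current, as in the anecdote above.
handlers["mirror_button"] = MirrorController(drive_current_ma=550).fold_out
print(on_button_press("mirror_button"))  # 550 mA
```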