
Discussion of sensor suites

I'm not even sure if redundancy is the right word here. I'm talking about adding capabilities that currently don't exist, e.g. seeing through a car behind you or to your sides. Radar can offer this; no camera can. However, no radar can see everything we need to see to drive, either. Hence the value of the combo.

The added benefit of course is additional security in case of sensor failure or blockage.

It's a bit similar with the rain sensor. Since it is actively lit and looks at the surface of the windshield, it has night-vision properties no camera has. It could have offered a useful second opinion (while also speeding up the transition).
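
To illustrate the principle: this is the generic optical rain-sensor trick (an emitter bouncing light off the inside of the glass), not Tesla's specific part, and the refractive indices below are just textbook values.

```python
# Sketch of the standard optical rain-sensor principle: total internal
# reflection at the glass/air boundary. Generic illustration, not a Tesla part.
import math

N_GLASS = 1.52   # typical windshield glass
N_AIR   = 1.00
N_WATER = 1.33

def critical_angle(n_inside, n_outside):
    """Angle of incidence (from the normal) above which light is
    totally internally reflected at the boundary."""
    return math.degrees(math.asin(n_outside / n_inside))

dry = critical_angle(N_GLASS, N_AIR)    # ~41 degrees
wet = critical_angle(N_GLASS, N_WATER)  # ~61 degrees

# An IR emitter aimed at ~45 degrees reflects back to the detector while the
# glass is dry, but leaks out into water droplets when it is wet, so a drop
# in returned intensity signals rain -- day or night, no ambient light needed.
print(f"dry critical angle: {dry:.1f} deg, wet: {wet:.1f} deg")
```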

To be clear, the radar doesn't see through the vehicle; it uses the bounce path under the car in front. So the angles need to be right for it to provide more data than a camera. If that is the sensor you rely on for blind spot/lane change detection, you need ~100% confidence in its sensing ability.
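
Here's a rough toy model of that bounce geometry, with made-up mounting heights and clearances just to show why the angles matter (none of these numbers are Tesla specs):

```python
# Toy single-bounce geometry for the "under the lead car" radar path.
# All heights and distances are illustrative round numbers.

def ray_height(x, x0, h0, x1, h1):
    """Height of a straight ray between (x0, h0) and (x1, h1), at position x."""
    return h0 + (h1 - h0) * (x - x0) / (x1 - x0)

def usable_bounce_points(h_radar, d_lead, clearance_lead, d_target, h_target):
    """Ground-bounce distances (m) for which one road bounce fits under the
    lead car's ground clearance and still reaches the target vehicle."""
    good = []
    for d_bounce in [i * 0.5 for i in range(1, int(d_target) * 2)]:
        if d_bounce <= d_lead:
            # bounce before the lead car: the rising leg (bounce -> target)
            # must pass under the lead car
            h_at_lead = ray_height(d_lead, d_bounce, 0.0, d_target, h_target)
        else:
            # bounce beyond the lead car: the falling leg (radar -> bounce)
            # must pass under the lead car
            h_at_lead = ray_height(d_lead, 0.0, h_radar, d_bounce, 0.0)
        if 0.0 < h_at_lead < clearance_lead:
            good.append(d_bounce)
    return good

# Radar ~0.5 m up in the bumper, lead car 15 m ahead with 0.18 m of clearance,
# car in front of that 35 m ahead, reflecting off its underbody at ~0.25 m:
pts = usable_bounce_points(0.5, 15.0, 0.18, 35.0, 0.25)
print(f"workable bounce points: {min(pts)} m to {max(pts)} m")
```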


If they had added an IR LED, that could have back-illuminated the rain as well. I do wonder if the camera heater is the cause of the non-response during light snow or mist (evaporation on contact).
 
I understand the desire for multiple data sources. But from a cost/complexity point of view, if you have two sensors and only one is needed, why have the other?
If you need radar because of camera failings, and the radar can operate with the camera occluded, then the radar should be the only sensor, since the camera is not adding anything. In the case where data is required from both, then both are needed. If, however, the camera's additional data is always required and the radar is there as a band-aid, then the radar fails to provide sufficient data on its own, and the system fails anyway.

Radar cannot read speed limit signs, stop signs, or traffic lights. Cameras cannot see in fog, nor do they keep distance between cars as well as radar-based cruise control, so both are needed. I am sure there are other examples.
 

Camera-only cruise control systems are actually coming online now that framerates and distance estimation are getting better. I believe GM is shipping a few now. In fact, Mobileye has done a lot of work in this field: http://mobileye.com/wp-content/uploads/2011/09/VisiobBasedACC.pdf
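
The basic maths behind camera-only following distance is just the pinhole model. Here's a minimal sketch with an assumed focal length, car width, and framerate (the Mobileye paper obviously does far more than this):

```python
# Pinhole-camera sketch of vision-only following distance and closing speed.
# Focal length, car width and frame interval are assumptions for illustration.

FOCAL_PX  = 1400.0   # assumed focal length, in pixels
CAR_WIDTH = 1.8      # assumed real width of the lead car, metres
FRAME_DT  = 1 / 36   # assumed frame interval, seconds

def range_from_width(width_px):
    """Pinhole model: range = f * real_width / apparent_width_in_pixels."""
    return FOCAL_PX * CAR_WIDTH / width_px

def closing_speed(width_prev_px, width_now_px, dt=FRAME_DT):
    """Closing speed from the change in estimated range between two frames."""
    return (range_from_width(width_prev_px) - range_from_width(width_now_px)) / dt

# Lead car grows from 60 px to 61 px wide in one frame:
print(range_from_width(61))    # ~41.3 m away
print(closing_speed(60, 61))   # ~25 m/s -- a single-pixel step is huge at this
                               # range, which is why framerate and sub-pixel
                               # tracking matter so much
```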

So far, no road conditions have been designed such that they cannot be driven with visible-light information alone; after all, humans are still expected to be able to drive their cars safely. Whether the hardware that exists today is advanced enough to perform this task is another matter.
 

Right. Radar can't replace cameras for sign recognition, and cameras can't see through fog. If you need both of those data sources in order to operate, the car cannot work in fog (because the camera doesn't). If the car can work in fog, then either the radar is not needed (the camera is good enough) or the camera is not needed (the radar is augmented with sign and live traffic-signal data).

In the current system, the radar is supplementing the camera, due in part to the neural network still being a work in progress. A driver should never overdrive their vision/headlights, so if a camera can see the traffic lights in time to stop, it can see obstacles in time to stop as well.
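
To put numbers on the "never overdrive your headlights" rule, here's a back-of-the-envelope sketch; the reaction time and braking deceleration are assumed round figures, not anything measured:

```python
# Compare stopping distance with how far ahead the driver/camera can see.
# Reaction time and deceleration are assumed round numbers (dry pavement).

G = 9.81

def stopping_distance(speed_ms, reaction_s=1.0, decel_ms2=0.7 * G):
    """Reaction distance plus braking distance."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

def max_safe_speed(visible_m, reaction_s=1.0, decel_ms2=0.7 * G):
    """Highest speed whose stopping distance still fits inside the visible
    range (solve v*t + v^2/(2a) = d for v, positive root)."""
    a, t, d = decel_ms2, reaction_s, visible_m
    return -a * t + (a * a * t * t + 2 * a * d) ** 0.5

for beam in (60, 100, 150):   # metres of usable illumination / vision
    v = max_safe_speed(beam)
    print(f"{beam:>4} m visible -> roughly {v * 3.6:.0f} km/h max")
```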
 

You forgot an important factor: the intelligence of the system. The more the system knows about its surroundings, the easier the decision making gets. So even if the extra radar input were not strictly necessary, it would make things easier for the computer.

For example, it's a lot easier to calculate distance and speed from a radar sensor than from a camera. You can do the same job with a camera, but then more computational power is needed, all of the time. And especially at night it gets harder and harder, and the computer needs to be more and more intelligent.
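
To show what "easier from radar" means in practice: range falls straight out of time of flight, and radial speed straight out of the Doppler shift, one echo at a time, with no multi-frame image processing. A minimal sketch with illustrative numbers:

```python
# Why radar range/speed is computationally cheap: one echo gives both.
# Carrier frequency is the usual automotive band; other numbers are examples.

C = 3.0e8           # speed of light, m/s
F_CARRIER = 76.5e9  # typical automotive radar carrier (76-77 GHz band)

def radar_range(round_trip_s):
    """Range from time of flight: the echo travels out and back."""
    return C * round_trip_s / 2

def radar_radial_speed(doppler_shift_hz):
    """Relative (radial) speed from the Doppler shift of the echo."""
    return doppler_shift_hz * C / (2 * F_CARRIER)

print(radar_range(0.4e-6))         # 0.4 microsecond round trip -> 60 m
print(radar_radial_speed(5100.0))  # ~5.1 kHz shift -> ~10 m/s closing speed
```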

People often argue that cameras must be enough because people don't have radar either. But a brain is very capable, and if driving at night in the rain is already stressful for a human brain, it will be even harder for a computer.
 

I understand the assistance factor. For some time, AP did not have the ability to do rain detection, so in that span a separate sensor would have helped and provided the functionality. But long term, the sensor only being needed temporarily would make it a liability (cost, and inconsistency during the changeover between systems), and it also would not be the best at keeping the area directly in front of the camera clear.

The same thing goes for front radar and vision. Radar gets things working sooner, but since it also provides bounce ("skip") data and a second data set for the AP hardware to work with, it may never be removed.

The crux of my argument is that either the system needs both sensors or it doesn't. If one sensor only makes things easier, then it really is redundant, since the computer doesn't work on a harder/easier scale; it works on a can/can't scale (although one could rate different driving situations and have a can/can't criterion for each, producing an overall capability map). The ones most affected by easier/harder are the programmers implementing the algorithms. That is one reason why LIDAR is so popular: it provides a point cloud of exactly where things are without needing additional processing (rain/moth noise excepted; take the max value of the region).
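
A toy version of that "max value of the region" filtering, assuming a simple (azimuth, range) point list rather than any real LIDAR driver:

```python
# Rain drops and insects show up as spurious *near* returns, so keeping the
# farthest return per angular bin throws most of that noise away.
# Simplified illustration; real sensors do per-pulse last-return selection.
from collections import defaultdict

def farthest_return_per_bin(points, bin_deg=1.0):
    """points: iterable of (azimuth_deg, range_m). Keep the longest range seen
    in each azimuth bin, discarding closer (likely spurious) returns."""
    best = defaultdict(float)
    for az, rng in points:
        key = round(az / bin_deg)
        best[key] = max(best[key], rng)
    return dict(best)

scan = [(10.2, 0.8),   # rain drop close to the sensor
        (10.4, 23.6),  # actual car behind it
        (11.1, 24.0),
        (11.3, 1.1)]   # another drop
print(farthest_return_per_bin(scan))   # keeps only the ~24 m returns per bin
```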

My belief is that Tesla is aiming for the final sensor suite and skipping intermediate stages that would be more easily achieved with additional sensors. This means waiting longer for functionality, but not having duplicated/redundant/obsoleted work (no stop-gaps).
 

Well, time is of course the main problem here. Long term can be 5 years, but it can also be 20 years. So a current Tesla with HW2 might not reach fully autonomous driving until long after cars with more sensors do. And even then it might require a more powerful computer, at which point the residual value of the car might not even make the upgrade worthwhile.

So IMO the best strategy would be to have lots of sensors first and then remove them as the computers and software get more capable. Sure, that means early cars will be more expensive, but that's usually how it is with technology.