I think the discussion about LIDAR vs radar vs cameras and BSM sensing strategies is cool and all, but it's going way off topic.
I'm not even sure if redundancy is the right word here. I'm talking about adding capabilities that currently don't exist, e.g. seeing through a car behind you or on your sides. Radar can offer this, no camera can. However no radar can see everything we need to see to drive, either. Hence the value of the combo.
The added benefit of course is additional security in case of sensor failure or blockage.
A bit similar with the rain sensor. Since it is actively lit and looks at the surface of the windshield, it has night-vision properties no camera has. It could have offered a useful second opinion (while also speeding up the transition).
I understand the desire for multiple data sources. But from a cost/complexity POV, if you have two sensors and only one is needed, then why have the other?
If you need radar due to camera failings, and radar can operate with occluded camera, then the radar should be the only sensor since the camera is not adding anything. In the case where data is required from both, then both are needed. If, however, the camera's additional data is always required and the radar is there as a band-aid, then the radar fails to provide sufficient data, and the system fails anyway.
Radar cannot read speed limit/stop signs and traffic lights. Cameras cannot see in fog or work well to keep distance between cars when on radar cruise, so both are needed. I am sure that there are other examples.
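The can/can't argument above can be made concrete with a toy case analysis. For each driving situation, list which sensors can individually supply the data that situation requires; any sensor that is the only option for some situation is indispensable. The situations and sensor assignments below are illustrative examples, not Tesla's actual design:

```python
# Toy model of sensor necessity: a sensor is required if at least one
# situation can ONLY be handled by that sensor. (Illustrative values.)

SITUATIONS = {
    # situation -> set of sensors that each individually suffice
    "clear_day_following": {"camera", "radar"},  # either alone works
    "read_stop_sign":      {"camera"},           # radar can't read signs
    "dense_fog_following": {"radar"},            # camera is blinded
}

def required_sensors(situations):
    """Sensors that are the only option for at least one situation."""
    return {next(iter(s)) for s in situations.values() if len(s) == 1}

print(required_sensors(SITUATIONS))  # {'camera', 'radar'}: both indispensable
```

Per this model, dropping either sensor leaves some situation uncovered, which is exactly the "both are needed" conclusion.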
In the current system, the radar is supplementing the camera, in part because the NN is still a work in progress.
Camera-only cruise control systems are actually coming online now that framerates and distance estimation are getting better. I believe GM is shipping a few now. In fact, MobileEye has done a lot of work in this field: http://mobileye.com/wp-content/uploads/2011/09/VisiobBasedACC.pdf
You forgot an important factor: the intelligence of the system. The more the system knows about its surroundings, the easier the decision making gets. So even if the extra radar input weren't strictly necessary, it would make things easier for the computer.
For example, it's a lot easier to calculate distance and speed from a radar sensor than from a camera. You can do the same job with a camera, but then more computational power is needed, all of the time. And especially at night it gets harder and harder, and the computer needs to be more and more intelligent.
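To illustrate the asymmetry: radar reads range and closing speed directly (time-of-flight plus Doppler), while a monocular camera has to infer range, e.g. via the pinhole model from an object's known real-world size. A minimal sketch, with made-up numbers; the function name and values are mine, not from any actual AP implementation:

```python
# Pinhole-model range estimate: distance ~ focal_length_px * real_size / pixel_size.
# Radar gets this number for free; a camera must know the object's true size
# and its image size, both of which are themselves estimation problems.

def camera_distance(focal_px, real_height_m, pixel_height):
    """Estimate distance to an object of known height from its image height."""
    return focal_px * real_height_m / pixel_height

# A 1.5 m tall car spanning 30 px, with a 1000 px focal length:
print(camera_distance(1000, 1.5, 30))  # 50.0 (meters)
```

Note the estimate degrades as the pixel height gets small or noisy (distant objects, low light), which is exactly where radar keeps working.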
People often argue that cameras must be enough because people don't have radar either. But a brain is very capable, and if driving at night or in the rain is already stressful for a human brain, it will be even harder for a computer.
I understand the assistance factor. For some time AP did not have the ability to do rain detection, so in that span a separate sensor would have helped and provided the functionality. But long term, the temporary need for a rain sensor would be a liability (cost, plus inconsistency during the changeover between systems), and it also would not be the best at clearing the area directly in front of the camera.
Same thing goes for front radar and vision. Radar gets things working sooner, but since it also provides a second data set for the AP HW to work with, it may never be removed.
The crux of my argument is that either the system needs both sensors or it doesn't. If one sensor only makes things easier, then it really is redundant, since the computer doesn't work on a harder/easier scale; it works on a can/can't scale (although one could rate different driving situations and, for each of those, have a can/can't criterion, producing an overall capability map). The ones most affected by easier/harder are the programmers implementing the algorithms. That is one reason why LIDAR is so popular: it provides a point cloud of exactly where things are without needing additional processing (rain/moth noise excepted; take the max value of the region).
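The "take the max value of the region" trick mentioned above can be sketched in a few lines: raindrops and insects produce spurious short-range returns, so keeping the farthest return in a small neighborhood suppresses them. A toy 1-D example (real LIDAR pipelines are far more involved, and the function name and scan values are mine):

```python
# Max-filter over a sliding window: spurious near returns (rain, moths)
# are replaced by the farthest range seen in their neighborhood.

def despeckle(ranges, window=3):
    """Replace each range reading with the max of its neighborhood."""
    n = len(ranges)
    out = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        out.append(max(ranges[lo:hi]))
    return out

scan = [20.0, 20.1, 3.2, 20.0, 19.9]  # 3.2 m = raindrop in front of a wall
print(despeckle(scan))  # [20.1, 20.1, 20.1, 20.0, 20.0]
```

The tradeoff is that a genuine small near obstacle would also be filtered out, which is why this is a noise heuristic, not a detection algorithm.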
My belief is that Tesla is aiming for the final sensor goal and skipping intermediate stages that would be more easily achieved with additional sensors. This means waiting longer for functionality, but not having duplicated/redundant/obsoleted work (no stop-gaps).