Electroman
Well-Known Member
For those saying that radar and lidar will somehow magically improve autonomous driving capability: they won't.
Radar/lidar is extremely useful in a tightly controlled environment, where every object in the vicinity has been catalogued and identified and there are no surprises. A good example is Dragon docking with the ISS.
It is much less useful in a highly complex and chaotic environment such as driving on highways or city streets, where there are objects moving at different speeds in different directions, stationary objects, objects that appear out of nowhere, unusual shapes, and so on. You run into so many false positives that the data often becomes useless for making split-second decisions. When you combine that with visual data, things get even harder, because the two sources often disagree with each other.
What Tesla found is that while radar is useful in cases like getting an accurate speed reading for the car in front of you, they often ended up overriding the radar input in favor of vision. So much so that, in the big scheme of things, the value radar adds is outweighed by the spurious data it produces in complex situations, not to mention the extra compute cycles spent processing radar/lidar data.
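To make the arbitration problem concrete, here is a minimal sketch (my own illustration, not Tesla's actual logic) of fusing a radar speed estimate with a camera-derived one. The function name, tolerance value, and fallback policy are all assumptions for illustration; the point is simply that when the sensors disagree, the system has to pick a winner, and the post describes Tesla picking vision:

```python
# Illustrative sketch only -- NOT Tesla's actual algorithm.
# Arbitrate between a radar speed estimate and a vision speed estimate
# for the car ahead. On disagreement, treat the radar return as a
# possible false positive (e.g. a bridge or manhole cover) and fall
# back to vision, as the post describes.

def fuse_speed(radar_mps, vision_mps, tolerance_mps=2.0):
    """Return (fused speed in m/s, whether radar was trusted)."""
    if abs(radar_mps - vision_mps) <= tolerance_mps:
        # Sensors agree: average them. A real system might weight
        # radar higher here, since it measures closing speed directly.
        return (radar_mps + vision_mps) / 2.0, True
    # Sensors disagree: discard the radar reading and trust vision.
    return vision_mps, False

# Agreement: both sensors track a car doing roughly 25 m/s.
speed, radar_used = fuse_speed(25.1, 24.7)

# Disagreement: radar reports a stationary return while vision
# tracks a moving car -- the radar input is overridden.
speed2, radar_used2 = fuse_speed(0.0, 24.7)
```

Note that every such override is wasted work: the radar data was collected, processed, and then thrown away, which is the compute-cycle cost the post refers to.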
So they made the decision to remove radar, depend entirely on cameras, and devote the full processing power to the feeds from the cameras, improving from there. Their reasoning is simple:
Driving on roads is an environment tailor-made for humans with two eyes and a brain. It can only be solved with vision and processing power. Combining different types of inputs may seem helpful in the early stages for simple situations, but it soon leads to a dead end (a local maximum).
And so far they seem to be on the right track. If the others (Waymo, Uber, Mobileye) had been correct, they would have achieved autonomy by now. They haven't. Neither has Tesla (although many would fairly claim Tesla is ahead in generalized, non-geofenced autonomy).
So the jury is still out.