Possible removal of radar?


Buckminster

I don't believe Elon will remove radar.
I understood that tweet to mean that they're removing radar from the perception stack, but that doesn't mean it won't be used by any other neural nets in the car (or that its output won't be used outside of the neural nets).

We know from last October that Tesla is investing in better radar units: Tesla is adding a new '4D' radar with twice the range for self-driving - Electrek. Unless things have seriously changed in the last 5 months, removing radar would be an odd decision.
Place to discuss.
 
Yeah, just expanding on my original message:

I believe that the radar returns are presently incorporated as an input into the perception neural network, to the point where faulty returns from overpasses can momentarily make the car believe there's an obstacle in the way (causing phantom braking). That also means that they need to train the forward-facing perception separately from the other cameras (as we only have forward-facing radar).

Instead they're moving to a system where perception is trained purely on camera output (with the network taking input from all cameras simultaneously and outputting the bird's-eye view). So what do they plan to do with the radar returns if they're not being used to estimate the surrounding environment? They could be feeding the output from the perception network into a different planning neural network (one that decides how to plan a route around the environment, and when to accelerate, brake, turn, etc.), and the radar returns could be a separate input into this planning network. This would allow the car to still factor in radar returns when deciding how to drive, while also improving the accuracy and trainability of the perception network.
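
To make that concrete, here's a minimal PyTorch sketch of the split I'm describing. This is not Tesla's actual architecture, just an illustration with made-up module names and shapes: a camera-only perception network would produce the bird's-eye-view feature map, and a separate planning network would consume that map plus the radar returns as an independent input.

```python
# Hypothetical sketch only: a planning network that takes camera-derived
# bird's-eye-view features and radar returns as two separate inputs.
import torch
import torch.nn as nn

class PlanningNet(nn.Module):
    def __init__(self, bev_channels=32, radar_dim=16, n_actions=3):
        super().__init__()
        # Encode the bird's-eye-view feature map from the camera-only perception stack.
        self.bev_encoder = nn.Sequential(
            nn.Conv2d(bev_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Encode the radar returns in a separate branch.
        self.radar_encoder = nn.Sequential(nn.Linear(radar_dim, 16), nn.ReLU())
        # Fuse both and output control targets (e.g. steer, accelerate, brake).
        self.head = nn.Linear(16 + 16, n_actions)

    def forward(self, bev_features, radar_returns):
        fused = torch.cat(
            [self.bev_encoder(bev_features), self.radar_encoder(radar_returns)],
            dim=-1,
        )
        return self.head(fused)

# Dummy shapes, for illustration only.
planner = PlanningNet()
bev = torch.randn(1, 32, 64, 64)   # output of the camera-only perception network
radar = torch.randn(1, 16)         # flattened radar returns
controls = planner(bev, radar)
```

The point is that the radar never touches perception; it only influences the driving decisions downstream.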
 
  • Like
Reactions: aymelis
Yeah, just expanding on my original message:

I believe that the radar returns are presently incorporated as an input into the perception neural network, to the point where faulty returns from overpasses can momentarily make the car believe there's an obstacle in the way (causing phantom braking). That also means that they need to train the forward-facing perception separately from the other cameras (as we only have forward-facing radar).

Instead they're moving to a system where perception is trained purely on camera output (with the network taking input from all cameras simultaneously and outputting the bird's-eye view). So what do they plan to do with the radar returns if they're not being used to estimate the surrounding environment? They could be feeding the output from the perception network into a different planning neural network (one that decides how to plan a route around the environment, and when to accelerate, brake, turn, etc.), and the radar returns could be a separate input into this planning network. This would allow the car to still factor in radar returns when deciding how to drive, while also improving the accuracy and trainability of the perception network.

I guess I don't understand. Why keep a sensing system that is prone to so many errors if there is no advantage to it?
 
I guess I don't understand. Why keep a sensing system that is prone to so many errors if there is no advantage to it?

It has some advantages. Like being able to see in the dark or through fog.

Maybe by decoupling it from perception and feeding it into planning instead, they'd be able to give the radar input variable weights depending on the circumstances. E.g. on a clear sunny day, ignore the radar; on a foggy night, give it more weight.
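
Something like this toy sketch, where every threshold and the "visibility" score are invented; in practice the planning network would presumably learn the weighting from data rather than have it hand-coded:

```python
# Hypothetical condition-dependent weighting of radar vs. camera estimates.

def radar_weight(visibility: float) -> float:
    """Return a 0..1 weight for radar given an estimated camera visibility
    score (1.0 = clear sunny day, 0.0 = dense fog at night)."""
    if visibility > 0.9:
        return 0.0  # clear conditions: effectively ignore the radar
    if visibility < 0.3:
        return 1.0  # fog or darkness: trust the radar fully
    return (0.9 - visibility) / 0.6  # linear blend in between

def fused_lead_distance(camera_m: float, radar_m: float, visibility: float) -> float:
    w = radar_weight(visibility)
    return w * radar_m + (1.0 - w) * camera_m

print(fused_lead_distance(41.0, 38.5, visibility=0.2))   # foggy night -> 38.5
print(fused_lead_distance(41.0, 38.5, visibility=0.95))  # clear day   -> 41.0
```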
 
I don't believe Elon will remove radar.

Place to discuss.

I agree with @willow_hiller that they could be just temporarily removing the radar input from the perception stack in order to test how good the camera vision is and focus on improving the camera vision.

I know Elon has touted radar as better than lidar as a second sensor because radar sees through snow, rain and fog better than both cameras and lidar. So it would make sense for Tesla to add radar back in on the final release of FSD. Tesla could be doing something similar to what Mobileye is doing: develop "camera-only" FSD and then add radar later for extra redundancy, especially for those cases like rain, snow and fog. But Elon wants the camera vision to be as good as possible first and be able to drive the car, before they add radar. Considering that camera vision does degrade in rain, snow and fog, it seems crazy not to at least include a front radar.

But Elon's tweet certainly sounds like Elon might be planning to remove the radar completely. Maybe they added a front radar back when camera vision was not as good, but now Elon thinks the camera vision is so good that a front radar is not needed anymore? Elon is very zealous about camera vision. He clearly believes camera vision and AI can solve everything. So I would not put anything past him. It sounds crazy, but I would not be surprised if Elon really does plan to ditch radar completely.

I do see a few possible scenarios from most optimistic case to most pessimistic case:
1) Elon is right about camera vision. Tesla solves vision and all our cars become driverless L5 robotaxis on just cameras.
2) Tesla's camera vision gets really good but not quite good enough to remove driver supervision entirely. So FSD does fully self-drive in good weather with no disengagements, but we still need to supervise in some or all cases.
3) Tesla's vision is not quite good enough, so Elon backpedals and puts radar back in the perception stack before releasing FSD wide.
4) Tesla releases FSD wide with "camera-only". Vision is not good enough and removing radar was a mistake. FSD Beta has a couple of serious crashes. Regulators step in and shut down FSD.
 
  • Like
Reactions: APotatoGod
Gotta say, I was very surprised by that tweet and look forward to further clarification.

I'll throw this out there: Is it possible HW3 limitations are forcing Tesla to choose (now or sometime in the future) between improving vision vs. using radar?
 
Validate what?

That vision-only is adequate. They're not going vision only on the next big update to "test" it. It's already been tested. That's why they're going pure vision.

I personally love this move. This tweet. All of it. It's the final step in Tesla's approach of going vision-only. Every time I think about the radar, it doesn't make sense to me. If you've solved vision, you don't need radar or lidar. Radar was the part that didn't make sense to me when Elon touted a vision-only approach.

When everyone else is adding more and more sensors, Tesla and Elon are removing them. These are the kinds of moves that Elon is known for.
 
Radar is still useful for labeling their dataset though. But maybe at this point cameras can do everything that radar does, at least with regards to Tesla's goal of making self-driving cars. Situations where the radar adds value, for example heavy fog, are probably very rare, and even in them they need to rely heavily on the cameras, so having the radar probably adds minimal value. It's not like they can drive with blind cameras through intersections just because they have a radar.

Radar does have some cute properties, for example it can see under cars in front. But is that really useful? Sure, if the car two ahead brakes hard and the car in front slams into it, it helps to be able to detect that early. But maybe this is so rare, AEB does a good enough job of lowering the impact anyway, and the fisheye camera with the neural network can see this most of the time; if the Tesla just keeps an appropriate distance to the car in front, the added value is minimal.

TL;DR: most situations where radar brings value probably don't really matter for solving FSD if the camera network and control are good enough.
 
  • Like
Reactions: Todd Burch
Another take is that they might have removed radar from the main neural network but kept it physically for verification of the camera system, to make sure that it is working properly. If RadarNN disagrees with CameraNN, they might warn the user and tell them to take over.

Tesla might want to be able to prove statistically that their system is 10x safer than humans, but there is also some merit to proving that their system is ASIL D, and here the radar might be useful as a redundant sensor to decrease the probability of critical failure.
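
As a rough illustration of that kind of cross-check (the names and thresholds here are invented, not anything Tesla has described):

```python
# Hypothetical plausibility check: compare the camera network's and the
# radar's estimate of the distance to the lead vehicle, and warn the driver
# if they disagree too much.

DISAGREEMENT_MARGIN_M = 5.0   # allowed difference between the two estimates
MIN_RADAR_RANGE_M = 1.0       # below this, treat the radar return as noise

def sensors_agree(camera_lead_m, radar_lead_m):
    """Return True if the sensors agree, False if the driver should be warned."""
    if radar_lead_m is None or radar_lead_m < MIN_RADAR_RANGE_M:
        return True   # no usable radar return, nothing to cross-check
    if camera_lead_m is None:
        return False  # radar sees a lead vehicle the cameras do not
    return abs(camera_lead_m - radar_lead_m) <= DISAGREEMENT_MARGIN_M

if not sensors_agree(camera_lead_m=None, radar_lead_m=30.0):
    print("Sensor disagreement: please take over")
```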
 
  • Like
Reactions: APotatoGod
Wouldn't the three front cameras count as redundant sensors already? If the car senses that any of the cameras has failed, it can just pull over and throw an error.
Their failures might not be statistically independent of each other, so any statistical proof might be invalid. For example, a ketchup-on-the-windshield event will make all three of the cameras fail, but not the radar.
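
A toy calculation with invented numbers shows why the common-cause term dominates:

```python
# Invented probabilities, just to show the effect of a common-cause failure.
p_single = 1e-4   # assumed chance any one camera fails on a given drive
p_common = 1e-6   # assumed chance of a common-cause event (e.g. ketchup)

p_independent = p_single ** 3        # all three fail independently: 1e-12
p_actual = p_independent + p_common  # dominated by the common cause: ~1e-6

print(f"independent-only estimate: {p_independent:.1e}")
print(f"with common cause:         {p_actual:.1e}")
```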
 
  • Like
Reactions: APotatoGod
For example, a ketchup-on-the-windshield event will make all three of the cameras fail, but not the radar.

That's true, but you'd also have to know that the cameras have failed in order to fall back on the radar. So if your cameras can self-diagnose and there are 3 front-facing ones, then wouldn't that be enough redundancy?

I always thought that the best kind of redundancy is to have multiple copies of the same technology rather than two different technologies. The reason is that if the main copy fails, the secondary simply uses the same code and implementation, which has been updated and maintained through time, rather than an appendage tech that may not be well maintained.
 
  • Like
Reactions: APotatoGod
That's true, but you'd also have to know that the cameras have failed in order to fall back on the radar. So if your cameras can self-diagnose and there are 3 front-facing ones, then wouldn't that be enough redundancy?

I always thought that the best kind of redundancy is to have multiple copies of the same technology rather than two different technologies. The reason is that if the main copy fails, the secondary simply uses the same code and implementation, which has been updated and maintained through time, rather than an appendage tech that may not be well maintained.
Yeah, but the self-correction would have to happen a few seconds after the event: first some object needs to be seen by the side cameras that was not seen by the front cameras. Whereas if the radar sees a car in front and the cameras fail to detect it, it will instantly be clear that the cameras have failed.
 
I always thought that the best kind of redundancy is to have multiple copies of the same technology rather than two different technologies. The reason is that if the main copy fails, the secondary simply uses the same code and implementation, which has been updated and maintained through time, rather than an appendage tech that may not be well maintained.
The short answer is...(this is a wheelhouse item for me)...there is no short answer :rolleyes:

It depends on what you’re trying to sense and how observable (in the mathematical control theory meaning of “observable”) the quantity you’re trying to measure is. If you’re measuring wheel speed to feed into the cruise control or anti-lock brake systems, then you may just want redundant wheel speed sensors and you just vote the redundant measurements from each wheel to detect failures and determine the “best” measurement. What you’re measuring is a one-to-one mapping to what you want for your speed controller or anti-lock brake controller: the speed “state” - where the term “state” is used in its control theory sense - is completely observable, so relatively simple redundancy is usually fine.
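
For that simple case the voting can be as plain as this sketch (thresholds are illustrative only):

```python
# Toy vote over redundant wheel speed sensors: take the median as the "best"
# measurement and flag any sensor that strays too far from the consensus.
from statistics import median

FAULT_THRESHOLD_KPH = 3.0

def vote_wheel_speed(measurements_kph):
    """Return (best_estimate, indices_of_suspect_sensors)."""
    consensus = median(measurements_kph)
    suspects = [
        i for i, m in enumerate(measurements_kph)
        if abs(m - consensus) > FAULT_THRESHOLD_KPH
    ]
    return consensus, suspects

speed, faulty = vote_wheel_speed([100.2, 100.4, 87.0, 100.3])
print(speed, faulty)  # -> 100.25 [2]  (sensor 2 flagged as failed)
```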

But “state of the driving world” sensors for a self driving vehicle are a different kettle of fish. The state of the driving world that the self driving algorithms need is very complex, and each sensor only provides observability into subsets of the whole state. So you may need multiple kinds of sensors - cameras, radar, lidar, GPS, etc. - to get enough observability of the self driving state to be able to control it. And you still need to detect and isolate failures in your self driving sensors, so you will probably end up with redundancy in each type of sensor, as well as cross checking sensors of different types (this is called analytical redundancy).

Whether Tesla can solve the self driving problem with cameras only is something I have no position on. I work on autonomy for a different application, not autos, so I will refrain from speculating here. But general principles of redundant systems are still applicable, which is what I discussed above.
 
Whereas if the radar sees a car in front and the cameras fail to detect it, it will instantly be clear that the cameras have failed.

But the type of data that the radar produces is different than the camera, so you'll have to sift through all the examples and corner cases where the data disagree. This would create more problems (false positives / negatives) than it solves.

Tesla is well aware of instances where all three cameras fail simultaneously. It's likely so rare that 10x human performance is possible despite the likelihood of a camera failure where the car can't pull over.

Even if all three front cameras fail, the front-facing B-pillar cameras should still enable the car to pull over.
 
  • Like
Reactions: APotatoGod
But the type of data that the radar produces is different than the camera, so you'll have to sift through all the examples and corner cases where the data disagree. This would create more problems (false positives / negatives) than it solves.

Maybe this is why Tesla is removing radar from the stack presently. With software 1.0, it's too difficult to reconcile two different types of sensors and programmatically decide the correct course of action.

So maybe they'll put radar returns back in when more of the decisions are made by software 2.0. No need to manually create code that picks which sensor is correct, just give both sensors as inputs to the planning network, and let it learn the scenarios when you trust radar more than cameras.
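
For contrast, the brittle "software 1.0" arbitration they'd be replacing looks something like this; every rule and threshold here is invented, and the point is that each new corner case needs another hand-written branch:

```python
# Hypothetical hand-coded sensor arbitration ("software 1.0" style).
def pick_lead_distance(camera_m, radar_m, is_raining, near_overpass):
    if radar_m is None:
        return camera_m
    if camera_m is None:
        return radar_m
    if near_overpass and radar_m < camera_m:
        return camera_m                # assume the radar return is the bridge
    if is_raining:
        return radar_m                 # assume the cameras are degraded
    if abs(camera_m - radar_m) > 10.0:
        return camera_m                # big disagreement: default to vision
    return (camera_m + radar_m) / 2.0  # otherwise just average

print(pick_lead_distance(45.0, 12.0, is_raining=False, near_overpass=True))  # -> 45.0
```

The software 2.0 alternative is to hand both signals to the planning network and let the training data decide the weighting.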
 
That vision-only is adequate. They're not going vision only on the next big update to "test" it. It's already been tested. That's why they're going pure vision.

I personally love this move. This tweet. All of it. It's the final step in Tesla's approach of going vision-only. Every time I think about the radar, it doesn't make sense to me. If you've solved vision, you don't need radar or lidar. Radar was the part that didn't make sense to me when Elon touted a vision-only approach.

When everyone else is adding more and more sensors, Tesla and Elon are removing them. These are the kinds of moves that Elon is known for.

More like their current radar is beyond useless for anything other than ACC.

  • The radar is from 2010.
  • It has major issues distinguishing stopped objects.
  • It's only forward-facing and has a very low FOV.
  • It has very low resolution and range.
  • Therefore it can't classify objects.
  • The majority of their deployed radars are not heated, so they fail in moderate rain and light snow.
I could keep going....

But of course you love it. You buy and prop up anything Elon says like it's the gospel. When Elon was PRing his garbage radar as the second coming, it didn't stop you from talking up that garbage radar when it suited you: "Tesla found it necessary to include a front facing radar" and "Better radar is an obvious development."
Some Tesla fans even claimed, based on Elon's PR statements, that Tesla had created a radar that was better than lidar.

Also, weren't you the one who said that if you use maps for anything other than routing then it's not true self-driving? Then Tesla started using what everyone in the industry would classify as HD maps. Then you said "I haven’t seen anyone contradict what Karpathy is predicting" and "Karpathy doesn't think HD maps are scalable" and that their maintenance is expensive.

Then in Jan you admitted that Mobileye's REM HD maps were scalable, and you incorrectly stated that their lidar system doesn't use REM: "I'm assuming it's using LIDAR based localization with HD-maps, so it's not as scalable as their REM system."

So you have been proven wrong on all points: they are already scaled around the world, with tens of millions of miles already mapped and usable.
And the maintenance cost is basically the muscle strength it takes to push a button, because the updates are fully automated.

Anyone thinking logically could easily assess that Tesla's sensor suite is garbage for what they are marketing it as: "Level 5, no driver, cross country, look out the window, no geofence".
From the forward-facing side camera placement, to the rear camera being rendered useless in light rain/snow, to the rear-facing repeater cams being susceptible to vibrations and occlusion during rain, to the lack of a front bumper camera, to the cameras being very low resolution (1.2 MP).

It's for the same reason that fans like you, right after the AP2 announcement, proclaimed that the half variant of the Drive PX2 that Tesla used had way more than enough power for Level 5, that it was actually too much; that Nvidia was obviously lying when they said Level 5 needs more, and were only saying that for PR and marketing reasons; that Tesla was and always is telling the truth.

But the few people in this forum who were trying to talk sense with technical analysis were drowned out by people like you.

I'm sure Elon could come out tomorrow and say all they need is the ultrasonics, and you would respond that that's the absolute truth. Funnily enough, years ago someone tried to argue that one front camera and the ultrasonics were more than enough for Level 5. You can never put anything past Tesla fans.

(attached screenshots)
 