
Possible removal of radar?

But of course you love it. You buy and prop up anything Elon says like it's the gospel. When Elon was PRing his garbage radar as the second coming, it didn't stop you from talking up their garbage radar when it suited you: "Tesla found it necessary to include a front facing radar" and "Better radar is an obvious development."
Some Tesla fans even claimed, based on Elon's PR statements, that Tesla had created radar that was better than lidar.

Yup, it's obvious that Tesla is working on and researching better sensors and hardware. That was the context of my statement, in response to people who were claiming that Tesla is upgrading the HW3 suite because HW3 isn't capable of FSD.
 
I'm sure Elon could come out tomorrow and say all they need is their ultrasonics, and you would respond that it's the absolute truth. Funny enough, years ago someone tried to argue that one front camera and the ultrasonics were more than enough for Level 5. You can never put anything past Tesla fans.

I'm actually expecting there'll be a day when Elon announces the wonderful things they're going to do with lidar, and everyone will say it's so amazing how Tesla is always leading the way with groundbreaking technology.
 
Yeah, just expanding on my original message:

I believe that the radar returns are presently incorporated as an input into the perception neural network, to the point where faulty returns from overpasses can momentarily make the car believe there's an obstacle in the way (causing phantom braking). That also means they need to train the forward-facing perception separately from the other cameras (as we only have forward-facing radar).

Instead, they're moving to a system where perception is trained purely on camera output (with the network taking input from all cameras simultaneously and outputting the bird's-eye view). So what do they plan to do with the radar returns if they're not being used to estimate the surrounding environment? They could be feeding the output from the perception network into a different planning neural network (one that decides how to plan a route around the environment, and when to accelerate, brake, turn, etc.), with the radar returns as a separate input into this planning network. This would allow the car to still factor in radar returns when deciding how to drive, while also improving the accuracy and trainability of the perception network.
I HATE broken Phantoms...
 
But of course you love it. You buy and prop up anything Elon says like it's the gospel.

We're all just speculating. Yourself included.

At the end of the day, it doesn't matter whether removing the radar is the right move right now. It doesn't matter what sensors are used today. Tesla has made mistakes and will make more, but they'll keep trying.

Sometimes it doesn't take a perfect approach to tackle a problem everyone thinks is insurmountable. It just takes persistence in the face of everyone saying "give up."
 
More like their current radar is beyond useless for anything other than ACC.

  • The radar is a 2010-era design.
  • It has major issues distinguishing stopped objects.
  • It's forward-facing only and has a very narrow FOV.
  • It has very low resolution and range.
  • Therefore it can't classify objects.
  • The majority of their deployed radars are not heated, so they fail in moderate rain and light snow.
I could keep going....

But of course you love it. You buy and prop up anything Elon says like it's the gospel. When Elon was PRing his garbage radar as the second coming, it didn't stop you from talking up their garbage radar when it suited you: "Tesla found it necessary to include a front facing radar" and "Better radar is an obvious development."
Some Tesla fans even claimed, based on Elon's PR statements, that Tesla had created radar that was better than lidar.

Also, weren't you the one who said that if you use maps for anything other than routing, then it's not true self-driving? Then Tesla started using what everyone in the industry would classify as HD maps. Then you said "I haven't seen anyone contradict what Karpathy is predicting" and "Karpathy doesn't think HD maps are scalable," and that their maintenance cost is expensive.

Then in January you admitted that Mobileye's REM HD maps were scalable, and you incorrectly stated that their lidar system doesn't use REM: "I'm assuming it's using LIDAR based localization with HD-maps, so it's not as scalable as their REM system."

So you have been proven wrong on all points: they have already scaled around the world, with tens of millions of miles already mapped and usable.
And the maintenance cost is basically the muscle strength it takes to push a button, because the updates are fully automated.

Anyone thinking logically could easily assess that Tesla's sensors are garbage for what they are being marketed as: "Level 5, no driver, cross country, look out the window, no geofence."
From the forward-facing side camera placement, to the rear camera being rendered useless in light rain/snow, to the rear-facing repeater cams being susceptible to vibration and occlusion during rain, to the lack of a front bumper camera, to the cameras' very low resolution (1.2 MP).

But it's the same reason fans like you, right after the AP2 announcement, proclaimed that the half variant of the Drive PX2 that Tesla used had way more than enough power for Level 5, that it was actually too much. That Nvidia was obviously lying when they said Level 5 needs more, that they were only saying that for PR and marketing reasons. That Tesla was, and always is, telling the truth.

But the few people in this forum who were trying to talk sense with technical analysis were drowned out by people like you.

I'm sure Elon could come out tomorrow and say all they need is their ultrasonics, and you would respond that it's the absolute truth. Funny enough, years ago someone tried to argue that one front camera and the ultrasonics were more than enough for Level 5. You can never put anything past Tesla fans.

[attached screenshots]

You sound desperate.
 
Maybe this is why Tesla is removing radar from the stack presently. With software 1.0, it's too difficult to reconcile two different types of sensors and programmatically decide the correct course of action.

So maybe they'll put radar returns back in when more of the decisions are made by software 2.0. No need to manually write code that picks which sensor is correct; just give both sensors as inputs to the planning network, and let it learn the scenarios in which radar should be trusted more than the cameras.
Before this tweet, I thought Tesla used both radar and cameras as inputs to the neural network and then let the network figure out how to use the data. I assume Tesla decided that removing radar as an input didn't make the network's performance that much less safe for Tesla's use case.
 
I can't see what the issue is with Tesla's radar implementation. I've driven tens of thousands of miles in VWs without a single phantom braking incident, so either:
  • the assertion about radar is incorrect, or
  • Tesla's radar implementation requires remediation.
Maybe those are used narrowly. If you watch green's videos, it's used across the whole field of view.
 
The current system doesn't seem to be accurate at distance estimation when the object is outside of the radar's coverage. For example, traffic lights are rarely rendered in their correct positions when you are stationary at the stop line. So unless they have cracked that, I would hope that radar is still used for calibration/confirmation...
 
What I'm curious about is how granular vision can be. Can vision accurately tell if a vehicle is 3 feet ahead of you vs 7 feet? What about 12 inches in front of you vs 3 feet when pulling into a garage and seeing a solid wall ahead? Summon might still have use for radar. In addition, very slow stop-and-go traffic (think 5 mph and under) might have use for radar, so the car can stay fairly close to the one in front of it (3 to 4 feet) and move forward 2 feet before slowing down again, etc.

I think long-range vision is probably the go-to, but I'm very curious how well it works close up and with large flat objects (right behind a truck, where vision just sees a box in front of it and no reference points, or the above garage case, where it needs to pull up close to a wall).
 
What I'm curious about is how granular vision can be. Can vision accurately tell if a vehicle is 3 feet ahead of you vs 7 feet? What about 12 inches in front of you vs 3 feet when pulling into a garage and seeing a solid wall ahead? Summon might still have use for radar. In addition, very slow stop-and-go traffic (think 5 mph and under) might have use for radar, so the car can stay fairly close to the one in front of it (3 to 4 feet) and move forward 2 feet before slowing down again, etc.

I think long-range vision is probably the go-to, but I'm very curious how well it works close up and with large flat objects (right behind a truck, where vision just sees a box in front of it and no reference points, or the above garage case, where it needs to pull up close to a wall).

Tesla is probably doing some sort of distance labeling by counting wheel rotations and/or using ultrasonics and/or radar. After you feed the neural network enough accurate distance labels, it should be granular enough for FSD.
 
Plus parallax from the triple front-facing cameras. Multiple shots of a scene from slightly different angles are how human brains naturally estimate distance.

Do you think they can figure out the distance from the parallax effect alone (i.e., not labeling it with distance information from other sources first)?

I think humans use the parallax effect to estimate distance, but we normally use prior experience to put an actual yardage to the estimate.
 
Do you think they can figure out the distance from the parallax effect alone (i.e., not labeling it with distance information from other sources first)?

I think humans use the parallax effect to estimate distance, but we normally use prior experience to put an actual yardage to the estimate.
Yes, we use the stereo image to estimate distance, and we can do quite well with just one eye too. Distant objects have less parallax, but I think we get other cues, like how much objects move relative to others when you move your head side to side (motion parallax🤔?), diminishing clarity and size, and identifying objects. For example, we know what a person looks like and how big they are, so we can get relative distance and speed from knowing what the object is. This comes from years of experience estimating distance.

They're no doubt able to extract distance and speed from the parallax cameras. We humans probably have the advantage in the other cues, like diminishing clarity and size, and quicker object identification. We just 'know' what we're looking at.
 
We're all just speculating. Yourself included.

At the end of the day, it doesn't matter whether removing the radar is the right move right now. It doesn't matter what sensors are used today. Tesla has made mistakes and will make more, but they'll keep trying.

Sometimes it doesn't take a perfect approach to tackle a problem everyone thinks is insurmountable. It just takes persistence in the face of everyone saying "give up."

No, I'm not speculating; I'm giving you straight facts.
No one is telling them to give up.
Everyone is telling them to STOP LYING.
How is it that you can't understand that?
Stop claiming you will have Level 5, better than humans, by the end of each new year.
Stop marketing every next update as a game-changer so you can extort money from your customers.
 
No, I'm not speculating; I'm giving you straight facts

Unless you work at Tesla, you're speculating. You're not omniscient; you're not infallible. You may know more about the field than most of us, but you don't have any more inside information than we do.

And speaking frankly, people wouldn't be so abrasive toward your posts if you treated others with more respect. You set up all of these ridiculous straw men instead of actually reading what people are saying. If you want people to listen to you, you need to listen to them, too.
 
What I'm curious about is how granular vision can be. Can vision accurately tell if a vehicle is 3 feet ahead of you vs 7 feet? What about 12 inches in front of you vs 3 feet when pulling into a garage and seeing a solid wall ahead? Summon might still have use for radar. In addition, very slow stop-and-go traffic (think 5 mph and under) might have use for radar, so the car can stay fairly close to the one in front of it (3 to 4 feet) and move forward 2 feet before slowing down again, etc.

I think long-range vision is probably the go-to, but I'm very curious how well it works close up and with large flat objects (right behind a truck, where vision just sees a box in front of it and no reference points, or the above garage case, where it needs to pull up close to a wall).

3 feet vs 7 feet is no problem; 300 feet vs 350 feet is probably no problem either. See the end of Karpathy's portion of the Autonomy Day presentation, where he showed that Tesla could use radar detections to train a camera neural network to accurately estimate the distance to vehicles.