
Radar on Teslas

[Moderator note (bmah): Moved this discussion of radar on Teslas from the 2018.16.1 firmware thread into its own thread.]

Detecting stopped cars cannot be done using radar alone. A car could be standing still in a curve. How could the radar tell the difference between a stopped car you will crash into because the road curves toward it, and the same obstacle on a straight road being a parked car beside the road that you will simply pass?

This cannot be implemented reliably, without causing a lot of phantom braking, until the camera has much better situational understanding. It needs to see that the road further ahead leads to this stopped car, and hence that braking should start now.

That situational understanding needs to be part of FSD. It will probably not come to EAP for another year or two.
 
Radar can tell the speed of the other vehicle; if you think not, protest the next time a highway patrol officer gives you a ticket for speeding using radar. It does need other information to determine the vehicle's precise location relative to the roadway, so geo-mapping data would help. Additionally, the beam width and scanning capability all play into the picture. Thus a stopped object located outside the highway is treated differently than a stopped object located within the bounds of the highway. The Doppler shift, adjusted for your own speed, helps it determine whether the object is stationary or moving at a high rate of speed, and whether it is moving toward you or away from you. In the military we use Doppler radar to track multiple targets and determine their range, speed and location. Of course these days a lot of computing power goes into accomplishing these tasks amid sea clutter, jamming, etc.
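For illustration, here is a minimal sketch (in Python, with a made-up noise threshold, not any real radar's spec) of that ego-speed compensation: subtract your own velocity component along the line of sight from the Doppler range rate, and what remains is the target's ground speed.

```python
import math

def classify_target(ego_speed_mps, bearing_deg, range_rate_mps):
    """Ego-motion-compensated Doppler: decide whether a radar return is
    from a stationary object.  range_rate_mps is negative when closing."""
    # Our own speed component along the line of sight to the target.
    ego_radial = ego_speed_mps * math.cos(math.radians(bearing_deg))
    # The target's radial speed in the ground frame.
    ground_radial = range_rate_mps + ego_radial
    if abs(ground_radial) < 0.5:  # 0.5 m/s noise threshold (assumed)
        return "stationary"
    return "approaching" if ground_radial < 0 else "receding"

# A parked car dead ahead while we drive 30 m/s closes at exactly 30 m/s:
print(classify_target(ego_speed_mps=30.0, bearing_deg=0.0,
                      range_rate_mps=-30.0))   # -> stationary
```

Note this only tells you that something out there is stationary; it says nothing about whether it is in your path, which is where the trouble starts.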

By the way, I haven't seen any improvement in steering when the lanes suddenly widen. The car erratically zig-zags as it searches between lines. I see this when I go through the toll lanes and the three-occupant lane merges back into two lanes.
 
Radar can tell the speed of the other vehicle; if you think not, protest the next time a highway patrol officer gives you a ticket for speeding using radar. ...
I'm talking about zero-velocity vehicles. Moving vehicles are easy, which is why Tesla sees them if they have been moving.

Tesla can't reliably tell whether a zero-velocity vehicle is in your lane or not. It doesn't reliably know where your lane is when the camera doesn't map the lane far ahead. It can guess based on steering angle etc., but phantom braking and missed zero-velocity vehicles happen for that reason.

Tracking moving vehicles is much easier, like you said, even around turns.
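A rough sketch of the "guess based on steering angle" approach, using a simple bicycle-model arc; the wheelbase and lane width here are assumed values for illustration, not anything from Tesla's stack:

```python
import math

WHEELBASE_M = 2.9        # assumed, roughly a large sedan
LANE_HALF_WIDTH_M = 1.8  # assumed half of a ~3.6 m highway lane

def target_in_predicted_path(steering_deg, target_range_m, target_bearing_deg):
    """Project the ego path as a circular arc from the current steering
    angle and test whether a radar target falls inside the lane corridor."""
    # Bicycle model: path curvature kappa = tan(steer) / wheelbase.
    kappa = math.tan(math.radians(steering_deg)) / WHEELBASE_M
    # Target position in the vehicle frame (x forward, y to the left).
    x = target_range_m * math.cos(math.radians(target_bearing_deg))
    y = target_range_m * math.sin(math.radians(target_bearing_deg))
    # Lateral offset of the predicted arc at distance x (small-angle approx).
    y_path = 0.5 * kappa * x * x
    return abs(y - y_path) < LANE_HALF_WIDTH_M

# Wheel straight, stopped object 3 m left of boresight at 60 m: not in path.
print(target_in_predicted_path(0.0, 60.0, 2.9))   # -> False
```

The failure mode is built in: the steering angle right now says nothing about the curve 100 m ahead, so a stopped car beyond a bend gets misclassified in both directions.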
 
Radar can tell the speed of the other vehicle; if you think not, protest the next time a highway patrol officer gives you a ticket for speeding using radar. ...

It's important to note that automotive radar has essentially no vertical resolution, despite OK horizontal resolution and very good velocity resolution. This is why overhead signs pose such a problem for it -- it knows the (approximate) horizontal coordinates of that sign but doesn't know where it is vertically. So the camera is essential in differentiating a stopped car from an overhead sign, overpass, etc. In a dense urban environment radar becomes nearly useless for detecting stationary things, because there are so many stationary things all around.

The camera (or good maps!) is also important for knowing where the road is as it curves. For example, if the road is curving, there may be a parked car on the bend that is absolutely dead ahead of me. Without taking into account the road curvature, the radar and camera will both agree that I'm going to hit it -- and indeed AP2 often hits the brakes in this situation, at least a little. The camera's understanding of road curvature more than a few car lengths in front of the vehicle is still pretty sketchy.
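The geometry is easy to put numbers on. A back-of-envelope calculation (the radius and range are picked for illustration):

```python
# On a bend of radius R, the road centerline drifts sideways by ~d^2/(2R)
# at distance d ahead.  Values below are illustrative, not measured.
R = 500.0   # m, a gentle highway curve
d = 100.0   # m, a typical radar detection range
lateral_drift = d * d / (2.0 * R)
print(f"lane center is ~{lateral_drift:.0f} m off boresight at {d:.0f} m")  # ~10 m
# So an object sitting exactly on my boresight at 100 m is ~10 m outside
# my actual lane, yet its range, bearing and closing speed are identical
# to those of a stopped car genuinely in my path.
```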
 
How do you avoid having vertical resolution? The beam will spread out both horizontally and vertically over distance. If they use phased-array antennas, they could scan a narrow beam both vertically and horizontally.


Well, according to Bosch: Mid-range radar sensor (MRR)

The MRR uses an elevation antenna (red area in the image) to generate an additional upward elevation beam. This additional beam enables the MRR to measure the height of all detected objects in order to reliably classify relevant objects and determine whether the vehicle can drive under or over them.

The MRR has two antennas: a narrow-beam forward antenna that is horizontal, and a wider-beam antenna that is pointed somewhat upward. The system uses signal processing and the relative signal strength in the two directions to attempt to determine whether something is an overhead obstacle or not.


Of course this approach has its limitations, but there is definitely an attempt to vertically discriminate.
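As a sketch of how that two-antenna trick could be used downstream (the threshold and dB figures are invented for illustration; Bosch's actual processing is certainly more involved):

```python
def classify_height(main_beam_db, elevation_beam_db, threshold_db=6.0):
    """Compare return strength in the narrow forward beam vs. the
    upward-pitched elevation beam.  An object that is relatively much
    brighter in the elevation beam is likely overhead."""
    ratio_db = elevation_beam_db - main_beam_db
    return "overhead, can drive under" if ratio_db > threshold_db else "at road level"

print(classify_height(main_beam_db=-45.0, elevation_beam_db=-32.0))  # overhead
print(classify_height(main_beam_db=-30.0, elevation_beam_db=-38.0))  # at road level
```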
 
Do you have a source that substantiates this claim?

How about this: http://www.araa.asn.au/acra/acra2015/papers/pap167.pdf

I'm sure I can dig up other sources, but a commonality across all off-the-shelf automotive radars is that they give you range and bearing (one bearing angle, not two) to the estimated centroid of the object, radial and lateral velocity (again, only two velocity components, not the three a true 3-D system would have), and some kind of "intensity". Intensity is complicated: it is influenced by size, shape, and surface composition simultaneously, making these hard to separate. Bearing, it is important to note, is given as a single angle in a single plane, not a pair of angles or anything else that would give you vertical resolution.
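To make that concrete, the per-object output of such a radar looks roughly like this (field names are illustrative, not any vendor's actual interface):

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    """Roughly what off-the-shelf automotive radar reports per object."""
    range_m: float               # distance to the estimated centroid
    azimuth_deg: float           # ONE bearing angle, in the horizontal plane
    radial_velocity_mps: float   # along the line of sight (Doppler)
    lateral_velocity_mps: float  # across it; two components, never three
    intensity: float             # size, shape and surface material mixed together
    # Conspicuously absent: elevation angle, vertical velocity, object
    # extent/shape.  Hence no way to tell a sign bridge from a stopped car.
```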

This is another interesting paper to read if you want to understand how crappy radar data is for trying to understand the world in high fidelity: https://www.researchgate.net/public..._Vulnerable_Road_Users_-_Pedestrians_Cyclists

How do you avoid having vertical resolution? The beam will spread out both horizontally and vertically over distance. If they use phased-array antennas, they could scan a narrow beam both vertically and horizontally.

It's not that it's not possible; it's that it's not implemented in the low-cost automotive radar that's currently common in the industry. Automotive radar was designed for two things: adaptive cruise control and AEB. Neither requires vertical resolution (though AEB would certainly be better with it). So yes, they use phased-array antennas, but the elements are spread over only a single dimension, in columns, providing angular resolution only in that dimension. There may be more advanced automotive radar units on the market recently or coming soon, I'm not sure, but that's the way it's been in this industry for years. What I've seen are efforts to improve range, field of view, horizontal resolution, and the number of objects tracked, plus some work on differentiating pedestrians from vehicles.

Note also that automotive radar has to deal with an inherently 2-D-ish environment in the form of the road surface, which is ever-present and always very close to the sensor. Road-surface scattering is kind of a big deal (sometimes a problem, sometimes helpful, as in bouncing radar under a vehicle to show the vehicle ahead of it).
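The single-dimension layout has a consequence you can compute on the back of an envelope (the element count below is assumed; real units vary):

```python
import math

C = 3.0e8                    # speed of light, m/s
F = 77e9                     # typical automotive radar carrier frequency
WAVELENGTH = C / F           # ~3.9 mm

n_columns = 8                             # assumed columns in the single row
aperture = n_columns * WAVELENGTH / 2.0   # half-wavelength spacing
beamwidth_deg = math.degrees(WAVELENGTH / aperture)
print(f"azimuth beamwidth ~ {beamwidth_deg:.0f} deg")   # ~14 deg
# In elevation the aperture is effectively one element, so the beam is a
# wide vertical fan: road surface, cars and overpasses all fold into it.
```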
 
Well, according to Bosch: Mid-range radar sensor (MRR) ... Of course this approach has its limitations, but there is definitely an attempt to vertically discriminate.

That's interesting, I hadn't seen that. So they're cheating by bundling two radars together with one pitched up. The combination of the two gives you some coarse vertical resolution. Do we know if Tesla is using this sort of radar in HW2.5? I don't think they were using it previously.
 
That's interesting, I hadn't seen that. So they're cheating by bundling two radars together with one pitched up. ...

This actually is the old radar. The Bosch MRR14 is the AP1 and AP2.0 radar. In AP2.5 they switched to a Continental radar that works differently…

Obviously the coarse vertical resolution didn't work 100% of the time, otherwise we wouldn't get phantom overpass braking. And that's the difference between what a whitepaper/patent/datasheet advertises and making a product out of a theoretical capability. It's not as if Tesla's activation of radar braking caused false activations at 100% of overpasses 100% of the time; there was a combination of scenarios and events that led to false overpass braking.
 
That is not accurate. You don't know if it was a hardware problem or a Tesla software problem.

It is safe to say that whether it was "hardware" or "software", it comes down to inherent limitations in mass-market-ready automotive radar technology, which is a combination of hardware and software in the radar unit plus hardware and software provided by Tesla. The software has a very hard job to do trying to overcome the limitations of the radar. Even if the software made the wrong call, I don't think you can completely blame the software -- it is trying to do the impossible if it's trying to understand the world around it with high resolution and precision using radar data alone. It will necessarily make the wrong call sometimes. This problem cannot be fixed fully without (at a minimum) bringing those cameras into play. I think the cameras already are involved to some extent; it's really unclear how much sensor fusion is currently happening though. (Boy, lidar sure would be helpful, too...)

I think you can blame Tesla for the performance of the whole system, hardware and software, including the components from Bosch/Continental, if the performance of the system does not match their marketing promises, because they made the decision to make those promises with full awareness of the limitations of their radar technology.

Hey, remember when Musk said they were going to get lidar-like point clouds out of radar? bwah hahahahaha! Not with the radars they currently have on these vehicles, that's for sure. Maybe with something somebody has in an R&D department...
 
It is safe to say that whether it was "hardware" or "software", it comes down to inherent limitations in mass-market-ready automotive radar technology, which is a combination of hardware and software in the radar unit plus hardware and software provided by Tesla. ...
I recall when Bosch provided new 'drivers' and then shortly after that Tesla came out with enhanced AEB by looking two cars ahead.

Perhaps a combo of radar and camera (sensor fusion, as was mentioned).

Below is only one way that cameras are being used for 3D mapping; there are already several. Imagine multiple passes. Imagine multiple methods, all syncing their ECEF points to create the 3D world and to catch ongoing variations (construction) when the ECEF points deviate; see the sketch at the end of this post.

ORB-SLAM Project Webpage
webdiis.unizar.es/~raulmur/orbslam/
ORB-SLAM is a versatile and accurate SLAM solution for Monocular, Stereo and RGB-D cameras. It is able to compute in real-time the camera trajectory and a sparse 3D reconstruction of the scene in a wide variety of environments, ranging from small hand-held sequences of a desk to a car driven around several city ...

Check out a couple of the ORB-SLAM2 example videos. Amazing results with basic cameras!

Driving and mapping

Walking through hallways and staircases to get a two-floor view
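Here's a minimal sketch of the ECEF idea mentioned above (the conversion is the standard WGS-84 formula; the coordinates below are arbitrary example values): convert each pass's GPS fixes into the shared Earth-centered frame, and persistent deviations between passes flag real changes in the world.

```python
import math

# WGS-84 ellipsoid constants
A  = 6378137.0               # semi-major axis, m
F  = 1.0 / 298.257223563     # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert a GPS fix to Earth-Centered, Earth-Fixed coordinates so
    that points mapped on different passes share one world frame."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return (x, y, z)

# The same corner mapped on two passes should land on the same point;
# a persistent offset suggests the world changed (e.g. construction).
p1 = geodetic_to_ecef(37.4847, -122.1477, 10.0)
p2 = geodetic_to_ecef(37.4847, -122.1478, 10.0)
print(f"deviation: {math.dist(p1, p2):.1f} m")   # ~8.8 m per 0.0001 deg lon here
```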