For radar, it's 1/(distance^4). Both the transmitted pulse and the reflection fall off as the square. Multiply those together and you get inverse to the fourth power.
This would be true if the transmitted signal power were omnidirectional in x, y and z. But the transmitted radar beam is (with varying degrees of precision & efficiency) formed into a more or less highly directional main lobe and some inevitable unwanted minor lobes.

(Disclaimer: I'm not a radar engineer myself, so I invite constructive correction if I say anything wrong here. I'm not trying to beat it to death but I find it interesting.)

TLDR: the transmitted radar signal is directional and does not exhibit Square Law fall-off, but something much less severe. The reflected signal is complicated but may be grossly assumed to be something like Square Law.

If the beam were formed into a 360° horizontal plane (omni in two dimensions), the transmitted energy would then fall off roughly as 1/distance (linear rather than Square law). An optical analogy would be the design of some signal beacon lights, or some of those marker bollards that wrap a reflector around a column.

But if the beam is formed as a directional projection (really the condition that makes us call it a "beam"), the transmit signal power falls off according to the averaged beam spreading angle.
For radar, this was originally done using the familiar parabolic dish antenna, but later was often accomplished using the so-called phased array. I think the latter concept is the basis of most modern radar and radio communications antennas, for beam forming and steering design.
Here an optical analogy would be more like a flashlight or searchlight, highly directional compared to a standard light bulb.
The extreme version of this is a laser beam with extremely little spreading, so nearly all the energy reaches the target with almost no fall-off. Lidar uses one or more lasers with rapid scanning to achieve the more spread-out imaging field angles, but with resolution related to the laser spot size. Imaging radar then becomes the microwave analogue of that.

In any case, the point is that the transmit beam does not fall off as the square law because it's intentionally quite a bit more focused than an omnidirectional source.

How the reflected receive power falls off can be complicated. The simplest concept is that the target object reflects everything but diffuses it more or less omnidirectionally, at least into a rough hemisphere, which behaves somewhat like omni Square Law fall-off but without losing half the energy to the backwards half of the sphere. In reality though, it depends a lot on the target. Some energy will be lost to absorption, and the complex metallic angles of a typical automobile target will produce a kind of sparkly complex reflection of the transmitted energy. Furthermore, the receive antenna creates a beam form of sensitivity, so that it improves the signal to noise ratio of the received signal by suppressing random energy from non-targeted directions. This effectively improves the square law fall off problem, not in absolute energy terms, but in directional sensitivity terms.

A large flat panel surface, angled away from the transmitter, will throw most of the energy away from the source and become more or less invisible, one of the concepts used in stealth aircraft and ship design. But this also depends on the materials, texture, paints and so on.

Fortunately for the manufacturers of adaptive cruise control, most cars have not been designed with radar stealth in mind. The typical array of curved painted surfaces, complex parts and shapes behind fiberglass bumpers and so on seem to produce plenty of sparkling/shimmering return energy for the purposes of automotive cruise radar.
But I look at the Cybertruck and it does remind me a little of military stealth vehicles (and that look is probably not an accident); I wonder if it will make a difficult target for these follow-the-leader radars. :) I don't really know about that, but I do think it could be hell to be around when the stainless panels mirror the sun right into your eye! Maybe I should plan to be in one rather than around one...
 
It's off topic but I was wondering about CT stealthiness too. The back should have good returns from corner reflectors at the top corners of the back window as well as the license plate/step bumper. The front looks more challenging but there may still be some good returns from the top corners of the front windshield, sharp corners of the front lights, and maybe front torsion bar/steering assemblies. A radar should get a good return from the sharp corners on the top of the side windows.
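For a rough sense of why sharp corner geometries give such strong returns, here's a back-of-the-envelope sketch. It assumes the textbook peak-RCS formula for a triangular trihedral corner reflector and a 77 GHz automotive radar; the corner edge lengths are purely illustrative, not measurements of any vehicle.

```python
import math

def trihedral_peak_rcs_m2(edge_m: float, wavelength_m: float) -> float:
    """Peak radar cross section of a triangular trihedral corner reflector.

    Textbook approximation: sigma = 4 * pi * L^4 / (3 * lambda^2).
    """
    return 4 * math.pi * edge_m**4 / (3 * wavelength_m**2)

C = 3e8                       # speed of light, m/s
wavelength = C / 77e9         # ~3.9 mm at the usual 77 GHz automotive band (assumed)

for edge_cm in (2, 5, 10):    # hypothetical corner sizes found around a vehicle
    sigma = trihedral_peak_rcs_m2(edge_cm / 100, wavelength)
    print(f"{edge_cm:>2} cm corner -> peak RCS ~ {sigma:6.2f} m^2")
```

Even a few-centimetre corner can return on the order of a square metre of RCS, which is why window corners and a license-plate recess are plausible bright spots even on an otherwise slab-sided body.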
 
Isn't that saying radar is good enough to rely on at night, but not good enough to rely on in the day, when brightness has no impact on the sensor itself? If so, the system as a whole really isn't good enough at night, unless day is way, way safer than needed. In which case, day use could be primarily radar at the night safety level...
What I meant is that the value of each sensor in determining truth varies depending on condition and one could make a machine learning model (given enough ground truth data in multiple conditions) to give the best estimate, weighting each sensor accordingly.

Still, though, Tesla is testing out their new in-house higher-resolution radar to see what kind of performance they can wring from it.

There are many startups in that field as well. Tesla should consider buying one, they have plenty of money now.

This would be true if the transmitted signal power were omnidirectional in x, y and z. But the transmitted radar beam is (with varying degrees of precision & efficiency) formed into a more or less highly directional main lobe and some inevitable unwanted minor lobes.

(Disclaimer: I'm not a radar engineer myself, so I invite constructive correction if I say anything wrong here. I'm not trying to beat it to death but I find it interesting.)

TLDR: the transmitted radar signal is directional and does not exhibit Square Law fall-off, but something much less severe. The reflected signal is complicated but may be grossly assumed to be something like Square Law.

If it's directional you get more power in the direction you care about, but I'm pretty sure that even radiating into a limited part of the angular sphere, the square-law fall-off is the same. You'd need an antenna with an aperture much larger than the wavelength (and comparable to the distance to the target) for it to be otherwise.
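A quick numeric way to see that: antenna gain scales the power density inside the beam up by a constant factor, but the 1/R² dependence is untouched. The gain and transmit power below are arbitrary illustrative numbers.

```python
import math

def power_density_w_per_m2(p_tx_w: float, gain_linear: float, range_m: float) -> float:
    """On-boresight power density of a directional transmitter: S = P_t * G / (4 * pi * R^2)."""
    return p_tx_w * gain_linear / (4 * math.pi * range_m**2)

p_tx = 1.0                     # 1 W transmit power (illustrative)
gain = 10 ** (25 / 10)         # assumed 25 dB antenna gain

for r_m in (50, 100, 200):
    s = power_density_w_per_m2(p_tx, gain, r_m)
    print(f"R = {r_m:3d} m: {s:.2e} W/m^2")
# Each doubling of range quarters the density, no matter how large the gain is.
```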

If the beam were formed into a 360° horizontal plane (omni in two dimensions), the transmitted energy would then fall off roughly as 1/distance (linear rather than Square law). An optical analogy would be the design of some signal beacon lights, or some of those marker bollards that wrap a reflector around a column.

But if the beam is formed as a directional projection (really the condition that makes us call it a "beam"), the transmit signal power falls off according to the averaged beam spreading angle.
For radar, this was originally done using the familiar parabolic dish antenna, but later was often accomplished using the so-called phased array. I think the latter concept is the basis of most modern radar and radio communications antennas, for beam forming and steering design.
Here an optical analogy would be more like a flashlight or searchlight, highly directional compared to a standard light bulb.
The extreme version of this is a laser beam with extremely little spreading, so nearly all the energy reaches the target with almost no fall-off.​

There is fall-off with a laser too, but you start out with much better intensity in the region you care about for a given input power.
 
This would be true if the transmitted signal power were omnidirectional in x, y and z. But the transmitted radar beam is (with varying degrees of precision & efficiency) formed into a more or less highly directional main lobe and some inevitable unwanted minor lobes.

(Disclaimer: I'm not a radar engineer myself, so I invite constructive correction if I say anything wrong here. I'm not trying to beat it to death but I find it interesting.)

TLDR: the transmitted radar signal is directional and does not exhibit Square Law fall-off, but something much less severe. The reflected signal is complicated but may be grossly assumed to be something like Square Law.

If the beam were formed into a 360° horizontal plane (omni in two dimensions), the transmitted energy would then fall off roughly as 1/distance (linear rather than Square law). An optical analogy would be the design of some signal beacon lights, or some of those marker bollards that wrap a reflector around a column.

But if the beam is formed as a directional projection (really the condition that makes us call it a "beam"), the transmit signal power falls off according to the averaged beam spreading angle.
For radar, this was originally done using the familiar parabolic dish antenna, but later was often accomplished using the so-called phased array. I think the latter concept is the basis of most modern radar and radio communications antennas, for beam forming and steering design.
Here an optical analogy would be more like a flashlight or searchlight, highly directional compared to a standard light bulb.
The extreme version of this is a laser beam with extremely little spreading, so nearly all the energy reaches the target with almost no fall-off. Lidar uses one or more lasers with rapid scanning to achieve the more spread-out imaging field angles, but with resolution related to the laser spot size. Imaging radar then becomes the microwave analogue of that.

In any case, the point is that the transmit beam does not fall off as the square law because it's intentionally quite a bit more focused than an omnidirectional source.

How the reflected receive power falls off can be complicated. The simplest concept is that the target object reflects everything but diffuses it more or less omnidirectionally, at least into a rough hemisphere, which behaves somewhat like omni Square Law fall-off but without losing half the energy to the backwards half of the sphere. In reality though, it depends a lot on the target. Some energy will be lost to absorption, and the complex metallic angles of a typical automobile target will produce a kind of sparkly complex reflection of the transmitted energy. Furthermore, the receive antenna creates a beam form of sensitivity, so that it improves the signal to noise ratio of the received signal by suppressing random energy from non-targeted directions. This effectively improves the square law fall off problem, not in absolute energy terms, but in directional sensitivity terms.

A large flat panel surface, angled away from the transmitter, will throw most of the energy away from the source and become more or less invisible, one of the concepts used in stealth aircraft and ship design. But this also depends on the materials, texture, paints and so on.

Fortunately for the manufacturers of adaptive cruise control, most cars have not been designed with radar stealth in mind. The typical array of curved painted surfaces, complex parts and shapes behind fiberglass bumpers and so on seem to produce plenty of sparkling/shimmering return energy for the purposes of automotive cruise radar.
But I look at the Cybertruck and it does remind me a little of military stealth vehicles (and that look is probably not an accident); I wonder if it will make a difficult target for these follow-the-leader radars. :) I don't really know about that, but I do think it could be hell to be around when the stainless panels mirror the sun right into your eye! Maybe I should plan to be in one rather than around one...

https://www.ll.mit.edu/sites/default/files/outreach/doc/2018-07/lecture 2.pdf

Beam width determines power per area, but the fall-off is independent of it. Doubling the radius gives four times the area for any angular section of a sphere. Radar beams are not collimated like a laser.

Receive antenna gain can improve sensitivity, but doesn't change the energy per unit area at the antenna.

Yes, a target can be designed to focus, but cars are generally flat to convex, not concave, and the beam covers a large area at distance: a 5-degree beamwidth at 200 feet is roughly 17 ft across (a quick sketch of that arithmetic follows this post).

Object characteristics only change the amount of reflected energy. It still diverges, unless you are imaging a sphere and you are at the center, I suppose (or ellipse/focus).
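A quick sketch of the footprint arithmetic mentioned above, treating the beam as a simple cone; the 5-degree beamwidth and 200-foot range are just the numbers from the post.

```python
import math

def footprint_width_ft(range_ft: float, beamwidth_deg: float) -> float:
    """Width of a conical beam's footprint at a given range."""
    return 2 * range_ft * math.tan(math.radians(beamwidth_deg / 2))

print(footprint_width_ft(200, 5.0))   # ~17.5 ft across at 200 ft
print(footprint_width_ft(400, 5.0))   # ~34.9 ft at 400 ft: doubling the range doubles
                                      # the width, quadruples the illuminated area, and
                                      # cuts the power per unit area to a quarter.
```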
 
Beam width determines power per area, but the fall-off is independent of it. Doubling the radius gives four times the area for any angular section of a sphere. Radar beams are not collimated like a laser.
Thanks and you're right, I was conflating the issue of illumination power concentration (focus) with the geometric divergence issue. Sorry about that; the Square Law fall-off will indeed apply as long as the beam diverges in both dimensions beyond the size of the receiver aperture.

My excuse is misinterpretation of a memory from a project: A few years ago, we were working on a photodiode amplifier and discussing the customer's optical setup. There was a laser-illuminated version and an LED-with-optical-focus version. Both could be arranged so that the entire beam width, i.e. spot size, fell within the area of the receiving photodiode - so I remembered that in that situation the distance between source and receiver became irrelevant. But if the beam spot was allowed to diverge beyond the area boundaries of the receiver - possible if they went for cost reduction and used an unfocused LED - then there was a distance-dependent signal loss that we would have to accommodate.

So again I apologize and defer to your original point. I think my comments about the reflections were pretty much in agreement with yours about the object characteristics, though. My main thought there was that the unpredictable and oftentimes varying reflection pattern - what I called sparkling - means that there can be a large range of returned signal power, which I think can dominate over the distance-based variation. But still, as you said, virtually all of those reflections will still be unfocused and will fall off with distance. Receive antenna directionality doesn't change that; it only suppresses unwanted spurious signal.
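Putting the outbound and return legs together gives the standard monostatic radar range equation, which is where the 1/R^4 in the quoted post at the top of this exchange comes from. A minimal sketch with made-up but plausible numbers; the transmit power, antenna gains, and 10 m² car RCS are assumptions for illustration, not any manufacturer's specs.

```python
import math

def received_power_w(p_t: float, g_t: float, g_r: float,
                     wavelength: float, rcs: float, r: float) -> float:
    """Monostatic radar range equation:
    P_r = P_t * G_t * G_r * lambda^2 * sigma / ((4*pi)^3 * R^4)
    """
    return p_t * g_t * g_r * wavelength**2 * rcs / ((4 * math.pi)**3 * r**4)

wavelength = 3e8 / 77e9        # ~3.9 mm at 77 GHz (assumed band)
gain = 10 ** (25 / 10)         # assumed 25 dB gain on both transmit and receive
p_t = 1.0                      # 1 W transmit power (illustrative)
rcs = 10.0                     # rough order of magnitude for a car seen head-on

for r in (50, 100, 200):
    pr = received_power_w(p_t, gain, gain, wavelength, rcs, r)
    print(f"R = {r:3d} m: P_r = {pr:.2e} W ({10 * math.log10(pr / 1e-3):6.1f} dBm)")
# Each doubling of range costs 12 dB (a factor of 16): the signature of 1/R^4.
```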
 
Radar is 100% needed, at least for the sake of long-range perception. Currently, if you approach stopped traffic at 85 mph with FSD on, the car waits until the very last minute to stop and abruptly SLAMS on the brakes and scares the living bejesus out of everyone in the car. The braking is so intense items will quite literally fly forward in the car. I have experienced this at 80 mph, but it is scary even as low as 60. There is no way they will ever allow the original 90 mph limit with the current setup. The car literally cannot see. I, the human, can see the car well over a MILE in advance on a clear day. The car? Well, it doesn't brake until about 400 feet from the car ahead…
I think about it this way:

If you, a human, watched the video feed, could you make out the car at sufficient distance? If so, then it's not a sensor problem. (And the video feed you watch is lower quality than the raw sensor data.) With HW3 you can see cars much further than the car reacts.

I’m convinced this is a C++/training/neural net size issue and not a sensor issue.

I think with more training compute coming online and the new end-to-end approach (remember George Hotz agrees this is the right approach and he’s a smart dude too) we’ll see significant improvement in this behavior.
 
1. The car can see out that far (range is 250 m, or 800+ ft), but it is not reacting to the information.
2. 400 feet is about double the distance the car needs to stop from 80 mph on a good surface.
3. Why is the driver letting the car get to that point without intervening? That's 3.5 seconds from collision. If the car's reaction scares the driver, that's a good thing, because they should have slowed sooner.
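For anyone who wants to sanity-check those numbers, the kinematics are simple enough to do in a few lines; the deceleration values are assumptions spanning ordinary to very good dry-pavement braking, not measured figures for any particular car.

```python
MPH_TO_FTPS = 1.46667      # 1 mph = 1.46667 ft/s
G_FTPS2 = 32.2             # standard gravity in ft/s^2

def braking_distance_ft(speed_mph: float, decel_g: float) -> float:
    """Distance to stop from speed_mph at constant deceleration, ignoring reaction time."""
    v = speed_mph * MPH_TO_FTPS
    return v**2 / (2 * decel_g * G_FTPS2)

def seconds_to_cover(distance_ft: float, speed_mph: float) -> float:
    """Time to cover a distance at constant speed."""
    return distance_ft / (speed_mph * MPH_TO_FTPS)

for g_level in (0.7, 0.9, 1.1):
    print(f"stop from 80 mph at {g_level} g: {braking_distance_ft(80, g_level):5.0f} ft")
print(f"400 ft at 80 mph: {seconds_to_cover(400, 80):.1f} s")   # ~3.4 s
```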
Look. Take your car out to a highway that has a 55 limit and go 60. Then continue until you approach stopped traffic (probably behind a red light). You will find that the car aggressively slams the brakes. I don’t care about the numbers that they quote. A camera has no definite “limit” as to how far it can see per se. It’s the pixel density that counts. Go out and try it then report in. You will see that the current system is inadequate.
 
I think about it this way:

If you, a human, watched the video feed, could you make out the car at sufficient distance? If so, then it's not a sensor problem. (And the video feed you watch is lower quality than the raw sensor data.) With HW3 you can see cars much further than the car reacts.

I’m convinced this is a C++/training/neural net size issue and not a sensor issue.

I think with more training compute coming online and the new end-to-end approach (remember George Hotz agrees this is the right approach and he’s a smart dude too) we’ll see significant improvement in this behavior.
I agree with this. However, if it isn’t fixed with v12 then I’ll be convinced it’s a sensor issue.
 
If you, a human, watched the video feed, could you make out the car at sufficient distance? If so, then it's not a sensor problem. (And the video feed you watch is lower quality than the raw sensor data.) With HW3 you can see cars much further than the car reacts.

I’m convinced this is a C++/training/neural net size issue and not a sensor issue.

I think with more training compute coming online and the new end-to-end approach (remember George Hotz agrees this is the right approach and he’s a smart dude too) we’ll see significant improvement in this behavior.
This is not a valid line of reasoning for the coming 5-10 years, in my opinion. NNs aren't a brain. GPT-4 can't even reliably multiply two five-digit numbers without a plugin.

I believe you need superhuman multi-modal sensing (including detailed maps) for the foreseeable future to compensate for the lack of a brain.
If NNs ever get anywhere close to something that can be called intelligence or reasoning, the sensors will likely be so cheap and good that you will add them for safety regardless.

A lidar or radar doesn't need ML (or only a minimal amount of it) to figure out there is an object coming towards you at 60 mph. Low latency and close to 100% recall.
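To make that concrete, the decision really can be this direct. A hedged sketch (the threshold and the example measurement are invented) of flagging a closing object straight from range and range rate, with no learned model in the loop:

```python
def time_to_collision_s(range_m: float, range_rate_mps: float) -> float | None:
    """Seconds until contact at constant closing speed.

    Convention assumed here: range_rate_mps is negative when the object is closing.
    Returns None when the object is not closing.
    """
    if range_rate_mps >= 0:
        return None
    return range_m / -range_rate_mps

# Example: a return at 80 m closing at ~60 mph (26.8 m/s)
ttc = time_to_collision_s(80.0, -26.8)
if ttc is not None and ttc < 4.0:        # 4 s threshold chosen arbitrarily here
    print(f"closing object, TTC ~ {ttc:.1f} s -> alert/brake")
```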

I think computer vision systems will do unsupervised radiology and other simpler few-image tasks that are safety critical at least 10 years before we have general unsupervised self-driving - which is both time- and safety critical and immensely more complex.

My best guess is that no Teslas on the roads today (current HW) will ever be unsupervised on highways (at highway speed) nor on city streets. I think it's more likely than not that no personally owned vehicles that you can buy before 2035 will be Level 4 even on highway-only. Perhaps there will be some L3 vehicles that you can buy capable of a 130 km/h highway ODD before 2030.
 
Reliably identifying that an object in a picture is a cat and not a dog seemingly requires a brain too, and yet NNs have been able to do that for years now.

What is the number of neurons/weights required in a NN that allows you to identify “there is a large object on the road in the distance that might be a car, so I should start slowing down out of caution”?
 
I agree with this. However, if it isn’t fixed with v12 then I’ll be convinced it’s a sensor problem.
Just keep in mind there will be iterations of v12. I expect that Tesla will identify the most serious issues with each version, retrain their neural network with examples of those issues, and iterate. I think generally Tesla will view turns into and across traffic (Chuck’s left turn, unprotected lefts) as higher priority safety-wise, but this will be addressed soon after.

But yes, even the initial release of v12 will give us more insight. It's just always felt that trying to write coding rules around the real world has been FSD's Achilles' heel. Having said that, Karpathy spoke many years ago now about "Software 2.0" eating into and replacing manual code, so they are technically still following the playbook they laid out many years ago.
 
Look. Take your car out to a highway that has a 55 limit and go 60. Then continue until you approach stopped traffic (probably behind a red light). You will find that the car aggressively slams the brakes. I don’t care about the numbers that they quote. A camera has no definite “limit” as to how far it can see per se. It’s the pixel density that counts. Go out and try it then report in. You will see that the current system is inadequate.
There might be a few issues causing it, as I experience it frequently on surface streets with 45 mph speed limits. But I agree the current system is inadequate at distance, as well as with moving objects turning across the ego's path. And the more they rely on photons, the more noise/uncertainty results, which requires longer dwell/processing.

And then the needless herky-jerky low-speed steering input has to play hell with aligning object photon/kinematic estimates. Why they didn't fix that crap a long time ago escapes me. Junk in/junk out.
 
Just keep in mind there will be iterations of v12. I expect that Tesla will identify the most serious issues with each version, retrain their neural network with examples of those issues, and iterate. I think generally Tesla will view turns into and across traffic (Chuck’s left turn, unprotected lefts) as higher priority safety-wise, but this will be addressed soon after.

But yes, even the initial release of v12 will give us more insight. It's just always felt that trying to write coding rules around the real world has been FSD's Achilles' heel. Having said that, Karpathy spoke many years ago now about "Software 2.0" eating into and replacing manual code, so they are technically still following the playbook they laid out many years ago.
As with any dog-and-pony show, I don't doubt they can get v12 to work reasonably well in small demo territories. Scaling up will be the challenge. Tesla's original advantage was data from all those road miles, but all that data and all these years haven't resulted in anything close to a generalized solution. Dojo will help get a quicker convergence, but the end solution may still be a very poor generalization.

V11 performance is so poor that v12 may see some gains best case, but likely still far from mainstream customer expectations. Aside from the enthusiasts, few want FSD to go from calm to potentially life-ending moments on every other drive. A slow, clunky, cost-prioritized vision-only design isn't safe or confidence-inspiring for the customer. So far the FSD customer has paid big bucks and only received junk.
 
Is V11 performing poorly because it is trying to heuristically/perceptually account for thousands of variations in road semantics across North America?

Or is it performing poorly because of some other local maximum that won't be overcome?

Do we have example videos of this limitation that can't be overcome with the sensor/hardware suite?
 
A lidar or radar doesn't need ML (or only a minimal amount of it) to figure out there is an object coming towards you at 60 mph.

But that was never the problem anyway. It was figuring out if the stationary object it thought you were coming toward was relevant to your path. See phantom braking for overpasses and signs as but one example. Most makers had to semi-manually set lots of objects to "ignore" to mitigate this and AFAIK nobody has really "solved" it.
 
For radar, it's 1/(distance^4). Both the transmitted pulse and the reflection fall off as the square. Multiply those together and you get inverse to the fourth power.



Mixing at the transmitted carrier frequency doesn't work, due to the sum and difference outputs from the mixer. They mix with an offset tone and then filter.

Modern radars use a chirped signal and mix the return with an offset chirp. That produces individual targets in range as individual frequencies and increases signal-to-noise by integrating each return over the chirp length.
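A sketch of that chirp arithmetic: after mixing, each target's round-trip delay shows up as a constant beat frequency, so range becomes a tone you can pick out with an FFT. The sweep bandwidth and chirp duration below are typical orders of magnitude for 77 GHz automotive FMCW parts, assumed for illustration.

```python
C = 3e8                        # speed of light, m/s

def beat_frequency_hz(range_m: float, bandwidth_hz: float, chirp_s: float) -> float:
    """Beat frequency of a stationary target under a linear FMCW chirp:
    f_b = chirp slope * round-trip delay = (B / T) * (2 * R / c)
    """
    return (bandwidth_hz / chirp_s) * (2 * range_m / C)

B = 300e6                      # 300 MHz sweep (assumed)
T = 50e-6                      # 50 microsecond chirp (assumed)

for r in (10, 50, 150):
    print(f"R = {r:3d} m -> beat ~ {beat_frequency_hz(r, B, T) / 1e6:4.1f} MHz")
print(f"range resolution = c / (2B) = {C / (2 * B):.2f} m per bin")
```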

Tesla's Phoenix is also a multiple-transmit, multiple-receive antenna (virtual grid) phased array for increased resolution.

In terms of general safety, a vehicle should be able to stop in less than half its visibility distance, but almost nobody does that (on land).
Back in the day, when floating about the ocean with a white hat on, I actually worked on one of those chirped RADARs. Fun stuff, even if it was back in the '70s. And you're right about the signal-to-noise ratio, at least in general.

And when I wrote the "square law" bit: you're right, I was just thinking of one direction. Blame posts done in the dead of night; it is 1/r^4.

But even with chirping and SNR improvements: while that helps if one has a clearly defined target (say, a target in the air, far from other reflective surfaces), it's not going to help with bridge abutments and the like. Those are also going to have chirped reflections; as clutter goes, that doesn't help much.

Perfectly willing to believe that modern practice, with an "offset" tone, puts the mixed-down-to-IF signal at some frequency other than zero. (Although I did work for a time on homodyne RADARs, from a design viewpoint, that did that direct mixing to zero frequency. Fun stuff with proximity fuzes.)

One way I've mumbled to myself about how to solve this kind of "get the target, but only the target" trick is to save a time trace of all the data received, then subtract the trace from a previous return from the current return. Things that aren't moving, more or less, get subtracted out, leaving the things that are moving. But this would call for a lot of storage and an almighty fast processor, not to mention offsets for how fast one is moving so one can zero out the clutter. Of course, the problem with this approach is that one does zero out the clutter, which includes stopped vehicles in the middle of the road.
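That "subtract the previous sweep from the current one" idea is essentially the classic two-pulse MTI canceller. A toy sketch with synthetic range profiles, with numbers invented purely to show the mechanism, including the failure mode described above: anything that stays in the same range bin, stopped cars included, cancels right along with the clutter.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 200                             # range bins in one sweep

clutter = rng.normal(0.0, 1.0, n_bins)   # bridges, signs: identical from sweep to sweep
prev_sweep = clutter.copy()
curr_sweep = clutter.copy()

prev_sweep[120] += 8.0                   # a mover: bin 120 on the previous sweep...
curr_sweep[123] += 8.0                   # ...bin 123 now, so it survives the subtraction

prev_sweep[60] += 8.0                    # a stopped car: same bin both sweeps
curr_sweep[60] += 8.0                    # -> cancels, exactly the problem noted above

mti = curr_sweep - prev_sweep            # two-pulse canceller
print(np.flatnonzero(np.abs(mti) > 5.0)) # [120 123]: the mover appears (old and new
                                         # positions); the stopped car and clutter vanish
```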

In a way, the only way to truly solve this problem of detecting stopped objects in reduced visibility is to use a frequency of light that fog, mist, and snow are transparent to; this frequency should have a short enough wavelength so that focusing techniques with the right kinds of lenses work; and then build a camera that can "see" right through fog, mist, rain, and snow. In addition, there needs to be enough ambient energy at that wavelength about so a purely passive camera can pick up the images. But that's the deal with visible light. Visible light to creatures on Earth is visible because that's the range of wavelengths at the peak of the emissions that come off of the sun. If we lived on a planet with a red dwarf star, we'd have eyes sensitive further into the deep red spectrum or infrared.

Further on in this thread was a mention of using synthetic aperture antennas for better radial resolution. Nice... I think. If the cost can be kept down. (For those of you who aren't aware, those 1.5' x 1.5' antennas used by SpaceX for their Starlink sets consist of a lot of individual sensors whose received signals are electronically summed, with phase shifts, to create a steerable receiver pattern that can track satellites as they pass by. Much bigger versions of this thing are used by the militaries of the world to create multiple receive beams, the better to track multiple targets all at the same time. But, once again, these are usually meant to track aerial targets without a pile of clutter, not a target sitting in and amongst other, non-moving things like bridge abutments, signs, and stopped cars.)

There may be a solution out there that gives us what we want, and it might even be achievable, but my suspicion is that It Won't Be Cheap. The electronics equivalent of stuffing ten pounds of stuff into an eight-pound bag.
 
I don't believe it is a camera problem. In my car, around last July, there was an update and after that the visualization suddenly wouldn't show a semi until 100-150 feet from me. The top 1/3 of the display is whited out. Trash cans don't show up until I can see them out the passenger window. It's the same cameras, but the resolution is about 20% of what it was in June. I'm hoping when they get done with fart noises, light shows and the speaker volume control, they will fix the perception back to what it was.
 
Radar is 100% needed, at least for the sake of long-range perception. Currently, if you approach stopped traffic at 85 mph with FSD on, the car waits until the very last minute to stop and abruptly SLAMS on the brakes and scares the living bejesus out of everyone in the car. The braking is so intense items will quite literally fly forward in the car. I have experienced this at 80 mph, but it is scary even as low as 60. There is no way they will ever allow the original 90 mph limit with the current setup. The car literally cannot see. I, the human, can see the car well over a MILE in advance on a clear day. The car? Well, it doesn't brake until about 400 feet from the car ahead…

All expected given the technology. The HW3 cameras don't have that much resolution and can't easily distinguish dots far away as cars, much less accurately size them and estimate relative velocity. That takes multiple frames and filtering, and the object needs to be big enough in pixels that changes in angular size can be correlated to velocity with good enough precision.

I think the FSD could probably make out the existence of something but not at all its velocity. Radar gets the velocity instantly with Doppler; the problem is knowing what it reflected off of.
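The Doppler-to-velocity step is a one-liner; this assumes the usual 77 GHz automotive carrier, which is an assumption about the band, not a confirmed spec for any particular unit.

```python
C = 3e8                                   # speed of light, m/s

def radial_velocity_mps(doppler_hz: float, carrier_hz: float = 77e9) -> float:
    """Radial velocity from a monostatic radar's Doppler shift: v = f_d * lambda / 2."""
    return doppler_hz * (C / carrier_hz) / 2

print(radial_velocity_mps(13_750))        # ~26.8 m/s, i.e. roughly 60 mph of closing speed
```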

HW4 cameras with higher resolution could potentially help, if they also increased compute dramatically to be able to use the fatter bitstream.
 
I think about it this way:

If you, a human, watched the video feed, could you make out the car at sufficient distance? If so, then it's not a sensor problem. (And the video feed you watch is lower quality than the raw sensor data.) With HW3 you can see cars much further than the car reacts.

I’m convinced this is a C++/training/neural net size issue and not a sensor issue.

I'm not---humans are better in the fovea than the 1280x960 of HW3. It can guess at a car at distance, but not be sure, and definitely not be sure about its velocity. That is a harder job than image classification, as you need to estimate relative size changes with high precision, and if it's 4 pixels wide you can't do that.
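For a rough sense of scale, here's the pixel arithmetic under some assumed camera numbers: a 1280-pixel-wide sensor and roughly a 50° horizontal field of view are stand-ins for the HW3 main camera, and the real optics differ between the wide, main, and narrow cameras.

```python
import math

def car_width_pixels(range_m: float, car_width_m: float = 1.8,
                     h_fov_deg: float = 50.0, h_pixels: int = 1280) -> float:
    """Approximate horizontal pixel width of a car at a given range."""
    angle_deg = math.degrees(2 * math.atan(car_width_m / (2 * range_m)))
    return angle_deg * h_pixels / h_fov_deg

for r in (100, 250, 500):
    print(f"{r:4d} m: ~{car_width_pixels(r):5.1f} px wide")
# A few hundred metres out, a car spans only a handful of pixels, so estimating its
# closing speed from frame-to-frame changes in apparent size is extremely noisy.
```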