
Autonomous Car Progress

Another term comes to mind when contemplating Tesla's estimation-heavy approach to the physics of other road users: error propagation. There's inevitably going to be some error between reality and what Tesla's NNs output. The hope is that this error is vanishingly small, because the motion planner has to deal with bad information when it is large: garbage in, garbage out. The shortcut to feeding the motion planner accurate physics of other road users is to take direct measurements. Tesla's approach is instead to keep refining its estimators (neural networks) of road-user physics, and if those estimators are far off, it reduces to garbage in, garbage out.
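To put rough numbers on it (purely illustrative figures, not anything from Tesla's actual stack), a constant-velocity prediction turns a steady speed error into a position error that grows with the planning horizon:

```python
# Hypothetical numbers: how a small error in an NN's speed estimate
# propagates into the positions a planner would predict for another car.
true_speed = 20.0        # m/s, what the other car is actually doing
estimated_speed = 19.0   # m/s, what the perception stack reports (5% low)

for t in (1.0, 2.0, 3.0):  # seconds into the planning horizon
    position_error = abs(true_speed - estimated_speed) * t
    print(f"t = {t:.0f} s: predicted position off by {position_error:.1f} m")
# Prints 1.0 m, 2.0 m, 3.0 m: the error keeps growing until a new measurement corrects it.
```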
I'm confused by this .. what do you think the raw output of lidar looks like? Or radar? There seems to be an assumption here that these sensors somehow provide all the data the car needs to determine the existence and motion of objects.
 
I'm confused by this .. what do you think the raw output of lidar looks like? Or radar? There seems to be an assumption here that these sensors somehow provide all the data the car needs to determine the existence and motion of objects.

Yes, the lidar and radar returns need to be processed to identify objects, and only then can physical properties be attached to those objects. But those are pretty much solved problems, unlike the case of vision, where not only do objects need to be identified, but the techniques for extracting the physics of objects from video feeds are still very much in active development.

Lasers have been used on cars since 1992 to identify lead vehicle physical properties such as lead vehicle velocity. If a 1992 car ECU can handle the task of understanding the returned laser signal and correlating it with a lead car then I'm sure we can do it even better in 2022.
 
I'm confused by this .. what do you think the raw output of lidar looks like? Or radar? There seems to be an assumption here that these sensors somehow provide all the data the car needs to determine the existence and motion of objects.
Lidar returns a 3D point cloud and IR reflectivity data for each point.
Is your concern that it doesn't directly tell you what the object is?
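For concreteness, a single sweep is often handled as little more than an N x 4 array (illustrative layout; the exact fields vary by sensor):

```python
import numpy as np

# One lidar sweep: N points, each with x, y, z in metres plus a reflectivity value.
# Note there is no notion of "car" or "pedestrian" here, just points.
sweep = np.array([
    [12.3,  0.4, -1.1, 0.82],
    [12.4,  0.5, -1.1, 0.79],
    [45.0, -3.2,  0.6, 0.10],
])
xyz, reflectivity = sweep[:, :3], sweep[:, 3]
print(xyz.shape, reflectivity.shape)  # (3, 3) (3,)
```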
 
For regular radar yes but I don't think this is true for lidar. Lidar creates a 3D point cloud. You can identify the group of points that match an object and measure the velocity of that group of points to get the true velocity of the object. HD radar can also create a high res 3D point cloud. So you could probably do the same with HD radar. And yes, it uses NN. But everything uses NN.
No, you can't. Or at least not directly. You CAN, with a LOT of NN smarts, deduce the outline of an "object", but that itself is a hard problem. Once you have done that you have a direct velocity measurement only along the axis from the object to the sensor. If you want "true" velocity (that is, direction and speed) you need to examine the motion over many frames. And this is yet again an NN problem. In fact, it's pretty much the SAME NN problem that you have to solve when using camera input, sans the on-axis velocity component.

There are a lot of people here vastly over-estimating what you can get from lidar/radar sensors. Yes, they give extra data, and that can be good, sometimes, but they still need as much NN work before they can be used in any meaningful way, just like cameras. They are NOT a panacea.
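A small sketch of the geometry being described here (made-up numbers): a single range-rate return only sees the component of the object's velocity along the line of sight, so mostly-lateral motion is largely invisible until you track the object across frames.

```python
import numpy as np

# A car crossing in front of the sensor: its true velocity is mostly lateral.
object_position = np.array([30.0, 10.0])   # m, relative to the sensor
true_velocity   = np.array([0.0, -15.0])   # m/s, moving sideways across our path

# A single range-rate (doppler-style) measurement only captures the projection
# of the velocity onto the unit vector from the sensor to the object.
line_of_sight = object_position / np.linalg.norm(object_position)
radial_speed  = np.dot(true_velocity, line_of_sight)

print(f"true speed: {np.linalg.norm(true_velocity):.1f} m/s")          # 15.0 m/s
print(f"measured radial component: {radial_speed:.1f} m/s")            # ~ -4.7 m/s
# The missing lateral component has to be recovered by tracking over time.
```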
 
Read my other posts .. basically I think some people here are claiming lidar magically solves all sorts of problems, skipping over bits that are actually much harder to solve.
FMCW lidar, as I understand it, can return things such as instantaneous radial velocity, and so can agile ToF lidar, just more slowly. Perhaps I'm wrong, so please correct me. My understanding is that most current lidar systems are ToF based and not necessarily agile.
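For reference, the radial-velocity part of FMCW comes straight from the doppler relation (monostatic case; the numbers below are illustrative):

```python
# Monostatic doppler: f_d = 2 * v_r / wavelength, so v_r = f_d * wavelength / 2.
wavelength = 1550e-9     # m, a common FMCW lidar operating wavelength
doppler_shift = 12.9e6   # Hz, an illustrative measured doppler frequency

radial_velocity = doppler_shift * wavelength / 2.0
print(f"radial velocity ~ {radial_velocity:.2f} m/s")  # ~10 m/s along the beam
```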
 
No, you can't. Or at least not directly. You CAN, with a LOT of NN smarts, deduce the outline of an "object", but that itself is a hard problem. Once you have done that you have a direct velocity measurement only along the axis from the object to the sensor. If you want "true" velocity (that is, direction and speed) you need to examine the motion over many frames. And this is yet again an NN problem. In fact, it's pretty much the SAME NN problem that you have to solve when using camera input, sans the on-axis velocity component.

There are a lot of people here vastly over-estimating what you can get from lidar/radar sensors. Yes, they give extra data, and that can be good, sometimes, but they still need as much NN work before they can be used in any meaningful way, just like cameras. They are NOT a panacea.
I don't see why you need a neural net to measure the direction and speed of a moving object from a cloud of points. That can be done with basic math (though maybe a little more advanced than my coding ability).
Waymo went 6 million miles without hitting a curb. Are you saying that LIDAR didn't help them do that?
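The "basic math" version really is short, assuming the hard part (deciding which points belong to the object in each frame, and that it's the same object frame to frame) has already been done:

```python
import numpy as np

def object_velocity(points_t0, points_t1, dt):
    """Velocity from the centroid shift of an already-segmented, already-associated object."""
    return (np.mean(points_t1, axis=0) - np.mean(points_t0, axis=0)) / dt

# Two frames, 0.1 s apart, of lidar points already attributed to the same car.
frame_0 = np.array([[30.0, 5.0, 0.5], [30.5, 5.2, 0.6], [30.2, 4.8, 0.4]])
frame_1 = frame_0 + np.array([1.5, 0.0, 0.0])   # the car moved 1.5 m forward

print(object_velocity(frame_0, frame_1, dt=0.1))  # [15.  0.  0.] m/s
# The arithmetic is trivial; segmentation and frame-to-frame association are the hard parts.
```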
 
Yes, the lidar and radar returns need to be processed to identify objects, and only then can physical properties be attached to those objects. But those are pretty much solved problems, unlike the case of vision, where not only do objects need to be identified, but the techniques for extracting the physics of objects from video feeds are still very much in active development.

Lasers have been used on cars since 1992 to identify lead vehicle physical properties such as lead vehicle velocity. If a 1992 car ECU can handle the task of understanding the returned laser signal and correlating it with a lead car then I'm sure we can do it even better in 2022.
I think one of the hard parts for Tesla is calculating distance to an object by referencing pixel changes in 2D images. You have to admit, it's impressive what they've been able to achieve with just 2D images right now. When I'm driving, or at a stop light, the visualizations are almost always accurate as to the relative position of cars. The biggest issues I see are with predictions: when a car is visible to the camera it shows on my screen in its correct relative position in motion, but once the camera is blocked (the car moved behind something like another car or truck), the visualizations sometimes don't show accurate location data. So the car fades a little and seems to swerve around (like it's changing lanes, etc.) until it's visible again, then it snaps/blinks into the correct location.

I'm sure the same issue occurs with LIDAR - when an object is visible to the light beams, then goes behind something - there must be some predictive method to extrapolate movement?

I guess the same can be said for humans. Our eyes are stereo, building a 3D image of the world as our brain stitches the two 2D images together. However, a person who loses an eye still has some basic depth perception, using visual cues from objects in motion. If the person stands still, and the object is still, it's hard to gauge distance. But if the object moves, or the person moves, they can use cues to estimate distance. The same must be happening for Tesla: since the car is almost always in motion (and when it isn't, the objects around it usually are), it can gauge distances from those cues.
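One of the classic monocular cues is apparent size: if you assume the real-world size of an object, a pinhole-camera relation gives distance directly. The numbers below are illustrative, and this is just one textbook cue, not a claim about Tesla's actual method:

```python
# Pinhole relation: pixel_height = focal_length_px * real_height / distance,
# so distance = focal_length_px * real_height / pixel_height.
focal_length_px = 1400.0   # illustrative focal length expressed in pixels
real_height_m   = 1.5      # assumed height of a typical car, metres
pixel_height    = 70.0     # how tall the car appears in the image, pixels

distance = focal_length_px * real_height_m / pixel_height
print(f"estimated distance ~ {distance:.0f} m")   # ~30 m
# The estimate is only as good as the assumed real-world size; motion parallax
# across frames is the other big cue, and that's where the NNs earn their keep.
```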
 
There are a lot of people here vastly over-estimating what you can get from lidar/radar sensors. Yes, they give extra data, and that can be good, sometimes, but they still need as much NN work before they can be used in any meaningful way, just like cameras. They are NOT a panacea.

Nobody is saying that lidar/radar are a magic bullet. They are just sensors that give you perception data. Whether it's camera, lidar or radar, it's what you do with the data that matters. But lidar/radar are active sensors whereas cameras are passive sensors. They do have the benefit that they can give you very precise direct distance measurements whereas camera vision can only estimate distance indirectly. When doing autonomous driving, you need perception to be 99.99999% reliable. Vision-only cannot do that yet. Having active sensors like lidar/radar can be very beneficial.
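The "direct distance" part is just time of flight; a one-line calculation with an illustrative return time:

```python
SPEED_OF_LIGHT = 299_792_458.0   # m/s

def tof_distance(round_trip_seconds):
    """Range from a time-of-flight return: the pulse goes out and back, so divide by 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(f"{tof_distance(200e-9):.1f} m")  # a 200 ns round trip corresponds to ~30 m
```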
 
Lasers have been used on cars since 1992 to identify lead vehicle physical properties such as lead vehicle velocity. If a 1992 car ECU can handle the task of understanding the returned laser signal and correlating it with a lead car then I'm sure we can do it even better in 2022.
This is a massively different problem. In your example all the system is doing is computing the approach velocity, and yes you can get this (more or less) directly from various measurements (doppler shift or frame-by-frame distance deltas). But computing the lateral velocity of an object is a totally different class of problem. And guess what? It requires solving basically the same hard set of problems that you face when doing so from camera data. That is, object identification, object placement, object persistence, and velocity projection. These are all NN problems and are not "solved" by the sensors any more than the cameras "know" that they are seeing a car.

And if you go down that path, the extra data you are getting from the lidar/radar becomes less critical .. because solving for lateral motion (for which the approach data is more or less useless) also solves for approach motion, and if you are doing that in the NNs, then what is the lidar giving you? And that, ultimately, is Tesla's logic, or at least that's my reading of it.

So why are others still using all these extra sensors? Well, in Waymo's case, I suspect the answer is rather pragmatic. They were an early entry in the autonomous car space, and at that time the hardware cost (in dollars, power needs and physical bulk) of the NN/CPU compute power needed was prohibitive. The radar/lidar data could be interpreted, however, to get some basic driving tasks done. My guess is, Waymo has gone so far down this path they are more or less stuck with it, unless they undertake a total re-write of their stack.
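On the "object persistence and velocity projection" point, a toy alpha-beta tracker shows what the classical, non-NN version of that step looks like (illustrative only, not any particular vendor's stack):

```python
def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
    """Toy 1-D alpha-beta filter: smooths noisy per-frame positions and estimates velocity."""
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict one frame ahead with the current velocity
        residual = z - x_pred        # how far off that prediction was
        x = x_pred + alpha * residual
        v = v + (beta / dt) * residual
        estimates.append((x, v))
    return estimates

# Noisy positions of an object moving at roughly 10 m/s, sampled every 0.1 s.
positions = [0.0, 1.1, 1.9, 3.05, 4.0, 5.1, 5.95, 7.0]
for x, v in alpha_beta_track(positions, dt=0.1):
    print(f"pos {x:5.2f} m   vel {v:5.2f} m/s")   # velocity estimate climbs toward ~10 m/s
```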
 
Nobody is saying that lidar/radar are a magic bullet. They are just sensors that give you perception data. Whether it's camera, lidar or radar, it's what you do with the data that matters. But lidar/radar are active sensors whereas cameras are passive sensors. When doing autonomous driving, you need perception to be 99.99999% reliable. Vision-only cannot do that yet. Having active sensors like lidar/radar can be very beneficial.
I think this all boils down to Elon's vision (pun not intended) that cars can drive themselves on pure vision because humans do it. Humans don't need LIDAR or RADAR or even hearing (though hearing definitely adds benefits). We drive with two high-definition cameras, and a brain that stitches those images together to form a 3D image of the world around us.

The argument becomes: is the AI (the brain) good enough to handle it right now? If not, then we fall back on other inputs, like LIDAR, to help cover inadequacies of the AI processing.

I don't doubt that vision-only can handle the problem, given sufficiently high-def cameras, proper placement of those cameras (so they can see in all directions), and a suitably powerful, well-configured AI to handle the data and make decisions.
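For what it's worth, the textbook relation behind "two cameras give you depth" is a single formula (illustrative numbers; depth error blows up as disparity shrinks, which is why long-range stereo is hard):

```python
# Rectified stereo pair: depth = focal_length_px * baseline / disparity.
focal_length_px = 1400.0   # pixels
baseline_m      = 0.3      # metres between the two cameras (human eyes: ~0.065 m)
disparity_px    = 14.0     # horizontal pixel shift of the same point between images

depth = focal_length_px * baseline_m / disparity_px
print(f"depth ~ {depth:.0f} m")   # ~30 m for these numbers
```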
 
Nobody is saying that lidar/radar are a magic bullet. They are just sensors that give you perception data. Whether it's camera, lidar or radar, it's what you do with the data that matters. But lidar/radar are active sensors whereas cameras are passive sensors. They do have the benefit that they can give you very precise direct distance measurements whereas camera vision can only estimate distance indirectly. When doing autonomous driving, you need perception to be 99.99999% reliable. Vision-only cannot do that yet. Having active sensors like lidar/radar can be very beneficial.
Again, you are making unproven claims. Sure, we need high reliability (which is different from precision btw) though I'm not sure your 99.99999% has any meaning. You keep saying, again and again, that lidar/radar can do xxx, but based on what? You CANNOT point to Waymo, because they use HD maps. So where is the data to back up your assertions? You can't just say "can be very beneficial" and leave it at that .. that's not proof, it's an unsubstantiated claim.
 
I don't see why you need a neural net to measure the direction and speed of a moving object from a cloud of points. That can be done with basic math (though maybe a little more advanced than my coding ability).
Waymo went 6 million miles without hitting a curb. Are you saying that LIDAR didn't help them do that?
How do you determine what constitutes an "object" from the point-cloud data? (One naive approach to this first question is sketched below.)
How do you determine the continuity of that object across time (frame to frame)?
How do you determine the world-space velocity of an object, once you have identified it?
How do you handle object intersections? Or even determine that they exist?
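On the first question, the simplest non-NN answer is Euclidean clustering: group together points that are within some radius of each other. A deliberately naive sketch (real stacks use far more sophisticated segmentation):

```python
import numpy as np

def euclidean_cluster(points, radius=0.7):
    """Naive single-link clustering: points closer than `radius` end up in the same cluster."""
    labels = [-1] * len(points)
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:                      # flood-fill every point reachable within `radius`
            j = stack.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.where(dists < radius)[0]:
                if labels[k] == -1:
                    labels[k] = next_label
                    stack.append(int(k))
        next_label += 1
    return labels

# Two well-separated blobs of points -> two candidate "objects".
cloud = np.array([[10.0, 2.0], [10.3, 2.1], [10.1, 1.8],
                  [25.0, -4.0], [25.2, -4.3]])
print(euclidean_cluster(cloud))   # [0, 0, 0, 1, 1]
```

Whether a cluster is a car, two pedestrians walking together, or a bush is exactly the part this doesn't answer.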
 
This is a massively different problem. In your example all the system is doing is computing the approach velocity, and yes you can get this (more or less) directly from various measurements (doppler shift or frame-by-frame distance deltas). But computing the lateral velocity of an object is a totally different class of problem. And guess what? It requires solving basically the same hard set of problems that you face when doing so from camera data. That is, object identification, object placement, object persistence, and velocity projection. These are all NN problems and are not "solved" by the sensors any more than the cameras "know" that they are seeing a car.

And if you go down that path, the extra data you are getting from the lidar/radar becomes less critical .. because solving for lateral motion (for which the approach data is more or less useless) also solves for approach motion, and if you are doing that in the NNs, then what is the lidar giving you? And that, ultimately, is Tesla's logic, or at least that's my reading of it.

So why are others still using all these extra sensors? Well, in Waymo's case, I suspect the answer is rather pragmatic. They were an early entry in the autonomous car space, and at that time the hardware cost (in dollars, power needs and physical bulk) of the NN/CPU compute power needed was prohibitive. The radar/lidar data could be interpreted, however, to get some basic driving tasks done. My guess is, Waymo has gone so far down this path they are more or less stuck with it, unless they undertake a total re-write of their stack.
I'm pretty sure you're wrong about this. For example, there is something called "joint probabilistic data association" to track objects. Intuitively it makes sense that NNs would not be the optimal way to transform a point cloud into object-tracking data.
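JPDA itself is more involved, but its simplest cousin, greedy nearest-neighbour association with a gate, gives the flavour of classical non-NN tracking (hypothetical track and detection positions):

```python
import numpy as np

def nearest_neighbour_associate(tracks, detections, gate=2.0):
    """Greedy nearest-neighbour data association: match each existing track to the
    closest unused new detection within a distance gate (a far simpler cousin of JPDA)."""
    assignments, used = {}, set()
    for track_id, track_pos in tracks.items():
        best_idx, best_dist = None, gate
        for det_idx, det_pos in enumerate(detections):
            dist = np.linalg.norm(np.asarray(track_pos) - np.asarray(det_pos))
            if dist < best_dist and det_idx not in used:
                best_idx, best_dist = det_idx, dist
        if best_idx is not None:
            assignments[track_id] = best_idx
            used.add(best_idx)
    return assignments

# Tracked objects from the last frame and detections from the new frame.
tracks = {"car_1": (30.0, 5.0), "car_2": (12.0, -3.0)}
detections = [(12.4, -3.1), (31.4, 5.1)]
print(nearest_neighbour_associate(tracks, detections))   # {'car_1': 1, 'car_2': 0}
```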


Again, you are making unproven claims. Sure, we need high reliability (which is different from precision btw) though I'm not sure your 99.99999% has any meaning. You keep saying, again and again, that lidar/radar can do xxx, but based on what? You CANNOT point to Waymo, because they use HD maps. So where is the data to back up your assertions? You can't just say "can be very beneficial" and leave it at that .. that's not proof, it's an unsubstantiated claim.
What sensors did they use to make the HD maps? :p
 
What sensors did they use to make the HD maps?

LIDAR didn't help them do that?

helpfulness of LIDAR.

LIDAR for localization


[attached image]
 
Again, you are making unproven claims. Sure, we need high reliability (which is different from precision btw) though I'm not sure your 99.99999% has any meaning. You keep saying, again and again, that lidar/radar can do xxx, but based on what? You CANNOT point to Waymo, because they use HD maps. So where is the data to back up your assertions? You can't just say "can be very beneficial" and leave it at that .. that's not proof, it's an unsubstantiated claim.

No, I am not making unproven claims. Yes, I can point to Waymo. HD maps have nothing to do with it. Waymo's camera vision, lidar vision and radar vision are more accurate and reliable than Tesla's vision.

Check out the videos in this blog that show Waymo's perception view.


It shows lots of objects with very high accuracy in both position and velocity, as well as classifying objects. It also annotates objects' intent. It is clearly more accurate and reliable than Tesla's vision. I can also point to Waymo's research papers that show how accurate their vision is.

I am not against camera vision. But it is a fact that lidar and radar provide advantages over camera vision. I've spelled out those advantages many times on this forum. To act like lidar/radar don't provide any advantages over camera vision is silly.
 
Very expensive, very high-resolution ones connected to boatloads of computing power.
I'm still unclear on why Waymo and Cruise "don't count" as examples of the helpfulness of LIDAR. I could see that argument if they only used LIDAR for localization on the HD map, but they also use it for objects that are not mapped.