
LIDAR (out of main)


We can debate whether Tesla *may* do better someday. I certainly hope so. But it certainly does not today.
Lol, please justify your claim that it "certainly does not today". Are you conflating Autopilot lane-keeping with the FSD suite, which has not been reviewed independently? Indeed, the only 3rd-party reports about FSD came out of the rides offered on Autonomy Investor Day, and those were far from technical reviews.

For that matter, please also provide a link to any independent review of other autonomous driving solutions. TIA. I don't see any "certainty" in this space, only competitive commercial claims.

EDIT: There's a reason Waymo chose Phoenix for its trial: the lack of rain hides the shortcomings of depending upon LIDAR (whereas Tesla's combination of RADAR and computer vision has already been demonstrated to be effective).


 
We can debate whether Tesla *may* do better someday. I certainly hope so. But it certainly does not today.

We can debate whether Waymo’s market is more limited. That may be true. But all driving services are regional, so that may not matter. Waymo will serve areas where it can.

But I fail to see how anyone can say that Waymo’s approach won’t work when it is already running a driverless taxi service in a portion of Phoenix today. How can it be a nightmare of any kind when it’s already working? That’s like seeing a bumblebee fly and declaring it’s not possible.

Waymo is not running a driverless taxi service. It has human drivers in the vast majority of its cars, and the few rides without a human driver have a human monitoring remotely from a desk, ready to intervene. This is not scalable, even within the geofenced, extensively 3D-mapped and simulated, flat and forever-sunny environment it is restricted to. Operating costs are also far higher than Uber's (even excluding safety-driver costs), partly because of huge car depreciation from extremely expensive hardware. The ride experience is also subpar and has little above novelty value.

Tesla's dependence on billions of miles of experience is not a workaround for lacking lidar (useful data is only a tiny percentage of those miles, so Tesla uses an extremely smart in-car data filtering system to collect the right data). Data is not a substitute for lidar: lidar solves the easy problems more easily, while data is needed to solve the hard problems whether or not you have lidar. Billions of miles of real driving experience are a prerequisite either way. Having lidar just makes it impossible to get those billions of miles, because you cannot build a lidar hardware suite cheap enough to install in consumer vehicles. Without enough data, your strategy will hit a roadblock at the point where you don't have enough experience to know what problems you need to solve next.
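
To make the "in-car data filtering" point concrete: Tesla hasn't published how its trigger system works, but a minimal, hypothetical sketch of the idea (names and threshold invented for illustration) is just "keep the moments where the human disagrees with the network":

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_jpeg: bytes        # compressed image from one camera
    model_steering: float     # network's predicted steering angle (radians)
    driver_steering: float    # what the human driver actually did

def worth_uploading(frame: Frame, threshold: float = 0.15) -> bool:
    """Hypothetical in-car filter: a frame is interesting only when the
    driver's action disagrees with the network's prediction, i.e. when
    the fleet can teach the model something it doesn't already know."""
    return abs(frame.model_steering - frame.driver_steering) > threshold

frame = Frame(b"", model_steering=0.02, driver_steering=0.30)
print(worth_uploading(frame))  # True: the human corrected the car
# Everything else is discarded on the car, which is how "billions of
# miles" can shrink to a small, information-dense training set.
```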
 
Waymo is not running a driverless taxi service. It has human drivers in the vast majority of its cars, and the few rides without a human driver have a human monitoring remotely from a desk, ready to intervene. This is not scalable, even within the geofenced, extensively 3D-mapped and simulated, flat and forever-sunny environment it is restricted to. Operating costs are also far higher than Uber's (even excluding safety-driver costs), partly because of huge car depreciation from extremely expensive hardware. The ride experience is also subpar and has little above novelty value.

Tesla's dependence on billions of miles of experience is not a workaround for lacking lidar (useful data is only a tiny percentage of those miles, so Tesla uses an extremely smart in-car data filtering system to collect the right data). Data is not a substitute for lidar: lidar solves the easy problems more easily, while data is needed to solve the hard problems whether or not you have lidar. Billions of miles of real driving experience are a prerequisite either way. Having lidar just makes it impossible to get those billions of miles, because you cannot build a lidar hardware suite cheap enough to install in consumer vehicles. Without enough data, your strategy will hit a roadblock at the point where you don't have enough experience to know what problems you need to solve next.

well put...
 
EDIT: There's a reason Waymo chose Phoenix for its trial: the lack of rain hides the shortcomings of depending upon LIDAR (whereas Tesla's combination of RADAR and computer vision has already been demonstrated to be effective).



If it doesn't like rain, it's not going to like this dust storm:
[Attached photo IMG_3775.JPG: a dust storm]
 
Actually, I'd think that LIDAR wouldn't mind that. It's like fog: it doesn't have the same density as rain, and the LIDAR could probably ignore it easily.

That would be false.

LIDAR's ability to image depends on the light from the laser making a round trip from the car to the target and back again. Vision-based cameras only need the light to make a one-way trip from the object. In other words, the distance over which lidar provides reliable returns degrades much faster than vision in inclement conditions, whether those conditions are due to fog, dust, snowflakes or smoke, while vision-based systems degrade in line with the way human drivers deal with poor visibility. That is, in low-visibility conditions all cars need to slow down, whether human- or camera-driven, but cars that depend upon lidar will need to drop out sooner.

Lidar could theoretically overcome that by increasing the power output of the lasers, but they are already at the regulated limit for eye-safe brightness, which applies everywhere humans without laser-safety goggles might be found.
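
You can put rough numbers on the one-way vs. round-trip difference with the Beer-Lambert law. This is a toy calculation: the extinction coefficient below is an assumed stand-in for "moderate fog", not a measured figure for any particular sensor.

```python
import math

def transmission(alpha_per_m: float, distance_m: float, round_trip: bool = False) -> float:
    """Fraction of photons surviving a path through scattering air
    (Beer-Lambert law). Lidar pays the exponent twice: out and back."""
    path = 2 * distance_m if round_trip else distance_m
    return math.exp(-alpha_per_m * path)

alpha = 0.02  # assumed extinction coefficient, 1/m (roughly moderate fog)
for d in (25, 50, 100):
    cam = transmission(alpha, d)                   # one-way trip to the camera
    ldr = transmission(alpha, d, round_trip=True)  # out to the target and back
    print(f"{d:>3} m   camera {cam:.3f}   lidar {ldr:.3f}")

# The lidar signal is always the *square* of the camera signal, so its
# usable range collapses faster as visibility worsens.
```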
 
Waymo is not running a driverless taxi service. It has human drivers in the vast majority of its cars, and the few rides without a human driver have a human monitoring remotely from a desk, ready to intervene. This is not scalable, even within the geofenced, extensively 3D-mapped and simulated, flat and forever-sunny environment it is restricted to. Operating costs are also far higher than Uber's (even excluding safety-driver costs), partly because of huge car depreciation from extremely expensive hardware. The ride experience is also subpar and has little above novelty value.

Tesla's dependence on billions of miles of experience is not a workaround for lacking lidar (useful data is only a tiny percentage of those miles, so Tesla uses an extremely smart in-car data filtering system to collect the right data). Data is not a substitute for lidar: lidar solves the easy problems more easily, while data is needed to solve the hard problems whether or not you have lidar. Billions of miles of real driving experience are a prerequisite either way. Having lidar just makes it impossible to get those billions of miles, because you cannot build a lidar hardware suite cheap enough to install in consumer vehicles. Without enough data, your strategy will hit a roadblock at the point where you don't have enough experience to know what problems you need to solve next.

Wrt Tesla miles, it’s still billions of miles whether it’s to capture edge cases or count sheep on the hills.

As for Waymo, how do you *know* that Waymo will not scale as they improve their software and service? Lidar gets them up and running quickly, but nothing suggests they won’t use more ML and NNs to further integrate data from Lidar and maps with their vision system. Waymo is not relying exclusively on high res maps and Lidar.

Waymo *is* servicing a specific market today. A specific part of town is a market. Airport shuttles are a market. So are fixed bus routes. If a bag or can on the ground is intolerable, they wouldn’t be able to do what they’re doing now.

Waymo also ordered 80,000 more cars to expand their fleet over 100x, so their 1M miles a month will become 100M miles monthly. Acquiring enough miles is already in the works. They’ll reduce hw costs, and collect their billions of miles by using human supervision paid at least partly through fares.

Waymo critics can argue that their approach is only good for 1% or 5% or 20% today, but no one can say what it will be tomorrow. Remote monitoring or not, they have a real world application. The only debate is how much they can improve it.
 
Traffic lights which emit their own photons? Sure. Otherwise, vision systems rely on photons that first travel through 10s of kms of atmosphere, several km of which can be occluded by the same fog/rain/etc. before bouncing off some object then traveling to the car's camera. LIDAR photons travel a much shorter distance.

In principle, with vision, all you care about is what the photon was last emitted from. It doesn’t matter at all whether it originally came from the sun, a street light, your headlights or the explosion of the LiDAR-based self driving car down the street that couldn’t see in the fog. So long as there’s enough light hitting the object you want to see, and it doesn’t have a visible-light cloaking device, you can see it.

With LiDAR, of course, it absolutely must be that photon.
 
In principle, with vision, all you care about is what the photon was last emitted from. It doesn’t matter at all whether it originally came from the sun, a street light, your headlights or the explosion of the LiDAR-based self driving car down the street that couldn’t see in the fog. So long as there’s enough light hitting the object you want to see, and it doesn’t have a visible-light cloaking device, you can see it.

With LiDAR, of course, it absolutely must be that photon.
LIDAR and RADAR are, of course, reflective technologies. You emit photons that drop in energy according to the inverse square law. Then they hit something, which absorbs some of the energy, and the thing they hit also acts as if it was a point source of the reflected photons, so they drop again according to the inverse square law. In other words, what you receive back is attenuated by a factor depending on the absorption of the target, and according to the 4th power of the distance! This is a lot! I hope no-one is using IR-absorbing paint.
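
Spelled out, that reasoning is (a sketch assuming both the outgoing pulse and the reflection spread spherically; whether the outbound assumption holds for a collimated laser is debated further down the thread):

```latex
P_{\mathrm{rx}} \;\propto\; P_{\mathrm{tx}} \cdot \frac{1}{R^{2}} \cdot \rho \cdot \frac{1}{R^{2}}
\;=\; \frac{\rho \, P_{\mathrm{tx}}}{R^{4}}
```

where rho is the target's reflectivity, so under those assumptions doubling the range cuts the return by 2^4 = 16x.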
 
Musk says Robotaxis are worth 200k each, but depreciating 10k of lidar destroys the economics?

Show me a $10k 360° LIDAR system that isn't going to kill a fast motorcycle driver that (lawfully) overtakes the FSD car from behind in rain while the car is doing an unprotected left turn...

LIDAR is also reflective, and thus vulnerable to objects covered in naturally light absorbing or highly reflecting materials that are otherwise readily visible during the day to human vision or are illuminated at night.

Traffic lights which emit their own photons? Sure. Otherwise, vision systems rely on photons that first travel through 10s of kms of atmosphere, several km of which can be occluded by the same fog/rain/etc. before bouncing off some object then traveling to the car's camera. LIDAR photons travel a much shorter distance.

This is a highly disingenuous argument: while the photons from the sun travel millions of miles, they also are abundant during the day most of the time, and are equivalent to a diffuse light source close to the target.

LIDAR photons do have to travel from car to object to car again in every circumstance, which makes LIDAR much more weather dependent during the day than human vision - which is the benchmark to compare against.

Of course, the above is only true half the time. At night automotive vision systems mostly rely on headlight photons which travel the same round-trip that causes you grief. And headlights only point forward while main LIDARs see 360. Oh, and not all LIDARs use visible light.

How many non-illuminated targets are going to overtake a car at night? The 360° vision advantage of LIDAR at night has comparatively little relevance, while its poor vision in light absorbing air (rain, snow, dust, etc.) disadvantage makes LIDAR worse than human vision and can kill.

To go with the motorcycle example: at night a lawfully driving motorcycle will be spectacularly illuminated by its own headlights, making it straightforward to detect for human drivers and camera-based FSD systems. LIDAR systems have to illuminate it with their own source of photons, which have several orders of magnitude lower intensity and are also double-distance attenuated and reflection attenuated, having to travel from the LIDAR to the motorcycle and reflect from it exactly back towards the LIDAR.

Also, LIDARs using infrared lasers are more dangerous, because moist air attenuates infrared photons more heavily than visible light.

Additionally, most Waymo LIDAR experience is with ~$75,000 class mechanical LIDARs from Velodyne - which have adequate resolution.

All of the "cheap" solid state LIDAR sensors I've seen proposed so far (very few of which are in mass production) have limited field of view in the 60°-90° range, and their price scales up with field of view. 4x 90° LIDAR units quadruple the cost. They also have significantly lower angular resolution than Velodyne's mechanical LIDARs.

But LIDAR is not just expensive, it is basically also a "p*ss in your pants in the freezing cold for warmth" kind of technology on the software project management level: it's not just a limited shortcut, but its presence is (socially) crowding out the real solution within your self-driving team.

(Or do you really think a FSD project lead who is a LIDAR expert is going to eliminate LIDAR from the project?)

Or as Elon put it: LIDAR is a local maximum that makes it harder to find the absolute maximum.

Your understanding of the various disadvantages and limitations of LIDAR seems to be very limited, and I agree with @ReflexFunds that current LIDAR technologies are an expensive trap.
 
Having lidar just makes it impossible to get those billions of miles, because you cannot build a lidar hardware suite cheap enough to install in consumer vehicles. Without enough data, your strategy will hit a roadblock at the point where you don't have enough experience to know what problems you need to solve next.
Now that we are talking about FSD - here is how I see the FSD race.

There are clearly two camps: the lidar camp, led by Waymo, and the vision camp, led by Tesla. Each is the clear leader within its approach (Mobileye seems to have changed course).

Waymo is close to Level 4 within their geofenced area. They may be expanding now, as they recently applied for and got permission in CA. Tesla is Level 2+ ... but in much of the US & EU.

Waymo has to scale geographically. Tesla has to scale up technically. The question is who can get "there" first. Let us say the target is L4 in the top 200 US cities.

Waymo says they can potentially expand to a new market by training their models and testing in the new market within a couple of months. They have the financial means to do this expansion in parallel - say, a dozen cities at a time. So they could expand to the top 200 US cities in a couple of years. Tesla can first get to City NOA and slowly add all the "edge" cases that Waymo already handles (see the list in my Feature Complete thread) over the next 2 years.

So, I think it is still anyone's game.

As to level 5 - who knows. Yes, more data is theoretically needed to handle the long tail of edge cases - but how long that will actually take is uncertain. As I wrote before, theoretically there is no difference between theory and practice, but in practice there is.
 
Traffic lights which emit their own photons? Sure. Otherwise, vision systems rely on photons that first travel through 10s of kms of atmosphere, several km of which can be occluded by the same fog/rain/etc. before bouncing off some object then traveling to the car's camera. LIDAR photons travel a much shorter distance.

Please, I know what I'm talking about. A camera works with a subject that is illuminated by diffuse ambient light. As long as the subject is illuminated to a suitable level, the light reflected by the subject only needs to make a one-way trip to the sensor. Lidar, as you have pointed out, makes a round trip, like a headlight on a dark night with no streetlights or other ambient light. Whenever helpful ambient light is present, the distance at which lidar is useful declines much more rapidly than a camera's under most adverse seeing conditions.

Of course, the above is only true half the time. At night automotive vision systems mostly rely on headlight photons which travel the same round-trip that causes you grief. And headlights only point forward while main LIDARs see 360.

That's why highways are often illuminated and, when they are not and there is minimal ambient light from the moon, other vehicles or buildings - in other words, a black night - people drive more slowly so they are not out-driving their headlights. Particularly if there is snow, fog, smoke or dust, because these adverse conditions often reflect more light back to the car than the objects of interest; this is true for both lidar and headlights. But headlights often get around this problem if there is sufficient ambient light provided by the sun, streetlights, a full moon, lighted signs and buildings, or other cars' headlights. LIDAR cannot make use of streetlights, ambient light or the light from other cars' headlights; it relies solely on the light emitted by its laser.

People who still argue that LIDAR doesn't degrade more quickly than cameras under adverse conditions are generally trying to say that Tesla's approach is inferior. This viewpoint is popular with Tesla short-sellers.
 
OT

LIDAR and RADAR are, of course, reflective technologies. You emit photons that drop in energy according to the inverse square law. Then they hit something, which absorbs some of the energy, and the thing they hit also acts as if it was a point source of the reflected photons, so they drop again according to the inverse square law. In other words, what you receive back is attenuated by a factor depending on the absorption of the target, and according to the 4th power of the distance! This is a lot! I hope no-one is using IR-absorbing paint.
Erm. The only 4th-power law I can think of is from astronomy, T^4. Inverse square law is not fourth power; the attenuation for a round trip would be inverse square, then one quarter (double the distance, squared; it's proportional). For example, at power 100 and distance 10, illumination would be power 1, but on a return it would be an effective distance traveled of 20, or 1/400 instead of 1/100.

But all of that is irrelevant for lasers. What you are talking about are things like gravity or light that radiate uniformly, where the inverse square law can be understood from the geometry of an expanding sphere. But a laser does not illuminate like a light bulb; it is tightly focused, so attenuation from distance traveled is far less than what you are suggesting. I don't know lasers, much less the specific ones used in LIDAR, or their characteristics in atmosphere. The frequency of the laser matters, as does atmospheric condition.

So the problem with LIDAR isn't that it has to travel twice the distance*, but it does have problems with attenuation and absorption in the atmosphere. And, bottom line, using normal cameras is going to be the best fit because, as has been pointed out, vehicles are already required to provide their own light source for the benefit of human drivers. Cameras are not as good as human eyes, even poor ones, as their resolution is poor, the video stream is compressed resulting in artifacts, and CCDs simply don't have the dynamic range of human eyes. You can try to get closer by using multiple exposures (e.g., you could have three cameras mounted in tandem running at different exposure levels) and using that to generate an HDR frame, but that is processing and power and really... let's just say I hope it isn't a practical requirement.

The advantage of computer vision is that all of those video streams are processed simultaneously. No need to turn the head left, then right, and quickly back left for a check before pulling out and hoping right hasn't changed. As long as the basic vision is sufficient the increased coverage makes up for a lot of human failing.

[* edit: okay, it is a problem, but it isn't attenuation from the spread of the laser and certainly not inverse square law. Attenuation/absorption from the atmosphere is another matter, which will vary with distance.]
 
Your understanding of the various disadvantages and limitations of LIDAR seems to be very limited, and I agree with @ReflexFunds that current LIDAR technologies are an expensive trap.
The real problem with Lidar is
- it is used to localize the car within centimeters of the HD map, and that is used for driving (see the scan-matching sketch below). This makes L5 impossible; L4 in a limited number of cities is still viable.
- of course, you can't get a lot of data, and if it turns out you need billions of miles of data from consumer cars, Lidar hinders that kind of deployment

If Lidar is used as just another sensor whose input is fused with vision, there really shouldn't be any issue. But that is not how it is used today.
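
For readers unfamiliar with how lidar localization against an HD map works: it is typically some variant of scan matching, e.g. ICP. Below is a minimal 2D sketch of the standard algorithm - not any company's actual code - that aligns a fresh scan to prior map points.

```python
import numpy as np

def icp_2d(scan: np.ndarray, map_pts: np.ndarray, iters: int = 20):
    """Minimal 2D iterative-closest-point: find rotation R and
    translation t such that scan @ R.T + t lines up with map_pts.
    scan: (N, 2) points from the car; map_pts: (M, 2) prior map points."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = scan @ R.T + t
        # pair each scan point with its nearest map point
        d2 = ((moved[:, None, :] - map_pts[None, :, :]) ** 2).sum(axis=2)
        nn = map_pts[d2.argmin(axis=1)]
        # best rigid transform for these pairs (Kabsch, via SVD)
        mu_s, mu_n = moved.mean(axis=0), nn.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (nn - mu_n))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:   # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_n - dR @ mu_s
        R, t = dR @ R, dR @ t + dt  # compose with the running estimate
    return R, t

# Quick check: recover a known small pose offset.
rng = np.random.default_rng(0)
map_pts = rng.uniform(-10, 10, size=(300, 2))
c, s = np.cos(0.03), np.sin(0.03)
A, b = np.array([[c, -s], [s, c]]), np.array([0.2, -0.1])
scan = (map_pts - b) @ A            # the same points seen from the car
R, t = icp_2d(scan, map_pts)
print(np.round(R, 3), np.round(t, 3))  # ~A and ~b
```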
 
Cameras are not as good as human eyes, even poor ones, as their resolution is poor, the video stream is compressed resulting in artifacts, and CCDs simply don't have the dynamic range of human eyes.

If you really believe that, you've never driven with my father-in-law! He would sacrifice three goats and a cat to be able to see as well as a camera in a Tesla (even an early Tesla).
 
If you really believe that, you've never driven with my father-in-law! He would sacrifice three goats and a cat to be able to see as well as a camera in a Tesla (even an early Tesla).
Okay, you can be legally blind, and that isn't as good as a camera. And the crossover happens before then. But I suggest you don't have enough appreciation for the human eye. We have an amazing vision system and, unless it is severely degraded, cameras simply do not match it. Not even expensive cameras. A good photo artist can present amazing work, but that is a credit to their skill. I gave specific ways (e.g., contrast) in which cameras simply do not compete, and your counterargument is that you can find examples where human vision is worse? Sure, you got me on hyperbole. But even poor human vision is normally going to exceed that of a camera in terms of resolution and contrast.

Look at the moon and observe whatever detail you can on it. Then take a photo with a camera. It's going to be just a bit of mostly undifferentiated white, too small to have any detail at all. Cheat, and use zoom. Observe how there's a lack of detail (due to resolution limits) and blurring (due to relative motion). Sure, driving doesn't involve celestial observation.

Find a tunnel you can see through (looking under a bridge is often the easiest to find). Depending on the width and length of the tunnel the effect will be weak or strong, so an underpass may not give a strong indicator, but driving through the tunnels in Pennsylvania is an example of this with vehicles. For demonstration purposes just use a tunnel. Stand in front and look in and through, observing how you can see both the tunnel interior and what is past it. Now take a photo and observe how you can set the exposure to (poorly) see what is in the tunnel or to see what is past it. But you are not going to get both if there is any narrowness or length to the tunnel.

This last is a classic example for using multiple exposures to composite a high-dynamic-range image, but once you get used to seeing the limits you can see where they exist even when they're weaker. Photographers are well aware of the limits of their equipment and know how to work around them (e.g., HDR photography) or cheat (you can't create something from nothing, but you can enhance contrast in areas to give that impression).
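
Since multi-exposure HDR keeps coming up, here is a minimal sketch of the idea, assuming a linear sensor response: weight each exposure's mid-tones heavily (clipped and noise-floor pixels get almost no weight), then average the radiance estimates.

```python
import numpy as np

def fuse_exposures(frames, exposure_times):
    """Merge differently exposed frames of one scene into a single
    high-dynamic-range radiance estimate (minimal sketch; assumes a
    linear sensor with pixel values normalized to [0, 1])."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        # Trust mid-tones; near-black (noise floor) and near-white
        # (clipped) pixels carry almost no information.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)   # this frame's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-6)

# Toy "tunnel" scene: dark interior pixels next to a bright exit.
scene = np.array([0.001, 0.01, 0.5, 5.0])   # true relative radiance
times = [1.0, 0.1, 0.01]                    # long, medium, short exposures
frames = [np.clip(scene * t, 0.0, 1.0) for t in times]
print(fuse_exposures(frames, times))        # ~[0.001, 0.01, 0.5, 5.0]
```

The long exposure recovers the tunnel interior, the short one recovers the bright exit that the long one clipped, and the weighting stitches them together - which is exactly what a single exposure cannot do.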

Don't underrate the human eye, it is amazing.

And don't forget my other point, which is that the real advantage of computer vision with respect to vehicles is the simultaneous viewing of image streams from multiple sources. One thing I omitted is that each time you turn you have to "refocus" or "resettle" your vision to meaningfully see. How long this takes increases with age: when I was younger I could snap back and forth, quickly apprehend everything, and pull out. Now I have to be rather more deliberate and the problem with this is the more deliberate you are the greater the latency and the greater the likelihood that what you saw has changed (that is, it was clear but isn't any longer). And that isn't even getting into blind spots while driving in traffic.
 
LIDAR and RADAR are, of course, reflective technologies. You emit photons that drop in energy according to the inverse square law. Then they hit something, which absorbs some of the energy, and the thing they hit also acts as if it was a point source of the reflected photons, so they drop again according to the inverse square law. In other words, what you receive back is attenuated by a factor depending on the absorption of the target, and according to the 4th power of the distance! This is a lot! I hope no-one is using IR-absorbing paint.

Laser does not have the same spreading issue as radar, due to being highly collimated. Collimated beam - Wikipedia

Erm. The only 4th-power law I can think of is from astronomy, T^4. Inverse square law is not fourth power; the attenuation for a round trip would be inverse square, then one quarter (double the distance, squared; it's proportional). For example, at power 100 and distance 10, illumination would be power 1, but on a return it would be an effective distance traveled of 20, or 1/400 instead of 1/100.

Treating the round trip as doubled distance doesn't give the correct value, because the return leg is not the original source continuing to spread: the illuminated object re-radiates as a new source, so each leg contributes its own inverse-square factor (for a single spreading source, yes, each doubling of distance gives 1/4 the power).

As the radar beam propagates, the energy per unit area falls off as the square of distance (the beam illuminates a patch of the expanding sphere, an area proportional to 4*pi*R^2 times the beam's solid-angle fraction). The object being illuminated can only reflect the energy that hits it, and that reflection spreads out the same way, so the returned energy at the sensor is proportional to 1/distance^2 of a value that already fell off as 1/distance^2, i.e. 1/distance^4.

Radar Basics - The Radar Equation
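
Putting the point-target radar equation into code makes the 1/R^4 scaling concrete; the numbers below are arbitrary placeholders, not any real sensor's specs:

```python
import math

def received_power(p_tx: float, gain: float, aperture_m2: float,
                   sigma_m2: float, r_m: float) -> float:
    """Point-target radar equation, free space, no atmospheric loss:
    P_rx = P_tx * G * sigma * A_e / ((4*pi)^2 * R^4)."""
    return p_tx * gain * sigma_m2 * aperture_m2 / ((4 * math.pi) ** 2 * r_m ** 4)

for r in (25.0, 50.0, 100.0):
    print(f"{r:>5} m   {received_power(100.0, 1.0, 0.01, 1.0, r):.3e} W")
# Each doubling of range cuts the return by 2**4 = 16x - the
# "inverse square twice" argument quoted above.
```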
 
Don't underrate the human eye, it is amazing.
But the point is not about comparing a camera to human vision. It is about comparing driving with human vision to car driving with its multiple cameras. Various types of processing (and, later, potentially cameras that can capture non-visible frequencies) can make it easier for the car to see the world and drive.

A trivial example is how telescopes - or microscopes - can see better than the human eye.