Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Things even FSD won't do

It is not like this hasn’t been discussed before on the internet:

That's how I'm aware of the LIDAR disadvantage in inclement weather. The LIDAR pulse has to travel through the moisture twice (vs. only once for a camera-based system). If humans can drive in it, so can camera-based systems. The same cannot be said for LIDAR, which has major problems as visibility is restricted. That is inherent in the technology. It can be mitigated, but only partially.
 

Every sensor type has its advantages — that's why many manufacturers elect to use several. Nobody does Lidar only; very few do camera only. Lidar does very reliable range finding with impressively few false negatives. It also works in complete darkness.

I know the "if humans can do it, camera-based can do it" argument. For me there are two issues with it: 1) human vision is tied to a much more powerful computer, for now, and 2) sensor fusion across a range of sensor types allows for superhuman sight.

The question isn't so much whether vision is better than Lidar or vice-versa. The question is: is 360-degree vision+Lidar+radar around the car better than 360-degree vision alone (with a front radar)?
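To give the sensor-fusion point a concrete toy form (all numbers are made up, not real sensor specs): combining two independent range estimates by inverse-variance weighting, which is the core of a Kalman measurement update, always yields a fused estimate with lower variance than either sensor alone. A minimal Python sketch:

```python
# Sketch of why fusing independent range sensors can beat any single one:
# inverse-variance weighting gives a fused estimate whose variance is
# strictly lower than either input's. Illustrative numbers only.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two independent estimates by inverse-variance weighting."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_est, fused_var

# Hypothetical: camera says 50.0 m (variance 4.0), radar says 52.0 m (variance 1.0).
est, var = fuse(50.0, 4.0, 52.0, 1.0)
print(est, var)  # 51.6 0.8 -- fused variance is below both inputs
```

That fused variance being lower than either sensor's own is the formal sense in which fusion can be "superhuman": the combination knows the range better than any one sensor does.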
 

Obviously, the more sensors you have, the better it could be. But that assumes unlimited processing power. We all know there is not unlimited processing power. The more sensor data you have to integrate into the FSD model, the more difficult the development job becomes.

Musk says they can do it with vision (and radar, which humans don't even have) and I tend to believe him over an uncredentialed "Internet expert". No offense to you personally, but Musk is extremely intelligent and not the scam artist the shorts make him out to be. Scam artists don't run wildly profitable multi-billion-dollar rocket companies that dock with the International Space Station and land their booster rockets on drone ships in the middle of the ocean for re-use. People said it couldn't be done and that Musk would go bankrupt. Instead, he has billions of dollars in contracts with international corporations and federal agencies to do what skeptics said was impossible, and they are wildly profitable. NASA alone is realizing savings of over 90% by using SpaceX instead of its own technologies. And NASA and SpaceX have agreements in place that protect SpaceX's proprietary technologies from disclosure to third parties.

He's not a man I would bet against, especially not because an Internet armchair expert said it wouldn't work without LIDAR. That's actually laughable.
 
The ambulance siren noise is just to get your attention. The cameras on a self-driving car never stop paying attention.;)
Yeah, and those cameras may or may not see it. I personally think it wouldn't react as fast as a person who hears it before seeing it.

Thanks for explaining to me that the cameras don't stop looking, I had NO idea! [/end sarcasm]:p
 
Yeah, and those cameras may or may not see it. I personally think it wouldn't react as fast as a person who hears it before seeing it.

My brother was a paramedic. He was shocked how many people just plodded along, clueless that there was an ambulance with lights/sirens blaring away right behind them. I guarantee that by the time FSD is approved it will reliably pull over for ambulances. It's not a difficult scenario (for a computer, that is; humans have their own set of "issues").

Thanks for explaining to me that the cameras don't stop looking, I had NO idea! [/end sarcasm]:p

I'm speechless. You are the one who thought the ambulance might not be noticed in time!:rolleyes:
 
Yeah. It just keels over dead in fog. :)
True. And heavy rain or snow. Contrary to claims otherwise, it simply doesn't work reliably in heavy weather of any kind. Sure, it can filter out light rain or snow, but only if it's light.

Certainly there are circumstances where Lidar fails, but I think you are exaggerating the difference. In many cases the camera fails too; it certainly can't see through fog, for instance. Both camera and Lidar are visual sensors, with certain unique benefits and downsides to each. The question is who can implement the best selection and mix of technologies for the best results. The jury is still very much out on what that is.

I mean, the radar on a Tesla fails with a slight sheet of snow on the bumper and takes the whole Autopilot down with it. But that shouldn't be taken as an indictment of radars in general.

With pretty much everyone else it is a mix of 360-degree vision, 360-degree Lidar and 360-degree radar... with Tesla it is pretty much vision only where they are headed (with possible help from one radar and ultrasonics in some scenarios), if they stick to their public plan.
 
Certainly there are circumstances where Lidar fails, but I think you are exaggerating the difference.

How am I exaggerating? LIDAR fails when humans can still see with enough acuity to continue unimpeded. There is a reason for that. LIDAR uses laser light (typically near-infrared) that originates from the car. It's a rapidly spinning laser beam smaller in diameter than a raindrop. If it gets deflected on its way to the target, it's not going to get reflected back from an object 200 meters down the road. Since it's spinning rapidly, it only gets one chance for every point it's trying to map. Cameras, by contrast, use diffuse ambient light. Even in the rain or snow, it's everywhere. If a human can detect an object down the road visually, so can a camera. The same cannot be said for LIDAR. I'm not exaggerating the difference; it's a very real difference. Any self-driving system depending upon LIDAR is going to "go blind" in inclement weather far before any system depending upon visual acuity, because the light needs to make a two-way trip with LIDAR.
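The two-way-trip penalty can be made concrete with a toy Beer-Lambert attenuation model. The extinction coefficient below is purely illustrative, not a measured value for any real rain or fog:

```python
import math

# Toy Beer-Lambert model of the "two-way trip" point above: the lidar
# return crosses the weather twice, so its surviving signal fraction is
# the SQUARE of what a camera (one-way, ambient light) retains.
def transmission(alpha_per_m: float, distance_m: float, passes: int) -> float:
    """Fraction of light surviving `passes` trips through `distance_m` of weather."""
    return math.exp(-alpha_per_m * distance_m * passes)

alpha = 0.01   # hypothetical extinction coefficient in rain, 1/m
d = 200.0      # range to target, m

camera = transmission(alpha, d, passes=1)  # ambient light crosses the weather once
lidar = transmission(alpha, d, passes=2)   # emitted pulse must go out and come back

print(f"camera signal fraction: {camera:.3f}")  # ~0.135
print(f"lidar  signal fraction: {lidar:.3f}")   # ~0.018, the square of the camera's
```

Under this (simplified) model, whatever fraction of signal the weather leaves a camera, the lidar return keeps only that fraction squared, which is the round-trip disadvantage in a single line of math.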

In many cases the camera fails too; it certainly can't see through fog, for instance. Both camera and Lidar are visual sensors, with certain unique benefits and downsides to each. The question is who can implement the best selection and mix of technologies for the best results. The jury is still very much out on what that is.

Fog is an issue for all systems depending upon visible light, including humans. If the fog is too thick to allow unimpeded visual driving, the driver will need to slow down regardless of whether it's a human or a computer. But if the computer is depending upon LIDAR, that point will be reached sooner than with a computer depending upon the same visual acuity a human needs to drive safely. Again, because LIDAR depends upon the light making a round trip, it is disadvantaged.

I mean, the radar on a Tesla fails with a slight sheet of snow on the bumper and takes the whole Autopilot down with it. But that shouldn't be taken as an indictment of radars in general.

True, the problem of the radar being blocked by a light coating of snow or ice on the bumper can be solved with a heated spot on the bumper. It's something the Model 3 lacks but could easily be added to upgrade the snow/sleet/ice capabilities once the system is ready to drive through winter storms.

With pretty much everyone else it is a mix of 360-degree vision, 360-degree Lidar and 360-degree radar... with Tesla it is pretty much vision only where they are headed (with possible help from one radar and ultrasonics in some scenarios), if they stick to their public plan.

I'm confident Tesla will stick to their public plan unless they run into a problem implementing the system as conceived. And I think the system, as conceived, was well thought out. It should not be necessary to have capabilities that go beyond the best human drivers to drive more safely than the average human. Simply removing human weaknesses (distractions, delayed reaction times and poor judgment) will make machines far safer than humans without resorting to sensing mechanisms humans don't even possess.
 

All of this is certainly true, but again it is the end result that counts: where the eventual line of "this works and this doesn't" is drawn.

We can certainly describe the characteristics of Lidar as you did, but that doesn't mean — and the momentum of the autonomous-car industry does not suggest — it would necessarily be a reason why Lidar could not offer tangible benefits compared to a scenario without Lidar. That is mainly my point. Personally, I am especially opposed to the implication that a car using Lidar as part of its system would be susceptible to those issues, given that nobody intends to use Lidar only. So really it comes down to: can you make a better system with Lidar than without, not Lidar vs. vision...

We don’t know yet where things will settle but it is definitely interesting to watch. All of these main technologies have their inherent benefits and weaknesses, no doubt.

True, the problem of the radar being blocked by a light coating of snow or ice on the bumper can be solved with a heated spot on the bumper. It's something the Model 3 lacks but could easily be added to upgrade the snow/sleet/ice capabilities once the system is ready to drive through winter storms.

This issue exists on Model X and AP2 (or was it facelift?) Model S as well.
 

I totally agree. And I'm opposed to any implication that a system not using LIDAR is not going to be adequate. I know you didn't say that, but many have, and it's a nonsensical argument.

Certainly, LIDAR could add to situational awareness in conditions where it's not crippled by heavy weather, but this presumes enough processing power to make use of the extra data. It also requires rooftop placement to achieve 360-degree sensing, which necessarily increases the Cd of the vehicle and decreases range. Certainly, systems can be streamlined well beyond the ridiculous rigs that currently exist, but even a streamlined 360-degree LIDAR will impact range and efficiency to a small degree.

Personally, I think the best software will win the race to practical self-driving regardless of which sensing systems are employed. Having said that, does anyone know if two of the front-mounted cameras in the Model 3 are suitable for use as binocular (stereo) vision? I do think that is potentially a very useful tool for FSD. Certainly, people with only one working eye can drive safely, but the lack of two eyes is a distinct disadvantage.
 
@StealthP3D I agree Lidar isn’t mandatory in any way, I don’t see why any single sensor type or mix necessarily would be. We don’t really know what it actually takes to get there... We will know when someone makes it what mix got there first. :)

One minor disagreement, though: when using Lidar, I don't think rooftop Lidar is required, nor is it really where the industry is heading (at least in consumer cars). Similar coverage can be had from several side-mounted Lidars.
 

Yes, I suppose several LIDARs could substitute for one 360-degree LIDAR on the roof. That will raise the cost, as LIDARs cost several times what cameras do.
 

I find the cost of sensors rather irrelevant, given what a paradigm shift a truly autonomous car will be, but yes, the higher cost of Lidars certainly means their proliferation in driver's aids will be slower than that of cameras.

But if someone sold consumers a self-driving car tomorrow that cost double what a similar "manual" car costs and had a gigantic dome on the roof, it would still sell like crazy... that's how big a deal it would be.

Indeed, safety tends to be a big deal for people too. If it is your kids being taken to school automatically, whoever can create the safest solution tends to get the customers...

It is a bit like the BEV, right? The cost argument was used against the battery-electric car for a long time: too expensive. It was still worth it in the early days, and the price goes down all the time...
 
  • Avoid a pothole
  • Shift a few inches to not ride on a road seam
  • Let the fellow signaling for a lane change in
  • Invite someone to go next
  • Avoid a tire tread in the lane
  • Give a little more space on the side with the oversized flatbed trailer
  • Avoid driving directly between two other vehicles (Remember "Leave yourself an out?")
What else? Did I get any of these wrong?

Mojo

AP doesn’t currently make me sexy... it only makes me feel sexy. It’s a current software limitation I guess.
 
Certainly there are circumstances where Lidar fails, but I think you are exaggerating the difference. In many cases the camera fails too; it certainly can't see through fog, for instance.

If the fog is thick enough, yes, that's true. Of course, up until you lose vision completely, you can usually compensate for fog well enough to detect objects crudely by just cranking up the contrast. That won't get you the text of road signs (e.g. speed limits), but most of the time, that behavior isn't safety-critical, and when it is, the signs also typically have a distinct shape, so reading the word STOP or ARRÊT or whatever usually isn't critical to detection or interpretation.
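The "cranking up the contrast" idea is basically a linear stretch of the narrow brightness band that fog leaves you with. A minimal sketch on synthetic data (the percentile choices are arbitrary; a real pipeline would be more careful):

```python
import numpy as np

# Minimal sketch of contrast recovery on a fog-washed image: a linear
# percentile stretch that maps the 2nd..98th percentile of brightness
# back onto the full 0..255 range. Synthetic data, not a real photo.
def contrast_stretch(img: np.ndarray, lo_pct: float = 2, hi_pct: float = 98) -> np.ndarray:
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = (img.astype(float) - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Fog compresses the scene into a narrow band of bright, low-contrast values.
rng = np.random.default_rng(0)
foggy = rng.normal(loc=200, scale=5, size=(64, 64)).clip(0, 255)

restored = contrast_stretch(foggy)
print(np.ptp(foggy), np.ptp(restored))  # restored spans the full 0..255 range
```

A real system would work on actual camera frames and handle color, noise, and local contrast; this only shows the core mapping that makes crude object detection possible in haze.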

With LIDAR, you have to use statistical techniques to guess whether a return is real or not, and that only goes so far unless you have an insane amount of data (read "scanning the same scene for several minutes"). So the jury is still out on whether any of those techniques will turn out to be practical in real-world use.
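As a toy illustration of that kind of statistical filtering (every number here is invented): a real target returns a consistent range scan after scan, while weather clutter scatters randomly, so requiring a range to recur across scans rejects most spurious returns.

```python
import random

# Toy sketch of statistical return filtering: a return is treated as
# "real" only if the same range shows up consistently across several
# scans, which spurious rain/snow returns rarely do. Invented numbers.
random.seed(1)

TRUE_RANGE = 80.0  # a real target 80 m away
N_SCANS = 10

def one_scan():
    """One simulated scan: the real return plus one random weather return."""
    real = TRUE_RANGE + random.gauss(0, 0.1)  # real target, small jitter
    clutter = random.uniform(1.0, 150.0)      # rain-droplet return, anywhere
    return [real, clutter]

def consistent_returns(scans, tolerance=0.5, min_hits=7):
    """Ranges that recur (within tolerance) in at least min_hits scans."""
    candidates = [r for scan in scans for r in scan]
    accepted = []
    for c in candidates:
        hits = sum(any(abs(r - c) < tolerance for r in scan) for scan in scans)
        if hits >= min_hits and not any(abs(c - a) < tolerance for a in accepted):
            accepted.append(c)
    return accepted

scans = [one_scan() for _ in range(N_SCANS)]
targets = consistent_returns(scans)
print(targets)  # surviving ranges cluster around the real 80 m target
```

The catch, as the paragraph above notes, is exactly the cost of this trick: you need many scans of the same scene before the statistics firm up, which is time a moving car doesn't have.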

And you can do the same sort of analysis with a large number of camera photos taken over several minutes, too, which is to say that even if those techniques do prove to be viable, they still won't necessarily give LIDAR any particular advantage over cameras.

And for daylight fog, the fact that the light source (the sun) is diffuse should give cameras a large advantage over LIDAR, because you aren't generating a point source of light that gets reflected in nonuniform ways. At night, because headlights are point sources, they could theoretically be almost as much of a problem as LIDAR, though changing the beam angle (fog lights) can significantly reduce that problem.