Welcome to Tesla Motors Club

Lidar vs Camera revisited

To my knowledge Lidar doesn't introduce phantom braking events.

Radar has known phantom braking due to a noisy signal, and lack of resolution (especially what Tesla uses).

Vision has known phantom braking with things like shadows. It took Subaru years to evolve their EyeSight system into something that works reasonably well.

My biggest issue with phantom braking for a while now hasn't been due to a sensor, but to other sources, such as nav issues or the car suddenly tracking the car in the lane next to mine for who knows what reason.

Heck, sometimes I'll get phantom phantom braking, meaning the car will flash something like "stopping for signal" on a freeway without ever actually phantom braking. It just warns me that it's highly considering it. :p
ALL new sensor inputs can lead to phantom braking, if you define PB as any braking action that appears, to the human, to have no clear cause. And, in that context, the nav system (maps) are a sensor input. It's never going to go away totally for a couple of reasons: (a) the car will always be more cautious than humans (you can thank the liability lawyers for that) and (b) the car will spot potential dangers that humans miss.

I'm not saying the car should not be improved, of course it should, but PB will never go away totally.
 
I agree that phantom braking on roads shared with humans/animals will never go away completely, and in fact I think it's important for it NOT to go away completely.

Even humans phantom brake occasionally. Sometimes it's a sixth-sense kind of thing where our gut tells us someone is going to come into our lane, or we misjudge something for one reason or another.

The point I was making was that I'm not aware of any weakness in lidar that introduces noise that would itself cause phantom braking, especially if the signal is run through some sort of filtering.

I would be really surprised if any of the L2/L3 vehicles released onto the market with frontal lidar had abnormally high phantom braking rates.

Tesla has abnormally high phantom braking rates. Part of that is simply due to what you're arguing: it has to be cautious, so it can't ignore things like lane intrusions the way an aggressively driving human would. But a lot of it is Tesla simply not caring about their customers' experience.

The vast majority of the phantom braking I experience would likely be solved if there were a way to report these events (like the FSD Snapshot), with engagement from a regional rep in charge of certain roads.

Adding lidar wouldn't magically fix Tesla's phantom braking problem. But it would give them a tool that could greatly improve their ability to reduce phantom braking events by cross-checking pure-vision data against lidar data. They don't even need to actively use the lidar data to determine a vehicle action; they can simply use it to provide training data to improve pure vision, and to improve their ability to identify when pure vision isn't working correctly.
 
I kinda disagree that phantom braking is a necessity in rare situations. If you anticipate a danger and brake, that’s by definition not phantom braking, but necessary braking, no? 😊

I would think of a phantom braking event as one where there was no anticipated danger other than some ghost the software thought it saw in the surroundings. 😊

I would prefer these events to be zero. At best they result in skipped heartbeats and curses; at worst, a cardiac event in an older, vulnerable driver. 🙂
 
The rare situation where humans phantom brake is when you believe a car will swerve into your lane, but it never actually does. You therefore brake when you didn't need to, thus phantom braking. I rarely did this in the past, but after some jerk swerved into me about 2 years ago, I am very cautious in such situations.

As for phantom braking when no other cars are around, that should be eliminated. It might be that you consider only this to be phantom braking; I suspect many will disagree on this. In fact, some have called a 1-2 mph slowdown phantom braking. I'm not intending to argue the definition, only that all people and self-driving cars should slow or brake in certain situations as a precaution, by predicting that something may enter their lane of traffic...
 
I agree. What I am also talking about is when no other car is in the vicinity, it slams the brakes, hard. That’s the issue here.
 
Do you prefer occasional phantom braking vs. an occasional crash?

Basically false positive vs false negative.
I think we have a disagreement about what constitutes a phantom braking scenario.

Me : when there are no vehicles around, car is cruising, and slams the brakes hard, no reason.

You: when the car anticipates another car that’s about to swerve into you, the car brakes hard.

To me, the second situation cannot be classified as ‘phantom braking’, but accident avoidance. The first is obviously a false positive, and the incidence of such events must be reduced as close to zero as possible.
 
I would frame LiDAR and cameras into two different formats:

Get distance and then detect objects (LiDAR)
Detect objects and then calculate distance (cameras)

Having a distance calculation is imperative to self-driving. LiDAR is highly accurate, almost needlessly so. Cameras must wait for object detection (or whatever algorithm is used), and that detection must be accurate to get an accurate distance calculation; this is where phantom braking can creep in.

Cameras and deep learning can estimate distances. To what degree is debatable, but it should be sufficient for self-driving. False detections happen a lot in 2D object detection, including especially weird ones (adversarial examples come to mind). This is probably why there are phantom braking events. If we can get deep learning less overtrained on its data and much more generalized, we can have a better system. All of Tesla's data collection and auto-labeling is probably helping. Reducing false positives without ever introducing false negatives is tough, but if conditions are right, it should be good. There are many ways to mitigate false positives if they do pop up.

LiDAR can be used to train the 2D system to estimate LiDAR distances. That's the best way to get ground truth for training vision-only Teslas.
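The "LiDAR as ground truth" idea can be made concrete with a minimal sketch. Assuming a simple pinhole camera model with made-up intrinsics (the FX/FY/CX/CY values below are illustrative, not any real Tesla calibration), LiDAR returns already transformed into the camera frame can be projected onto the image to form sparse per-pixel depth labels that a vision network could be trained to reproduce:

```python
# Hypothetical camera intrinsics, for illustration only.
FX, FY = 800.0, 800.0    # focal lengths in pixels
CX, CY = 640.0, 360.0    # principal point
WIDTH, HEIGHT = 1280, 720

def project_lidar_to_depth_labels(points):
    """Map (x, y, z) LiDAR points (camera frame, z forward)
    to {(u, v): depth} training labels via pinhole projection."""
    labels = {}
    for x, y, z in points:
        if z <= 0:
            continue                       # behind the camera plane
        u = int(round(FX * x / z + CX))    # pinhole projection
        v = int(round(FY * y / z + CY))
        if 0 <= u < WIDTH and 0 <= v < HEIGHT:
            # Keep the nearest return when two points hit one pixel.
            if (u, v) not in labels or z < labels[(u, v)]:
                labels[(u, v)] = z
    return labels

# A point 20 m ahead, 1 m left and 0.5 m below the camera axis.
print(project_lidar_to_depth_labels([(-1.0, 0.5, 20.0)]))
# -> {(600, 380): 20.0}
```

The resulting sparse labels are exactly the kind of supervision signal a monocular depth network needs, without the lidar ever being consulted at drive time.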
 
We can come back to the second case later, which is more planner related.

Just looking at what you are saying, it's still a case of false positives vs false negatives. You can't easily drive both down to zero. They are erring on the side of minimizing crashes. If they could drive both down to zero, they would have perfect vision and perception. People claim LiDAR gives perfect vision easily, compared to Tesla having to slog it out with cameras only.

Regarding planning issues, there are some cases most of us think of as phantom braking:
- a car coming in the opposite lane, and FSD brakes suddenly from 50 to 30
- a car turning from the left onto the oncoming lane, and FSD brakes heavily
- the car suddenly notices a person walking on the sidewalk and brakes heavily


PS: to make it clearer, if you decrease false positives, it tends to increase false negatives. So phantom braking the way you define it is also part of accident avoidance. It's just that you are talking about NN/perception-related problems.
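The false-positive/false-negative tension can be shown with a toy sketch. Assuming a hypothetical detector that emits a confidence score per frame (all numbers below are made up), tuning the braking threshold only trades one error type for the other:

```python
# Toy data: (ground truth, detector confidence).
# truth 1 = real obstacle, 0 = nothing there. Values are illustrative.
samples = [
    (1, 0.95), (1, 0.80), (1, 0.55), (1, 0.40),
    (0, 0.60), (0, 0.35), (0, 0.20), (0, 0.05),
]

def count_errors(threshold):
    """Return (false_positives, false_negatives) when the car
    brakes for any score >= threshold."""
    fp = sum(1 for truth, s in samples if truth == 0 and s >= threshold)
    fn = sum(1 for truth, s in samples if truth == 1 and s < threshold)
    return fp, fn

# A cautious (low) threshold brakes for ghosts; a lax (high) one
# misses real obstacles. Tuning alone can't drive both to zero.
print(count_errors(0.30))  # -> (2, 0): phantom braking, no misses
print(count_errors(0.70))  # -> (0, 2): no phantom braking, missed obstacles
```

Only a better perception stack, one that pushes real and ghost scores further apart, shrinks both error counts at once; threshold tuning just slides along the tradeoff.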
 
...The vast majority of the phantom braking I experience would likely be solved if there was a way to report them (like the FSD Snapshot) with engagement from a regional rep in charge of certain roads. ...
I agree with this pretty strongly. Tesla is easily a big enough company now to have a staff (of perhaps several dozen initially, expanding as methods are refined) assigned the responsibility of resolving local mapping problems and adding map-attribute data that would help FSD avoid repeatable misbehavior.

One could argue, as Karpathy alluded to, that this kind of mapping knowledge will come naturally as a consequence of fleet data gathering, so that a well-trained and large NN would eventually contain an intrinsic map of the entire Tesla driving domain, potentially the entire world. However I think it's a better policy not to wait for that to happen in some mythical NN future, but to help it along significantly with targeted assistance by local experts and informed by local users. It's pretty clear that Tesla uses some combination of map sources now, and this seems quite unlikely to become entirely replaced by NN-trained knowledge for at least several years.

It's also arguable, of course, that there are plenty of mapping companies and databases available to Tesla now, and they should simply do a better job of utilizing those. I wouldn't have any problem with that either, but it doesn't seem to be working very well so far. Tesla should make a decision: either use available mapping more competently or pursue the vertical integration approach and do more of it themselves.

And just to be clear, this is not about so-called HD Maps. There have been many discussions here where people confuse "need for correct maps" with "need for HD Maps". I don't believe that better use of maps to inform navigation and FSD decisions requires centimeter-level maps or over-reliance on maps.
 

Unfortunately, the only way to have correct maps is with HD maps. I'm sorry, I know that's hard to swallow, but it's the truth.
And when I say HD map, I don't mean a 3D lidar point-cloud map.

For a map to be correct, it needs to have a high refresh rate.
For a map to be refreshed automatically, it needs to be localizable.
That means it has to have enough detail to let you link different elements of the road together, and also enough detail to spot what has changed, whether it's a huge change or a very small change.
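A rough sketch of the change-detection part of this argument, with made-up element IDs and attributes (none of this reflects any real map schema): once map elements are localizable and stably identified, spotting what changed reduces to diffing the stored map against a fresh fleet observation of the same area.

```python
# Hypothetical road-element snapshots keyed by a stable element id.
# All ids and attribute values are illustrative only.
old_map = {
    "lane_42": {"speed_limit": 45, "lanes": 2},
    "stop_7":  {"type": "stop_sign"},
}
fleet_observation = {
    "lane_42":  {"speed_limit": 35, "lanes": 2},  # limit changed
    "signal_9": {"type": "traffic_light"},         # new element
}

def diff_map(old, new):
    """Detect added, removed, and changed elements between the
    stored map and a localized fleet observation."""
    added   = {k: v for k, v in new.items() if k not in old}
    removed = {k: v for k, v in old.items() if k not in new}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return added, removed, changed

added, removed, changed = diff_map(old_map, fleet_observation)
print(sorted(added))    # -> ['signal_9']
print(sorted(removed))  # -> ['stop_7']
print(sorted(changed))  # -> ['lane_42']
```

The hard part in practice is the localization step that makes the ids stable, which is exactly the level of detail the post above is arguing for.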
 
You should be sorry for writing such garbage.

That’s like saying the only way any TV video can be correct is for it to be 3D/VR.
If what I said was such garbage, then Tesla would have correct maps already, instead of having parts of its maps outdated by months or years.
Unfortunately, a crowdsourced, detailed map that is automatically updated is the only way to have correct maps at scale.
 
The thing that bugs me about relying on Vidar or Pseudo Lidar is that there is just a little too much magic involved. When you are heading directly at an unresolvable object (like a solid white wall), the technology is useless. Lidar or Radar would still say "Object Ahead".

But to act on it, that LIDAR data has to be fused into the camera data, and no neural net is going to be trained on merging a point cloud into a sea of white. Maybe you’ll get lucky and some base policy against just the LIDAR will take corrective action, but odds are it will just be AEB.

Besides, by the time you can no longer see the edges of an object, it’s way too late to put on the brakes. And if you can see the edges, the camera-based pseudo-LIDAR should be able to recognize that there’s an object there.


If the situation demands life-critical sensing, I would say use the best sensor, don't Pseudo it with fancy computerized assumption software.

Actually, I’d say the opposite. In the absence of fusion, LIDAR by itself would create a lot of false positive detection events for things that don’t matter, and as soon as you start fusing sensors of different types, you’re basically back at square one with no advantages, but a far more complicated data pipeline that has to fuse two different types of data and process them in very different ways. From a safety perspective, simpler systems tend to be safer, if all else is equal, and less code and fewer neural nets are likely to be simpler.
 
Regarding planning issues, there are some cases most of us think of as phantom braking:
- a car coming in the opposite lane, and FSD brakes suddenly from 50 to 30
- a car turning from the left onto the oncoming lane, and FSD brakes heavily
- the car suddenly notices a person walking on the sidewalk and brakes heavily

None of those things are caused by accident avoidance, per se. Rather they are caused by the car failing to track the vector (speed and direction) of the other vehicle to know that it isn’t a real risk, and/or choosing the least effective mitigation if the other vehicle actually is a real risk.

For a car coming towards you, unless it is crossing the center line, the vehicle should be assumed to be a non-issue. If it is, the correct mitigation is to move away from the center line, and apply brakes only if that looks like it won’t be enough.

For a pedestrian, slight braking is warranted if the path of the pedestrian is potentially problematic, but again, the main correction should be steering unless there is a car coming, in which case extra braking may be warranted. But even then, slowing below about 35 mph makes no sense until you’re close, because the goal should be making a collision non-fatal, not avoiding it absolutely, unless doing so is safe. The 99.99% of the time when a collision doesn’t occur doesn’t warrant the high safety risk from extreme braking just to make the 0.01% case slightly safer.

And the same is true for cars crossing. Most of the time, by the time the car even decides to react, the other car is out of the lane. This is a performance problem coupled with a failure to recognize how long it will take for the vehicle to clear the intersection. That’s a bug or design flaw, not a safety feature.
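The "track the vector" point can be sketched with a toy check (the function name and all numbers are illustrative, not anyone's real planner): extrapolate the other car's lateral motion and react only if it would actually cross into our lane within a short horizon.

```python
def crosses_lane_within(lateral_offset_m, lateral_velocity_mps, horizon_s):
    """Toy intrusion check.
    lateral_offset_m: distance from our lane boundary (positive = away).
    lateral_velocity_mps: lateral speed (negative = moving toward us).
    Returns True if a constant-velocity extrapolation crosses the
    boundary within horizon_s seconds."""
    if lateral_velocity_mps >= 0:
        return False                 # moving away or parallel: no risk
    time_to_cross = lateral_offset_m / -lateral_velocity_mps
    return time_to_cross <= horizon_s

# Oncoming car 1.5 m outside our lane, drifting 0.2 m/s away: ignore.
print(crosses_lane_within(1.5, 0.2, 3.0))   # -> False
# Same car drifting 1.0 m/s toward us: crosses in 1.5 s, so react.
print(crosses_lane_within(1.5, -1.0, 3.0))  # -> True
```

A planner that gates braking on something like this would coast past the harmless oncoming car instead of dropping from 50 to 30, which is exactly the failure the post above describes.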