Autonomous Car Progress

LIDAR doesn't provide redundancy. It provides contradictory data that must be reconciled with visual data, which adds complexity that is at least as likely to reduce safety as it is to increase it. Yes, right now, most (but not all) solutions do use LIDAR, but I think there's a desire to move off of LIDAR even for companies that use it currently, as soon as it is practical to do so. Nobody wants LIDAR. It's a workaround until computer vision reaches a level where it is unnecessary.

I have not seen any indication that companies like Waymo or Cruise want to ditch lidar as soon as camera vision is good enough.

Lidar does provide redundancy in many cases because lidar can do many of the same tasks if camera vision fails. For example, lidar can detect a car, measure distance to the car, and measure velocity of the car with extreme accuracy. So if camera vision is unable to detect the car, the lidar can do it. There are also cases that camera vision can't handle well, that lidar can handle well, like detecting a pedestrian wearing dark clothing at night. So lidar will provide redundancy in many cases. In fact, in many cases, lidar will be more accurate than camera vision. Hence why lidar is a good redundant sensor.

Also, lidar and camera vision have different failure modes. For example, camera vision will fail in zero ambient light where lidar will not. So the sensors are complementary. I don't see lidar as a stopgap until we "solve" vision; I see lidar as complementary to camera vision.
 
Lidar does provide redundancy in many cases because lidar can do many of the same tasks if camera vision fails. For example, lidar can detect a car, measure distance to the car, and measure velocity of the car with extreme accuracy. So if camera vision is unable to detect the car, the lidar can do it.

If camera vision cannot detect the car, the vehicle really should not be in motion. And the situations where camera vision is likely to be obstructed (e.g. rain, fog, etc.) are particularly bad for LIDAR, too. They have similar failure modes because they operate using similar wavelengths.

So really, assuming camera vision works well enough, you might as well just have twice as many cameras and be done with it.


There are also cases that camera vision can't handle well, that lidar can handle well, like detecting a pedestrian wearing dark clothing at night. So lidar will provide redundancy in many cases. In fact, in many cases, lidar will be more accurate than camera vision. Hence why lidar is a good redundant sensor.

That's the thing, camera vision should be able to easily handle that situation. Cameras can easily detect features that are almost invisible to the human eye, particularly if you're taking advantage of multiple frames to reduce the impact of sensor noise, and/or using machine learning to process the images.
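
As a rough illustration of the multi-frame point (a minimal sketch, not anything Tesla-specific): averaging N noisy frames of a static scene cuts sensor noise by roughly the square root of N, which is one reason a camera pipeline can pull features out of near-darkness.

import numpy as np

rng = np.random.default_rng(0)
true_scene = np.full((480, 640), 0.02)   # very dim, static scene
noise_sigma = 0.05                       # per-frame sensor noise (std dev)

def averaged_frame(n_frames):
    # Simulate n_frames noisy exposures of the same scene and average them.
    frames = true_scene + rng.normal(0.0, noise_sigma, (n_frames,) + true_scene.shape)
    return frames.mean(axis=0)

for n in (1, 4, 16, 64):
    residual = averaged_frame(n) - true_scene
    print(f"{n:3d} frames -> residual noise std {residual.std():.4f}")
# Residual noise falls roughly as 1/sqrt(n): ~0.050, ~0.025, ~0.0125, ~0.006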

Also, you're ignoring the bit depth difference. The human eye can't even achieve 24-bit color perception, i.e. your eyes provide just shy of eight bits per color channel, contrast-wise — maybe slightly more in the green channel in the middle of your eyes, blue channel in peripheral vision, whatever. The cameras in a Tesla can do 12 bits natively per color channel, or up to 20 bits per color channel in HDR mode (which Tesla probably does not use). So they provide anywhere from 16 to 4096 times as much information as your eyes. This translates to much, much better night vision than your eyes can manage, all else being equal.

And in terms of range from brightest to darkest, the camera wins, too. The human eye has a static contrast ratio of about 100:1. The Aptina part in a Tesla has a contrast ratio of 115 decibels, or about 562,000:1. This means the cameras can distinguish dark details that your eye would perceive as uniformly black.
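
For what it's worth, those ratios are easy to check: the "16 to 4096 times" figure above counts distinguishable levels per channel, and the 115 dB number converts to a linear ratio with 20*log10. A quick back-of-envelope in Python:

import math

def levels(bits):
    # Distinguishable levels per color channel at a given bit depth
    return 2 ** bits

print(levels(12) / levels(8))    # 16.0   -> 12-bit vs 8-bit
print(levels(20) / levels(8))    # 4096.0 -> 20-bit HDR vs 8-bit

def db_to_ratio(db):
    # Sensor dynamic range in dB uses 20 * log10(ratio)
    return 10 ** (db / 20)

print(round(db_to_ratio(115)))   # 562341 -> roughly 562,000:1
print(20 * math.log10(100))      # 40.0   -> the eye's ~100:1 static ratio in dB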

The problem is, most folks look at the crappy, low-quality images that the dashcam produces, and they assume that this 8-bit-per-channel, highly compressed video is comparable to what the computer vision systems are taking as input. I'm reasonably certain that this is not the case.

Also, lidar and camera vision have different failure modes. For example, camera vision will fail in zero ambient light where lidar will not. So the sensors are complementary. I don't see lidar as a stopgap until we "solve" vision; I see lidar as complementary to camera vision.
Why are you driving with zero light? This is an automobile, not a stealth aircraft.

Your car has headlights. If they fail, your camera doesn't work. LIDAR has a laser. If that fails, LIDAR doesn't work. It's exactly the same failure mode, just with radically different, more expensive, more complicated, and likely more failure-prone optics.
 
If camera vision cannot detect the car, the vehicle really should not be in motion. And the situations where camera vision is likely to be obstructed (e.g. rain, fog, etc.) are particularly bad for LIDAR, too. They have similar failure modes because they operate using similar wavelengths.

Camera - Struggles in low-light conditions and direct sunlight
Lidar - Excels in low-light conditions and direct sunlight

Different failure modes.

That's the thing, camera vision should be able to easily handle that situation. Cameras can easily detect features that are almost invisible to the human eye, particularly if you're taking advantage of multiple frames to reduce the impact of sensor noise, and/or using machine learning to process the images.

Also, you're ignoring the bit depth difference. The human eye can't even achieve 24-bit color perception, i.e. your eyes provide just shy of eight bits per color channel, contrast-wise — maybe slightly more in the green channel in the middle of your eyes, blue channel in peripheral vision, whatever. The cameras in a Tesla can do 12 bits natively per color channel, or up to 20 bits per color channel in HDR mode (which Tesla probably does not use). So they provide anywhere from 16 to 4096 times as much information as your eyes. This translates to much, much better night vision than your eyes can manage, all else being equal.

And in terms of range from brightest to darkest, the camera wins, too. The human eye has a static contrast ratio of about 100:1. The Aptina part in a Tesla has a contrast ratio of 115 decibels, or about 562,000:1. This means the cameras can distinguish dark details that your eye would perceive as uniformly black.

The problem is, most folks look at the crappy, low-quality images that the dashcam produces, and they assume that this 8-bit-per-channel, highly compressed video is comparable to what the computer vision systems are taking as input. I'm reasonably certain that this is not the case.

The camera Tesla uses has a 115 dB dynamic range, while the human eye has a dynamic contrast ratio of about 1,000,000:1, or 120 dB.
But the most important thing is resolution. The Tesla camera has 1.2 megapixels and can't even read a speed limit sign 100 meters away in broad daylight.
The human eye, on the other hand, has 576 megapixels.

Your car has headlights. If they fail, your camera doesn't work. LIDAR has a laser. If that fails, LIDAR doesn't work. It's exactly the same failure mode, just with radically different, more expensive, more complicated, and likely more failure-prone optics.

The whole point is that the probability of them failing at the same time is extremely low. I have had my lights burn out on me a couple of times while driving over the years. You are claiming that my lidar would immediately go out at the exact same time.

It's funny that people like you claim there's no redundancy or backup here. You say things like: if the camera says there's no pedestrian and the lidar/radar says there is, which one should you believe?

This is like saying: why have two guards at the outpost, what if one guard sees someone move in the bushes for a moment and the other guard didn't, which one should you believe? It's absurd, because you investigate whatever either guard sees.

But let's keep going. People like you claim that the FSD Computer is redundant because when one fails, the other can keep going, or when one gives wrong data, the correct one will be used.

But wait, if one chip says there's a pedestrian and another says there isn't, which one should we believe? Which one is the correct one? Why have two? That's stupid, unneeded complexity.

It's not yet illegal to think. Stop letting Elon think for you.
 
Image sensors exist that work well in direct sunlight. One design I've seen shuts down pixels that are overloaded by the Sun.
And as most of you know, cameras can be optimized for low light. Bigger sensor, bigger lens, lower frame rates, lower resolution, no color.
 
Image sensors exist that work well in direct sunlight. One design I've seen shuts down pixels that are overloaded by the Sun.

I'm sure there are a number of more advanced cameras intended for machine vision that Tesla can use. I think image processing is more the way to think about the path forward. That image doesn't need to be photographic from a human perspective. I don't see why lidar can't be combined with a more conventional camera image and presented to a NN. Ideally each frame contains the most useful information possible.

I assume the current $10K price has two purposes. 1) Significant upgrades to HW4 for FSD buyers, perhaps including new sensors, and 2) Limit the number of HW3 cars requiring upgrades.

$10K pricing is brilliant. It allows a substantial upgrade budget for HW4 while encouraging true believers that Tesla must be close to FSD.

I doubt HW4 does robotaxi either. But kicking the FSD can down the road is working pretty darn well. And except for the robotaxi foolishness the car software is darn impressive.
 
Camera - Struggles in low-light conditions and direct sunlight
Lidar - Excels in low-light conditions and direct sunlight

Different failure modes.



The camera Tesla uses has a 115 dB dynamic range, while the human eye has a dynamic contrast ratio of about 1,000,000:1, or 120 dB.
But the most important thing is resolution. The Tesla camera has 1.2 megapixels and can't even read a speed limit sign 100 meters away in broad daylight.
The human eye, on the other hand, has 576 megapixels.

The camera sensors being compared in those images (Sony IMX324 and IMX224) are not what Tesla uses, nor are the lens and FOV the same. The filter being used is also different: the IMX224 uses a conventional RGGB filter as far as I can find, and the IMX324 (and Tesla's Aptina AR0132) uses RCCC (which gives better spatial resolution at the cost of color resolution). These factors will result in different effective resolutions.

Humans only need 20/40 (6/12) vision to drive on the road. That's the equivalent of being able to read a license plate from 20 m away. I did some calculations based on the Snellen chart and the fact that the smallest local road signs have letters 4 inches (101.6 mm) tall, and worked out that it's equivalent to being able to read a road sign from 115 ft (35 m) away.
How far must you be able to see ahead when driving?
https://www.teachengineering.org/co...man/cub_human_lesson06_activity1_eyechart.pdf

Edit: Also looked at DOT standards for speed limit signs (24"x30" minimum) and looked up images; the "SPEED LIMIT" portion has 4-inch-tall letters (so the reading distance there is the same), while the numbers are about 10 inches tall. So the number portion is legible at 287 ft (87 m).
Chapter 2B - MUTCD 2009 Edition - FHWA

Can't find Japanese road signs, but the EU's look similar: 300 mm minimum size, roughly 120 mm numeral height. Works out to roughly 135 ft (41 m).
https://ec.europa.eu/growth/tools-d...tion=search.detail&year=2015&num=659&dLang=EN
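
A quick script reproducing those distances (my own back-of-envelope version of the same Snellen math: 20/40 vision resolves a letter that subtends about 10 arcminutes, and the letter heights are the ones quoted above):

import math

ARCMIN = math.pi / (180 * 60)        # one arcminute in radians
LETTER_ANGLE_20_40 = 10 * ARCMIN     # 20/40: letter must subtend ~10 arcminutes

def legible_distance_m(letter_height_m):
    # Distance at which a letter of this height subtends 10 arcminutes
    return letter_height_m / math.tan(LETTER_ANGLE_20_40)

for label, height_m in [("4 in letters", 0.1016),
                        ("10 in numerals", 0.254),
                        ("120 mm EU numerals", 0.120)]:
    d = legible_distance_m(height_m)
    print(f"{label}: {d:.0f} m ({d * 3.281:.0f} ft)")
# Prints ~35 m, ~87 m, ~41 m -- close to the 115 ft, 287 ft, and 135 ft figures above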
 
Just to clarify how this works, I recall seeing that the computer waits for the next frame (1/60 sec later) and sees if they match.
For a dual redundancy system, it doesn't make decisions based on the two computers agreeing or disagreeing, as there is no way to know which one is right when they disagree (only that something is likely wrong). I believe verygreen has said that after activation, the second redundant one simply acts as a live fallback (i.e. if the first computer fails, the second one can immediately take over).

Triple-mode redundancy is what is required to actually make decisions, as you can then have a majority vote.
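
For anyone curious what that difference looks like in practice, here's a toy sketch (nothing to do with Tesla's actual firmware): a dual system can only fail over when a channel is known dead, while a triple system can out-vote one faulty channel.

from collections import Counter

def dual_redundant(primary, backup, primary_alive=True):
    # Dual redundancy: on disagreement there is no way to tell who is right,
    # so the backup is only used when the primary is known to have failed.
    return primary if primary_alive else backup

def triple_vote(a, b, c):
    # Triple-mode redundancy: a majority vote masks one faulty channel.
    value, votes = Counter([a, b, c]).most_common(1)[0]
    if votes >= 2:
        return value
    raise RuntimeError("no majority: all three channels disagree")

print(dual_redundant("pedestrian", "clear"))             # pedestrian (primary trusted)
print(triple_vote("pedestrian", "pedestrian", "clear"))  # pedestrian (faulty channel out-voted)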
 
Triple-mode redundancy is what is required to actually make decisions, as you can then have a majority vote.
Trouble is that the voting mechanism has no redundancy. There is still a possible single point of failure.

Many Boeing aeroplanes give the autopilot a completely separate control surface with a completely separate actuator, etc. These flaps are small enough that the pilot can counteract them with much larger, manually controlled flaps.

We have something similar in Tesla cars, because the driver can use a simple mechanism, bypassing any actuators, to steer and brake.

But obviously this is only useful as long as there is a human driver available. Perhaps one could install two autopilots in cars, a sophisticated one for normal use and a simple one that only watches out for danger and has the power to overwhelm the other one if things go wrong.
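
That "simple watchdog with override authority" idea is essentially a doer/checker split. A minimal sketch of the arbitration logic, with entirely hypothetical interfaces (not any real autopilot API):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    steering: float   # radians
    braking: float    # 0..1

def sophisticated_planner(scene) -> Command:
    # Placeholder for the full-featured autopilot.
    return Command(steering=scene.get("steer", 0.0), braking=0.0)

def safety_monitor(scene) -> Optional[Command]:
    # Simple, independently implemented check: if anything is too close,
    # override with a full-brake command; otherwise stay silent.
    if scene.get("closest_obstacle_m", float("inf")) < 5.0:
        return Command(steering=0.0, braking=1.0)
    return None

def arbitrate(scene) -> Command:
    # The simple monitor wins whenever it speaks up.
    override = safety_monitor(scene)
    return override if override is not None else sophisticated_planner(scene)

print(arbitrate({"steer": 0.1, "closest_obstacle_m": 50}))  # planner's command
print(arbitrate({"steer": 0.1, "closest_obstacle_m": 2}))   # monitor overrides with full braking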
 
Image sensors exist that work well in direct sunlight. One design I've seen shuts down pixels that are overloaded by the Sun.
And as most of you know, cameras can be optimized for low light. Bigger sensor, bigger lens, lower frame rates, lower resolution, no color.
And other SDC companies are using them, while Tesla is stuck with an old 1.2 MP camera.
 
For a dual redundancy system, it doesn't make decisions based on the two computers agreeing or disagreeing, as there is no way to know which one is right when they disagree (only that something is likely wrong). I believe verygreen has said that after activation, the second redundant one simply acts as a live fallback (i.e. if the first computer fails, the second one can immediately take over).

Triple-mode redundancy is what is required to actually make decisions, as you can then have a majority vote.
Again, same problem. How would you know the first one failed? You're applying the same flawed disengagement logic. It doesn't fly.
 
Not sure this is the appropriate thread (feel free to delete or move if needed), but tomorrow is the two-year anniversary of Tesla's Autonomy Day, and I'm struck by how few of their claims/predictions have come true in that time, both as they relate to FSD and to the competitors who would "for sure" have to dump LIDAR.
There aren’t one million Tesla robotaxis on the road at the moment?!
 
Interested in more details about predictions. I'm not seeing any LIDAR company being successful. I would classify Tesla as being successful because they are charging $10K.

Tesla CEO Elon Musk said that the company should have robotaxis on the roads in 2020.

“I feel very confident predicting autonomous robotaxis for Tesla next year,” Musk said on stage at the Tesla Autonomy Investor Day in Palo Alto, California. They won’t be “in all jurisdictions, because we won’t have regulatory approval everywhere, but I am confident we will have at least regulatory approval somewhere, literally next year,” he said.

 
Camera - Struggles in low-light conditions and direct sunlight
Lidar - Excels in low-light conditions and direct sunlight

Different failure modes.



The camera Tesla uses has a 115 dB dynamic range, while the human eye has a dynamic contrast ratio of about 1,000,000:1, or 120 dB.
But the most important thing is resolution. The Tesla camera has 1.2 megapixels and can't even read a speed limit sign 100 meters away in broad daylight.

First, it says ">115 dB". Second, you're mixing up the static and dynamic contrast ratio of that camera.

The camera can provide only 20-bit data, which is about a 120 dB contrast ratio, hence ">115 dB". That limitation is determined by the maximum output signal bit depth, and that is effectively its best-case static contrast ratio (combining three frames with HDR) in a single frame.

The dynamic contrast ratio includes the static contrast ratio as altered by:
  • The iris, assuming the camera has one.
  • The exposure time (which any camera can vary, but your eye can't usefully)
  • Changing the gain (which any camera can do, but your eye can only do to a very limited degree, very slowly, via rhodopsin and bleaching)
So you're comparing the dynamic contrast ratio of your eye to the static contrast range of the camera. The dynamic contrast range of the camera is way, way wider than 120 dB. It can, in a single HDR frame, represent very nearly the entire dynamic contrast ratio that your eye can see.

The static ratio of the camera in linear (non-HDR) mode is 72.24 dB. The static ratio of your eye is, if I'm understanding correctly, only about 40 dB.
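
Those dB figures follow directly from the bit depths; a quick check (nothing camera-specific, just the 20*log10 conversion):

import math

def bits_to_db(bits):
    # Best-case static contrast ratio implied by an output bit depth
    return 20 * math.log10(2 ** bits)

print(f"{bits_to_db(20):.2f} dB")         # ~120.4 dB for 20-bit HDR output
print(f"{bits_to_db(12):.2f} dB")         # ~72.2 dB for 12-bit linear mode
print(f"{20 * math.log10(100):.2f} dB")   # 40.00 dB for the eye's ~100:1 static ratio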

The camera stomps your eye into the ground.

The human eye, on the other hand, has 576 megapixels.

And low-quality optics whose angular resolution renders that largely moot. Also, that's 576 megapixels in one direction. Your car has cameras pointing in every direction. Also, remember that it just has to be better than the legal threshold, which in most states is 20/40.

Also, your eyes do not actually see 576 MP in a single image; that's based on producing a full image that would present full resolution for your entire eye's field of view. However, you only have high resolution near the center of your eye; everywhere else, your vision is absolute crap. The effective resolution of a single frame of the dense part of the human eye is only on the order of 5 to 15 MP. That's still better than the Tesla cameras at 1.2 MP, but only by potentially as little as a factor of 4 (which would only be 2x the resolution in each direction).


And in the only direction where long-distance vision really matters much (the direction your car is moving in), your car also has a zoomed-in camera (and a wide-angle camera). The 35mm-equivalent focal length of your eye is about 43mm. The cameras on a Tesla are about 6mm, 46mm, and 69mm. That last one effectively gives it about the same single-frame resolution at a distance as the worst-case estimate for human eyes. (The wide-angle camera is probably mostly useless.)
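
To put rough numbers on that focal-length comparison, here is a back-of-envelope sketch. The 1280-pixel horizontal sensor width is my assumption for a ~1.2 MP part, the 35mm-equivalent FOV assumes a 36 mm frame width, and the "~30 px/deg" figure is the usual rule of thumb for 20/40 acuity:

import math

def hfov_deg(focal_35mm_equiv):
    # Horizontal field of view implied by a 35mm-equivalent focal length
    return math.degrees(2 * math.atan(36 / (2 * focal_35mm_equiv)))

SENSOR_WIDTH_PX = 1280   # assumed horizontal resolution of a ~1.2 MP sensor

for name, f in [("wide", 6), ("main", 46), ("narrow", 69)]:
    fov = hfov_deg(f)
    print(f"{name:6s} {f:2d} mm -> {fov:5.1f} deg, ~{SENSOR_WIDTH_PX / fov:.0f} px/deg")

print(f"eye    43 mm -> {hfov_deg(43):5.1f} deg, ~30 px/deg at the 20/40 legal threshold")
# The ~69 mm narrow camera lands around 44 px/deg, comfortably above that threshold.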

Also, computers can do inter-frame superresolution to resolve detail way smaller than you can resolve in a single frame. Your brain really doesn't do much of that.

Also, your brain is mostly only usefully paying attention to the center of your field of view. You can almost entirely miss stuff going on in your peripheral vision (where the image quality is crap) that a computer could easily see by virtue of having way more cameras pointing in more directions, even at lower resolution (but still higher resolution and more in-focus than your peripheral vision).

In short, the cameras win again.

The whole point is that the probability of them failing at the same time is extremely low. I have had my lights burn out on me a couple of times while driving over the years. You are claiming that my lidar would immediately go out at the exact same time.

Both lights and the high/low-beam failover? Unlikely. For headlights to fail, you have to lose four different lighting elements. The wiring or switch, maybe, particularly if there's a design defect, but that's probably at most a once-in-a-billion-miles thing, particularly with modern LED headlamps.

And if the failure is caused by a 12V electrical system failure, your LIDAR is probably going down, too, so it isn't entirely independent.
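
That independence point is the crux. A toy probability sketch with made-up per-mile failure rates (purely illustrative, not real reliability data):

# Probability that headlights AND the lidar emitter fail on the same mile
p_headlights = 1e-6   # assumed: all headlight elements lost (per mile)
p_lidar      = 1e-6   # assumed: lidar laser/emitter failure (per mile)
p_12v        = 1e-7   # assumed: 12V failure that takes out both at once

# If the two failures were truly independent, the joint probability multiplies:
p_both_independent = p_headlights * p_lidar
print(f"independent failures:  ~{p_both_independent:.0e} per mile")        # ~1e-12

# With a shared 12V dependency, the common cause dominates the joint risk:
p_both_with_common_cause = p_12v + p_both_independent
print(f"with 12V common cause: ~{p_both_with_common_cause:.0e} per mile")  # ~1e-07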


It's funny that people like you claim there's no redundancy or backup here. You say things like: if the camera says there's no pedestrian and the lidar/radar says there is, which one should you believe?

This is like saying: why have two guards at the outpost, what if one guard sees someone move in the bushes for a moment and the other guard didn't, which one should you believe? It's absurd, because you investigate whatever either guard sees.

None of these are realistic scenarios or comparisons. Two guards are like two cameras. RADAR and LIDAR are more like a guard and a guard dog. After the hundredth time the guard dog goes nuts because of a stray cat, you shoot the guard dog and hire a second guard. That's what RADAR is like. If RADAR says there's a pedestrian, it's more than likely a pothole.

LIDAR is somewhat better, sure, but the burden of proof is still on you to show that there are realistic, common scenarios in which such a detection failure would occur with cameras over a large enough number of frames to result in an accident that can't be fixed by having a second camera from a different angle (remembering that there are already three cameras in the only direction that really matters much).
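
One way to frame that "large enough number of frames" point, with a made-up per-frame miss rate (illustrative only): a camera failure has to persist across many consecutive frames to matter, and if frames failed independently even a mediocre detector would almost never miss for a full second. The catch, of course, is that frames of the same obscured object are not independent, which is exactly the argument for a second camera at a different angle.

per_frame_miss = 0.10   # assumed: detector misses the object in 10% of frames
fps = 30

for seconds in (0.1, 0.5, 1.0):
    frames = int(fps * seconds)
    p_missed_all = per_frame_miss ** frames   # assumes independent frames
    print(f"{seconds:4.1f} s ({frames:2d} frames): P(missed every frame) ~ {p_missed_all:.0e}")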

The rest of your post just continues to repeat the same fallacy in different words.
 
@powertoold Still waiting for you to acknowledge the comments you made in the past and their validity today. It seems like you are dodging them. I have said some things that have been wrong; for example, I thought lidar would become cheap and go into mass production in 2019, and I was wrong. Turns out 2021 is the year. I admit that; I don't hide or run from it.

In the same way, I would like you to respond to Huawei's door-to-door L2 system coming out in November/December. And we are not talking about a release to a small number of YouTube influencers who upload curated demo videos; this is for everyone who buys the car and the package. The same is the case with the Zeekr system using Mobileye's Supervision for door-to-door L2 everywhere in China.

Just to give you some details of the Huawei system.

  1. It will start with 4 cities in 2021 (Beijing, Shanghai, Guangzhou, and Shenzhen), then 6+ cities will be added every quarter, rising to 8+ cities per quarter in 2022 and 20+ per quarter in 2023.
  2. The system has three modes: HD map, local fleet-learned, and no-map. Local fleet-learned mode is as good as HD map mode as long as you have driven that road once before, so essentially you can drive point to point anywhere in China.
  3. For Mobileye's Supervision in the Zeekr 001, there is likely only one mode because they already have a crowdsourced worldwide map. But it offers door-to-door driving anywhere in China, including garages and parking lots (and you don't even need to be in your car).

Although we haven't seen footage from the Zeekr 001 yet, we know from Supervision clips in Israel and Germany what it is capable of. For the Huawei system, hundreds of rides were given at the Shanghai auto show and not one disengagement has been reported. Yet this is in an environment that is at least an order of magnitude more complex than the US.

When FSD Beta first came out you said: "Well, we did see a quantum leap with the FSD beta."

  • So would you also consider this a quantum leap compared to the systems we had available before? Tesla NOA, Navigate on Nio Pilot, Navigated Guided Pilot?

You saw several clips and said "That's just mindblowing. The number and coherence of predictions needed for that maneuver are :eek:",
"prime example of how assertive and human-like this newest version is. Perfectly paced unprotected left turn and then going around the park car with no hesitation, with perfect speed control:" and "The capabilities in the FSD beta are mind-boggling."

  • Would you also say that for the clips below?

You also said that "Most people are numb to how amazing the FSD beta is because of the LIDAR and HD map approaches which produce demo-worthy maneuvers."

  • Are you going to retract that when these systems release? You didn't for Waymo, which went completely driverless. But now you must admit that your logic was flawed, right?

Finally, initially when FSD Beta came out you said that Tesla was 5 years ahead and most recently you said "Ok but it's still looking like Tesla will win to me. I have a hard time seeing how these other slow moving or bs developers will magically have widely available fsd anytime within the next 3-4 years. I think Waymo was planning to expand in Arizona a while ago, but we haven't seen that yet."

  • With Huawei's Arcfox and the Mobileye Supervision-based Zeekr 001 releasing to regular customers, with all the features of FSD Beta, in a more complex environment, and at a safety/reliability level at or well above FSD Beta's, will that stop you from calling other companies BS devs or claiming that Tesla is 5 years ahead or 3-4 years ahead?

How about when you posted "Safe, sure. Do you believe in what Elon describes as "local maximums"?"

  • Will you now reject the flawed notion that "lidar and/or HD maps lead to a local maximum"?

You said that "Going with pure vision is a big deal. It means they've "solved" vision."

  • So does that mean that Mobileye also solved vision? The Zeekr 001 has 11 cameras and only 1 forward radar?

Finally you made this comment "Once they deploy it wide, I think it's game over. Tesla would be years ahead while everyone else is still trying to deploy automatic lane changing and traffic controls."

  • Will you finally admit that this comment is wrong? Not only is automatic lane changing available in around 5+ cars today and counting, a traffic-controls system is also available in BMWs in select locales. Moreover, systems with a full-scenario, complete L4 feature set will be available from multiple OEMs, so they are actually deploying L4-capable cars as L2 cars, not struggling with auto lane change/traffic controls.

Last question: whose system will be deployed widely to the entire customer base? FSD Beta, Huawei ADS, or the Zeekr 001's Mobileye Supervision?
And will it be feature complete?

FSD Beta, for example, can't yield, doesn't reverse, doesn't recognize or respond to emergency vehicles, doesn't respond to traffic personnel, doesn't handle construction/detours, doesn't auto-park, etc.

Your response to these questions will show how objective you are.

Avoiding a wooden object which I think FSD Beta would have hit.

[Embedded images and GIF clips referenced above]
 