
Will Tesla ever do LIDAR?

I'm also keeping an open mind, and I still believe Buffett could be just another Madoff. You never know.

Anyone who does not know, or pretends not to know, that Theranos was a secretive private company with not a single person with medical-testing or medical-venture knowledge on its board, and which never had its financials audited, should just keep his mouth shut. He has lost all his remaining credibility.
 
I guess if you can't tell whether Elon Musk is another Theranos or not, then your mind is more "open" than mine (but not in an "optimistic" way), because I've already concluded that he is not, based on the fact that I have two of his products and they both work as advertised. Theranos was a massive fraud, and Elizabeth Holmes a massive fraudster, with products that worked so poorly they were secretly using standard blood-testing machines to process their blood tests. But Tesla's product is right in front of me to use and enjoy. And it keeps getting better at a blindingly fast rate (I've had a Model 3 for less than a year and the AP is already leaps and bounds better than when I took delivery).

Well, that's the thing. I am not convinced Elizabeth Holmes intended to be a fraudster either. It is possible events simply unraveled that way over time, and she got caught in one exaggeration too many, hoping to make it right down the road.

I don't think there is any doubt Musk has achieved great things. The question I keep asking myself is this one specifically: is FSD, the "Level 5 capable hardware" (Elon Musk, 2016), an uncharacteristic mistake from him? I cannot shake the feeling that he could be trying to fake it until he makes it, and we don't know what the outcome will be...

I bought my AP2 car in late 2016, so I do have my own perspective on the product as well.

It may well work out but I think these concerns are more than just naysaying.

Mind you, I am not saying Tesla won't deliver an FSD of some kind. I am sure they will. I just think there is a larger-than-zero chance that not even Tesla is sure at this time whether they will deliver on Musk's big words from 2016: "Level 5 capable hardware", sleeping in your car, coast-to-coast summon...
 
I'm also keeping an open mind, and I still believe Buffett could be just another Madoff. You never know.

Anyone who does not know, or pretends not to know, that Theranos was a secretive private company with not a single person with medical-testing or medical-venture knowledge on its board, and which never had its financials audited, should just keep his mouth shut. He has lost all his remaining credibility.

Theranos is just an example of the vicious circle Silicon Valley projects sometimes get into, as big, bold promises are needed to kickstart things, and often the race to reach those bold goals only starts afterwards... That happened with Theranos and has happened with many companies. Sometimes they will make it eventually (Microsoft with MS-DOS is a great example; they didn't have it when they sold it to IBM), but that doesn't remove the reality that some companies have "nothing" when they start, and that poses a risk for the buyer who thinks there is more there than there actually is.

I guess it is quite widely agreed Tesla had little in terms of its own FSD software in 2016. Indeed, many believe it was only Karpathy who, much later, implemented something that works for them. That rings to me a little like faking it until you make it... but who knows...

How much did Tesla have in 2016, when they said their then-shipping "Level 5 capable" hardware and coast-to-coast summon-while-you-sleep only needed software validation and regulatory approval? Was that true?
 
I'm also keeping an open mind, and I still believe Buffett could be just another Madoff. You never know.

Haha! You remind me of all the old men who thought the Apollo moon landing was a big Hollywood fraud, with all the footage shot in a studio or the Nevada desert.

Buffett is the greatest investor who ever lived. Unlike Madoff, he actually buys and sells things. Now excuse me while I bask in the joy of my most recent stock-market win and my largest holding by far, QCOM.
 
I guess it is quite widely agreed Tesla had little in terms of its own FSD software in 2016. Indeed, many believe it was only Karpathy who, much later, implemented something that works for them. That rings to me a little like faking it until you make it... but who knows...

This makes me question whether you have been experiencing Tesla's Autopilot over the last year and have witnessed first-hand the dramatic improvements it has seen!

How much did Tesla have in 2016, when they said their then-shipping "Level 5 capable" hardware and coast-to-coast summon-while-you-sleep only needed software validation and regulatory approval? Was that true?

Except that's not what they said! And it's unclear to me why you would just make that up.
 
This makes me question whether you have been experiencing Tesla's Autopilot over the last year and have witnessed first-hand the dramatic improvements it has seen!

I guess the difference in our feel for Tesla's 2016 status comes from my experiencing the earliest versions, which gave insight into Tesla's actual level at the time of the now-debated announcement. The difference between the 2016 start and the system today is, of course, dramatic for a Level 2 highway driver's aid.

Tesla did have a sporadically working basic lane-keeping system in late 2016, that is true, and it has been improving since. But none of us has seen the FSD features yet, though of course I believe Tesla has some of that ready today.

It is widely accepted that the 2016 video was a rigged demo, possibly even running Nvidia's demo code, so in my view there is reason to at least doubt how much Tesla was "faking it until they make it" after the Mobileye break-up.

Except that's not what they said! And it's unclear to me why you would just make that up.

Here is what I summarized/referenced. Tesla announced "Level 5 capable hardware" in AP2 cars here:

Tesla announced that EAP and FSD were merely waiting for ”validation and approval” in the Design Studio:
[Image: Design Studio screenshot of Enhanced Autopilot / Full Self-Driving pricing]


Sleeping in your car and coast-to-coast summon came from Elon Musk.
 
We all know that Musk has been quite adamant that he is against LIDAR. And a few years back, I think it was somewhat understandable. Back then, LIDAR was clumsy and expensive. There was just no way that Tesla could afford to put LIDAR in every car they sell, not to mention the issue of ruining the aerodynamics and aesthetics of the cars with a LIDAR tower on the roof. So I think back then, it made sense for Tesla to go the camera vision only route. After all, if they could manage to achieve the same result with camera vision only for a fraction of the cost of LIDAR, why not?

But today, these problems with LIDAR are pretty much solved. LIDAR is cheaper and smaller. Tesla could integrate LIDAR in the car in a way that does not ruin the aerodynamics or the aesthetics of the car at a much more affordable cost. Plus, there is no question that even if Tesla does manage to achieve great things with camera vision alone, LIDAR would offer more redundancy and make true Full Self-Driving much more robust. In other words, even if camera vision works, why not have that extra redundancy of camera vision, radar and LIDAR to give the car an even fuller picture of the environment to make FSD even better? There is no downside. So I am thinking that Tesla will eventually cave in a few years and add LIDAR to the FSD hardware.

Thoughts?

LIDAR is transitioning from a bulky, expensive spinning mechanical design to MEMS. But it is still an optical system, like a camera, and is affected by weather and other visibility issues. A good rule for reliability is never to support or back up one technology with another technology that has the same weakness; this gives you a common mode of failure.

So you definitely need cameras (optical), but you can supplement them with radar, which is an electrical signal (FMCW microwave), to get distance measurements even with poor visibility. As I understand it, the Tesla/Continental radar is a dual-field, dual-channel design using coarse cloud technology, like two radars in one. There is a signal from a long-range narrow field and another from a close-range wide field. It should be able to read the distance and velocity of these measurement points even in poor weather.

Yes, LIDAR will give you a lot more measurement points, but for robustness it seems radar wins, and it is in a very large number of cars now for intelligent cruise control, so it is more proven in the field. How many production cars currently come with LIDAR?

If cost is no object, you could have both. Time will tell how this plays out.
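
To put toy numbers on that common-mode argument (every figure below is hypothetical, purely for illustration): two sensors that fail independently multiply their failure rates, but two sensors that share a weakness are only as good as the shared weakness allows.

Code:
# Toy model of common-mode failure; all numbers are made up for illustration.
p_optical_fail_clear = 1e-4   # per-mile miss rate of one optical sensor, clear weather
p_heavy_fog = 1e-3            # fraction of miles driven in heavy fog (hypothetical)
p_optical_fail_fog = 0.5      # miss rate of ANY optical sensor (camera or lidar) in fog

# Naive independence: camera and lidar both missing at once.
p_both_naive = p_optical_fail_clear ** 2              # 1e-8

# With the shared optical weakness, fog dominates the combined failure rate.
p_both_fog = p_heavy_fog * p_optical_fail_fog ** 2    # 2.5e-4

print(p_both_naive)   # 1e-08: what the independence assumption predicts
print(p_both_fog)     # 0.00025: orders of magnitude worse, driven by the fog alone

A camera/radar pair avoids this trap because radar's failure modes (microwave propagation) are largely uncorrelated with the optical ones.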
 
Tesla announced that EAP and FSD were merely waiting for ”validation and approval” in the Design Studio:
[Image: Design Studio screenshot of Enhanced Autopilot / Full Self-Driving pricing]

Per your own graphic, Tesla did not represent that they were "merely" waiting for software validation; they said "Self-Driving functionality is dependent upon extensive software validation and regulatory approval." They even put those parts in bold letters. Obviously, this was represented as a planned future feature that had to clear a number of hurdles before it could be used.

Sleeping in your car and coast-to-coast summon came from Elon Musk.

True, and again, it was a look into the future, not the current capabilities. I'm not sure how much clearer Musk could have been about that!
 
Per your own graphic, Tesla did not represent that they were "merely" waiting for software validation; they said "Self-Driving functionality is dependent upon extensive software validation and regulatory approval." They even put those parts in bold letters. Obviously, this was represented as a planned future feature that had to clear a number of hurdles before it could be used.



True, and again, it was a look into the future, not the current capabilities. I'm not sure how much clearer Musk could have been about that!

Extensive software validation does not mean the same thing as implementing the software, though.

If Tesla had nothing, or very little, in terms of the software to implement those promised features, I would say they were faking it before (trying to) make it.

But of course we do not know what was ready in 2016. I am merely explaining why I feel it is possible they were ”faking it” with the intent to ”make it” later.
 
I believe LIDAR is not a necessary component of self-driving cars.

Do you have any objective proof? And please don't say humans.

But I'll point out that I'm not saying LIDAR doesn't work; I think you could make self-driving work with ONLY LIDAR (just don't drive in the rain, snow, or fog).

Lidar works in the rain and snow; it's called a neural network. The same way NNs are able to recognize a bunch of numbers as actual objects.

And I don't think it would be as safe as a system based on cameras which are capable of providing more useful information.

Everyone who uses lidar also uses cameras. It's not either/or. I don't grasp why people don't understand this basic point.
 
This makes me question whether you have been experiencing Tesla's Autopilot over the last year and have witnessed first-hand the dramatic improvements it has seen!

It has always bothered me that little is known of his background. It's hard to have an objective discussion with someone without knowing whether he has his own agenda. My best guess is he does have some kind of agenda that he does not want us to know about.

He and @Bladerskb do not seem to own a Tesla. To them, everything Tesla is doing is wrong, and everything Tesla is not doing is what should have been done. If that's what they think, why are they wasting so much time hanging around a Tesla forum instead of going to, say, the Mobileye forum they are so gaga about?

It seems Apple is going the Lidar route:
Exclusive: Apple in talks with potential suppliers of sensors for self-driving cars - sources - Reuters

It will be interesting to follow which route was the right one. IMO, after reading this paper I have gained a lot of new faith in vision-only systems:
https://arxiv.org/pdf/1904.04998.pdf

You can get distance info from a single image. Consumer cameras have been doing this for ages (in the autofocus function). All it takes is the algorithm and extra computing power. It will be even easier with a multi-camera setup. That advantage of Lidar can easily be overcome with a faster computer and a better algorithm. The disadvantage, of course, is that you can't sell one to a customer at a reasonable price in the foreseeable future, and you can't equip hundreds of thousands of cars to help train the NN. It's not without reason that Elon calls it a crutch.
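
As an aside, the multi-camera version of this is plain stereo triangulation. A minimal sketch (the focal length, baseline, and disparity values below are made-up numbers, not Tesla's):

Code:
# Stereo depth from disparity: depth = focal_length_px * baseline_m / disparity_px
# All values are hypothetical, for illustration only.
FOCAL_LENGTH_PX = 1400.0   # camera focal length expressed in pixels
BASELINE_M = 0.12          # separation between the two cameras, in meters

def depth_from_disparity(disparity_px):
    """Distance to a feature that shifts by disparity_px between the two images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity means the point is effectively at infinity")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

print(depth_from_disparity(8.0))   # 21.0 m
print(depth_from_disparity(2.0))   # 84.0 m: small disparity = far away, less precise

Note the precision falls off with distance as the disparity shrinks toward a single pixel, which is the usual counter-argument from the lidar camp.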
 
Do you have any objective proof? And please don't say humans.

You could argue that humans are not proof that LIDAR is not required to drive safely, because humans, as a group, do not drive safely; millions are killed in accidents due to human error. However, if you could drive with camera vision only and eliminate the human-error component of accidents, then, obviously, LIDAR is not needed for that. So, yes, I will say humans are proof that LIDAR is unnecessary for machines to drive more safely than humans.



Lidar works in the rain and snow; it's called a neural network. The same way NNs are able to recognize a bunch of numbers as actual objects.

Again, not to naysay LIDAR, but it's important to recognize that it has serious limitations in poor-visibility conditions compared to cameras. LIDAR depends upon a point source of light (a spinning laser) penetrating all the way to the target and back again. Cameras depend only upon ambient light for illumination, so the path the light takes is essentially a one-way trip vs. a two-way trip for LIDAR. In heavy snow/rain, that's problematic.
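
A back-of-the-envelope version of that one-way vs. two-way point, assuming simple Beer-Lambert extinction through the precipitation (the extinction coefficient is a made-up number):

Code:
import math

# Beer-Lambert toy model: transmittance over range R is exp(-sigma * R).
# sigma is hypothetical, chosen to suggest heavy precipitation.
sigma = 0.02     # extinction per meter
range_m = 100.0  # distance to the target

one_way = math.exp(-sigma * range_m)          # camera: light travels object -> sensor once
round_trip = math.exp(-2 * sigma * range_m)   # lidar: the laser goes out AND comes back

print(round(one_way, 3))     # 0.135 of the signal survives the one-way trip
print(round(round_trip, 3))  # 0.018 survives the round trip

Whatever fraction of the light the weather lets through one way, the lidar return sees that fraction squared.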



Everyone who uses lidar also uses cameras. It's not either/or. I don't grasp why people don't understand this basic point.

I get that fully. I was simply expanding upon my comment that I believe self-driving could be accomplished entirely with LIDAR, but that it would not be a superior path to take.
 
You can get distance info from a single image. Consumer cameras have been doing this for ages (in the autofocus function). All it takes is the algorithm and extra computing power.

That's false. The distance information is not extracted from the single image; it comes from the auto-focus sensors, which are like extra sensors taking separate miniature (low-resolution) images. It is also possible to get distance information from multiple images taken with the same lens, but that is less than ideal in a moving environment.
 
That's false. The distance information is not extracted from the single image; it comes from the auto-focus sensors, which are like extra sensors taking separate miniature (low-resolution) images. It is also possible to get distance information from multiple images taken with the same lens, but that is less than ideal in a moving environment.

I meant a single shot on the sensor from a single lens (SLR, DSLR, or mirrorless). Camera makers put phase-detection pixels on the sensor that let the camera get distance info and the image at the same time, from a single shot. That's a moot point, though: unlike consumer cameras, cars can easily add more cameras to obtain the info.
 
You could argue that humans are not proof that LIDAR is not required to drive safely, because humans, as a group, do not drive safely; millions are killed in accidents due to human error. However, if you could drive with camera vision only and eliminate the human-error component of accidents, then, obviously, LIDAR is not needed for that. So, yes, I will say humans are proof that LIDAR is unnecessary for machines to drive more safely than humans.

1) We don't have cameras with the same quality as the human eye, definitely not the ones on Tesla cars. 2) We don't have an equivalent to the human brain.

Those two things are very important.

Again, not to naysay LIDAR, but it's important to recognize that it has serious limitations in poor-visibility conditions compared to cameras.

This is incorrect. The camera is the one with serious limitations in various lighting conditions (limited and poor visibility). Lidar can see in pitch darkness while cameras struggle in the dark. Lidar can see under bright lighting conditions; cameras struggle severely.


Here's a Tesla on AP hitting a deer at night


Here's a Tesla on AP failing to see a pedestrian wearing a black coat in the middle of the road at night (driver had to take over)


LIDAR depends upon a point source of light (a spinning laser) penetrating all the way to the target and back again. Cameras depend only upon ambient light for illumination, so the path the light takes is essentially a one-way trip vs. a two-way trip for LIDAR. In heavy snow/rain, that's problematic.

This is wrong AGAIN. The camera struggles as much as Lidar in heavy rain and snow.
You don't see that because you, as a human, can understand a picture and make out its content even when it was taken in heavy rain/snow. But a computer sees 3 numbers from 0-255 for every pixel. Your conclusion is based on the flawed premise below: if the human eye used Lidar tech, camera data would be just as foreign, just as "extraneous and useless".

However, you can use a NN on lidar data to remove the rain/snow, just as you can with camera data.


Think how redundant LIDAR data would be to a human driver who is actually looking at the driving environment. It would be extraneous and useless data.
I get that fully. I was simply expanding upon my comment that I believe self-driving could be accomplished entirely with LIDAR, but that it would not be a superior path to take.

I think you are misunderstanding what's clearly going on here.

Suppose you have a completely separate vision system that you can validate as making only one major sensing mistake every 10^4 miles (99.99%), and another separate lidar system that you can validate to the same rate (99.99%).

With the camera or Lidar system by itself, you have a perception system that can cause an accident every 10k miles.
But having the two systems, with their contrasting pros and cons, improves your perception system to a reliable 10^8 miles (100 million miles), making you about 200x better than a human driver.
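
Spelling out the arithmetic behind that claim (it stands or falls on the failure modes being truly independent):

Code:
# Redundancy arithmetic from the post above; assumes fully independent failures.
camera_miss_per_mile = 1e-4   # validated: one major sensing mistake per 10^4 miles
lidar_miss_per_mile = 1e-4    # the same validated rate for the lidar stack

# The combined perception fails only when both modalities miss at once.
combined_miss_per_mile = camera_miss_per_mile * lidar_miss_per_mile   # 1e-8

miles_between_failures = 1.0 / combined_miss_per_mile    # 1e8 = 100 million miles

human_accident_interval = 5e5   # ~1 accident per 500k miles, cited later in this thread
print(miles_between_failures / human_accident_interval)  # 200.0, i.e. ~200x a human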
 
Lidar can see in pitch darkness while cameras struggle in the dark.

I think you are misunderstanding what's clearly going on here.

Suppose you have a completely separate vision system that you can validate as making only one major sensing mistake every 10^4 miles (99.99%), and another separate lidar system that you can validate to the same rate (99.99%).

With the camera or Lidar system by itself, you have a perception system that can cause an accident every 10k miles.
But having the two systems, with their contrasting pros and cons, improves your perception system to a reliable 10^8 miles (100 million miles), making you about 200x better than a human driver.

True, cameras and our own eyes need light; that is why cars have headlights. I don't see the point of Ford's demo of driving a car in the dark with headlights off, instead using a number of expensive, bulky LIDAR units to keep it on the road. I would suggest this was just a publicity stunt of limited practical value.

Radar of course requires no light, either generated or ambient, and at this stage may be the best backup to cameras, eliminating the common failure mode of two optical systems. Radar and cameras are now widely deployed in the field. Perhaps if LIDAR ever gets commercially deployed in this segment on a large scale, there will be more data to evaluate.

Perhaps large commercial vehicles like buses and trucks could justify the addition of Lidar, along with radar and cameras.
 
1) We don't have cameras with the same quality as the human eye, definitely not the ones on Tesla cars. 2) We don't have an equivalent to the human brain.

Those two things are very important.

Other than reading speed limits and road signs, you actually don't need or use a lot of visual resolution to drive. The shadow of an oncoming car gives all the information you need. You don't need to read the license number or see exactly how the driver of the car looks. As a matter of fact, you don't want to see how the drivers of other cars look, as we sometimes do; that would actually distract from our driving. Sensors can now match human eyes, but extra resolution that is not needed would just waste bandwidth and computing power.

Hardware-wise, we already have enough brain power now. The human brain has about 100 billion neurons, but not all of them are needed for driving a car. Nvidia's PX2 has 12 billion transistors. Tesla's HW3 is said to be a whole lot more powerful. It's safe to say it's at least very close to the human brain in that respect. Then there are the synapses, or neuron connections: Tesla has the most aggressive neural-network machine-learning program, so there is no software limitation to getting there. The brain power for driving a car is all set up in the current version; all it needs is more learning.

Yes, the two are very important, and Tesla has the best solution for both of them.
 
1) We don't have cameras with the same quality as the human eye, definitely not the ones on Tesla cars. 2) We don't have an equivalent to the human brain.

1) Irrelevant. You don't need HD resolution for purposes of driving.
2) The AI that will power the self-driving of the future is still being developed (and quite rapidly, I might add). At some point in its development it will be better than the human brain for driving purposes, because driving is a relatively simple task and computers always pay 100% attention, unlike humans, who have "issues" being consistent.





This is incorrect. The camera is the one with serious limitations in various lighting conditions (limited and poor visibility). Lidar can see in pitch darkness while cameras struggle in the dark. Lidar can see under bright lighting conditions; cameras struggle severely.

I didn't say "low-light conditions"; I said "poor visibility", referring to snow, mist, rain, and fog. The laser needs to make a two-way trip from the car to distant objects and back to the sensor for it to work. Cameras have a much easier time because the light only needs to travel from the object to the car, a one-way trip.

Here's a Tesla on AP hitting a deer at night

Here's a Tesla on AP failing to see a pedestrian wearing a black coat in the middle of the road at night (driver had to take over)

You are using examples of current technology to prove what future technology can't do? That makes no sense! Obviously, full self-driving isn't ready yet! Duh!




This is wrong AGAIN. The camera struggles as much as Lidar in heavy rain and snow.
You don't see that because you, as a human, can understand a picture and make out its content even when it was taken in heavy rain/snow. But a computer sees 3 numbers from 0-255 for every pixel.

This may surprise you, but the human eye doesn't "see" in any more real a sense than the sensor on a camera sees. The rods and cones transmit electrical pulses that are a lot vaguer and less precise than the zeros and ones a camera sensor transmits. The magic is in the AI processing in machine vision, and in the brain in human vision. But even an insect with a tiny brain can see well enough to navigate among branches and avoid predators.

It may be difficult to understand how AI can make sense of camera sensor data if you have a "binary" thought process or lack imagination. AI is a breakthrough technology, and the mechanisms by which machine vision works are little understood, or even imagined, by many who are unfamiliar with how it learns and "remembers". It is only a viable technology due to vast amounts of cheap processing power. It doesn't involve "if, then" statements; it really is machine vision in the truest sense of the word. How else do you think a dragonfly, with its minimal processing power, can function in a varied and complex world? It doesn't have anything remotely approaching HD-quality vision. In fact, its vision has less resolution than an old-school TV!

The fact that you don't understand how these key processes work, in humans and in machine vision, is not a good reason why self-driving needs LIDAR to be complete.


With the camera or Lidar system by itself, you have a perception system that can cause an accident every 10k miles.
But having the two systems, with their contrasting pros and cons, improves your perception system to a reliable 10^8 miles (100 million miles), making you about 200x better than a human driver.

I'm saying your premise here is wrong. Specifically, I don't believe the best safety a camera-based self-driving system can achieve is a crash every 10K miles. You just pulled that out of thin air. When developed, vision-only self-driving systems will provide much better safety than typical human drivers (which is the best we have right now). The improvement will be so dramatic that insurance rates will go down and, along with them, medical costs. You have no basis to claim a camera-only system will crash every 10K miles, or that a hybrid camera/LIDAR system will reach a higher safety level before a vision-only system achieves better-than-human safety. Sometimes simpler is better/faster. It's a race to the finish line, and I believe the extra complexity of integrating LIDAR will slow down the efforts using it to the point that vision-only systems will be the clear winner.

Time will tell.
 
1) Irrelevant. You don't need HD resolution for purposes of driving.
2) The AI that will power the self-driving of the future is still being developed (and quite rapidly, I might add). At some point in its development it will be better than the human brain for driving purposes, because driving is a relatively simple task and computers always pay 100% attention, unlike humans, who have "issues" being consistent.

It's not irrelevant; it's very relevant. Just because it doesn't support your point of view doesn't make it irrelevant.

[Image: Sony image-sensor slide, Q2 2017]


I didn't say "low-light conditions"; I said "poor visibility", referring to snow, mist, rain, and fog. The laser needs to make a two-way trip from the car to distant objects and back to the sensor for it to work. Cameras have a much easier time because the light only needs to travel from the object to the car, a one-way trip.

Each sensor has its pros and cons, as I have shown with my videos. I have hundreds of thousands of experts backing me up. What do you have? Some tweets from Elon?

[Image: Delphi sensor-comparison graphic]



You are using examples of current technology to prove what future technology can't do? That makes no sense! Obviously, full self-driving isn't ready yet! Duh!

It's not surprising that you admit that what you believe in is basically hoping for a pipe dream, while the only holdup to Lidar/camera deployment today is driving policy.

I'm saying your premise here is wrong. Specifically, I don't believe the best safety a camera-based self-driving system can achieve is a crash every 10K miles.

I never said that 10k is the best, but that's already a very, very high bar (99.99%). And if you were using cameras alone, you would need to be at 99.999999% to be as good as a dual 99.99% lidar/camera system.

You just pulled that out of thin air. When developed, vision-only self-driving systems will provide much better safety than typical human drivers (which is the best we have right now). The improvement will be so dramatic that insurance rates will go down and, along with them, medical costs.

The only one pulling stuff out of thin air is you. Human drivers get into an accident every ~500k miles, with a fatality every 86M miles. There are two types of accidents in an SDC: those caused by a perception error that directly leads to bad path planning, and those caused by a planning error. If you separate perception from planning, the accident/fatality numbers give you a sense of how good your perception has to be by itself.

You have no basis to claim a camera-only system will crash every 10K miles, or that a hybrid camera/LIDAR system will reach a higher safety level before a vision-only system achieves better-than-human safety.

It's basic math: if you have two modalities that don't fail on exactly the same things, you get a layer of redundancy when one sensor fails to detect something. Also, if one independent system were to break down, or individual sensors in it (mud-splattered), the car would be able to complete the trip using the other independent system (lidars), and vice versa.

Say you encounter a pedestrian on the road dressed all in black at night, and you have two independent systems with two independent ways to detect a person. Because of the differences between Lidars and cameras, we can be confident that they won't fail in a correlated manner. If either system sees a pedestrian, then you act as if the pedestrian is there. The chance of both independent systems missing a pedestrian is so improbable that it will essentially never happen.

But if you have just one system, with one way to see a pedestrian and all the cons of that system, and it doesn't see that pedestrian in all black, then it fails catastrophically.

Sometimes simpler is better/faster.
No, it isn't. You're literally just making stuff up in front of mountains of evidence that says otherwise.

It's a race to the finish line, and I believe the extra complexity of integrating LIDAR will slow down the efforts using it to the point that vision-only systems will be the clear winner.

There's no complexity to integrating LIDAR.
 