Will Tesla ever do LIDAR?

LIDAR has obvious benefits.

Even if you believe that FSD can be solved by vision only, LIDAR is an excellent failsafe.

This is what happens when you rely on vision + radar: Autopilot doesn't detect a truck partially in lane :( : teslamotors

Which is why I think Tesla will eventually incorporate LIDAR into their cars at some point in the future. Even if it is not strictly required, it will still be very useful as a fail safe. And considering Tesla wants hundreds of thousands of cars on the road to be FSD, having a good fail safe would seem like the prudent thing to do.
 
Well, you pretty much confirmed my observation that the belief that LIDAR is the best path to FSD is based on faith, by admitting no one has fully solved Full Self Driving. Because there is no solution yet, we don't know who will be the first to solve it. Keep in mind, the successful "solution" will need mass market adoption to be considered solved. A million dollar computer on wheels might be able to self drive but was it really the best path if no one buys it?

By that logic, vision for FSD is also faith-based, because nobody has solved FSD with computer vision either.
 
Keep in mind, the successful "solution" will need mass market adoption to be considered solved. A million dollar computer on wheels might be able to self drive but was it really the best path if no one buys it?

I definitely agree with you on the last part. FSD has to be marketable. My earlier point was just that they do have actual scientific reasons for believing their method is the best. They are not believing in their method based purely on blind faith.

If someone had a self-driving car today on sale for one million dollars, I guarantee you it would sell far more than any other one million dollar car. And in a year or few it would cost one tenth of that and then half again...

If someone really had a self-driving car (as in, say, Level 5) on sale the price, looks and all that would be totally secondary. It would be a paradigm shift of gigantic proportions. They would sell every single one they can make.

But of course Tesla can’t sell an Autopilot suite that costs one million dollars today because they don’t have FSD, so they need a suite they can sell and ship within the regular price range... (Not that a 360 degree Lidar car would cost one million either.)
 
If someone had a self-driving car today on sale for one million dollars, I guarantee you it would sell far more than any other one million dollar car. And in a year or few it would cost one tenth of that and then half again...

If someone really had a self-driving car on sale the price, looks and all that would be totally secondary. It would be a paradigm shift of gigantic proportions. They would sell every single one they can make.

But of course Tesla can’t sell an Autopilot suite that costs one million dollars today because they don’t have FSD. (Not that a 360 degree Lidar car would cost one million either.)

My point is simply that the winner in FSD will be defined by the rate of adoption, not the existence of an approved system. That's all I was saying.
 
My point is simply that the winner in FSD will be defined by the rate of adoption, not the existence of an approved system. That's all I was saying.

Perhaps, but my view is that the cost of the sensor suite is almost immaterial in the end, if that sensor suite were to guarantee a sufficiently decisive first place. The demand would be such that the cost would very quickly come down...

But of course if one believes someone with a cheap suite will reach FSD at the same time or sooner, then that math changes. Back to faith eh? :)
 
If someone had a self-driving car today on sale for one million dollars, I guarantee you it would sell far more than any other one million dollar car. And in a year or few it would cost one tenth of that and then half again...

Technically true, but a $1 million FSD car would still be financially out of reach for like 99.9% of the population, so what would be the point? FSD vehicles won't change transportation if nobody can use one. FSD has to be affordable to the masses in order to truly make a difference. It's similar to the adoption of electric cars. We know EVs will also create a paradigm shift but they have to be affordable to the masses. It's why Tesla worked so hard to get to the $35k Model 3. Tesla understood that having a fantastic EV does little good if it costs $100k or more where only the affluent can drive one. EVs only make a real difference when everyone can afford one. Same principle with FSD.
 
Technically true, but a $1 million FSD car would still be financially out of reach for like 99.9% of the population, so what would be the point? FSD vehicles won't change transportation if nobody can use one. FSD has to be affordable to the masses in order to truly make a difference. It's similar to the adoption of electric cars. We know EVs will also create a paradigm shift but they have to be affordable to the masses. It's why Tesla worked so hard to get to the $35k Model 3. Tesla understood that having a fantastic EV does little good if it costs $100k or more where only the affluent can drive one. EVs only make a real difference when everyone can afford one. Same principle with FSD.

The point is: The demand would be so large that the scale would be there and the cost would come down quickly.

Tesla did not start with a $35k Model 3 either. OK, Roadster did not cost a million, but for the average car buyer it might as well have. It was a start.

And with a truly self-driving car, the demand would be much larger than it was for EVs...
 
Perhaps, but my view is that the cost of the sensor suite is almost immaterial in the end, if that sensor suite were to guarantee a sufficiently decisive first place. The demand would be such that the cost would very quickly come down...

But of course if one believes someone with a cheap suite will reach FSD at the same time or sooner, then that math changes. Back to faith eh? :)

You are assuming I was talking about the cost of the sensor suite alone. I wasn't. My observation is that LIDAR (for those who end up using it) will likely be in addition to vision (otherwise it won't work in many kinds of stormy weather). So the processing power needed for it comes on top of whatever is needed for systems that can drive without LIDAR. And then there is the development time/cost too. All that data needs to be integrated and processed in real time. I'm afraid the developers who don't realize that LIDAR is not necessary to achieve the goal of driving more safely than humans are putting themselves at a competitive disadvantage on a number of levels. Sometimes simple is better.

Time will tell.
 
This discussion illustrates the different approaches to FSD. Affordability and capability are two sides of the same FSD coin. Companies like Waymo are tackling the problem from the capability side, trying to achieve FSD first and then make it affordable later. Tesla is starting with a product that is affordable and trying to make it capable of FSD later.
 
My belief is Lidar will become a regulatory requirement in some countries for any L3 driving or above.

So Tesla will be forced to include it.

Unlikely. Regulators tend not to pass laws that require specific technologies, because those laws get stale very quickly. Instead, they tend to pass laws requiring specific behavior. And if Tesla can achieve the required behavior without LIDAR, then they won't have to put in LIDAR.


You can do range estimation with one camera and a neural network, or with one camera by comparing against the previous frame given the vehicle's velocity from the IMU/motion flow. But the error rate will not be centimeter level like a Lidar that measures range natively, and there will be very different failure cases. (A rough sketch of the second approach follows below.)
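
To make that concrete, here is a rough Python sketch of that two-frame approach: triangulate a tracked feature across two frames, using the IMU-derived speed to give the baseline between them. All the numbers here (intrinsics, speed, pixel coordinates) are invented for illustration; a real pipeline would get them from calibration, the IMU and an optical-flow tracker, and forward motion makes the geometry far less favourable than this toy example suggests.

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics; real values come from calibration.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

v = 20.0           # vehicle speed from IMU/odometry, m/s (assumed)
dt = 0.05          # time between frames, s (assumed)
baseline = v * dt  # how far the camera moved forward between the two frames

# Projection matrices: frame 1 at the origin, frame 2 translated forward along +Z.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [-baseline]])])

# One feature tracked between the frames (pixel coordinates, made up for the example);
# in practice these come from optical flow / feature matching.
pt1 = np.array([[900.0], [400.0]])   # position in frame 1
pt2 = np.array([[930.0], [405.0]])   # position in frame 2, shifted by motion parallax

# Linear triangulation: purely geometric, no neural network involved.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()
print("estimated range (m):", np.linalg.norm(X))
```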

It's probably worth pointing out that with the possible exception of parking (where LIDAR is unlikely to be helpful unless you have a dozen of them), if a car's survival requires centimeter-level accuracy, the driver is probably in deep trouble either way, because brakes don't have centimeter accuracy, and neither does steering. :)


Which is why I think Tesla will eventually incorporate LIDAR into their cars at some point in the future. Even if it is not strictly required, it will still be very useful as a fail safe. And considering Tesla wants hundreds of thousands of cars on the road to be FSD, having a good fail safe would seem like the prudent thing to do.

That's not necessarily true. In the short term, yes, LIDAR probably would make things safer, if only because computer vision isn't quite there yet. But in the long term, it could actually increase the risk. After all, every time you add a new input source, you add extra processing with extra code, which increases the risk of bugs, misbehavior due to erroneous input, etc. So if you start with a camera-based self-driving system that is already reasonably safe, adding LIDAR is probably equally likely to make things better or worse.
 
Unlikely. Regulators tend not to pass laws that require specific technologies, because those laws get stale very quickly. Instead, they tend to pass laws requiring specific behavior. And if Tesla can achieve the required behavior without LIDAR, then they won't have to put in LIDAR.

That's correct, but I fail to see any other sensor as being able to provide the level of redundancy Lidar offers.

The specific behavior in question is the amount of fail-safe that might be required to deliver whatever safety level was agreed upon.

Personally I think the US federal government made a massive mistake in letting the Germans get to L3 first. By doing so I'm concerned that they'll set the precedent.

The Germans are a lot less cavalier with stuff than we are.
 
Personally I think the US federal government made a massive mistake in letting the Germans get to L3 first. By doing so I'm concerned that they'll set the precedent.

The "government" didn't "let" any particular entity get to L3 first. This is a private race in which the government only regulates the safety/effectiveness of any solution provided by the private sector.

And don't make the mistake of thinking one entity is ahead of another simply because they have a more stable system under a limited set of conditions. The real race is a self-driving vehicle that can handle new situations.
 
True, but multiple cameras also give you redundancy, and without the added risk resulting from having to interpret two different types of data in two different ways.

I think it helps to think about Sensors in terms of what they sense exactly.

Camera sensors only tell you light levels at each pixel position. Luckily our brains can translate those light levels into actual objects so we can make sense of our surroundings. To know a building is a building, and a tree is a tree. The reason that Teslas have crashed into buildings and trees is because the AP computer has no idea that a building is a building, and a tree is a tree. It has to rely on a Neural Network to determine what's in the image.

The neat thing about Lidar is it's a completely different type of sensor that tells you the range between you and whatever the signal is bouncing off of. It doesn't require a Neural Network to identify the object to know it's there.
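
As a toy illustration of that point, here is a minimal sketch of flagging an obstacle from raw range returns alone, with no classification step. The point cloud and the corridor limits are made up for the example, and a real system would still have to filter noise, ground returns, rain and so on, so treat it as the idea rather than an implementation.

```python
import numpy as np

# Stand-in point cloud: N x 3 returns in metres, x forward, y left, z up
# (coordinate convention assumed purely for illustration).
points = np.random.uniform([-5, -20, -1], [80, 20, 3], size=(5000, 3))

# Anything inside the vehicle's forward corridor counts as "something is there",
# regardless of whether it is a truck, a tree or a building.
in_corridor = (
    (points[:, 0] > 0) & (points[:, 0] < 60)    # up to 60 m ahead
    & (np.abs(points[:, 1]) < 1.5)              # within roughly a lane width
    & (points[:, 2] > 0.2)                      # above the road surface
)

if np.any(in_corridor):
    closest = points[in_corridor][:, 0].min()
    print(f"obstacle detected {closest:.1f} m ahead")
```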

What I expect to happen over the next 5-10 years is for camera sensor manufacturers to start making solid state Lidar sensors in the same kinds of formats that optical sensors come in.

It will become a commodity item like CMOS imaging sensors, and we won't look back.

Here is an article that does a much better job of describing the sensors than I did above.

Sensor Fusion

Disclaimer: I'm a huge advocate of sensor fusion, regulatory guidelines for automation, infrastructure improvements geared towards autonomous vehicles, and V2X communication. I do have a Tesla with the EAP/FSD options, and I got those options to play around with this stuff as we make our journey towards autonomous driving. It's exciting times. No one can really predict what will be needed, as no one has really pulled it off yet.
 
Luckily our brains can translate those light levels into actual objects so we can make sense of our surroundings. To know a building is a building, and a tree is a tree. The reason that Teslas have crashed into buildings and trees is because the AP computer has no idea that a building is a building, and a tree is a tree. It has to rely on a Neural Network to determine what's in the image.

A human needs to rely on its biological neural network to recognize scenes, for vision to have any meaning. Just like Tesla's neural network in its infancy, a newborn baby cannot make sense of the visual stimulation it receives; it's all just meaningless sensations. Mapping all that visual sensory data into meaningful and useful images requires the baby to learn. That's why a crib should have dangly toys that the child can reach out and touch (to help them map the visual sensory data to be meaningful).

To say that FSD requires LIDAR in addition to vision because cars have crashed into unrecognized objects while the neural net was in its infancy is pretty short-sighted. Like a newborn baby, it needs to learn to recognize objects as it goes through life. That's what a neural net does, and it's why Tesla has included cameras on all its cars, even before they are capable of using them for FSD while the neural net is still learning.

The neat thing about Lidar is it's a completely different type of sensor that tells you the range between you and whatever the signal is bouncing off of. It doesn't require a Neural Network to identify the object to know it's there.

What I expect to happen over the next 5-10 years is for camera sensor manufacturers to start making solid state Lidar sensors in the same kinds of formats that optical sensors come in.

It will become a commodity item like CMOS imaging sensors, and we won't look back.

It's not as simple as you make it out to be. The data from LIDAR is raw data. It still needs to be processed and filtered and integrated into the visual data, mapping data, etc. That complicates the design and operation of the neural net several fold. Eventually, as FSD matures, LIDAR may be incorporated into the visual data but I'm pretty confident the first safe and useful FSD systems will not be handicapped by the extra complexity of integrating LIDAR into the operation. Once the visually based neural net can recognize objects similarly to a human, LIDAR becomes much less valuable. Think how redundant LIDAR data would be to a human driver who is actually looking at the driving environment. It would be extraneous and useless data.
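
For a feel of what "integrated into the visual data" means at the most basic level, here is a sketch of projecting LIDAR returns into the camera image so that pixels gain a measured range. The intrinsics and the lidar-to-camera extrinsics are invented for illustration (in practice they come from calibration), and this projection is only the first step of the fusion work: time synchronisation, occlusion handling and deciding how to weight the two sources all come on top.

```python
import numpy as np

# Hypothetical calibration; real values come from camera/lidar calibration.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                     # lidar-to-camera rotation (assumed aligned)
t = np.array([0.0, -0.30, 0.0])   # lidar mounted 0.30 m above the camera (assumed)

# Stand-in scan: N x 3 points already expressed in the camera's axis convention (z forward).
points_lidar = np.random.uniform([-10, -2, 1], [10, 2, 60], size=(2000, 3))

points_cam = points_lidar @ R.T + t                # transform into the camera frame
points_cam = points_cam[points_cam[:, 2] > 0.5]    # keep points in front of the camera

uv = (K @ points_cam.T).T
pixels = uv[:, :2] / uv[:, 2:3]                    # pixel coordinates of each return
ranges = np.linalg.norm(points_cam, axis=1)        # measured range to attach to those pixels
```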
 
Camera sensors only tell you light levels at each pixel position. Luckily our brains can translate those light levels into actual objects so we can make sense of our surroundings. To know a building is a building, and a tree is a tree. The reason that Teslas have crashed into buildings and trees is because the AP computer has no idea that a building is a building, and a tree is a tree. It has to rely on a Neural Network to determine what's in the image.

The neat thing about Lidar is it's a completely different type of sensor that tells you the range between you and whatever the signal is bouncing off of. It doesn't require a Neural Network to identify the object to know it's there.

As long as what you care about is in at least two sufficiently distinct frames (either from different angles or different zoom settings), you can compute a depth map using only procedural code, without any neural network involvement.
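
For what it's worth, here is roughly what that procedural route looks like with classic stereo block matching in OpenCV. The file names, focal length and baseline are placeholders, and this is only a sketch of the idea: disparity, and from it depth, falls out of plain matching code with no learned model anywhere.

```python
import cv2
import numpy as np

# Placeholder file names; in practice these are two rectified views of the same scene.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

focal_px = 700.0    # focal length in pixels (assumed)
baseline_m = 0.12   # distance between the two viewpoints in metres (assumed)

# Classic block matching: purely procedural, no neural network.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Depth (metres) from disparity; invalid matches (disparity <= 0) are masked out.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```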
 