
Lidar lost Tesla the robotaxi?

Outcome of Elon's insistence on not using lidar?

  • Tesla will win the robotaxi competition by 2024 due to lower production cost
    Votes: 13 (14.6%)
  • Tesla will win the robotaxi competition after a few years due to lower production cost
    Votes: 15 (16.9%)
  • Competition already on the market with lidar will maintain the lead
    Votes: 15 (16.9%)
  • By the time Tesla launches a vision-only robotaxi, the competition will have done the same
    Votes: 15 (16.9%)
  • Elon is forced to change his mind and will include lidar in a future Tesla model
    Votes: 31 (34.8%)

  • Total voters: 89
Vision has higher resolution than lidar. That's the problem with lidar and why Elon ditched it: the resolution is too low to be of much use. It's only good for telling you that "a large object" is at a certain distance from you.

You are thinking of radar. Radar has very poor resolution. Tesla ditched radar likely due to supply-chain issues, and also because radar's low resolution causes false positives. Elon is supportive of high-resolution radar, however; I'm not sure HD radar is cost-effective enough yet. I'd bet Elon goes with HD radar before he goes with lidar.
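Just to put rough numbers on the resolution gap, here's a back-of-the-envelope sketch. The camera and radar figures below are illustrative assumptions, not the specs of any actual Tesla hardware:

```python
# Back-of-the-envelope angular resolution comparison. All specs here are
# *assumed* for illustration: a 1280-pixel-wide camera with a 50 degree
# horizontal FOV vs. a legacy radar with ~4 degrees of azimuth resolution.

CAMERA_H_PIXELS = 1280        # assumed horizontal pixel count
CAMERA_H_FOV_DEG = 50.0       # assumed horizontal field of view
RADAR_AZIMUTH_RES_DEG = 4.0   # assumed azimuth resolution of an older radar

camera_res_deg = CAMERA_H_FOV_DEG / CAMERA_H_PIXELS  # degrees per pixel
print(f"camera: ~{camera_res_deg:.3f} deg/pixel")
print(f"radar:  ~{RADAR_AZIMUTH_RES_DEG:.1f} deg per resolvable target")
print(f"ratio:  ~{RADAR_AZIMUTH_RES_DEG / camera_res_deg:.0f}x coarser")
```

With these made-up but plausible numbers the radar is on the order of 100x coarser in azimuth than the camera, which is the kind of gap that makes "a large object somewhere over there" the best a legacy radar can report.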
 
Based on the videos I've seen of the latest Tesla FSD Software, 10.69, I think Tesla may actually reach level 5 in a year or two.
Here is the competition driving 1,000 miles without interruption 13 years ago: https://www.youtube.com/playlist?list=PLCkt0hth826Ea3d2wZ6FvMv7j-qmxZVsr

Today they are at L4.

Tesla has the benefit of standing on the shoulders of those 13 years of AI research. But unfortunately, Tesla has handicapped itself with a very limited vision-only sensor suite (vs. the lidar the competition is using) and a 5-year-old TPU.
 
Based on the videos I've seen of the latest Tesla FSD Software, 10.69, I think Tesla may actually reach level 5 in a year or two. I haven't thought so up to now. I have FSD version 10.12, which is very flaky and far from autonomous. But 10.69 implements a whole new 3D visualization technology, and suddenly it looks like it may have a shot. It still needs fine-tuning and more features, such as understanding hand signals and the ability to understand signs posting different rules for different times of day, but it looks like it may have jumped way ahead of the others as of Sunday night.
I think that's going too far, but it does appear to be on track to be an excellent high-L2 / low-L3 system, one of the best and most widely applicable to be found in ordinary customer-owned cars.
 
Now that "Elon is forced to change his mind and will include lidar in a future Tesla model" is the most popular option, what would this mean to the hundreds of thousands who purchased the FSD?
One possibility is that those people are wrong, have a proclivity for that topic, and enjoy debating it :)

We are literally watching 10.69 roll out and seeing AMAZING results over just the previous release, on top of the fact that the management team at Tesla has been reiterating that they think FSD will ship to all customers THIS year. (In the past we always heard "next year"; it was always "next year".) It seems they have much higher confidence now that they have solved it.

Now, one possibility could be that they need higher-quality cameras, something supported by the billion-dollar Samsung deal. If they decide they need to upgrade the cameras to completely solve all the edge cases, then I believe they would be in a bit of trouble with their current FSD customers. It would not just be the cameras; it would also mean a new compute unit to handle the higher resolution, so that would be quite expensive. It's possible they retrofit all their FSD customers at no charge, but I doubt they would actually go that route. To be honest, I don't know what they would do that would make everyone happy.

I continue to be baffled by the lidar/radar debates and how they don't seem to fade, even in plain view of the incredible advances in autonomy that the team has made since deciding to stop using the "crutch" (their word, not mine).

I await your downvotes.
 
Well put, and no downvote here. AMAZING doesn't even begin to describe 10.69. Granted, this is based on YT videos and not any personal experience (yet). Really excited to see what Tesla will develop over the next 18-24 months.
 
Where is the alternative option, "Robotaxi success is uncorrelated with lidar"?

A train on rails needs no intelligence to stay in line: the flanges and taper of the wheels provide an immediate mechanical feedback system.
Lidar can similarly give immediate distance-to-object feedback to keep a car clear of obstacles with relatively little processing while it follows a map. It's easy to see how self-driving evolved from that, and how hard it is to change direction once such an evolution is under way. But it's also important to take off our blinders and look at the big picture.
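To make that "immediate feedback" idea concrete, here's a toy sketch of a controller that slows down purely on lidar range while a map supplies the path. The function, gains, and thresholds are made up for illustration and bear no relation to any shipping system:

```python
# A naive distance-feedback controller: brake proportionally as lidar range
# to the nearest obstacle shrinks. No scene understanding involved; the map
# handles "where to go" and this handles "don't hit things".

def speed_command(cruise_mps: float, nearest_obstacle_m: float,
                  safe_gap_m: float = 30.0) -> float:
    if nearest_obstacle_m >= safe_gap_m:
        return cruise_mps                        # clear road: just follow the map
    # scale speed linearly with the remaining gap
    return cruise_mps * max(nearest_obstacle_m, 0.0) / safe_gap_m

print(speed_command(20.0, 45.0))  # 20.0 -> nothing nearby, hold cruise speed
print(speed_command(20.0, 15.0))  # 10.0 -> obstacle closing, slow down
print(speed_command(20.0, 0.0))   #  0.0 -> stop
```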

What gets lost in the debate is that navigating unknown environments takes intelligence and decision making skills that are well and truly beyond what's needed for the interpretation of multiscopic video. Interpretation of the input channel, be it Lidar, vision or something else, will be seen as child's play compared to the task of building the "Robotaxi brain".
 
Elon has mentioned that his decision to focus on Vision (cameras) is due to the heavy processing and competing data generated by using both vision and radar at the same time. With both sensors operating, the computer must take in the two data streams, compare them, and try to sort out which of the two scenarios to follow. That can be a very difficult call for the computer to make.

Kind of like when you are driving manually down the road and your passenger starts screaming that you are going to crash and what to do, while your own brain is also taking in the data and has processed a route to avoid the issue. The two data streams can be very different, and you have to decide, in an instant, whether to do what the screaming passenger is telling you or follow the solution your own brain has processed.
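A minimal sketch of that dilemma, assuming a simple inverse-variance blend and a hypothetical disagreement threshold (this is not Tesla's actual fusion logic, just the shape of the problem):

```python
# Two sensors report different distances to the same object; the system must
# either blend them or arbitrate. The weighting scheme and threshold below
# are illustrative assumptions.

def fuse(vision_m, vision_var, radar_m, radar_var, max_disagreement_m=5.0):
    if abs(vision_m - radar_m) > max_disagreement_m:
        # The streams disagree badly: which one do you brake for?
        # Any hard-coded answer here is a policy choice, not a measurement.
        return None  # hand off to some higher-level arbitration
    w_v = 1.0 / vision_var
    w_r = 1.0 / radar_var
    return (w_v * vision_m + w_r * radar_m) / (w_v + w_r)

print(fuse(42.0, 1.0, 43.0, 4.0))  # close agreement -> weighted blend (~42.2)
print(fuse(42.0, 1.0, 15.0, 4.0))  # conflict -> None, arbitration needed
```

The blend is the easy part; the `None` branch is the "screaming passenger" case, and that's where the false positives came from.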
 
I keep coming back to the fact that people have been driving cars for the past 100 years using only their two eyes (and a sophisticated neural net that is our brain).
It's only a matter of time until the Tesla neural net becomes sophisticated enough to manage driving. The Tesla net does not need to be as capable as a human brain, just capable enough to drive. When you look at nature, we have small insects that manage to navigate complex 3D environments with very small neural nets. I think that the Tesla net could become as good as that of a fly.
 
Please do not come up with your own definitions of SAE levels. L3 and L2 are completely different. In L3, the car drives itself and the manufacturer takes responsibility for safety. In L2, the driver is responsible for driving. This is not a sliding scale. Mistaking L2 for L3 might be fatal for the driver and passengers.
Then that's a legal-liability issue and a business decision, not necessarily a technical distinction.

The same technology could be L2, and then, if someone pays $100 per month, the liability switches to the manufacturer as L3 when driven on certain highways in clear daytime weather, without anything technically changing. I'd call that technology high-L2 and low-L3 capable, then. That was my intent.
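As a toy illustration of one stack being offered at two levels, here's a sketch; the ODD conditions and the subscription flag are entirely hypothetical:

```python
# One and the same driving stack, exposed as L2 everywhere but as L3 only
# inside a narrow operational design domain (ODD). Nothing technical changes
# between the two returns; only who carries liability.

from dataclasses import dataclass

@dataclass
class Conditions:
    on_approved_highway: bool
    clear_weather: bool
    daytime: bool

def operating_level(cond: Conditions, l3_subscription: bool) -> str:
    in_odd = cond.on_approved_highway and cond.clear_weather and cond.daytime
    if l3_subscription and in_odd:
        return "L3"   # manufacturer accepts liability inside the ODD
    return "L2"       # driver remains responsible everywhere else

print(operating_level(Conditions(True, True, True), l3_subscription=True))   # L3
print(operating_level(Conditions(True, False, True), l3_subscription=True))  # L2
```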

I think the current Tesla technology appears to be on a path to get to that point, but whether they will take on legal liability in some cases is not known.

I believe that getting closer to autonomy will require more, and higher-resolution, visual sensors, with more redundancy and overlapping fields of view (stereo), plus maybe high-resolution radar. I don't think lidar is mandatory.

When Waymo started, there was no high-resolution radar, and computer vision was nowhere near able to do what it can today, so lidar was the only option.
 
What gets lost in the debate is that navigating unknown environments takes intelligence and decision making skills that are well and truly beyond what's needed for the interpretation of multiscopic video. Interpretation of the input channel, be it Lidar, vision or something else, will be seen as child's play compared to the task of building the "Robotaxi brain".

Where Tesla has a potentially large asset is its many human drivers navigating across many areas. This driving can be processed and used as input for semantic labeling in supervised machine learning of the driving-policy module. I don't know how much they do this now, but it's a possibility.

One obvious use is mapping out the accepted semantic behaviors at intersections from human driving: "If I am Here and want to go There, which path should I take?" This is a necessary shortcut in complex intersections, which might have multiple lanes, obstructions, and complex rules signalled to humans in natural-language signs. Either literally at the same exact intersection, where the car can follow the average human path, or to train ML models to guess at a path when there isn't a preprogrammed path in the map.
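A minimal sketch of that "Here to There" idea, assuming trajectories have already been resampled to equal-length waypoint lists (the data layout is a made-up illustration, not anything Tesla has described):

```python
# Average recorded human trajectories per (entry, exit) pair at an
# intersection to get a consensus reference path.

from collections import defaultdict

def average_paths(traces):
    """traces: list of (entry_id, exit_id, [(x, y), ...]) where all waypoint
    lists for a given (entry, exit) pair have the same length."""
    grouped = defaultdict(list)
    for entry, exit_, waypoints in traces:
        grouped[(entry, exit_)].append(waypoints)
    averaged = {}
    for key, paths in grouped.items():
        n = len(paths)
        averaged[key] = [
            (sum(p[i][0] for p in paths) / n, sum(p[i][1] for p in paths) / n)
            for i in range(len(paths[0]))
        ]
    return averaged

traces = [
    ("south", "west", [(0.0, 0.0), (2.0, 3.0), (4.0, 4.0)]),
    ("south", "west", [(0.0, 0.0), (2.5, 2.5), (4.0, 4.0)]),
]
print(average_paths(traces)[("south", "west")])  # the consensus left-turn path
```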

Some subset of good drivers can be used to model the ideal, natural behaviors that feel best to people, even if other driving styles are technically 'safe'.

I don't think the number of miles driven by autonomous cars is that good a measurement; the number of miles in the ground-truth supervised training set, though, is a big deal.
 
I keep coming back to the fact that people have been driving cars for the past 100 years using only their two eyes (and a sophisticated neural net that is our brain).
Indeed. When you watch the FSD Beta visualization and compare it to the real world while it's driving, it is already obvious that the missing piece is decision-making skill, not the input format. That's why lidar vs. vision has nothing to do with the issue.
 
<good stuff edited out>

One obvious use is mapping out the accepted semantic behaviors at intersections from human driving: "If I am Here and want to go There, which path should I take?"
I think they have worked really hard to avoid doing any of this. Maybe I'm wrong, but if you get into manual pathing, then that data has a shelf life, as things slowly change over time. I expect they have been taking the "high road" on this and trying to keep the pathing a pure algorithmic programming exercise. I'd love to have one of the FSD engineers comment on that at some point.
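For what it's worth, the shelf-life concern could in principle be handled by validating a stored path against live perception before trusting it. A purely hypothetical sketch, not anything Tesla has described:

```python
# Invalidate a stored intersection path when live perception disagrees with
# it beyond a tolerance, falling back to the online planner.

def path_still_valid(stored_path, perceived_drivable, tolerance=0.9):
    """stored_path: [(x, y), ...]; perceived_drivable: set of (x, y) cells
    the live vision stack currently marks as drivable."""
    on_drivable = sum(1 for p in stored_path if p in perceived_drivable)
    return on_drivable / len(stored_path) >= tolerance

stored = [(0, 0), (1, 1), (2, 2), (3, 3)]
live = {(0, 0), (1, 1), (2, 2)}           # construction blocked the last cell
print(path_still_valid(stored, live))      # False -> fall back to live planner
```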
 
When you look at nature, we have small insects that manage to navigate complex 3D environments with very small neural nets. I think that the Tesla net could become as good as that of a fly.
Flies get killed all the time: splatted on windshields, eaten by all kinds of predators, in the air, while sitting still.

They only fly well enough to survive as a species, not as individuals.

I'm kinda over all the debate about vision-only vs lidar, radar, laser, whatever.

At the extreme: OK, Tesla perfects FSD based on vision, let's say in 2 weeks; it's perfect* and everyone loves it.

*In good weather.

They're working inside their own self-defined "geofence": good weather. Then what? Whoops, it's raining hard, take over, bub. I already pull over when it's unsafe to drive in bad weather. How is this helping? OK, it's better in good weather than 90% or whatever. What percent of all accidents does that solve for? Not 100, that's for sure.
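For rough scale on that question: FHWA has estimated that roughly a fifth of US crashes are weather-related. Treating that figure and a made-up fair-weather solve rate as assumptions, the arithmetic looks like this:

```python
# Rough scale only. Both numbers below are assumptions for the arithmetic,
# not precise data: ~21% weather-related crashes (FHWA ballpark) and a
# hypothetical 90% solve rate in fair weather.

weather_related_share = 0.21    # assumed fraction of crashes in adverse weather
fair_weather_solve_rate = 0.90  # assumed fraction of fair-weather crashes avoided

addressable = 1.0 - weather_related_share
avoided = addressable * fair_weather_solve_rate
print(f"crashes avoided overall: ~{avoided:.0%}")  # ~71%, well short of 100%
```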

I think that's a deal killer. And they don't get ANY knock-on functionality that they could use in the future to enhance regular driving: the ability to see through fog, rain, snow, darkness.

Just seems like a huge blind spot, and the result will be a gimmick for daytime good-weather use. Cool story, bro (Elon).

[long boring parts of trips are already really well solved by free AP, so that's not part of the FSD benefit]
 
They're working inside their own self-defined "geofence": good weather. Then what? Whoops, it's raining hard, take over, bub. I already pull over when it's unsafe to drive in bad weather. How is this helping?
You bring up a good point. At this point in FSD's evolution, it does not do well in poor weather. Right now it just tells you to take over when it starts raining/snowing. Perhaps the logic for now should be as you suggested: when it detects poor weather that's unsafe to continue, it automatically changes lanes and pulls over with the hazards turned on.
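A minimal sketch of that fallback behavior as a little state machine; the states and checks are hypothetical, not Tesla's actual logic:

```python
# On detecting weather beyond the system's limits: begin a minimal-risk
# maneuver toward the shoulder, then stop with hazards on.

from enum import Enum, auto

class State(Enum):
    DRIVING = auto()
    PULLING_OVER = auto()
    STOPPED_HAZARDS = auto()

def step(state: State, weather_ok: bool, at_shoulder: bool) -> State:
    if state is State.DRIVING and not weather_ok:
        return State.PULLING_OVER        # begin minimal-risk maneuver
    if state is State.PULLING_OVER and at_shoulder:
        return State.STOPPED_HAZARDS     # parked, hazards on, driver takes over
    return state

s = State.DRIVING
s = step(s, weather_ok=False, at_shoulder=False)  # -> PULLING_OVER
s = step(s, weather_ok=False, at_shoulder=True)   # -> STOPPED_HAZARDS
print(s)
```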
 
I think that's a deal killer. And they don't get ANY knock-on functionality that they could use in the future to enhance regular driving: the ability to see through fog, rain, snow, darkness.
There are all kinds of problems with snow. Lidar and radar aren't reading snow-covered road signs either. Bad weather is a use case to be handled after fair-weather FSD is solved.
 
There are all kinds of problems with snow. Lidar and radar aren't reading snow-covered road signs either. Bad weather is a use case to be handled after fair-weather FSD is solved.
There are more than visibility issues with adverse conditions like rain, snow, and ice. GM's Super Cruise (not a lidar system, but it does have radar) cautions not to use it in 'slippery or other adverse conditions including rain, sleet, fog, ice or snow'.

I do not know whether Ultra Cruise will relax such restrictions.
 
Flies get killed all the time: splatted on windshields, eaten by all kinds of predators, in the air, while sitting still.

They only fly well enough to survive as a species, not as individuals.
Flies navigate in 3D and don't run into things. They only have vision. They're also very good at avoiding getting swatted.
I guess one of the mythical "Tesla killers" could swat a Tesla, though.