Welcome to Tesla Motors Club

Tesla Sensor Suite vs. LIDAR

Hello, new to the forums,

I was hoping to have a discussion about Tesla autopilot hardware.

Specifically, it seems as though Tesla is going in an entirely different direction when it comes to self-driving technology.

From listening to investor conference calls, it seems as though Tesla's final self-driving system will rely on very fine maps created via GPS (a SpaceX satellite cluster, perhaps), and not a LIDAR system creating a 360° map of the vehicle's surroundings. Essentially, Tesla vehicles seem to just be following lines created by GPS, combined with self-learning from people actually driving a Tesla.

Why are all the other automakers installing LIDAR on their vehicles, while Tesla alone uses a simple sensor suite combined with GPS and a front-facing camera?

From what I can tell:

Sensor Suite Advantages: Can see through snow/rain/dust, cheaper, probably more standard off the shelf.

LIDAR Advantages: ?Laser beamz?...am I missing something?

Perhaps this is not the forum for an unbiased assessment, but does anyone know the advantage of LIDAR over the Tesla approach? Any information that you may have would be great.

Thanks very much in advance.

Mark
 
...Sensor Suite Advantages: Can see through snow/rain/dust, cheaper, probably more standard off the shelf....

It is nice that radar can see through inclement weather, but what about the best case of a clear, sunny day with good weather, as in the Florida accident with a gigantic, tall, white tractor-trailer, when the system could not brake by itself?

LIDAR's proponents say that Florida case is the best example of where LIDAR would help.
 
An example of Tesla's limitation: it locks onto a lead car and follows that car. However, when a disabled car is partially blocking the lane, it still does exactly what it is programmed to do: follow the leader and center itself in the lane, and as a result it crashes into the disabled car.

A LIDAR system maps out everything surrounding the car in advance. It then plans a path that avoids obstacles, either by braking to a stop or by steering away.

Tesla's system is a limited, reactive system: it can only react as it is programmed to (follow the lead car and ignore the obstacle).

A LIDAR system is a real-time planning system: it plans in advance, before approaching an obstacle (even one you can't see yourself because the lead car blocks your line of sight, such as a mattress beyond the lead car...).
 
...does anyone know the advantage of LIDAR over the Tesla approach? Any information that you may have would be great...

I'm no expert in this area, but my understanding is that LIDAR, being of much shorter wavelength than radar, has much greater accuracy. Light waves are nanometer scale, while radio waves are meter size and larger. When you need to know the position of another vehicle or obstruction down to a few centimeters, it's going to be easier to do that with LIDAR.
 
Aside from cost, why not use both radar & lidar (and camera)?

Probably because of the size of the Lidar sensor. It's the rotating thingy on top of the roof, shooting laser beams in every direction to map out the environment. I don't think that will work well in a production car.

Conceivably, though, they could build some kind of hidden Lidar sensor into the nose or the top of the windshield that only sees a sector up front.

[Images: the Google self-driving car with its roof-mounted LIDAR unit; a diagram of the Google car's sensors; a LIDAR point-cloud view of the BAIAS show floor]
 
The radar used in cars is typically 77 GHz, which is about 4 mm, so there's no significant issue with wavelength resolution. Angular resolution for radar is worse, but that's probably more because the radar beam is phased-array steered, whereas most lidar systems use mechanical scanners and narrow beams. There are some phased-array-scanned lidars being developed, but I don't know what their angular resolution is.
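For reference, the wavelengths involved are easy to sanity-check. A quick back-of-the-envelope sketch (905 nm is a common lidar laser wavelength, used here as an assumption rather than any specific unit's spec):

```python
# Back-of-the-envelope wavelength comparison for automotive sensors.
C = 299_792_458  # speed of light in m/s

def wavelength_mm(freq_hz):
    """Wavelength in millimetres for a given frequency."""
    return C / freq_hz * 1000

radar_mm = wavelength_mm(77e9)  # 77 GHz automotive radar
lidar_mm = 905e-9 * 1000        # lidar is specified by wavelength: 905 nm

print(f"77 GHz radar: {radar_mm:.2f} mm")   # ~3.89 mm
print(f"905 nm lidar: {lidar_mm:.6f} mm")   # ~0.000905 mm
```

So the radar wavelength really is millimetre-scale, not metre-scale, and the lidar wavelength is still thousands of times shorter.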

My contention for a long time has been that Tesla needs to build a software model of what's around the car based on its radar and cameras and then maneuver the car based on that model, not what it senses instant by instant. Such a system would likely require some pretty serious compute power though. In addition to just maintaining the model, there would have to be very close coordination between the radar, camera, and image analysis software so it could recognize bicycles, jaywalkers, trucks turning left, etc.
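As a toy illustration of the kind of persistent model I mean (every name and number here is invented for the sketch, not anything Tesla actually does), tracked objects could outlive a single sensor frame and be dead-reckoned forward until they go stale:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """One object in the car's model of its surroundings."""
    obj_id: int
    position: tuple   # (x, y) in metres, car-relative
    velocity: tuple   # (vx, vy) in m/s
    last_seen: float  # timestamp of the last sensor confirmation

class WorldModel:
    """Toy persistent model: objects outlive a single sensor frame."""

    def __init__(self, max_age=2.0):
        self.max_age = max_age  # seconds a track survives unseen
        self.tracks = {}

    def update(self, obj_id, position, velocity, now):
        """Record a fresh detection from radar/camera fusion."""
        self.tracks[obj_id] = Track(obj_id, position, velocity, now)

    def predict(self, now):
        """Drop stale tracks and dead-reckon the rest to time `now`."""
        self.tracks = {i: t for i, t in self.tracks.items()
                       if now - t.last_seen <= self.max_age}
        return {t.obj_id: (t.position[0] + t.velocity[0] * (now - t.last_seen),
                           t.position[1] + t.velocity[1] * (now - t.last_seen))
                for t in self.tracks.values()}
```

A car seen once at (10, 0) moving 5 m/s is still predicted at (15, 0) one second later, even if the sensors have lost it; after `max_age` it is dropped rather than trusted forever.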
 
Hello, new to the forums,

I was hoping to have a discussion about Tesla autopilot hardware.

Specifically, it seems as though Tesla is going in an entirely different direction when it comes to self-driving technology.

From listening to investor conference calls, it seems as though Tesla's final self-driving system will rely on very fine maps created via GPS (a SpaceX satellite cluster, perhaps), and not a LIDAR system creating a 360° map of the vehicle's surroundings. Essentially, Tesla vehicles seem to just be following lines created by GPS, combined with self-learning from people actually driving a Tesla.
This description is a very inaccurate characterization of how the autopilot system works and also of how other autonomous systems work. I suggest you read up on the articles I link below, but I will briefly say where you went wrong.

The Tesla autopilot system also creates a model of the vehicle's surroundings, except with only a roughly 50 degree field of view (currently). It does this using the front video camera and radar sensor (supplemented by ultrasonic sensors around the car). That system lets the car know what part of the lane it can travel in and where vehicles/obstacles are in front of it by doing visual processing (like how your eye works), further confirmed by radar and ultrasonic sensor data. It does not rely on a separate pre-existing map to do this. Section 4 in the link below has a great picture of what it looks like to the system.
Exclusive: The Tesla AutoPilot - An In-Depth Look At The Technology Behind the Engineering Marvel

Where the high-resolution maps come in is where lane markings are poor. And they are not generated by "GPS" satellite imagery clusters as you suggest, but rather from data gathered from Tesla vehicles. That's how Tesla's system is able to "magically" stay at the center of poorly marked lanes while other lane-keeping systems fail completely.
Tesla is mapping out every lane on Earth to guide self-driving cars

The most famous LIDAR-based system would be Google's, and it does build a 360 degree model of the surroundings as you say. However, the system relies heavily on high-resolution maps, more so than Tesla's system. It matches cues in the LIDAR model of the environment against those high-resolution maps in order to figure out where the lanes are (rather than using the camera and lane markings as Tesla does). Keep in mind that what the LIDAR sensor gathers is a point cloud, which is essentially a 3D model with no textures. It doesn't "see" the lane markings (although lidar systems that can do this are being developed). The Google car also has cameras and radar, but they are used to track other vehicles and obstacles (not lane markings, as in Tesla's system).

Under Google's system, it is conceivable that the car could come to a road Google had not mapped before and not know where the lanes are, while the Tesla system would know by detecting the lane markings.
How Google’s self-driving cars detect and avoid obstacles | ExtremeTech
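To make the map-matching idea concrete, here is a heavily simplified sketch (real systems use particle filters or scan matching such as ICP; the landmarks and candidate poses below are invented purely for illustration). The car picks whichever candidate pose makes its lidar scan line up best with the prior map:

```python
import math

# Prior "high-resolution map": known landmark positions in world coordinates.
MAP_LANDMARKS = [(0.0, 5.0), (4.0, 5.0), (8.0, 5.0)]

def score_pose(pose, scan):
    """How badly a candidate (x, y) pose explains the lidar scan.
    `scan` holds landmark offsets measured relative to the car."""
    total = 0.0
    for dx, dy in scan:
        wx, wy = pose[0] + dx, pose[1] + dy  # scan point in the world frame
        total += min(math.hypot(wx - mx, wy - my)
                     for mx, my in MAP_LANDMARKS)
    return total

def localize(scan, candidates):
    """Pick the candidate pose whose scan best matches the map."""
    return min(candidates, key=lambda p: score_pose(p, scan))
```

With the car actually at (4, 0), a scan that sees the three landmarks at relative offsets (-4, 5), (0, 5), (4, 5) scores the pose (4, 0) as a perfect match, so `localize` picks it over the other candidates.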

Why are all the other automakers installing LIDAR on their vehicles, while Tesla alone uses a simple sensor suite combined with GPS and a front-facing camera?

From what I can tell:

Sensor Suite Advantages: Can see through snow/rain/dust, cheaper, probably more standard off the shelf.

LIDAR Advantages: ?Laser beamz?...am I missing something?

Perhaps this is not the forum for an unbiased assessment, but does anyone know the advantage of LIDAR over the Tesla approach? Any information that you may have would be great.

Thanks very much in advance.

Mark
First of all, it should be made clear that not all LIDAR is created equal. The very first adaptive cruise control systems actually used LIDAR (a single-beam unit). It was quickly abandoned in favor of radar because it worked poorly in adverse weather or when the car it was tracking was dirty, making it non-reflective (side point: if Tesla goes ahead and adds matte paint as an option, that might cause issues with LIDAR).
Autonomous cruise control system - Wikipedia, the free encyclopedia

However, people seem to assume that when "LIDAR" is mentioned, it is the same as the one Google uses. That one is an $80,000, 64-beam unit that rotates 360 degrees and takes 2 million readings a second with an accuracy of less than 2 cm. It has to be mounted on top of the car to get the 360 degree view.
HDL-64E
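Those headline numbers imply a very dense scan. A rough sanity check, assuming a 10 Hz spin rate (a typical figure, not a quoted spec):

```python
# Rough scan-density check for a 64-beam, 2M-readings/s spinning lidar.
readings_per_s = 2_000_000
beams = 64
spin_hz = 10  # assumed rotation rate

points_per_beam_per_rev = readings_per_s / beams / spin_hz
angular_spacing_deg = 360 / points_per_beam_per_rev

print(points_per_beam_per_rev)        # 3125.0 readings per beam per revolution
print(round(angular_spacing_deg, 4))  # 0.1152 degrees between readings
```

In other words, each of the 64 beams samples the horizon roughly every tenth of a degree on every rotation, which is why the point clouds look so detailed.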

The single-beam LIDAR units mounted in the front of the car for ACC can be had for under $1,500 as a spare part (actual cost is even lower), but they are obviously far less capable (roughly on par with, or inferior to, a typical front radar unit).
Parts.com® | Lexus SENSOR ASSY, LASER R PartNumber 8821050030

The main advantage of the camera/radar/ultrasonic sensor suite is that it works in all weather conditions (and also on non-reflective objects), fits within the footprint of the original car, and is affordable. It can basically detect things visually the way an eye does (plus more). The disadvantage is that its accuracy is not as high as lidar's.

The main advantage of a lidar system (like Google's) is high-accuracy modeling of the environment down to centimeters (in a point-cloud form like the images @Pando posted). The negatives are basically the opposite of what I described above.
 
Aside from cost, why not use both radar & lidar (and camera)?

The reason, according to Elon Musk, is that it's clearly unnecessary for the foreseeable future:

Elon Musk says that the LIDAR Google uses in its self-driving car ‘doesn’t make sense in a car context’

"That said, I don’t think you need LIDAR. I think you can do this all with passive optical and then with maybe one forward RADAR… if you are driving fast into rain or snow or dust. I think that completely solves it without the use of LIDAR. I’m not a big fan of LIDAR, I don’t think it makes sense in this context."
 
First of all, you just need to understand: Tesla for now has a limited sensor suite, but it will need to be upgraded for self-driving; they need radar all around the car and a camera in front.
Mobileye and George Hotz (see video below) and, of course, Elon don't see the need for a lidar, and I completely agree.
If you really need to see, then a camera is good enough and probably better at the task.

Radar is needed for distance judgment, and that's the only useful thing it does, but it does it pretty well.
The camera instead is used mainly for path planning.


It is nice that radar can see through inclement weather, but what about the best case of a clear, sunny day with good weather, as in the Florida accident with a gigantic, tall, white tractor-trailer, when the system could not brake by itself?
You are clearly confused; this is merely a software problem.
At the current level, it's best not to act unless you are 100% sure; a hard brake at high speed is more dangerous than not braking.
You should watch in front of you, and if a 'gigantic white tractor-trailer' crosses your path, you should see it when the car can't be sure of it.
The big problem in this case was also the camera, which is not good since it is based on contrast levels and not color, but the next camera will probably fix that.
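The trade-off being described (don't hard-brake unless you're very sure) is essentially a confidence threshold that rises with speed. A toy sketch, with all numbers invented for illustration:

```python
def braking_decision(obstacle_confidence, speed_mps,
                     base_threshold=0.90, per_mps_penalty=0.002):
    """Toy policy: require higher detection confidence before hard-braking
    at speed, because a phantom hard brake on the highway is itself a hazard."""
    threshold = min(0.99, base_threshold + per_mps_penalty * speed_mps)
    if obstacle_confidence >= threshold:
        return "hard_brake"
    elif obstacle_confidence >= threshold - 0.2:
        return "alert_driver"
    return "continue"
```

At 30 m/s the threshold works out to 0.96, so a 0.97-confidence detection triggers a hard brake, a 0.80 one only alerts the driver, and anything weaker is ignored.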

LIDAR's proponents say that Florida case is the best example of where LIDAR would help.
A color (non-monochromatic) camera could solve this without a problem, and so could better radar software.


An example of Tesla's limitation: it locks onto a lead car and follows that car. However, when a disabled car is partially blocking the lane, it still does exactly what it is programmed to do: follow the leader and center itself in the lane, and as a result it crashes into the disabled car.
Again, this is purely a software issue.

A LIDAR system maps out everything surrounding the car in advance. It then plans a path that avoids obstacles, either by braking to a stop or by steering away.

Tesla's system is a limited, reactive system: it can only react as it is programmed to (follow the lead car and ignore the obstacle).

A LIDAR system is a real-time planning system: it plans in advance, before approaching an obstacle (even one you can't see yourself because the lead car blocks your line of sight, such as a mattress beyond the lead car...).
You are very confused.
You are mixing hardware with software. A lidar can't reason anything out; a lidar is like a camera or a radar in that it gives you information. What you do with that information is another story.

And about the 'limited reactive system': reacting to anything would require an AI, which neither Google nor its competitors have.
Google, since you brought lidar into the discussion and were referring to Google, has more limitations than Tesla.
They need to map everything (everything!) for the car to move; it's the system with the least 'reacting to everything' that you can get right now.


My contention for a long time has been that Tesla needs to build a software model of what's around the car based on its radar and cameras and then maneuver the car based on that model, not what it senses instant by instant. Such a system would likely require some pretty serious compute power though. In addition to just maintaining the model, there would have to be very close coordination between the radar, camera, and image analysis software so it could recognize bicycles, jaywalkers, trucks turning left, etc.
And that's exactly what they are doing (they are developing this, according to an Elon tweet).
 
I really do not have much to add here apart from one rather simple and perhaps naive observation: humans have been driving for a very long time using only stereoscopic vision. It would seem, by that example, that this is all that is required.

Humans also get into accidents in part because they don't have enough information about the environment around them. Elon's goal is a system that is at least one order of magnitude better than a human driver, and to do that the car needs to be fully aware of its environment 360 degrees around it.
 
<snip>
The Tesla autopilot system also creates a model of the vehicle's surroundings, except with only a roughly 50 degree field of view (currently). It does this using the front video camera and radar sensor (supplemented by ultrasonic sensors around the car). That system lets the car know what part of the lane it can travel in and where vehicles/obstacles are in front of it by doing visual processing (like how your eye works), further confirmed by radar and ultrasonic sensor data. It does not rely on a separate pre-existing map to do this. Section 4 in the link below has a great picture of what it looks like to the system.
Exclusive: The Tesla AutoPilot - An In-Depth Look At The Technology Behind the Engineering Marvel
<snip>
From what I can see based on how the system performs, the model it builds is instantaneous; that is, once a target leaves its field of view, it's out of the model. I also think the field of view is closer to 120 degrees, or maybe a bit more.

Other than that, I totally agree with your comments.
 
... what good is a self-driving car if it can't cope with snow and fog...

It is clear that LIDAR proponents are not anti-RADAR.

It has never been about taking RADAR away from the current configuration.

They want to use as many different kinds of sensors as possible so that each complements the others' weaknesses.

I think the issue is: should Tesla add additional sensors like LIDAR to solve some of the current limitations?
 
Google's Lidar is very expensive but has very good resolution. The cheaper solid-state variants have less resolution and less dispersion; I suspect that in these respects they compare similarly to radar. This is typical engineering, where the designer has to make decisions on priorities, including cost. The decision will change over time as the sensors improve and their costs change.