Welcome to Tesla Motors Club

Tesla Sensor Suite vs. LIDAR

Let's take a hypothetical scenario of a trailer across the road: the camera sees it, the lidar sees it, the radar does not see it.

Or another hypothetical scenario of a dust devil on the road: the camera does not see it, the radar does not see it, the lidar sees it.

Or a bicycle crossing the road with the sun behind it: the radar sees it, the camera does not see it, the lidar sees it.

The 1st is a false negative.
The 2nd is a false positive.
The 3rd is a false negative.

False sensor readings are a fact of life.
It takes a lot more than a sensor detecting something to be a useful part of an AP system. The system has to recognize what it is and fit it into the overall model of the environment it builds for the car to navigate through. This is why I think the software that integrates and interprets the inputs from both the camera and the radar is the key part. Lidar likely gives higher-resolution position information than radar, but unless the overall system can recognize what it's sensing, it's still not going to be useful.
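The fusion step described here can be illustrated with a toy example. One common way to combine independent detection confidences from several sensors is to sum their log-odds; the probabilities below are made up for illustration and don't reflect any real AP implementation.

```python
import math

def fuse_detections(probs):
    """Combine independent per-sensor detection probabilities
    into one belief by summing log-odds (naive Bayes with a
    uniform prior). probs is a list of values in (0, 1)."""
    log_odds = sum(math.log(p / (1.0 - p)) for p in probs)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical readings for one object: camera fairly sure,
# radar unsure, lidar says probably nothing there.
belief = fuse_detections([0.9, 0.5, 0.2])
```

With these toy numbers the fused belief works out to 2.25/3.25 ≈ 0.69: no single sensor decides on its own, which is the point about needing an integrated model rather than raw detections.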
 
You're on my side, you just don't realize it! You made the perfect case.
Unless you exclude the lidar (the 2nd case), you would have to STOP EVERY TIME.

But you can't stop every time the vehicle sees a lump of steel across the road. This happens ALL the time.

Near me there is an overhead rail bridge, before which is a maximum-height sign, and an undulating road (the road goes up and down).

When driving along the road, the sign appears to be a piece of steel lying across the road. My mind correctly interprets it without me even trying; it's a subconscious competency.

Such signage is common, bridges are common, and roads that go up and down are common. False positives are a fact of life.
As software gets better, it becomes more able to interpret this.

Trees are another one: on a rural road, a tree appears in the middle of the lane.
(Actually, the road builders simply diverted the road around the tree.)
Or an overhanging tree and an undulating road. There are many instances where driving on requires a leap of faith because we don't have a clear view ahead. But we are very used to it, and do it instinctively.

Our minds interpret the road in the context of what's around it.
Is that 40 kg four-legged lump of meat
two children or one large dog?

Does that road sign have a truck at the end of it?

Is that dust devil just some air, or is it a boulder on the road?

The more sensors the better, but false positives are a fact of life.
 
But you can't stop every time the vehicle sees a lump of steel across the road. This happens ALL the time.
You seriously have lumps of steel in the road all the time? That's messed up. Run over one at speed, and if it flies up it might kill someone. If it pierces the battery compartment, you might have a fire. If it shreds your tire, you might lose control of the car and get hurt or hurt someone else. You need to write to your local government if it's such a huge issue, because it's clearly a safety hazard.

Where I live I rarely see steel objects in the road, especially ones that are safe to drive over... it's almost never.
 
You seriously have lumps of steel in the road all the time? That's messed up. Run over one at speed, and if it flies up it might kill someone. If it pierces the battery compartment, you might have a fire. If it shreds your tire, you might lose control of the car and get hurt or hurt someone else. You need to write to your local government if it's such a huge issue, because it's clearly a safety hazard.

Where I live I rarely see steel objects in the road, especially ones that are safe to drive over... it's almost never.

He didn't say the steel was in the road; he said the steel was across the road. (Or in some cases it is over the road, but with the perspective it may appear to be in the road until you get right up to it.)
 
But you can't stop every time the vehicle sees a lump of steel across the road. This happens ALL the time.

Near me there is an overhead rail bridge, before which is a maximum-height sign, and an undulating road (the road goes up and down).

When driving along the road, the sign appears to be a piece of steel lying across the road. My mind correctly interprets it without me even trying; it's a subconscious competency.
Now, the radar has no problem with it, of course. What you're saying is that the camera could have a problem, but we are talking about a stereoscopic camera (two cameras), and they have no problem calculating the distance, so again, this is a case that can be easily solved.
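The distance calculation a stereo pair performs is straightforward triangulation: depth is focal length times baseline divided by disparity. A minimal sketch, with made-up camera parameters (the focal length and baseline here are illustrative, not any real car's):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Triangulate depth from a rectified stereo pair.
    focal_px: focal length in pixels; baseline_m: distance
    between the two cameras in metres; disparity_px: horizontal
    shift of the same feature between left and right images."""
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the views")
    return focal_px * baseline_m / disparity_px

# A sign with 7 pixels of disparity, seen by a 700 px focal
# length pair with a 12 cm baseline, sits about 12 m ahead.
distance = stereo_depth(700.0, 0.12, 7.0)
```

The weakness is that disparity shrinks with distance, so range accuracy degrades rapidly for far objects; that is one reason lidar advocates do not consider stereo a full replacement.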

Such signage is common, bridges are common, and roads that go up and down are common. False positives are a fact of life.
As software gets better, it becomes more able to interpret this.
Exactly.
The point is: why do you need a lidar? Why add a layer of complexity? The camera can see what your eyes can see, and that is enough. The lidar simply creates more false positives. Of course it can be excluded, but then you would be excluding the sensor completely (more or less) whenever it has problems with fog or rain or similar, so what good does it do?
If you can't count on a sensor, then it's useless and can only create problems.

Trees are another one: on a rural road, a tree appears in the middle of the lane.
(Actually, the road builders simply diverted the road around the tree.)
Or an overhanging tree and an undulating road. There are many instances where driving on requires a leap of faith because we don't have a clear view ahead. But we are very used to it, and do it instinctively.
Yes... but they would need to be in the path planned for the car, and they won't be, so what's the problem?
 
Now, the radar has no problem with it, of course. What you're saying is that the camera could have a problem, but we are talking about a stereoscopic camera (two cameras), and they have no problem calculating the distance, so again, this is a case that can be easily solved.


Exactly.
The point is: why do you need a lidar? Why add a layer of complexity? The camera can see what your eyes can see, and that is enough. The lidar simply creates more false positives. Of course it can be excluded, but then you would be excluding the sensor completely (more or less) whenever it has problems with fog or rain or similar, so what good does it do?
If you can't count on a sensor, then it's useless and can only create problems.


Yes... but they would need to be in the path planned for the car, and they won't be, so what's the problem?

Just FYI, the quote was attributed to me, but it really came from @renim. I don't know how my ID got on that. No harm, no foul; just giving credit where it's due.
 
Nvidia has a report on its system, which uses a camera and their latest computer to teach a car to drive. No radar, no lidar, just a camera and a human driver as they recorded a visual of the course. First they recorded a course under different conditions using three cameras and put that data, without much interpretation, into their computer; then they used one polychrome camera on a test car, not to repeat a map, but to use the computer's interpretation of what the car was "seeing" earlier. The roads included freeways, curvy roads, a sharp unmarked curve, and gravel with no clear boundaries.

dave2-edited2016-04-21-1.mp4.

Absolutely amazing. The computer learned how to drive a car in 100 miles of driving under varying conditions.

The actual report is here.

https://arxiv.org/pdf/1604.07316v1.pdf

On second reading, unless I missed it, they only taught it how to steer, not to brake or accelerate. Sorry. In principle, I don't believe it would be much more difficult to judge braking and accelerating, especially with Tesla's archive of actual driving experience with Autopilot.
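One reason for the three cameras, per the paper, is data augmentation: the side cameras see the road as if the car had drifted off-center, and their frames are labelled with a corrective steering value so the network learns to recover. The geometry below is a simplified sketch of that idea; the offset, lookahead distance, and function names are mine, not the paper's.

```python
import math

def corrective_steering(base_steering_rad, camera_offset_m, recovery_dist_m):
    """Adjust the human driver's steering label for a frame taken
    by a side-mounted camera: steer back toward the lane center
    over recovery_dist_m metres. A positive offset means the camera
    is mounted to the right, so the correction steers left (negative)."""
    correction = math.atan2(camera_offset_m, recovery_dist_m)
    return base_steering_rad - correction

# Frame from a camera mounted 0.6 m right of center while the
# driver steers straight: label it with a gentle left correction.
label = corrective_steering(0.0, 0.6, 20.0)
```

Training on these shifted-and-relabelled frames is what lets the car correct drift it was never deliberately shown by the human driver.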
 
LIDAR is good for building a global base case that more reliable, lower-cost sensor arrays can use for integration/translation.

Radars are good, but I think you need three emitters and seven receivers. One emitter and receiver are as installed now, to follow traffic.

The other two emitters are in the A-pillars, looking at the quarters with overlap 50 feet forward of center. These emitters light up stationary and slow-moving objects on shoulders, in roundabouts, and on cross streets.

The receivers look for directional reflection and Doppler shifts. In addition to the receivers at the top of the A-pillars, there is reception in the headlight assembly and in the fog light assembly. These receivers are recessed enough to shade out ground-clutter reflections. They are looking at the phase shift between the two different emitted signals, and at direction.

This is low cost, has a good view factor, and is weather reliable.
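The Doppler measurement this scheme relies on is simple to state: a target closing at radial speed v shifts a radar carrier of frequency f0 by roughly 2·v·f0/c, the factor of two arising because the wave travels out and back. A sketch, with a 77 GHz automotive carrier assumed for illustration:

```python
C = 299_792_458.0   # speed of light, m/s
CARRIER_HZ = 77e9   # typical automotive radar band (assumption)

def doppler_shift(speed_mps, carrier_hz=CARRIER_HZ):
    """Frequency shift produced by a target closing at speed_mps
    (two-way path: out to the target and back)."""
    return 2.0 * speed_mps * carrier_hz / C

def radial_speed(doppler_shift_hz, carrier_hz=CARRIER_HZ):
    """Recover the target's radial speed from the measured shift."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A car closing at 30 m/s (~67 mph) produces a shift of roughly 15 kHz.
shift = doppler_shift(30.0)
```

Stationary clutter shows up near zero shift, which is why radar firmware commonly down-weights stationary returns, and why the overhead sign in the earlier example doesn't trouble the radar.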
 
Nvidia has a report on its system, which uses a camera and their latest computer to teach a car to drive. No radar, no lidar, just a camera and a human driver as they took a visual of the course. First they mapped a course under different conditions using three cameras and put that data, without interpretation, into their computer; then they used one polychrome camera on a test car, not to repeat a map, but to use the computer's interpretation of what the car was "seeing" earlier. The roads included freeways, curvy roads, a sharp unmarked curve, and gravel with no clear boundaries.

dave2-edited2016-04-21-1.mp4.

Absolutely amazing. The computer learned how to drive a car in 100 miles of driving under varying conditions.

The actual report is here.

https://arxiv.org/pdf/1604.07316v1.pdf

This little car is making some mistakes; it is very far from autonomous. I'm not sure it's worth $15k, though. Comma.ai claims to do roughly the same thing for under $1,000.
 
Yes, they can, using only a camera (that's the point I was making), but it's not good in a city. For that, you need 360° coverage, like what we do when we check the side/rear mirrors. But you don't need to see exactly what's going on at 360°; you only need to know the distances and the relative speeds, so radars are good enough.
 
Yes, they can, using only a camera (that's the point I was making), but it's not good in a city. For that, you need 360° coverage, like what we do when we check the side/rear mirrors. But you don't need to see exactly what's going on at 360°; you only need to know the distances and the relative speeds, so radars are good enough.

That still sounds much better than what Tesla has now, as Tesla doesn't handle:

traffic cones,

wooden post guardrails,

Texan guardrails...
 
Nvidia has a report on its system, which uses a camera and their latest computer to teach a car to drive. No radar, no lidar, just a camera and a human driver as they recorded a visual of the course. First they recorded a course under different conditions using three cameras and put that data, without much interpretation, into their computer; then they used one polychrome camera on a test car, not to repeat a map, but to use the computer's interpretation of what the car was "seeing" earlier. The roads included freeways, curvy roads, a sharp unmarked curve, and gravel with no clear boundaries.

dave2-edited2016-04-21-1.mp4.

Absolutely amazing. The computer learned how to drive a car in 100 miles of driving under varying conditions.

The actual report is here.

https://arxiv.org/pdf/1604.07316v1.pdf

On second reading, unless I missed it, they only taught it how to steer, not to brake or accelerate. Sorry. In principle, I don't believe it would be much more difficult to judge braking and accelerating, especially with Tesla's archive of actual driving experience with Autopilot.

If you watch it at 8:33, the driver is hanging out the side of the car banging on the door. I don't think he can be pushing the accelerator or brakes in that position, so I'm guessing the car does modulate the gas/brakes to some degree.

Thanks for posting! I especially liked the segment where they showed the pixelated images of the unmarked road (bug-eyed like...).
 
Yes... and no, and yes.

Let me explain my thinking better.

Yes, it's better: it handles more things.

But no, it's not better, since it hasn't been tested and it works "by magic" (sort of), and you don't really know how well it's going to do. It's a different approach. Tesla (and of course Mobileye) currently use a labelling function, so "this is road", "this is a car", etc., and everything that isn't labelled is an unknown object and probably discarded (again, sort of; I'm simplifying). So the system has good accuracy for what it knows, but none for the unknown: for what it knows it has no problem, but you are left with a huge hole for the unknown.

And again, yes, it WILL be better (the Nvidia system). It just wasn't possible before, and it will need a huge learning phase and very thorough testing. As I said, it works sort of by magic: you don't really know what the system is doing or what cues it's picking up on. Well, sort of; you know something, and you can tinker with some other things. But the beauty of it is that, since you don't need to hard-code every single case, you cover a great many cases without extra work, just by putting the machine in "learning mode" and feeding it a lot of data, data that of course Tesla has no real problem getting. The problem is that, since you don't really know what cues it's picking up in the image, you could sometimes end up with the car doing something very stupid for no reason at all (that you can understand), so the system needs to be tested harder than the previous one. But it has huge potential.

I think, and hope, that AP2.0 will be based on this system. This is why you could get a lot of improvement with the current hardware; of course not Level 4, since you don't have all the sensors you really need, but it could be a huge improvement.
 
Thanks for posting! I especially liked the segment where they showed the pixelated images of the unmarked road (bug-eyed like...).
Oh, is that what the layered bit was about (and the concern about the horizon)? Does that permit depth calculation and speed interpolation? All I caught on reading was the concern about measuring turning radius, and I didn't understand at all the concern about the horizon. At two points in the video there was concern: first about speed, second about a large truck looming ahead.

I guess I'll have to read it again, or others here can explain.
 
Tesla is working with Bosch on its improved radar software version.

In the meantime, Bosch is giving demonstrations of its LIDAR version among its 15 sensors:


tesla_ev_auto.jpg


First Drive: electric and autonomous Tesla S by Bosch on Driving the Nation



A glimpse at Tesla Autopilot 2.0 capabilities with a Bosch Model S prototype




 
For those who are interested in LIDAR, here's another interesting company from Canada. In some applications, such as blind-spot detection, it would cost less than $50. For 3D point-cloud assisted driving, it starts at less than $100.


It claims to work in inclement weather.

At 00:29:45, it shows that it can easily identify a tractor-trailer doing a Florida-style "Lateral Turn Across Path (LTAP)":


tF7n3W3.jpg


Instead of using one very visible LIDAR on the roof of the car, the company hides them at the four corners of the car (in the headlight and tail light assemblies).
 
For those who are interested in LIDAR, here's another interesting company from Canada. In some applications, such as blind-spot detection, it would cost less than $50. For 3D point-cloud assisted driving, it starts at less than $100.

This is wonderful. Good catch. Have you thought of posting a reference to your post on other threads? The investor threads, for example? Be careful to acknowledge that it is a promotional piece; you and I are interested in the technology and pricing!
 
This is wonderful. Good catch. Have you thought of posting a reference to your post on other threads? The investor threads, for example? Be careful to acknowledge that it is a promotional piece; you and I are interested in the technology and pricing!

Financial speculation and investing are interesting, but I am more interested in how things work and how to make driving safer.

On the other hand, when Tesla says LIDAR is not necessary, that doesn't mean it can't have LIDAR.

Tesla Model S prototype with new sensors spotted near Tesla’s HQ – potentially lidar or GPS

Two months ago, a conventional LIDAR was spotted:

tesla-model-s-lidar-e1473514250292.jpg




Recently, what look like newer LIDAR pucks were spotted:

tesla-lidar-or-gps.png


LIDAR pucks were also seen in 2014 at the Las Vegas BMW self-driving drifting demonstration:

BMW builds self-drifting cars

One at the front left of the roof:

Large%20Image_10364.jpg


The other at the rear right of the roof:

Large%20Image%206_689.jpg
 
Financial speculation and investing are interesting, but I am more interested in how things work and how to make driving safer.

On the other hand, when Tesla says LIDAR is not necessary, that doesn't mean it can't have LIDAR.
Snip.

I share your interests and follow the short-term investors' thread, though I don't have money to invest right now. I think those members might be interested in these two posts of yours, especially on the weekend, when there is a tendency to wander off topic. Would you mind if I reference them, or would you like to do it?
 