
Elon: "Feature complete for full self driving this year"

I disagree. More information is always better, and lidar provides additional information that cameras do not.

Actually, LIDAR provides much, much less information than cameras, and potentially with much lower accuracy. First, you have to fuse that with the camera info, and that fusion can fail to map the distances onto the right camera pixels.

Second, LIDAR doesn't sample a scene instantly; the distance measurement can be accurate, but if you've moved three feet between when you sampled two points that happen to be near one another because of the scanning pattern, you now have to apply math to guess at what the actual shape of the object is.

By contrast, with two camera images, you have a point cloud in which all points were effectively sampled at the same time. Ignoring the errors caused by timing skew between cameras (which should be roughly constant from frame to frame), you're going to end up with higher accuracy using that approach, not lower accuracy, and thus should have much lower rates of false object detection, assuming all else is equal.
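If it helps, here's a minimal sketch of the depth-from-disparity math behind that stereo point cloud. The focal length, baseline, and disparity values are made-up numbers for illustration, not any actual camera's parameters.

```python
# Minimal sketch: distance from stereo disparity (illustrative numbers only).
# depth = focal_length_px * baseline_m / disparity_px

focal_length_px = 1200.0   # assumed focal length in pixels (hypothetical)
baseline_m = 0.12          # assumed spacing between the two cameras, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Triangulated distance to a matched feature, in meters."""
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the two images")
    return focal_length_px * baseline_m / disparity_px

# A feature matched in both images, 9 pixels apart horizontally:
print(depth_from_disparity(9.0))   # ~16 m

# The same one-pixel matching error matters more for distant objects:
print(depth_from_disparity(2.0), depth_from_disparity(3.0))  # ~72 m vs ~48 m
```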
 
  • Like
Reactions: willow_hiller
Not sure what's confusing about this:


Or this:


Really? It looks awful to me; I'd be pretty lost trying to drive in that. The isometric view just hides how bad it is: you get to see all the useless nearby buildings filling in the void of data. It makes the scene look less sparse and hides the 64 lines of vertical resolution that you have to work with.

I could do it but I would struggle with any significant amount of traffic; the occlusion and falloff are just so harsh, it's like half of the world has been sucked into a black hole. Cresting a hill would become an act of blind faith. There's this near-blind spot that has a diameter of 2-3 car widths. You have nearby parked cars disintegrating into a handful of unrecognizable points.

It's like trying to drive around while the only light source in the entire universe is a flashlight mounted on your roof and somehow the laws of physics were changed to delete all ambient and indirect light. I think that's the biggest disadvantage, the falloff is just so harsh.

What happened to this stop sign? It's like the signal just exploded.

upload_2020-2-5_14-43-6.png
 
You have two watches and they disagree, which do you use to tell time?
If you trust one watch over the other, why do you have two?

I used to scuba dive. I always wore a second depth gauge. If they disagreed I believed the one that showed the shallower depth. If the computer thought I was shallower than the second depth gauge said, then I knew I could not trust the computer's NDL calculation and added a big extra margin of safety, or used the tables.

Lidar provides additional data. If there's a conflict between what the cameras are telling you and what the lidar is telling you, you rely on the one that gives you the more cautious recommendation. But in most cases, the data from one will complement the other.
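As a toy illustration of what I mean by "the more cautious recommendation" (made-up function and numbers, not anyone's actual stack):

```python
# Toy sketch of "believe the more cautious sensor", like the shallower depth gauge.
# All names and numbers are made up for illustration.

def obstacle_distance_for_planning(camera_estimate_m: float,
                                   lidar_estimate_m: float,
                                   disagreement_threshold_m: float = 2.0) -> float:
    """Return the obstacle distance the planner should assume."""
    cautious = min(camera_estimate_m, lidar_estimate_m)  # nearer obstacle = more caution
    if abs(camera_estimate_m - lidar_estimate_m) > disagreement_threshold_m:
        # The sensors disagree badly: add an extra safety margin, like using the tables.
        cautious -= 1.0
    return max(cautious, 0.0)

print(obstacle_distance_for_planning(34.0, 30.0))  # -> 29.0 (disagreement, extra margin)
print(obstacle_distance_for_planning(30.5, 30.0))  # -> 30.0 (agreement, just the nearer one)
```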

And with a Tesla, you would get it OTA as soon as it is ready.
With Toyota, you need to wait till they perfect the SW and the HW, and integrate it into a car, and sell enough cars for you to buy one.

If Waymo cracks the autonomy nut and leases the technology to Toyota, there will be no need to wait. At that point, Tesla's OTA updates don't make a difference until they reach the level of the technology in the Toyota.

Actually, LIDAR provides much, much less information than cameras....

But I'm not talking about using lidar instead of cameras. I'm talking about adding an additional data set.

And as diplomat33 correctly points out, cameras are only as good as the computer program that interprets the images. And so far, computers are cr*p at interpreting images.
 
Really? It looks awful to me; I'd be pretty lost trying to drive in that. The isometric view just hides how bad it is: you get to see all the useless nearby buildings filling in the void of data. It makes the scene look less sparse and hides the 64 lines of vertical resolution that you have to work with.

I could do it but I would struggle with any significant amount of traffic; the occlusion and falloff are just so harsh, it's like half of the world has been sucked into a black hole. Cresting a hill would become an act of blind faith. There's this near-blind spot that has a diameter of 2-3 car widths. You have nearby parked cars disintegrating into a handful of unrecognizable points.

It's like trying to drive around while the only light source in the entire universe is a flashlight mounted on your roof and somehow the laws of physics were changed to delete all ambient and indirect light. I think that's the biggest disadvantage, the falloff is just so harsh.

What happened to this stop sign? It's like the signal just exploded.

View attachment 508287
This is the definition of arguing in bad faith.

The isometric view just hides how bad it is
No, the view actually hides how far away things are: in that view, 100 m can look like it's 50 ft away because of the FOV settings of the debug cam, which make things look closer.
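If it helps, here's a toy projection example (made-up resolution and FOVs, nothing from the actual debug tool) of how the virtual camera's field of view changes how big, and therefore how close, something looks on screen.

```python
import math

# Toy sketch: how a virtual (debug) camera's field of view changes apparent distance.
# All numbers are made up for illustration.

image_width_px = 1920
car_width_m = 1.9

def focal_length_px(horizontal_fov_deg: float) -> float:
    return (image_width_px / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)

def apparent_width_px(distance_m: float, horizontal_fov_deg: float) -> float:
    """On-screen width of a car at the given distance, for the given FOV."""
    return focal_length_px(horizontal_fov_deg) * car_width_m / distance_m

# A car 100 m away rendered with a narrow 20-degree FOV fills about as many
# pixels as a car ~17 m away rendered with a wide 90-degree FOV:
print(apparent_width_px(100.0, 20.0))  # ~103 px
print(apparent_width_px(17.0, 90.0))   # ~107 px
```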

I could do it but I would struggle with any significant amount of traffic
No, you wouldn't.
luminar.gif


And here's more
A visualization of this self-driving car's view of the world, including a man walking his dog : interestingasfuck

the occlusion and falloff are just so harsh, it's like half of the world has been sucked into a black hole.

That's the same occlusion you would get if you were behind a car with a camera. If that image were projected in 3D using an overhead view, all you would see is black in the occluded areas.

Cresting a hill would become an act of blind faith.
You are literally just making stuff up without even thinking it through. All cars come with surround lidars (front, sides, back). And if your two eyes were replaced with lidars, you would see what's in front of you, since your eyes wouldn't be on top of your car.
It's hard to have a real discussion if people don't think things through.
There's this near-blind spot that has a diameter of 2-3 car widths.
Again, another argument made in bad faith, as all of these cars have surround lidar and this is just the output of the top lidar.
 
  • Informative
Reactions: APotatoGod
The HD map Elon is describing is the same HD map a driver familiar with the road has in his head, only crowdsourced from cars and digitized.

You can perfectly well drive in a new location without maps, by vision only. But it doesn't make sense for a human driver to drive into the same pothole every day when he knows it's there. In the same way, it doesn't make sense for other Teslas to keep hitting the pothole once a few cars ahead of you already have.

Maps help, but they should definitely not be required to drive. If the road is covered in snow, they help you know where the lanes split and merge even if you can't see the markings on the road, the same way a driver familiar with the road would know. A new driver would likely end up outside the lanes because he doesn't know where they are under the snow.
I swear you just copy-pasted one of my posts from Reddit saying exactly this, lol.

And the response I got from all the Tesla fans just weeks ago was that it was completely useless, a waste of time, and would provide nothing since you already have to detect it in real time.

I'm sure I can find you saying the same thing not too long ago.

The point is, an HD map is an HD map, and Elon said anyone using HD maps is DOOMED and that HD maps were one of the two things people would STOP using. This is what created a firestorm between outspoken people who had no idea what they were talking about but championed whatever Elon said, and people with actual expertise or who had done actual research.

You are basically just revising what he said rather than admitting he WAS WRONG!
 
  • Like
Reactions: croman
But I'm not talking about using lidar instead of cameras. I'm talking about adding an additional data set.

As am I. The problem is that using LIDAR with cameras involves basically taking good data from the camera and trying to map extremely noisy data from LIDAR onto that.

LIDAR is like the rolling shutter you'd get if you sampled an image sensor in a bizarre, haphazard zigzag pattern over the course of two seconds instead of 30–60 times per second. If you're sitting perfectly still, the accuracy of LIDAR is amazing, but if you're moving, LIDAR requires nontrivial effort to get a point cloud that's even halfway useful, much less accurate — with an error of potentially O(20cm) by the end of each sweep. And I think even that level of accuracy requires starting with detailed HD maps calculated from multiple prior drives through the area to use as a baseline.
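Roughly the kind of correction I mean, as a sketch: undistort each return by the ego motion since the start of the sweep. The constant-velocity assumption and all the numbers here are mine, for illustration only, not any particular lidar's behavior.

```python
# Rough sketch of de-skewing a lidar sweep using ego motion (constant-velocity
# assumption, made-up numbers). Each return is tagged with the time since the
# start of the sweep; the car has moved forward during that time, so the raw
# point is shifted back into the frame the sensor had when the sweep began.

from dataclasses import dataclass

@dataclass
class LidarReturn:
    x_m: float        # forward, in the sensor frame at the instant of measurement
    y_m: float        # left
    t_offset_s: float # time since the start of the sweep

def deskew(returns, ego_speed_mps: float):
    """Express every return in the frame the sensor had at t = 0."""
    corrected = []
    for r in returns:
        shift = ego_speed_mps * r.t_offset_s  # how far the car moved before this return
        corrected.append((r.x_m + shift, r.y_m))
    return corrected

# At highway speed (~27 m/s), two points 0.1 s apart in the same sweep are
# offset by ~2.7 m before correction:
sweep = [LidarReturn(20.0, 1.0, 0.00), LidarReturn(20.0, 1.2, 0.10)]
print(deskew(sweep, 27.0))  # ~[(20.0, 1.0), (22.7, 1.2)]
```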

There are ways to improve the accuracy of the LIDAR data, but parallax calculations on detected edges in stereo images and throwing away the outliers gives you comparable accuracy for a few hundred grand less per car.
 
And the response I got from all the Tesla fans just weeks ago was that it was completely useless, a waste of time, and would provide nothing since you already have to detect it in real time.

From a security perspective, you can't trust data from a single car. However, it is somewhat useful to have crowdsourced data, as long as there is a minimum amount of corroboration, to use as a fallback. For example, speed limits detected by previous cars that come through can be useful if your view of the sign is blocked by a semi, but only if verified by a large number of independent vehicles.
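Something like this is what I mean by a minimum amount of corroboration; the names, threshold, and data below are entirely hypothetical.

```python
# Toy sketch: accept a crowdsourced speed limit only with enough independent
# corroboration. The names, threshold, and data are all hypothetical.

from collections import Counter

MIN_INDEPENDENT_REPORTS = 25

def fallback_speed_limit(reports_by_vehicle):
    """reports_by_vehicle maps a vehicle ID to the speed limit it reported (mph).

    Returns a limit only if enough distinct vehicles agree; otherwise None,
    and the car falls back to its own cameras / conservative defaults.
    """
    if not reports_by_vehicle:
        return None
    counts = Counter(reports_by_vehicle.values())
    limit, votes = counts.most_common(1)[0]
    return limit if votes >= MIN_INDEPENDENT_REPORTS else None

reports = {f"vehicle_{i}": 45 for i in range(30)}
reports["vehicle_x"] = 65  # one outlier or bad actor doesn't change the result
print(fallback_speed_limit(reports))                             # 45
print(fallback_speed_limit({"vehicle_a": 65, "vehicle_b": 65}))  # None (too few)
```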

Communication from one car to another, however, is useless, because there's no practical way for your individual car to validate that the data came from multiple independent vehicles, as opposed to a troll device. (It's hard enough to validate how independent devices are even if you have a data center at your disposal, much less a car trying to do so on its own.)


The point is, an HD map is an HD map, and Elon said anyone using HD maps is DOOMED and that HD maps were one of the two things people would STOP using.

Relying on != using. A car that requires HD maps to get accurate enough measurements is going to be in a world of hurt whenever anything changes. But that's not to say that they can't be useful as a way of mitigating the risk of missing things that you believe should still be there (e.g. traffic lights, potholes, etc.).
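A small sketch of the distinction, with hypothetical names rather than anyone's actual stack: the map only flags things to double-check; live perception always drives the decision.

```python
# Sketch of "using" a map without "relying" on it (hypothetical names and logic).
# Live perception always wins; the map only raises flags for things we expected
# to see but didn't, or things the map doesn't know about yet.

def merge_map_with_perception(perceived_objects, mapped_objects):
    possibly_missed = mapped_objects - perceived_objects   # e.g. an occluded traffic light
    newly_appeared = perceived_objects - mapped_objects    # the world changed; map is stale
    return {
        "act_on": perceived_objects,            # drive on what the sensors see right now
        "slow_down_and_look_for": possibly_missed,
        "report_for_map_update": newly_appeared,
    }

print(merge_map_with_perception(
    perceived_objects={"stop_sign_12", "pothole_3"},
    mapped_objects={"stop_sign_12", "traffic_light_7"},
))
```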
 
  • Like
Reactions: mongo
Read the last couple of pages regarding camera vs. lidar, Tesla's new self-labelling system, etc. My background: I did my master's thesis on lidar-based localization and mapping, supervised other research in the field, and worked on developing self-driving cars for a competitor. Imo Tesla's approach seems great; it might set them back a few months while they rewrite the code, but it should produce a lot of high-quality labels. Cameras are good enough, and labels built from 1,000 frames and 8 cameras will be really good for whatever you want to measure. Distance to lane markings? Should be centimeter level. Drivable area? Should be good enough. Traffic lights, for sure. Lidars are cool and great, but 8,000 images will contain tons of signal too.
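Rough intuition for why a label built from many frames can be so tight: averaging N independent, unbiased per-frame estimates shrinks the error by roughly sqrt(N). A back-of-the-envelope sketch, with a made-up 10 cm per-frame noise figure:

```python
# Back-of-the-envelope: averaging many noisy per-frame measurements of a static
# quantity (e.g. distance to a lane line) tightens the estimate by ~sqrt(N).
# The 10 cm single-frame noise figure is a made-up illustration, not a spec,
# and it assumes the per-frame errors are independent and unbiased.

import random, statistics

true_distance_m = 1.50
single_frame_sigma_m = 0.10   # hypothetical per-frame noise (10 cm)
num_frames = 1000

random.seed(0)
frames = [random.gauss(true_distance_m, single_frame_sigma_m) for _ in range(num_frames)]

estimate = statistics.fmean(frames)
expected_sigma_of_mean = single_frame_sigma_m / num_frames ** 0.5

print(round(estimate, 4))                 # close to 1.50
print(round(expected_sigma_of_mean, 4))   # ~0.0032 m, i.e. about 3 mm
```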
 
As am I. The problem is that using LIDAR with cameras involves basically taking good data from the camera and trying to map extremely noisy data from LIDAR onto that.

LIDAR is like the rolling shutter you'd get if you sampled an image sensor in a bizarre, haphazard zigzag pattern over the course of two seconds instead of 30–60 times per second. If you're sitting perfectly still, the accuracy of LIDAR is amazing, but if you're moving, LIDAR requires nontrivial effort to get a point cloud that's even halfway useful, much less accurate — with an error of potentially O(20cm) by the end of each sweep. And I think even that level of accuracy requires starting with detailed HD maps calculated from multiple prior drives through the area to use as a baseline.

There are ways to improve the accuracy of the LIDAR data, but parallax calculations on detected edges in stereo images and throwing away the outliers gives you comparable accuracy for a few hundred grand less per car.

This is simply incorrect, from beginning to end; I don't know where to start.

Lidar is accurate to within 1 cm. A camera doesn't give you good data for this: it doesn't give you distance at all, so you're left to guess it. Yet you call lidar applying math to get an accurate distance "guessing", while calling a camera feed run through a NN that tells you "I'm 54% sure this is 30 m away, but also 35% sure it's 40 m away" accurate.

This is pure horsecrap, and it goes against all published research and expertise.
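For reference, that NN output looks something like this in practice: a distribution over depth bins rather than a measurement. The bins and probabilities below are invented for illustration.

```python
# Minimal sketch of the kind of output a monocular depth network gives for one
# detected object: a probability over discretized depth bins, not a measurement.
# Bins and probabilities are invented for illustration.

depth_bins_m  = [20.0, 25.0, 30.0, 35.0, 40.0, 45.0]
probabilities = [0.02, 0.05, 0.54, 0.03, 0.35, 0.01]

expected_depth = sum(d * p for d, p in zip(depth_bins_m, probabilities))
most_likely = depth_bins_m[probabilities.index(max(probabilities))]

print(most_likely)      # 30.0 m -- the "54% sure" answer
print(expected_depth)   # ~33.35 m -- the spread pulls the estimate around
# A lidar return for the same object would just be a single range, e.g. 31.42 m.
```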
 
  • Funny
Reactions: willow_hiller
From a security perspective, you can't trust data from a single car. However, it is somewhat useful to have crowdsourced data, as long as there is a minimum amount of corroboration, to use as a fallback. For example, speed limits detected by previous cars that come through can be useful if your view of the sign is blocked by a semi, but only if verified by a large number of independent vehicles.

Communication from one car to another, however, is useless, because there's no practical way for your individual car to validate that the data came from multiple independent vehicles, as opposed to a troll device. (It's hard enough to validate how independent devices are even if you have a data center at your disposal, much less a car trying to do so on its own.)

You are making up another revision of history; Elon and all his fans rejected crowdsourced HD maps.


Relying on != using. A car that requires HD maps to get accurate enough measurements is going to be in a world of hurt whenever anything changes. But that's not to say that they can't be useful as a way of mitigating the risk of missing things that you believe should still be there (e.g. traffic lights, potholes, etc.).

Wrong again, another history revision. Elon fully REJECTED the use of HD maps. So did all of you. Now you are trying to change the narrative. Just admit he was wrong.
 
What are you talking about?
You are making up another revision of history; Elon and all his fans rejected crowdsourced HD maps.

Wrong again, another history revision. Elon fully REJECTED the use of HD maps. So did all of you. Now you are trying to change the narrative. Just admit he was wrong.

There may be a misunderstanding of the objection to HD maps:

Autopilot will drive down an unmapped random dirt road
SuperCruise will not

That is the overreliance/brittleness that Elon was not a fan of. What full rejection, in all possible respects, are you referring to?
 
  • Informative
Reactions: APotatoGod
There may be a misunderstanding of the objection to HD maps:

Autopilot will drive down an unmapped random dirt road
SuperCruise will not

That is the overreliance/brittleness that Elon was not a fan of. What full rejection, in all possible respects, are you referring to?

That's misleading. Supercruise doesn't use HD maps; it's just PR marketing. It can't even localize within the lidar map it's using. If you look up USHR you will see the different levels of maps they provide; Supercruise uses the lowest. They only use it for adjusting speed to road curvature (curves); they don't use it for actuation control. The ADAS cars that use TRUE HD maps today also work outside of them.

Anyway, wow, the revisionism is in full swing, just as I expected.
No, Elon and HIS FANS fully rejected HD maps.

"I think HD maps are a mistake. We actually had HD maps for a while and canned it. You either need Hd maps or you don't need Hd map so in which case why are you wasting your time doing HD Maps. HD maps are a really really bad idea. The two main crutches that should not be USED and in retrospect be obviously false and foolish are lidars and hd maps."

- Elon Musk Autonomy Day​

This, of course, began the war Tesla fans waged on anyone using HD maps. I wonder where this guy got his talking points from.

h9akaMr.png


Why do some people insist that LIDAR+HD Map == hardcoded solution with zero versatility? : SelfDrivingCars

Here's a thread we had in the /r/selfdrivingcars subreddit about it called "Why do some people insist that LIDAR+HD Map == hardcoded solution with zero versatility?"

And what was the reply from Tesla fans in the thread? "This is dumb. Like really dumb."

Of course, the user who was the reason the thread was created in the first place has deleted his posts. But his post was one of the highest-upvoted comments in a /r/teslamotors thread, which shows you the amount of misinformation spread and believed in the Tesla community.

It included things like "Waymo’s systems just need to return a true or a false every milisecond to determine if the car should keep driving or not. The steering is all done via premapped hd maps and set paths."

There were even more egregious claims about HD maps and Waymo's system; too bad he deleted his posts. But this is the portrait of Tesla fans after Autonomy Day.
 
  • Helpful
Reactions: mongo
That's misleading. Supercruise doesn't use HD maps; it's just PR marketing. It can't even localize within the lidar map it's using. If you look up USHR you will see the different levels of maps they provide; Supercruise uses the lowest. They only use it for adjusting speed to road curvature (curves); they don't use it for actuation control. The ADAS cars that use TRUE HD maps today also work outside of them.

Anyway, wow, the revisionism is in full swing, just as I expected.
No, Elon and HIS FANS fully rejected HD maps.

"I think HD maps are a mistake. We actually had HD maps for a while and canned it. You either need Hd maps or you don't need Hd map so in which case why are you wasting your time doing HD Maps. HD maps are a really really bad idea. The two main crutches that should not be USED and in retrospect be obviously false and foolish are lidars and hd maps."

- Elon Musk Autonomy Day​

This, of course, began the war Tesla fans waged on anyone using HD maps. I wonder where this guy got his talking points from.



Why do some people insist that LIDAR+HD Map == hardcoded solution with zero versatility? : SelfDrivingCars

Here's a thread we had in the /r/selfdrivingcars subreddit about it called "Why do some people insist that LIDAR+HD Map == hardcoded solution with zero versatility?"

And what was the reply from Tesla fans in the thread? "This is dumb. Like really dumb."

Of course, the user who was the reason the thread was created in the first place has deleted his posts. But his post was one of the highest-upvoted comments in a /r/teslamotors thread, which shows you the amount of misinformation spread and believed in the Tesla community.

It included things like "Waymo’s systems just need to return a true or a false every milisecond to determine if the car should keep driving or not. The steering is all done via premapped hd maps and set paths."

There were even more egregious claims about HD maps and Waymo's system; too bad he deleted his posts. But this is the portrait of Tesla fans after Autonomy Day.
I'm not seeing the conflict between what Elon said then and what he's saying now (not speaking to anyone else's interpretation).
Then: we are building the system to not need HD maps
Now: the system still does not need HD maps to function, and you could map objects to the data set for fleet use.
Human anecdote: I drove through a certain intersection in Indianapolis multiple times before I noticed that there was a traffic signal mounted to the underside of the skyway.

A system using dead reckoning for lane following is more brittle than one that can use other sources. Misunderstanding how other systems work, and whether that is actually the case, probably feeds the confusion/conflict. For instance, I don't know which other systems could handle a new road the first time it is ever driven.
 
  • Like
Reactions: Cirrus MS100D
As am I. The problem is that using LIDAR with cameras involves basically taking good data from the camera and trying to map extremely noisy data from LIDAR onto that.

LIDAR is like the rolling shutter you'd get if you sampled an image sensor in a bizarre, haphazard zigzag pattern over the course of two seconds instead of 30–60 times per second. If you're sitting perfectly still, the accuracy of LIDAR is amazing, but if you're moving, LIDAR requires nontrivial effort to get a point cloud that's even halfway useful, much less accurate — with an error of potentially O(20cm) by the end of each sweep. And I think even that level of accuracy requires starting with detailed HD maps calculated from multiple prior drives through the area to use as a baseline.

There are ways to improve the accuracy of the LIDAR data, but parallax calculations on detected edges in stereo images and throwing away the outliers gives you comparable accuracy for a few hundred grand less per car.

This is all very interesting, but then why is Waymo (and, I think, all other autonomous car developers) using lidar? Tesla seems to be the only company trying to do it without it.

... But this is the portrait of Tesla fans after Autonomy Day.

Please don't lump us all together. I am a total Tesla fan. I love what Tesla has done and is doing in developing, promoting, and selling electric cars and moving us toward electric transportation. The Roadster was possibly the greatest breakthrough in automotive technology since Ford introduced the assembly line. And I love my Model 3, which is far and away the nicest car I've ever owned, if less fun to drive than the Roadster, which I also loved. And EAP is just amazing. As long as I remain alert, EAP is a better driver than I am. The partnership of me and EAP makes the car safer than me alone.

But I have disagreed with Elon on many items and I still do.

And there's nothing wrong with changing your mind in response to new evidence. In fact, the opposite (refusing to change your mind in response to new evidence) is very bad.
 
All this fighting and our cars can't even distinguish left turn arrows yet.

They can! Have people not seen this yet? It surprised me. It's a little hard to see in the photo, but the traffic light is a red left arrow and it's visible in the console! Not sure if this came with the 2020 SW update or was part of the old .7 release.

left-arrow-20200131_091607.jpg