Elon: "Feature complete for full self driving this year"

... What I don't quite understand is: what NN rockstar who exceeds the high bar isn't already aware of Tesla and what they're doing? Either they've applied and been rejected or they have no interest in the job, right?

Being aware of Tesla and what they're doing does not necessarily mean you're already with them, have been rejected, or are not interested. Maybe you're aware of them but need some added encouragement to apply. Maybe you already have a job and didn't think they'd be interested in you. Maybe you didn't realize they were hiring. Maybe when you did apply, they weren't hiring. Maybe you've seen their code or otherwise know something about it, think they were on the wrong track, and didn't want to hitch yourself to that train; but with a rewrite scheduled, now you see the possibility of getting in on a fresh start.

There are many possible reasons a bright young programmer might never have considered applying but might be interested in a hack-a-thon.
 
Now Elon is confirming they are doing HD mapping again.

Elon Musk on Twitter

Just in case there is confusion:

His response was to this question:
Is it possible that Tesla will be able to create a “micro map” of every road with all the details (stop sign, pot holes, etc.) that can be used by other Teslas when they drive along the same road?

Creating a data pool for later traffic to use is far different from requiring an HD data pool in order to drive at all.

Waze maps police locations, but you don't need Waze to drive. This is super-Waze: get out of the lane half a mile before the pothole. Maybe even know about an accident around the next curve. However, this is in no way the kind of mapping that SuperCruise requires.
 
No, that's an HD map no matter how you try to undress it. There's nothing about it being traffic related. For example, Mobileye's HD Map includes stop signs and potholes. To avoid a pothole, you need to know precisely where it is.

No surprise that Tesla fans are already contradicting themselves.
 

"HD Maps" is a bit nebulous, but I think anything with data at the resolution of individual lanes qualifies as an HD map: per-lane pathing splines, per-lane speeds, or potholes localized to an individual lane. Using it requires precise localization beyond what you can get from GPS/IMU, so it can't be implemented on a phone for traditional Google Maps/Waze (not without using the camera, anyway); you need a bunch of sensors and landmarks.

Elon pooped on HD Maps at the Autonomy Day thing, but he says they're for "tips and tricks". Tesla has had their "ADAS maps" for a long time, e.g. they've been using crowdsourced speeds for NoA on highway ramps for over a year. Every time I come home my car downloads map tiles from daws.tesla.services. And they routinely post job openings for the Autopilot mapping team.

There used to be an active thread here where folks were working on scraping and decoding the ADAS tiles, but Tesla eventually locked the server down and the effort stopped. See e.g. Tesla Autopilot maps
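To make the lane-level distinction above concrete, here's a minimal sketch of what one tile of such a map might contain. The schema, field names, and coordinates are hypothetical illustrations of the idea, not Tesla's actual ADAS tile format.

```python
from dataclasses import dataclass, field

@dataclass
class LaneRecord:
    """One lane within a road segment (hypothetical schema)."""
    lane_id: int
    centerline: list[tuple[float, float]]   # spline control points (lat, lon)
    speed_mph: float                         # crowdsourced typical speed
    hazards: list[tuple[float, float]] = field(default_factory=list)  # e.g. pothole positions

@dataclass
class MapTile:
    """A small geographic tile holding lane-level ('HD') data."""
    tile_id: str
    lanes: list[LaneRecord]

# The pothole is tied to a specific lane, which is only actionable if the car
# can localize itself to that lane -- hence the need for more than GPS/IMU.
tile = MapTile(
    tile_id="37.79,-122.39",
    lanes=[
        LaneRecord(
            lane_id=1,
            centerline=[(37.7901, -122.3902), (37.7912, -122.3911)],
            speed_mph=42.0,
            hazards=[(37.7907, -122.3906)],  # pothole in lane 1 only
        ),
    ],
)
print(f"{len(tile.lanes[0].hazards)} hazard(s) mapped to lane {tile.lanes[0].lane_id}")
```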
 
No, that's an HD map no matter how you try to undress it. There's nothing about it being traffic related. For example, Mobileye's HD Map includes stop signs and potholes. To avoid a pothole, you need to know precisely where it is.

No surprise that Tesla fans are already contradicting themselves.

Don't know what you are undressing, but my point was that the map (HD or otherwise) is supplemental to Tesla's self-driving; it is neither mandatory nor required to pre-exist.
 
They should also free up the EAP stuff and just give that to all current AP owners.

EAP has some usable stuff; pity that they killed that middle tier. It's actually the sweet spot where AP offers the best value.

Then nobody would buy FSD because it offers nothing. Tesla lumped those together to be able to sell FSD.

I agree. Tesla, to its credit, would rather miss a promised deadline than release something that is not safe.

They are behind and they still release unsafe software.

Elon pulled his Trump card :confused: A LAN party for FSD

It's gonna be mostly a social gathering, since compiling an NN takes hours :rolleyes:

I get that they always want talent, but it's kinda alarming that they're scrapping the code again, which will mean years of teething issues and possibly a feature freeze for HW2/2.5, which is nowhere near an acceptable state.
 
I am not sure it is entirely accurate to say that Tesla is "scrapping the entire code." They are rewriting the code, yes, but that does not mean they are starting from scratch again.
 
They are of course not throwing away everything, but a rewrite means scrapping the majority of the code, keeping assets like the training data and maybe some utility code. I work in software development on distributed and embedded systems, so that's what a rewrite means in my experience.

It will mean teething issues again.
 
I do agree about the teething issues. Hopefully, they will be fixed soon and FSD will be better for it.
 
Now Elon is confirming they are doing HD mapping again.

Elon Musk on Twitter

Is it possible that Tesla will be able to create a “micro map” of every road with all the details (stop sign, pot holes, etc.) that can be used by other Teslas when they drive along the same road?

When Musk answered "Yes" to the question "Is it possible," I interpret that as "Maybe." As in, "Is it possible you could come to my party?" Answering "Yes" doesn't mean you will definitely come; it means it's possible that you could come.

Take Musk's reply in light of his supreme optimism. He believes pretty much anything is possible. That character trait made him the man who could lead the company that created the modern electric-car industry and gave us these great cars. But it also makes him a very unreliable predictor of the future.
 
P.S. Maybe with 5G connectivity and the entire world's server capacity, Tesla could continuously gather pothole information from all the Teslas on the road and send it to all Teslas by location in real time. Otherwise you're probably looking at static maps updated monthly for major cities and once a year everywhere else. Note: potholes are continually forming and being repaired.
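For illustration, here's a minimal sketch of the crowdsourced "data pool" idea: cars report sightings, a server aggregates them per road segment, and only corroborated hazards get pushed back to the fleet. The segment IDs, threshold, and report format are made-up assumptions, not anything Tesla actually runs.

```python
from collections import defaultdict

MIN_REPORTS = 5  # require several independent sightings before broadcasting

# (segment_id, lane) -> number of pothole sightings reported by the fleet
reports = defaultdict(int)

def report_pothole(segment_id: str, lane: int) -> None:
    """A car uploads one pothole sighting for a given road segment and lane."""
    reports[(segment_id, lane)] += 1

def hazards_to_broadcast() -> list[tuple[str, int]]:
    """Return hazards corroborated often enough to share with later traffic."""
    return [key for key, count in reports.items() if count >= MIN_REPORTS]

# Simulate a handful of cars driving the same stretch of road.
for _ in range(6):
    report_pothole(segment_id="US-101_mi_412", lane=2)
report_pothole(segment_id="US-101_mi_413", lane=1)  # a single sighting stays unconfirmed

print(hazards_to_broadcast())  # [('US-101_mi_412', 2)]
```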
 
It seems that with this new explanation about 3-D labeling, Elon is walking back his claims about lidar. Rather than labeling a 3-D representation of the world built from a lidar point cloud, he is saying that the Autopilot team will now generate a 3-D representation of the world from the eight cameras and then label the 3-D objects from that.

The problem with Tesla's new approach is that lidar will create a much more accurate 3-D representation of the world than a neural-network-based 3-D estimation built from 360° camera coverage...

I don't see promise in this new approach by the Tesla Autopilot team. It seems that they've pigeonholed themselves into the current sensor suite and will be basing their strategy on an inherently less accurate technology.
 
My understanding is that they are labeling the derived objects in the generated 3-D scene. Those labels are then propagated back to all the frames that generated the scene. The NN still deals in 2-D frame data, not 3-D point clouds.
For example, 8 seconds (about 350 feet at 30 mph) of forward camera data is 720 frames (three cameras at 30 fps), plus whatever the side cams overlap. Each 3-D label then gets applied to 700+ images, even those where the object is hard to resolve due to distance. That's where the 2-to-3-order-of-magnitude improvement comes from.
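Here's a rough sketch of the "label once in 3-D, propagate to every frame" arithmetic described above, using a simple pinhole projection. The camera model, speed, and frame counts are simplified assumptions for illustration, not Tesla's actual auto-labeling pipeline.

```python
FPS = 30            # frames per second per camera
CAMERAS = 3         # forward-facing cameras
SECONDS = 8
SPEED_FT_S = 44.0   # ~30 mph
FOCAL_PX = 1000.0   # hypothetical focal length in pixels

# One human-applied label on an object in the reconstructed 3-D scene:
# a stop sign ~350 ft ahead of the starting position, 10 ft to the right.
label = {"class": "stop_sign", "x_ft": 10.0, "z_ft": 350.0}

frame_labels = []
for i in range(FPS * SECONDS):                       # 240 timestamps
    z_rel = label["z_ft"] - SPEED_FT_S * (i / FPS)   # remaining distance to the object
    if z_rel <= 5.0:
        break                                        # object has passed out of view
    u = FOCAL_PX * label["x_ft"] / z_rel             # pinhole projection to a pixel column
    for cam in range(CAMERAS):                       # same timestamp, three forward cameras
        frame_labels.append(
            {"cam": cam, "frame": i, "class": label["class"], "u_px": round(u, 1)}
        )

# One 3-D annotation becomes hundreds of per-frame 2-D annotations, including
# distant frames where the sign would be only a few pixels wide.
print(f"{len(frame_labels)} 2-D labels from 1 human label")
```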
 
I understand what Elon is saying.

I'm just saying that generating a 3D scene using "2D" 360° camera coverage is inaccurate. The distances and sizes of objects generated by an NN building 3D objects from 2D camera images are going to be inaccurate, whereas lidar will provide cm-accurate distances and sizes.
 
Centimeter accuracy is for surveyors. When you're talking about a self-driving car traveling at even 35 miles per hour, the difference between one foot and two feet isn't usually meaningful, much less centimeter accuracy.
  • If something is far enough out to be able to act on it meaningfully, you just need to know the approximate distance within ± a few feet and whether it is flat enough to run over or not.
  • If it isn't, it's already too late and you're going to hit it anyway, so the accuracy doesn't buy you anything.
Never in my entire driving career have I even once wondered whether something was 750 or 751 centimeters away. If you don't need to know that information, there's a good chance your car doesn't, either.
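Some rough numbers behind that argument: at 35 mph, the distances involved in just reacting and braking dwarf a foot or two of ranging error. The reaction time and deceleration below are generic textbook-style assumptions, not measurements from any particular car.

```python
MPH_TO_FT_S = 5280 / 3600          # 1 mph = ~1.47 ft/s

speed_ft_s = 35 * MPH_TO_FT_S      # ~51 ft/s at 35 mph
reaction_time_s = 1.0              # assumed perception/decision latency
decel_ft_s2 = 20.0                 # assumed braking deceleration, roughly 0.6 g

reaction_dist = speed_ft_s * reaction_time_s
braking_dist = speed_ft_s ** 2 / (2 * decel_ft_s2)

print(f"distance covered before any action: {reaction_dist:.0f} ft")
print(f"braking distance on top of that:    {braking_dist:.0f} ft")
# A 1-2 ft ranging error is small next to either figure; a 1 cm error is
# lost in the noise entirely.
```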
 
the difference between one foot and two feet isn't usually meaningful, much less centimeter accuracy.

You're not thinking about the implications of inaccuracy. I'm not saying cm-accuracy is *required*. What I'm saying is that lidar provides ground-truth accuracy about the environment, whereas an NN-derived 3D scene from 2D images may actually have false positives and negatives (not only in terms of distances and sizes, but also false objects)... Also consider that a car moving at 40 mph is moving at 50+ FEET per second. We can't expect an NN to accurately estimate something like that (I know the front radar + front camera can estimate distances, but I'm talking about the other cameras).
 
You're not thinking about the implications of inaccuracy. I'm not saying cm-accuracy is *required*. What I'm saying is that lidar provides ground-truth accuracy about the environment, whereas an NN-derived 3D scene from 2D images may actually have false positives and negatives (not only in terms of distances and sizes, but also false objects)...

I don't see how, unless you don't throw away outliers properly. The inaccuracy shouldn't be nearly that bad. For cameras a couple of feet apart, you can actually do almost as well as LIDAR. The Tesla cameras, being closer together, would likely give you reduced accuracy when measuring objects at a distance, but not dramatically reduced accuracy.
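For a rough sense of how much camera spacing matters, here's a quick calculation using the standard stereo ranging relation Z = f*b/d, whose depth error grows roughly as Z^2 * dd / (f * b). The focal length, disparity noise, and baselines below are assumed values for illustration, not Tesla camera specs.

```python
def depth_error_m(distance_m: float, baseline_m: float,
                  focal_px: float = 1500.0, disparity_noise_px: float = 0.5) -> float:
    """Approximate 1-sigma stereo depth error: dZ = Z^2 * dd / (f * b)."""
    return (distance_m ** 2) * disparity_noise_px / (focal_px * baseline_m)

for baseline in (0.10, 0.60):              # ~4 in (narrow) vs ~2 ft (wide) camera spacing
    for distance in (20.0, 50.0, 100.0):
        err = depth_error_m(distance, baseline)
        print(f"baseline {baseline:.2f} m, object at {distance:>5.0f} m: +/- {err:.1f} m")
```

The error shrinks in proportion to the baseline, which is the intuition behind the "couple of feet apart" comparison above.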


Also consider that a car moving at 40 mph is moving at 50+ FEET per second. We can't expect an NN to accurately estimate something like that (I know the front radar + front camera can estimate distances, but I'm talking about the other cameras).

Why would you use a neural net to do the estimation? Traditional programmatic code is much better suited to the task.

Now that's not to say that the performance of HW3 is up to the task. I have no idea about that. If it isn't, then yeah, that's a problem. But that's a different and more readily solvable problem. :)
 
Elon has said that lidar is a crutch. But using lidar as an accurate way of measuring and labeling the 3D objects generated from camera images would be the most accurate approach for the AP team's new FSD strategy.

This new 3D labeling strategy sounds bad IMO. Labelers will need to label their eyeballed distances (not accurate distances) as well as the object itself, whereas with lidar the labeler can simply label the object and the lidar accurately supplies the distance and size throughout the video input.

There's no known way to accurately produce a depth map using one camera (with the exception of portrait photos on camera phones, but that's only for close distances). There are three front cameras, but any distance estimation from the side or back cameras will need to be made using a single image.