FSD Beta Videos (and questions for FSD Beta drivers)

Why not just call them HD maps and call more detailed maps UHD maps? Better marketing!
This terminology is just to better distinguish the different map types available. "HD map" is typically used to describe both, but the level of detail and precision is completely different. Don't care at all about the marketing.

As per edit, can you quote the specific passage you are talking about from CA DMV, just so we are on the same page?
 
There are multiple things here:
- Level of detail in the maps. Normal/ordinary maps can have this data, but may not for all roads; it's optional metadata.
- How accurate the GPS is.
- How much detail the visualization shows.

Let me restate - I think Tesla uses map data to figure out restricted roads/lanes, not road signs. Maybe they use some kind of combination (like for traffic signs) - but I doubt it.
They definitely have map data on road signs, like speed limit signs. That's how they mark locations of existing road signs in general, and then the camera verifies what the sign actually says. Note, however, that the system is robust enough to also identify signs that aren't in the map data.
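To make that map-prior-plus-camera idea concrete, here is a minimal hypothetical sketch (not Tesla's code; all names, data shapes, and thresholds are invented): the mapped sign location only tells the system where to expect a sign, the camera reading supplies the actual value, and detections the map doesn't know about are still accepted.

```python
# Hypothetical sketch only: the map prior says *where* a sign is expected,
# the camera decides *what* it says, and unmapped signs are still accepted.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MapSign:
    position_m: float        # distance along the route where the map expects a sign
    sign_type: str           # e.g. "speed_limit"

@dataclass
class CameraDetection:
    position_m: float
    sign_type: str
    value_kph: Optional[int] # what the camera read off the sign face

def resolve_speed_limit(map_signs: list[MapSign],
                        detections: list[CameraDetection],
                        match_radius_m: float = 30.0) -> Tuple[Optional[int], bool]:
    """Return (speed limit from the camera, whether a mapped sign corroborates it)."""
    for det in detections:
        if det.sign_type != "speed_limit" or det.value_kph is None:
            continue
        corroborated = any(
            m.sign_type == det.sign_type
            and abs(m.position_m - det.position_m) <= match_radius_m
            for m in map_signs
        )
        # The camera reading is used either way; a map match only adds confidence.
        return det.value_kph, corroborated
    return None, False

if __name__ == "__main__":
    map_signs = [MapSign(position_m=120.0, sign_type="speed_limit")]
    detections = [CameraDetection(position_m=118.5, sign_type="speed_limit", value_kph=60)]
    print(resolve_speed_limit(map_signs, detections))  # (60, True)
```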

Some tests here:
 
This terminology is just to better distinguish the different map types available. "HD map" is typically used to describe both, but the level of detail and precision is completely different. Don't care at all about the marketing.

As per edit, can you quote the specific passage you are talking about from CA DMV, just so we are on the same page?
Yeah there are tons of vendors selling "HD maps" these days and I assume they're all different levels of detail. I suspect that Tesla's maps will continue to get more detailed.
[Attached image: screenshot of map data marking traffic-control coordinates]

 
There are a couple of different points being covered here. First, what definition of "HD map" are you using? For context, I'm using the definition of HD maps as centimeter-accurate maps with virtually all road features mapped in detail. I'm calling the ones that just map lanes and the general traffic light/sign positions "MD" maps. These don't require nearly the same precision, only enough to know roughly which lane the car is in and a very rough approximation of where various road signs/lights are (the cameras need to figure out the rest).
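Purely as an illustration of that distinction (these record layouts are invented for this post, not any vendor's actual format), the difference is essentially what gets stored and at what precision:

```python
# Invented record layouts to illustrate the "MD" vs "HD" distinction above.
from dataclasses import dataclass

@dataclass
class MdMapTile:
    """Lane topology plus rough (meter-level) traffic-control positions."""
    lane_graph: dict[str, list[str]]              # lane id -> successor lane ids
    traffic_controls: list[tuple[float, float]]   # approximate lat/lon of lights/signs

@dataclass
class HdMapTile:
    """Centimeter-accurate geometry for essentially every road feature."""
    lane_boundaries: list[list[tuple[float, float, float]]]  # dense 3D polylines
    curbs: list[list[tuple[float, float, float]]]
    crosswalks: list[list[tuple[float, float, float]]]
    traffic_controls: list[tuple[float, float, float]]       # includes height above road
```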

Here's a conversation on this subject, where Tesla said they don't build HD maps, but they do build maps (aka MD maps).
Autonomous Car Progress

Another point I'm talking about is "The maps Tesla is using for that area may not have the individual lanes marked to that specificity." We know years ago Tesla had MD maps of the Bay Area and California:
Tesla Building Next Gen Maps through its Autopilot Drivers
But the key point is: what about the rest of the USA, where Tesla has not made its own MD maps?

Other companies are instead using third-party map providers that already provide such data. For example, China's Amap offers this as a service, which XPeng is using. It allows a map that has every lane laid out in 3D as below (not the simple lane guidance most nav systems have), without XPeng or the car needing to generate it:
[Attached image: XPeng 3D lane-level navigation display]

XPeng to deploy Amap’s 3rd-gen in-car navigation system

We know Tesla is not using a similar service in China, but rather fairly basic Baidu maps. I'm questioning whether Tesla is using a service similar to Amap's in the US, because if they are, it's certainly not showing up in the navigation.

Tesla LITERALLY said on AI Day that they build "HD MAPS", their words not mine.

Their HD map contains the same features that Mobileye's and other camera-based HD maps have: lane lines, road markings, road edges, stop lines, drivable path, road height, path delimiters, drivable space, curbs, barriers/guardrails, etc.

Every single HD map provider (Mobileye, TomTom, HERE, Civil Maps, Lvl5, Nvidia, Carmera, etc.) calls it an HD map. The entire industry refers to these as HD maps. Even Tesla themselves refer to it as an HD map, but of course, to save Tesla from the obvious contradiction, people have to invent a narrative that every single company is wrong and that these are actually not HD maps.
 
Why would the planner make some weird assumption like that? They go by maps for such things...
The map data Tesla uses is often missing data or even has incorrect data, so Tesla is currently attempting to have FSD work with neural networks predicting lane attribution data "for the first time" without relying on map data. The FSD Beta release notes included "New Lanes network with 50k more clips (almost double) from the new auto-labeling pipeline," which does seem to match up with the new 10.x behavior changes (some improved, some regressed) involving lane selection. The auto-labeling pipeline is likely the one presented at AI Day: it uses higher-accuracy map data aggregated from the fleet to label the Lanes training data without requiring this more complex map data to be deployed to each vehicle. Basically, the neural networks have more resources when "studying" offline, but when driving, FSD Beta vehicles still use the same plain navigation maps that the rest of the vehicles use.
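As a rough, hypothetical sketch of that offline-labeling idea (invented function names and data shapes, not the actual pipeline): many drives through the same spot are aggregated into one high-confidence geometry, that geometry is attached to each clip as its training label, and nothing from the aggregated map ever needs to ship to the car.

```python
# Toy version of offline auto-labeling: aggregate noisy lane observations
# from many fleet clips into a consensus, then use that consensus only to
# label the clips for training (the car itself never receives this map).
from collections import defaultdict
from statistics import median

def aggregate_lane_observations(clips):
    """clips: list of {waypoint_id: lateral_offset_m} dicts, one per drive.
    Returns a consensus lateral offset per waypoint (robust to noisy drives)."""
    per_waypoint = defaultdict(list)
    for clip in clips:
        for waypoint_id, offset in clip.items():
            per_waypoint[waypoint_id].append(offset)
    return {wp: median(vals) for wp, vals in per_waypoint.items()}

def label_clip(clip, consensus):
    """Attach the offline consensus geometry to one clip as its training label."""
    return {wp: consensus[wp] for wp in clip if wp in consensus}

if __name__ == "__main__":
    clips = [
        {"wp1": 1.62, "wp2": 3.41},
        {"wp1": 1.58, "wp2": 3.39, "wp3": 0.12},
        {"wp1": 1.95, "wp3": 0.10},   # one noisy drive
    ]
    consensus = aggregate_lane_observations(clips)
    print(label_clip(clips[0], consensus))
```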

Looks like the neural network needs to overcome wrong map data not only for Lanes but even for the most basic map data of road connectivity:
[Attached image: visualization showing incorrect road-connectivity map data]


Here FSD Beta tried to follow navigation wanting to turn left at a "street" that is blocked off with planters. And maybe it got confused by the dirt tracks?
[Attached image: street view of the blocked-off street with planters and dirt tracks]
 
Yeah there are tons of vendors selling "HD maps" these days and I assume they're all different levels of detail. I suspect that Tesla's maps will continue to get more detailed.
[Attached image: screenshot of map data marking traffic-control coordinates]
Yep, that's MD maps as discussed here (marking the coordinates of traffic controls). The "true" HD maps are ones where the entire environment is mapped to the centimeter level; they will even have the height of the traffic control, as well as details of all the curbs, crosswalks, etc.
 
Tesla LITERALLY said on AI Day that they build "HD MAPS", their words not mine.
No, they didn't. They say "again the point of this is not to just build HD maps or anything like that it's only to label the clips through these intersections so we don't have to maintain them forever as long as the labels are consistent with the videos that they were collected at". The example shown uses alignment of different clips to auto-label the video clips, which they then use to train the NN. This saves them from needing humans to do manual labelling of the road features for the training data. The NN would then be able to predict road features after being trained on that labeled video data, without needing a detailed map in the car.
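A toy illustration of what "aligning different clips" could look like (my own simplification, not the presented pipeline): estimate the offset between two drives' traces of the same stretch of road and express both in a shared frame, so labels produced from one drive line up with the others.

```python
# Toy clip alignment: register two GPS traces of the same road segment into
# one shared frame by matching their centroids (a stand-in for the real,
# much more precise multi-clip alignment).
def estimate_shift(trace_a, trace_b):
    """Both traces are lists of (x, y) in meters for the same stretch of road.
    Returns the translation that best maps trace_b onto trace_a."""
    ax = sum(p[0] for p in trace_a) / len(trace_a)
    ay = sum(p[1] for p in trace_a) / len(trace_a)
    bx = sum(p[0] for p in trace_b) / len(trace_b)
    by = sum(p[1] for p in trace_b) / len(trace_b)
    return ax - bx, ay - by

def align(trace_b, shift):
    dx, dy = shift
    return [(x + dx, y + dy) for x, y in trace_b]

if __name__ == "__main__":
    drive1 = [(0.0, 0.0), (10.0, 0.2), (20.0, 0.1)]
    drive2 = [(1.4, -0.9), (11.5, -0.8), (21.3, -1.0)]  # same road, offset GPS
    print(align(drive2, estimate_shift(drive1, drive2)))
```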

They never talk about building or using HD maps in the presentation. Rather, instead of building maps the way most companies do, where features are mapped into explicit items on a conventional map, they are building NNs whose weights let them predict features to a degree that functions "like an HD map".
At around 2:45:50 Andrej mentions (bold emphasis mine):
"actually in the limit you can imagine the neural net has enough parameters to potentially remember earth so in the limit it could actually give you the correct answer and it's kind of like an hd map back baked into the weights of the neural net".
He touched on a similar thing earlier in the presentation when talking about RNNs at around 1:09:15:
"but you can imagine there could be multiple trips through here and basically number of cars a number of clips could be collaborating to build this map basically and effectively an hd map except it's not in a space of explicit items it's in a space of features of a recurrent neural network".
Their HD map contains the same features that Mobileye's and other camera-based HD maps have: lane lines, road markings, road edges, stop lines, drivable path, road height, path delimiters, drivable space, curbs, barriers/guardrails, etc.
There is zero evidence they are using maps at that level of detail, even in the Bay Area and SF, where they had the most detail back in 2016, judging from the diagrams they released. Just look at a few FSD videos. Even the lane lines and road edges vary until the cameras can get enough of a glimpse of them. This is completely different from companies that use even "MD maps", much less "HD maps" (which would have things down to the cm, and can "sync" to the map so the car will always be very certain of road edges).
Here's a recent example from Beta 10. You can see it doesn't figure out the intersection layout until it reaches it.
There is zero evidence of map syncing going on, as you would expect if Tesla had and were using HD maps.
Every single HD map provider (Mobileye, TomTom, HERE, Civil Maps, Lvl5, Nvidia, Carmera, etc.) calls it an HD map. The entire industry refers to these as HD maps. Even Tesla themselves refer to it as an HD map, but of course, to save Tesla from the obvious contradiction, people have to invent a narrative that every single company is wrong and that these are actually not HD maps.
Again, I pointed out that people just collectively call all maps that have more detail than a regular navigation map "HD maps", but that leaves such huge variation that it's almost meaningless. It's like calling everything L2 and then making no distinction between systems that do simple lane keeping, NOA equivalents, or end-to-end L2. It doesn't help the discussion at all if it can't be specified to a greater degree.
 

For a second I thought Tesla Joy got the Beta. Turns out Brandon is in LA and is giving her a ride, and she asked him to take a drive in Culver City and try one particular street where they have mini “roundabouts” and construction going on. Beta so far has done a surprisingly good job, but there have been a couple of disengagements. Brandon apparently hasn’t driven in LA much, since he spent some time at the beginning marveling at how a 5-6 mile drive out to meet Joy took him 30+ minutes, lol.

Edit - Beta does a much, much worse job traveling the other way on that same road.
 
No, they didn't. They say "again the point of this is not to just build HD maps or anything like that it's only to label the clips through these intersections so we don't have to maintain them forever as long as the labels are consistent with the videos that they were collected at".
No, they did. They are building different things; you are just omitting one part of it. He literally says "not JUST to build", which means they do build an HD map but also do something else in addition. This is why he said they don't have to maintain them forever. Those maps are absolutely there. The way Tesla's system works is that it collects data from various cars and aligns them to build the HD map; then labelers go in and correct the mistakes or add additional metadata, which he mentioned.
The example shown is using alignment of different clips to do auto labeling on the video clips which they then use to train the NN.
You simply don't understand. Those "auto labels" ARE THE HD MAP. They're geolocated, because that's the only way you can align the multiple drives. Not only that, they're cm-accurate, again because that's the only way you can align the drives. Karpathy has talked in the past about how they train the NN using an HD map; this is the HD map he was referring to.
This saves them from needing humans to do manual labelling for the road features for the training data.
No, they do still need manual labelling. He even said that humans go in to correct mistakes or to add more metadata.
The NN would then be able to predict road features after being trained using that labeled video data without needing a detailed map in the car.
This is simply not true; the NN uses a prior (which in this case is an HD map) to predict what it's seeing, and that way its prediction is accurate. This has been proven by verygreen.

There is zero evidence they are using maps at that level of detail, even in the Bay Area and SF, where they had the most detail back in 2016, judging from the diagrams they released. Just look at a few FSD videos. Even the lane lines and road edges vary until the cameras can get enough of a glimpse of them. This is completely different from companies that use even "MD maps", much less "HD maps" (which would have things down to the cm, and can "sync" to the map so the car will always be very certain of road edges).
Here's a recent example from Beta 10. You can see it doesn't figure out the intersection layout until it reaches it.
There is zero evidence of map syncing going on, as you would expect if Tesla had and were using HD maps.

Mobileye also does that: 360-degree top-down environmental modeling from all cameras. If a Mobileye car made a UI using that, would you say they don't also use an HD map?
Again, I pointed out that people just collectively call all maps that have more detail than a regular navigation map "HD maps", but that leaves such huge variation that it's almost meaningless. It's like calling everything L2 and then making no distinction between systems that do simple lane keeping, NOA equivalents, or end-to-end L2. It doesn't help the discussion at all if it can't be specified to a greater degree.

No it's not; call it by its industry-standard name, HD map, just like we call everything L2. Stop catering to Elon Musk's BS. Before Elon Musk's statement about HD maps, HD maps were the most hyped-up thing among Tesla's fans.


But when Elon came out against it in 2019, all the Tesla fans turned against it like sheep.
 
This is what happens when you don't have that HD map being fed into the NN as a prior (inductive bias).
Notice how it keeps trying to predict things that aren't there and failing to predict even obvious things that are there.
On completely empty streets with no obstructions, it struggles to determine basic road features like the road edge, stop line, crosswalk, intersections, etc.
Complete night and day.

FSD Beta 9 in Ukraine

 
I don't see how FSD can plan routes if it doesn't know whether the lanes are turn-only or not. You can't always go by road markings because they may not be visible due to traffic.
The not-visible road markings might be what the neural network was thinking when it very briefly wanted to turn left thinking there's no straight lane ahead (view blocked by lead vehicle):
[Attached image: visualization briefly predicting a left turn]


One general solution when uncertain is to drive slower or allow more distance to get a better view of the situation, adding to what has already been seen to make a better prediction (a rough sketch of this idea follows below). This happens to be what happened in this case, as the vehicle was still accelerating from a red light and didn't require a disengagement:
[Attached image: visualization predicting straight once the view clears]


Notice the solid yellow line visualized ahead, showing the neural network had a confident prediction once the lead vehicle wasn't blocking the view of the median.
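Here's the rough sketch mentioned above of the "slow down until the prediction firms up" idea; it's purely illustrative, and the thresholds, weights, and function names are invented, not anything from FSD.

```python
# Toy sketch: speed scales with how confident the lane-geometry prediction is,
# and confidence builds as more (recent, unobstructed) frames are fused.
def target_speed(lane_confidence: float,
                 nominal_mps: float = 13.0,
                 creep_mps: float = 4.0,
                 confident_at: float = 0.8) -> float:
    """Scale speed with how confident the lane-geometry prediction is."""
    if lane_confidence >= confident_at:
        return nominal_mps
    # Linearly blend down toward a creep speed while the view is still occluded.
    frac = max(lane_confidence, 0.0) / confident_at
    return creep_mps + frac * (nominal_mps - creep_mps)

def fused_confidence(frame_confidences: list[float]) -> float:
    """More (and more recent) clear frames -> higher overall confidence."""
    if not frame_confidences:
        return 0.0
    weights = list(range(1, len(frame_confidences) + 1))  # weight recent frames more
    return sum(c * w for c, w in zip(frame_confidences, weights)) / sum(weights)

if __name__ == "__main__":
    occluded_then_clear = [0.3, 0.35, 0.5, 0.9, 0.95]   # lead car moves out of view
    conf = fused_confidence(occluded_then_clear)
    print(round(conf, 2), round(target_speed(conf), 1))
```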
 
One general solution when uncertain is to drive slower or give more distance to get a better view of the situation to add to what has been seen and make a better prediction.
Seems like this is partially a programming & fusion issue. There are tons of visual & data cues here to work off of. Navigation says to go straight. Maps say there is a lane that exists. The vehicle in front is going straight and seems fine (obviously you need to be somewhat careful about this in general, but it is information, and the vehicle seems to be proceeding normally).
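One naive way to picture fusing those cues (my own toy example; the weights and names are made up, not how FSD actually arbitrates):

```python
# Toy cue fusion: blend a weak prior built from nav/map/lead-vehicle cues
# with the camera's own estimate, trusting the camera more as its view clears.
def straight_lane_likelihood(nav_says_straight: bool,
                             map_has_straight_lane: bool,
                             lead_vehicle_going_straight: bool,
                             camera_straight_prob: float,
                             camera_confidence: float) -> float:
    prior = 0.5
    if nav_says_straight:
        prior += 0.15
    if map_has_straight_lane:
        prior += 0.20
    if lead_vehicle_going_straight:
        prior += 0.10   # weak cue: the lead car could itself be doing something odd
    prior = min(prior, 0.95)
    # Interpolate between the cue-based prior and the camera as perception sharpens.
    return (1.0 - camera_confidence) * prior + camera_confidence * camera_straight_prob

if __name__ == "__main__":
    # View blocked by a lead vehicle: the camera is unsure, but the other cues
    # still strongly favor "a straight lane exists".
    print(round(straight_lane_likelihood(True, True, True,
                                         camera_straight_prob=0.4,
                                         camera_confidence=0.2), 2))
```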

What's weird to me here is that the perception appears all broken. The vehicles are offset in the visualization from their actual locations in the lanes (the drawn positions make no sense), and there's a missing lane on the far side of the intersection. That can't be too helpful either. As you say, the blocking by the lead vehicle seems to have led the perception to determine there is just a super-wide center median, which the vehicle is driving directly towards.
[Attached image: screenshot of the visualization with offset vehicles and a missing lane]

Agreed that going slower is an option when uncertain, but in this sort of situation there should be very little uncertainty. Going slower here would be very undesirable.

I wonder how long it will be before the perception is reliable enough to avoid making these mistakes?

When maps and perception disagree, which one do you believe? ;)
 
The not-visible road markings might be what the neural network was thinking when it very briefly wanted to turn left thinking there's no straight lane ahead (view blocked by lead vehicle)

I've noticed the blue path "very briefly" does a lot of crazy things, but it usually ends up making the right decision, as it proceeds forward and gets a better understanding of the environment. I don't think it'll ever be perfect, and it doesn't need to be, and that's why the videos showing "graceful failures" are so interesting.
 

Interesting. It looked like there was more than enough room to comfortably make that turn. I would have thought keeping a foot ready to hit the go pedal if the beta hesitated would have been a much better intervention than aborting like he did.
Agreed. Can’t fault him. Hopefully I’ll get the opportunity to try this out firsthand in a few weeks, and I’m sure I’ll intervene all sorts of times due to feeling uncertain.

But I agree, some of the interventions might be better handled by intervening with a press on the accelerator. This was a good example of such an opportunity, as you said.
 
Agreed. Can’t fault him. Hopefully I’ll get the opportunity to try this out firsthand in a few weeks, and I’m sure I’ll intervene all sorts of times due to feeling uncertain.

But I agree, some of the interventions might be better handled by intervening with a press on the accelerator. This was a good example of such an opportunity, as you said.
I am curious, though: whose fault would it be if he did override it with the go pedal but still got hit? Or if he had stepped on the brake a fraction of a second late and the car had poked out so much that the oncoming cars couldn't avoid it? Would that still be the driver's fault, or would there actually be occasions where the car is at fault because it puts the driver in a no-win situation? I think these issues need to be resolved before giving more people access to FSD Beta.