Welcome to Tesla Motors Club

Autonomous Car Progress

Clearly I’m making a joke about how when Tesla uses HD maps they will be called something else. AV maps are HD maps.
Now I’m not sure exactly how the HD maps Tesla told the DMV they use for the stoplight feature work. To me it would be an HD map if it shows the car where to look for the light in a more precise way than just knowing the intersection has a light.
So your definition is basically that any map more detailed than an SD map is an HD map, as traffic light/road sign mapping is already in pretty wide use (mainly for speed/red-light camera and speed limit detection). Under that definition Tesla is already using HD maps, since they have used maps of traffic light and sign locations for a long time already (and not just at intersections, but also in the middle of roads).

Plenty of tests have been done on this already. There are maps of signs and traffic lights that they used previously (then cameras detect the light status). It's known to be based on maps, since unmapped traffic lights didn't show up (and incorrectly mapped signs did show up).
Then in 2020.36 they added the ability to read speed limit signs. From the testing, it can read real signs, modified signs, and even fake signs placed in locations that didn't have a real sign. There seem to be some sanity checks (like ignoring weird nonstandard limits), and it also seems to ignore school speed limit signs completely.
It also sometimes reads other signs incorrectly as speed limit signs.
https://www.reddit.com/r/teslamotors/comments/pnlsmk/tesla_vision_incorrectly_identifying_the_speed/
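The "sanity check" behavior described above could look something like this toy sketch; the value set and function name are purely illustrative assumptions, not Tesla's actual logic:

```python
# Hypothetical sketch: accept a vision-detected speed limit only if it is a
# plausible posted value; otherwise keep the current limit unchanged.
PLAUSIBLE_LIMITS_MPH = {15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80}

def accept_detected_limit(detected_mph, current_limit_mph):
    """Return the speed limit to use after the sanity check."""
    if detected_mph not in PLAUSIBLE_LIMITS_MPH:
        # A "weird nonstandard limit" (e.g. a doctored 38 mph sign) is ignored.
        return current_limit_mph
    return detected_mph
```

Under this sketch a genuine 35 mph sign would be adopted, while a doctored 38 mph sign would be ignored; it would not, however, catch the fake-but-plausible signs described in the tests.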
Not sure if there was ever a follow-up test to see whether the traffic light code was similarly updated (so it can recognize unmapped traffic lights). But there are later accounts of the FSD visualization thinking the moon is a traffic light:
Tesla's Full Self Driving System Mistakes the Moon(!) for Yellow Traffic Light
Also when it was driving behind a truck carrying traffic lights:
Watch Tesla Autopilot Get Bamboozled by a Truck Hauling Traffic Lights

That kind of map is obviously not the definition of HD maps the person in the video is using. He's talking about maps that allow syncing the car to the road in a very precise way, as pretty much all AV companies are doing, along with some L2 cars. I've posted an L2 example from XPeng, which has very detailed maps (they even include elevation, are fully 3D, and make for easy presentation of some of the more complex overpasses in China) that allow an unwavering visualization (you see none of the constant variation seen in the Tesla system), although their system does not work at all in areas that lack such maps. That's something Tesla is obviously not doing at the moment if you look at the visualizations. Maybe Tesla will do it in the future if they determine their current approach is not working, but they aren't doing it now. I think that is the main definition most people are using when they say Tesla isn't using HD maps at the moment, and I don't think it's an unreasonable definition.

Of course, as mentioned in previous posts, if you want to keep trolling instead of having honest discussions, that's fine too.
 
Congrats on getting a reply. If disengagement is not a good metric, it'll be interesting to see what kind of metric the industry will come up with as a measure of reliability. Obviously there will continue to be "competition" in this regard.

I don't think there is a single metric that can be used to perfectly measure readiness. More likely, the industry will use several different metrics together to determine readiness. Disengagements will just be one of several metrics used.

Automated Vehicle Safety Consortium Best Practices recommends the following metrics:

Safety performance metrics:

[Image: table of safety performance metrics]


Predictive metrics:

[Image: table of predictive metrics]
 
I can’t imagine another metric, so I would definitely be curious what other companies do.

Here are some other safety metrics that Automated Vehicle Safety Consortium Best Practices recommends:

[Image: table of safety performance metrics]


IMO, these 5 safety metrics, when taken together, would provide an excellent safety measurement. You should also use counterfactual simulation of disengagements to determine whether a disengagement represents a real safety issue or not. That way, you can ensure you are getting an accurate measurement of these safety metrics.
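The counterfactual-simulation idea can be sketched in miniature: replay what the AV *planned* to do against what other road users actually did, and flag the disengagement only if the two would have conflicted. A toy illustration, with all names and the threshold being hypothetical assumptions rather than any company's actual method:

```python
import math

def counterfactual_is_safety_relevant(planned_path, agent_path, safe_gap_m=2.0):
    """Replay the AV's planned trajectory (what it would have driven had the
    safety driver not disengaged) against the logged trajectory of another
    road user. The disengagement counts as a real safety issue only if the
    two would have come closer than safe_gap_m. Each path is a list of
    (x, y) positions sampled at the same timestamps."""
    min_gap = min(math.dist(p, a) for p, a in zip(planned_path, agent_path))
    return min_gap < safe_gap_m

# A disengagement where the planned path stays 5 m clear of a pedestrian
# would be classified as not safety-relevant:
planned = [(0, 0), (1, 0), (2, 0)]
pedestrian = [(0, 5), (1, 5), (2, 5)]
print(counterfactual_is_safety_relevant(planned, pedestrian))  # False
```

A real pipeline would of course simulate reactive agents and full dynamics rather than replaying fixed logs, but the classification idea is the same.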
 
So your definition is basically that any map more detailed than an SD map is an HD map, as traffic light/road sign mapping is already in pretty wide use (mainly for speed/red-light camera and speed limit detection). Under that definition Tesla is already using HD maps, since they have used maps of traffic light and sign locations for a long time already (and not just at intersections, but also in the middle of roads).
Tesla told the DMV they use HD maps. So Tesla could be using an HD map of stoplights and stop signs but is clearly not using HD maps for other features.
But there are later accounts of FSD visualization thinking the moon is a traffic light:
Tesla's Full Self Driving System Mistakes the Moon(!) for Yellow Traffic Light
Also when it was driving behind a truck carrying traffic lights:
Watch Tesla Autopilot Get Bamboozled by a Truck Hauling Traffic Lights
Seems prudent to stop for lights that aren't on the map! This doesn't have anything to do with HD maps.
That's something Tesla is obviously not doing at the moment if you look at the visualizations. Maybe Tesla will do it in the future if they determine their current approach is not working, but they aren't doing it now. I think that is the main definition most people are using when they say Tesla isn't using HD maps at the moment, and I don't think it's an unreasonable definition.
That's fine. Clearly Tesla is making very limited use of HD maps, if at all. My prediction is that they will eventually decide to use maps with precise locations of all road features and driving paths. Whether you want to call that an HD map or not is up to you.
 
IMO, these 5 safety metrics, when taken together, would provide an excellent safety measurement. You should also use counterfactual simulation of disengagements to determine whether a disengagement represents a real safety issue or not. That way, you can ensure you are getting an accurate measurement of these safety metrics.
Violation of any of those performance metrics would be cause for a disengagement, so it's sort of the same thing.
 
Here are some other safety metrics that Automated Vehicle Safety Consortium Best Practices recommends:

[Image: table of safety performance metrics]


IMO, these 5 safety metrics, when taken together, would provide an excellent safety measurement. You should also use counterfactual simulation of disengagements to determine whether a disengagement represents a real safety issue or not. That way, you can ensure you are getting an accurate measurement of these safety metrics.
While these are great internal measures, we don’t have them for average human drivers and can’t compare. So we still need crash or likely-crash data.
 
As I mentioned previously, the industry called any map more detailed than a standard navigation map an "HD map" as a catch-all phrase (even if it only offers things like speed limit and traffic light info, which most L2 cars are using already). If that's your definition, then obviously it matches, but it's not useful for most discussion without further segmentation. It's like calling all semi-autonomous/autonomous cars "self-driving" (a definition many people still use; see a recent NYT example). The industry obviously saw the need to come up with levels and further differentiation between them.
This NYT Story About A 'Self-Driving Road Trip' Is Full Of Dangerous Misunderstandings UPDATED
A completely clueless journalist calling an L2 ADAS a self-driving car is not "the industry".
Most Tesla fans do the same thing. The definition from "the industry" of what constitutes a "self-driving car" hasn't changed. In fact, the term has been disallowed due to its misuse by non-industry people.
And how do you explain the points raised on Mobileye's blog about them being different?
The Crucial Differences Between AV Maps and HD Maps
This actually shows you are misinformed about the AV industry, insofar as you would fall for Mobileye's marketing.

First of all, as I have noted, "AV Map" is Mobileye's new branding. It's nothing but marketing.
Secondly, all the points listed in that article ARE provided by a "conventional HD map". Remember that "HD map" isn't a general mapping, GPS, or surveillance industry term; it's an AV industry term. The term "HD map" started off, or got popular, in reference to AVs right from the days of the DARPA Grand and Urban Challenges.

These maps from the very beginning had semantic information in them: for example, where traffic lights were, which traffic light refers to which lane, stop signs, stop lines, driving paths, etc.

It's not just localization; although mapping started out historically as just localization, for AVs it evolved to include semantic information.

Let's go over each point in that article:

- A conventional HD map can’t tell an AV which traffic light is relevant to the path on which it’s driving (IT DOES)
- Where is the best place to stop for an unobstructed view at an intersection (IT DOES)

The key differences between Mobileye's HD map and others are:
  1. It's camera-based.
  2. It's fully self-contained and doesn't rely on any other map.
  3. It's completely crowdsourced and generated from scratch, without any base map from a local fleet.
  4. It's fully automated.

That's it.


So you are saying Mobileye's maps offer the same order of features as the HD maps used by leading AV competitors? The previous example was from Waymo, which even marks semi-dynamic things like mannequins in its HD map (from other blogs, they also map purely static things like curb heights, driveways, and fire hydrants). The idea is that it can do a static-vs-dynamic comparison very easily, given it has all the static elements mapped (regardless of whether they are high-value like traffic lights/signs). From Mobileye's blog, they map only what is strictly necessary and leave out everything else.
Who defines what's necessary? So Mobileye makes the rules on what's necessary to drive, and you are the arbiter of those rules?

I just went through every point raised by Mobileye. Other companies have similar semantic details. They might not have all of them, but who's to determine which semantic details qualify a map as an HD map?

Secondly, your point about the mannequin you keep raising is completely moot, because this wasn't a part of Waymo's HD map in the past. And if you look at what constituted an HD map in the past, it's always been a map that has enough detail for you to localize in.

For example, take a look at Waymo's definition of an HD map from 2016.


Again I ask you: who's to decide what's necessary for driving?
Should the bollards that AIADDict ran into be added or excluded? What does that say about Mobileye's AV map if it does or doesn't have them? Heck, with EyeQ5-equipped fleets Mobileye will add even more data to their REM map.

For starters, the accuracy of their map will increase, and they will prioritize data coming from the EyeQ5 fleet as more accurate than data from the EyeQ4 fleet. In addition, with EyeQ5 being more powerful, it runs a semantic segmentation network, so they can add more semantic detail to their REM map and probably already have such data: curbs, guardrails, concrete barriers, flat surfaces, raised surfaces, driveways (parking in/parking out), rows of poles.

Bringing it more in line with what Waymo has.

Of course, for people who want to just continue the trolling and call anything more detailed than a standard map an "HD map", go right ahead.
Again, calling this trolling shows you don't understand the history of HD maps.
Again I repeat: an HD map is any map that you can localize in to within a certain close accuracy (usually 1-10 cm).
This has always been the definition.

  • It's not whether your map is crowdsourced or not.
  • It's not whether your map is automatically generated or not.
  • It's not whether your map is accurate within 100 miles vs the next 300 meters.
  • It's not whether your map has certain semantic features or not.
  • It's not even whether your map is lidar-based, camera-based, or even radar-based. (Yes, there are radar-based HD maps.)

“By coupling these two inputs [radar and GPS], Bosch’s system can take that real-time data and compare it to its base map, match patterns between the two, and determine its location with centimeter-level accuracy.”

It's whether you can localize in it to a certain cm-level accuracy.
If you can't understand these basic facts, of course you would think it's trolling.
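The "localize in the map to cm accuracy" criterion can be illustrated with a minimal sketch: estimate the vehicle's position by aligning observed landmarks with their surveyed map positions. This is a translation-only toy with hypothetical names; a real system also solves for heading and fuses lidar/camera/radar and odometry to actually reach centimeter accuracy:

```python
# Toy "localize in a map" example. Each input is a list of (x, y) points;
# observed_landmarks[i] is the vehicle-frame observation of map_landmarks[i].
def localize(map_landmarks, observed_landmarks):
    """Estimate the vehicle's position in map coordinates by averaging the
    offset between mapped and observed landmark positions (least squares
    for a pure translation)."""
    n = len(map_landmarks)
    x = sum(mx - ox for (mx, _), (ox, _) in zip(map_landmarks, observed_landmarks)) / n
    y = sum(my - oy for (_, my), (_, oy) in zip(map_landmarks, observed_landmarks)) / n
    return (x, y)

# Two mapped poles at (10, 0) and (10, 4), seen at (5, 0) and (5, 4)
# relative to the car, imply the car sits at (5, 0) in the map frame.
print(localize([(10, 0), (10, 4)], [(5, 0), (5, 4)]))  # (5.0, 0.0)
```

Whatever the landmarks are (lane paint, poles, radar returns), this alignment step is the part the "HD" accuracy claim refers to.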
 
While these are great internal measures, we don’t have them for average human drivers and can’t compare. So we still need crash or likely-crash data.

Of course we still need crash or likely crash data. That is why it is the first metric on the list. But it is helpful to have internal metrics to measure progress during development. And these metrics all relate to safety. We know that traffic violations like speeding or running a red light contribute to accidents. Driving too close to other road users (safety envelope violation) also contributes to accidents as the driver is not able to brake in time to avoid a collision. Sudden braking or sudden last minute lane changes can also cause accidents. Bad reaction times can mean not braking in time to avoid a collision. So improving these metrics will reduce your crash rate. If your AV follows traffic laws, maintains safe distance from other road users, avoids unsafe sudden or jerky accelerations, and has excellent OEDR reaction time, it will have a lower crash rate. Yes, they are internal metrics but that's the whole point. The idea is to give companies useful metrics they can use to measure their progress towards their safety goals during development.
 
To me the main thing is: can you automate these, or do you have to manually analyze and classify? Because if it’s the latter, it only works for a tiny fleet.

Yes, they can be automated. We know this because Tesla automatically measured two of those metrics, safety envelope violation and jerky longitudinal and lateral acceleration (Tesla called them unsafe following distance, hard braking, and aggressive turning), in the safety score. OEDR reaction time can also be automated, since you can automatically measure the reaction time.
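As a sketch of how such a metric can be computed automatically from logged telemetry, here is a hypothetical hard-braking counter; the threshold and function name are illustrative assumptions, not Tesla's actual safety-score formula:

```python
def count_hard_braking(speeds_mps, dt=0.1, threshold_mps2=-3.0):
    """Count hard-braking events in a logged speed trace (m/s, sampled
    every dt seconds). Consecutive samples below the deceleration
    threshold are merged into a single event."""
    events = 0
    in_event = False
    for v0, v1 in zip(speeds_mps, speeds_mps[1:]):
        accel = (v1 - v0) / dt
        if accel <= threshold_mps2 and not in_event:
            events += 1          # start of a new hard-braking event
            in_event = True
        elif accel > threshold_mps2:
            in_event = False     # event over, decel back above threshold
    return events

# One sustained -5 m/s^2 braking episode counts as a single event:
print(count_hard_braking([20.0, 20.0, 19.5, 19.0, 20.0]))  # 1
```

Safety-envelope (following-distance) violations can be counted the same way from logged range and speed, which is presumably how fleet-scale scoring avoids any manual review.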
 
Yes, they can be automated. We know this because Tesla automatically measured two of those metrics, safety envelope violation and jerky longitudinal and lateral acceleration (Tesla called them unsafe following distance, hard braking, and aggressive turning), in the safety score. OEDR reaction time can also be automated, since you can automatically measure the reaction time.
"Compliance with traffic regulations" would be the difficult to automate.
 
"Compliance with traffic regulations" would be the difficult to automate.

Possibly. But assuming you have a safety driver, since you are testing your AV, the safety driver should disengage for traffic violations, and you would record those disengagements.

It is also worth noting that you should design your driving policy to follow traffic laws. So you would have confidence that your vehicle is following traffic laws based on how you designed the driving policy.
 
So you would have confidence that your vehicle is following traffic laws based on how you designed the driving policy.
The whole point of tracking bugs is that the code doesn't follow the design.

BTW, you are still thinking in terms of Waymo etc. I'm thinking in terms more pertinent to this forum: consumer AVs. So, if there are 100k disengagements per day, how do you track all these numbers?
 

And, semi-unrelated:
While doing a Google search, I stumbled across ‘Villages in the City’: WeRide Self-Driving on China’s Unique Urban Roads, which accompanies the above and which I hadn't read until now.

Chinese autonomous vehicle startup WeRide scores permit to test driverless cars in San Jose – TechCrunch
While checking WeRide's YouTube channel, I stumbled across a video posted Aug 24, 2021. They claim a "2-Hour Self Driving Test Drive with NO Human Intervention, Evening Rush Hour & Heavy Rain". I've only had a chance to check out the first few minutes.
 
If it can handle heavy traffic with moderate rain, why test in San Jose? Why not Houston, Birmingham, Memphis, or Atlanta?
A lot of software and hardware engineering talent is in Silicon Valley, and SJ is its heart. So at least engineers can easily go for a ride-along to see their changes in action and to repro problems discovered during AV testing. Or they might just want to go out to the problem locations to analyze and document them.

That's not as easy if the AV testing is happening far away, or if you're instead trying to find and recruit software engineers of the same caliber in, say, Birmingham.

Just look at all the chatter there was here on TMC about the fatal Walter Huang Model X crash at a gore point in Mountain View, CA (Model X Crash on US-101 (Mountain View, CA)). I used to work near there long ago. Many, many folks here on TMC go by there or are familiar with that area. That puts locals at a huge advantage over those who are hundreds or thousands of miles away.
 
HD map is a term that came up to describe maps that had much more information than lane-level maps like those in navigation. There is no formal definition for it. One of the first things people put in HD maps was a 2D road image. Other companies put in locations of 3D objects, or the width of the road. Almost everybody put in semantic information as well. Mobileye, to dis its competitors, says you don't need the road image, and they aren't putting in the right semantic info. There is legitimate debate about how much you need to put absolute coordinates in your map. Strictly, you don't, though it helps if you are trying to combine info from multiple sources.

The road image is not hard to generate or even all that large, but it's very useful for localization and for detecting changes to the road.
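The change-detection use of the road image can be sketched very simply: compare the stored patch from the map with what the car currently sees, and flag a mismatch. A minimal illustration, assuming grayscale patches and a hypothetical threshold:

```python
def road_changed(map_patch, observed_patch, threshold=20.0):
    """Toy change detector: compare a stored road-image patch from the map
    with the currently observed patch (same dimensions, grayscale 0-255)
    and flag a change (e.g. repainted lanes, construction) when the mean
    absolute pixel difference exceeds the threshold."""
    diffs = [abs(m - o)
             for row_m, row_o in zip(map_patch, observed_patch)
             for m, o in zip(row_m, row_o)]
    return sum(diffs) / len(diffs) > threshold
```

A production system would first align the patches via localization and use something more robust than a mean pixel difference, but this is the essence of detecting that the road no longer matches the map.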

Companies will refine what they do, and add and subtract techniques. ME is correct that you want lots of semantic information, but everybody knows that.

Just not Tesla. I mean Tesla does have maps with semantic info, but they are not doing a good job of it.
 
Almost everybody put in semantic information as well. Mobileye, to dis its competitors, says you don't need the road image, and they aren't putting in the right semantic info.

Can you expand on this? What kind of semantic information does everyone put in, and what kind of info is ME not putting in?

To me the interest is mainly in consumer AVs, and three players seem to be serious here: Tesla, ME, and GM. It would be interesting to see how each of them handles various parts.

Just not Tesla. I mean Tesla does have maps with semantic info, but they are not doing a good job of it.
What Tesla does with maps is a bit of a mystery. A lot of FSD problems can be traced to simple mapping issues, but there is no way to get them quickly corrected. Tesla needs to be more transparent about this.