Welcome to Tesla Motors Club

Autonomous Car Progress

HD maps are a joke imo. Elon is making more and more sense as time goes by. HD maps are great for demos and raising capital. They seem suited for creating standardized routes for freight or mass transit. They'll never get to level 5 with HD maps. Using HD maps essentially locks you into level 4 by definition. HD maps aren't even good training wheels for FSD. Their purpose is to trick people (and investors) into believing autonomous cars are coming soon. These silly companies have been tinkering with HD maps since 2017 or before and still haven't come up with a single profitable route, region, or mass transit application. It's a farce.
 
How is it safe to rely on HD maps when vision does not see the stop line? If these turn lanes are removed because of a new traffic pattern, following the HD map to "stop precisely where it should" would place the vehicle right in the way of other vehicles, potentially leading to an accident.

[Attachment: street view screenshot of the intersection]
In fact, the street view screenshot shows construction of what seems to be a subway line taking up additional space for workers, so after construction is complete, the turn lanes may very well be moved to a different location. Vision may be failing to see the stop line because the line has moved with the reopened lanes, so relying on HD maps is less safe in this case.

You don't rely on just HD maps. You use both HD Maps + Perception to make the best driving decision. Because having 2 independent sensors is better than relying on only one sensor.

There are different types of HD maps but the basic principle behind HD maps is quite simple: you provide the car with rich map data that will help the car drive better. Even Tesla includes map data on traffic lights and stop signs to make the traffic light control feature more reliable. Tesla does not use cm level lidar maps but the basic principle is the same. I don't think it is arguable that having rich map data makes autonomous driving safer.

I am just listening to the experts who are doing reliable autonomous driving. They all say that HD maps are critical to adding safety. And they've clearly figured out what to do in cases where HD maps are wrong. They all have vision that could do self-driving without HD maps and yet they all still use HD maps. I am thinking that these companies know what they are doing and that HD maps make the FSD safer overall. We can debate what type of HD map is best but there really should be no question that HD maps make FSD safer.
 
You don't rely on just HD maps. You use both HD Maps + Perception to make the best driving decision.
Specifically for the example you gave of vision/perception not finding faded/removed stop lines, the vehicle could rely on HD maps to "stop precisely where it should," but that would be unsafe, wrong behavior: perception sees no vehicles ahead, so the vehicle could wait at a no-longer-existent stop line while wrongly assuming it is safe from cross traffic. Perhaps you've misunderstood HD maps and overstated their capabilities?

They all have vision that could do self-driving without HD maps and yet they all still use HD maps.
Woah, that's quite the leap from interpreting Waymo's response to mean their vehicles can drive without HD maps to saying *all* could work without HD maps. I suppose a very broad interpretation is that at minimum a self-driving vehicle designed specifically to rely on HD maps should be able to slowly creep around blindly without HD maps, but perhaps again you've misunderstood these approaches and overstated their capabilities?
 
Is the car able to determine that the HD map is accurate without using its full processing power? Because it sounds like these cars need to be constantly vigilant for areas of the HD map that are out of date.

To me it sounds like they would be running at full processing power regardless of the presence of an HD map.

I imagine you'd want to use all your processing power all the time because you're never 100% perfectly safe. The goal is not perfection because perfection will never happen. The first goal is to achieve better safety than a human driver, and after that the continuing and never-ending goal is to be as safe as you can be. By the same token, a smart driver gives all his/her attention to the road. Of course most people don't do this and that's when accidents happen. A computer can use all its processing power to achieve the highest level of safety it can.

HD maps can mean that it starts one (or many) steps up the ladder.
 
Specifically for the example you gave of vision/perception not finding faded/removed stop lines, the vehicle could rely on HD maps to "stop precisely where it should," but that would be unsafe, wrong behavior: perception sees no vehicles ahead, so the vehicle could wait at a no-longer-existent stop line while wrongly assuming it is safe from cross traffic. Perhaps you've misunderstood HD maps and overstated their capabilities?

No. I think you might be confused.

Let me try to explain again how it works: The car approaches an intersection. The stop lines are faded, so vision is not sure where the stop line is. But the HD map has the correct stop line labeled, so the car knows where to stop. The car also uses its perception (vision, lidar, radar) to detect lane lines, other cars, pedestrians, etc. So the car knows the correct stop line and is able to stop at the right spot before turning, and it also sees other cars and objects. The Planner and Driving Policy take in all the information from the HD map and Perception and formulate the correct action to avoid hitting other objects and follow the traffic rules correctly.
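To make the idea concrete, here is a toy sketch of fusing an HD-map stop-line prior with a live camera detection. This is purely illustrative, not any company's actual stack; the `StopLineEstimate` type, the confidence values, and the agreement threshold are all made up for the example:

```python
# Toy illustration (not any real company's stack): fusing an HD-map
# stop-line prior with a live camera detection.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StopLineEstimate:
    distance_m: float   # distance ahead of the vehicle
    confidence: float   # 0.0 .. 1.0

def fuse_stop_line(map_prior: Optional[StopLineEstimate],
                   detection: Optional[StopLineEstimate],
                   agreement_m: float = 2.0) -> Optional[StopLineEstimate]:
    """Combine the map's stop line with what the cameras see right now."""
    if map_prior is None:
        return detection
    if detection is None:
        return map_prior           # faded paint: the map fills the gap
    if abs(map_prior.distance_m - detection.distance_m) <= agreement_m:
        # Sources agree: blend positions, weighted by confidence.
        w = detection.confidence / (detection.confidence + map_prior.confidence)
        blended = w * detection.distance_m + (1 - w) * map_prior.distance_m
        return StopLineEstimate(blended, max(detection.confidence,
                                             map_prior.confidence))
    # Sources disagree badly: the world may have changed since the map was
    # built, so trust live perception, but at reduced confidence.
    return StopLineEstimate(detection.distance_m, detection.confidence * 0.5)
```

With faded paint the camera might report `StopLineEstimate(31.0, 0.3)` while the map says `StopLineEstimate(30.0, 0.9)`; the fused result lands between them, which is the "two independent sources are better than one" argument in miniature. The construction example discussed earlier corresponds to the disagreement branch, where a sensible planner prefers live perception.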

Woah, that's quite the leap from interpreting Waymo's response to mean their vehicles can drive without HD maps to saying *all* could work without HD maps.

Well, I said "could do self-driving". I did not specify which SAE level or how reliable. ;)

Can Waymo, Cruise, Zoox, Aurora, Mobileye, etc. do self-driving with just vision? Sure, I think so. They have vision that can detect lane lines, vehicles, pedestrians, traffic lights, stop signs, etc. I know that Waymo, Cruise, and Mobileye have really good camera vision. But would it be reliable enough to be L5? Maybe not. That's why they use other sensors like HD maps, lidar, and radar.

I suppose a very broad interpretation is that at minimum a self-driving vehicle designed specifically to rely on HD maps should be able to slowly creep around blindly without HD maps, but perhaps again you've misunderstood these approaches and overstated their capabilities?

Why would they creep around blindly? They have perception sensors like cameras, lidar and radar that can see the world around them.

Again, I think you are confused. Waymo cars don't drive based only on the HD map. If you take away the HD map, the cars don't go "I'm lost. I don't know what to do". They have a perception stack of cameras, lidar and radar that can see the world around them. That's why the Waymo tweet said that the cars can still navigate safely if the HD map is wrong.

Again: you can do some self-driving with just vision and no HD maps. In fact, that's what Tesla is trying to do. Basically, Tesla's approach is to get full self-driving done with just vision ("feature complete") and then work to make it more reliable. But the question is how reliable will it be with just vision. Can you get to L5 with no steering wheel or pedals with just vision and without HD maps?

Think of it this way: HD maps add a lot of extra reliability, so experts believe they are necessary to get to L4 or L5 with no steering wheel or pedals.

In fact, we see that even companies just doing "hands-free L2" like GM's Supercruise use HD maps because of the increased reliability and safety they provide.

To quote my article again: “Level 2 vehicles are on the road today without a HD map, but all of the car manufacturers are looking to integrate these maps to make their systems smoother, and ultimately, safer."

If L2 cars are going to use HD maps to increase safety, you can bet that fully autonomous cars will need HD maps too if they want to be reliable enough.
 
Let me try to explain again how it works: The car approaches an intersection.
In case you forgot what your claim was:
HD maps can tell the car where to stop in the middle of an intersection if they have to wait before completing an unprotected left turn. That is not information you can directly get from vision.
The HD-map-reliant vehicle is stopping in the *middle* of an intersection because the map data said it was safe to stop there even though perception sees no lines there and no other vehicles ahead and believes it's safe to wait there. That behavior is definitely *not* safer as the vehicle is in the path of cross traffic that will be approaching from a newly opened post-construction lane.
 
In case you forgot what your claim was:
The HD-map-reliant vehicle is stopping in the *middle* of an intersection because the map data said it was safe to stop there even though perception sees no lines there and no other vehicles ahead and believes it's safe to wait there. That behavior is definitely *not* safer as the vehicle is in the path of cross traffic that will be approaching from a newly opened post-construction lane.

Yes. I was basing it on this example from the video I shared:

[Screenshot from the video]
 
Have you ever noticed how it's easier to drive on familiar roads? It's not impossible to drive on unfamiliar roads, it's just a bit more effort.

That's what HD maps are for. They have got Waymo to L4. Tesla hasn't even mastered L2, and is still lying about L5 happening in the next five months!
 
Here are two interesting edge cases:
1) The shadow of a tunnel completely hides a woman who was pushing a stroller inside the tunnel.
2) Driving at night, the bright headlights of an oncoming car cause glare that makes it hard to see a pedestrian walking on the road or a parked car on the side of the road.


I think cameras alone would probably have a hard time with these scenarios. We see in the video that the stroller and the pedestrian only become visible to the naked eye much later. Tesla's front radar should in theory detect the person but would AP brake in time?

I think cases like these are why camera-only is not reliable enough for L5 and why lidar is critical. Lidar would not be affected by the darkness in the tunnel or the glare from the headlights and would definitely detect the stroller and the person in time to brake. The WeRide robotaxi is able to brake in time because it has a suite of cameras, radar, and lidar.
 
Have you ever noticed how it's easier to drive on familiar roads? It's not impossible to drive on unfamiliar roads, it's just a bit more effort.

That's what HD maps are for. They have got Waymo to L4. Tesla hasn't even mastered L2, and is still lying about L5 happening in the next five months!

This is why I'm a huge proponent of maps even to the point where I think the government has the obligation to provide HD maps to its people, and construction companies have the obligation to update it.

All the car would have to do is use vision to align itself to the maps and add any inconsistencies to a blocklist. The blocklist would be online and constantly synced (with an offline copy).

I would even call them living maps because of the frequency of the updates. Every day you'd see some areas fall into the blocklist and other areas get removed.

The biggest reason for maps of this nature is because mistakes are not allowed with autonomous driving.

If I look back on mistakes I've made while driving, a fair amount of them were simply because I didn't see something or understand something map-related. For example, there is an onramp in Seattle that has a yield sign that I missed on two separate occasions. Why did I miss it? It was placed well before where you're supposed to merge, and my brain simply forgot it was there by the time the merge actually happened.

There are lots of cases in life where the decision you make isn't based on what you see, but on previous knowledge of an area.

A map is nothing more and nothing less than what was seen before. I would go even further with the map: common problems, common speeds, etc. I would also include pothole information.

Sure, there are downsides to a map-centric approach. People don't like the idea of a blocklist, but most of us have blocklists whether we admit it or not: places where we simply don't feel comfortable driving.

The entire point of autonomous driving is for liability to shift from you to the solution provider, so you have to give the solution provider the ability to block out areas with too much liability.
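As a rough illustration of the living-map idea described above, here is a minimal sketch of a tile blocklist with timestamped entries. The class name, tile IDs, and expiry time are all hypothetical:

```python
# Hypothetical sketch of a "living map" blocklist: map tiles get flagged
# when live perception disagrees with the map, and flags expire or are
# cleared once a fresh survey confirms the map again.
import time
from typing import Dict, Optional

class LivingMapBlocklist:
    def __init__(self, ttl_seconds: float = 7 * 24 * 3600):
        self._blocked: Dict[str, float] = {}  # tile id -> time mismatch seen
        self.ttl = ttl_seconds                # flag lifetime without re-confirmation

    def report_mismatch(self, tile_id: str, now: Optional[float] = None) -> None:
        """A vehicle saw the world disagree with the map in this tile."""
        self._blocked[tile_id] = time.time() if now is None else now

    def clear(self, tile_id: str) -> None:
        """A fresh survey (e.g. after construction ends) confirmed the map."""
        self._blocked.pop(tile_id, None)

    def is_blocked(self, tile_id: str, now: Optional[float] = None) -> bool:
        ts = self._blocked.get(tile_id)
        if ts is None:
            return False
        now = time.time() if now is None else now
        if now - ts > self.ttl:               # stale flag: let it lapse
            del self._blocked[tile_id]
            return False
        return True
```

A fleet would report mismatches as vehicles drive, the planner would route around blocked tiles, and mapping crews (or, in the proposal above, construction companies) would clear entries as areas are re-surveyed.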
 
Not this again. Does Waymo have robotaxis that the public can ride? Yes, thousands of people have ridden them, so not a "marketing stunt". Are those robotaxis classified as SAE L4? Yes! So have they delivered some L4? Yes. It is not perfect L4. It is not L4 on consumer cars. But it is L4. It is not a false claim to say that Waymo has L4.
Sorry, L4 is not a robotaxi with a safety driver who has to constantly monitor the car. That is L2. Very much a false claim. If it were a true claim, we would have more than one video this year. It is true in the sense that they have done a few marketing stunts. False, if you are implying it is available to the general public.
 
Sorry, L4 is not a robotaxi with a safety driver who has to constantly monitor the car. That is L2. Very much a false claim. If it were a true claim, we would have more than one video this year. It is true in the sense that they have done a few marketing stunts. False, if you are implying it is available to the general public.

Seriously, are you trolling? This is getting really old.

L4 means the car is doing all the driving within a limited ODD. And that is exactly what Waymo cars do. So based on the SAE levels, Waymo has L4. And last year, Waymo reported a disengagement rate around 13k miles per disengagement. That means that Waymo safety drivers spend most drives just sitting there and don't need to do a thing.

The "why don't we see more videos" and "it's all a marketing stunt" cliches are getting old. Waymo has provided plenty of evidence but you refuse to accept it. You are playing this cute game of demanding proof that you've already decided will never be good enough. And 20M autonomous miles is not a marketing stunt!

And, yes, Waymo is open to the general public in Chandler, AZ. Last time I checked, "non employees" are part of the general public, no? And non-employees can sign up, and once accepted, can use Waymo robotaxis anytime they want. In fact, thousands of people from the general public have used Waymo robotaxis.
 
L4 means the car is doing all the driving within a limited ODD. And that is exactly what Waymo cars do. So based on the SAE levels, Waymo has L4. And last year, Waymo reported a disengagement rate around 13k miles per disengagement. That means that Waymo safety drivers spend most drives just sitting there and don't need to do a thing. L2 means the driver has to do some driving tasks because the car can't do them. That's not what Waymo safety drivers do. Waymo is not L2.

Just to be clear, if there's a possibility that a human will need to intervene at some point, it's no longer level 4, correct? The "fallback performance of dynamic driving task" is a human if there is a human in the driver seat responsible for overseeing the system.

So where Waymo has done driverless rides, those are level 4. But I would think that vehicles with safety drivers are automatically no higher than level 3 by definition.
 
Just to be clear, if there's a possibility that a human will need to intervene at some point, it's no longer level 4, correct? The "fallback performance of dynamic driving task" is a human if there is a human in the driver seat responsible for overseeing the system. So where Waymo has done driverless rides, those are level 4. But I would think that vehicles with safety drivers are automatically no higher than level 3 by definition.

No. It depends on what the safety driver does. Put simply, is the safety driver required or optional? The mere presence of a safety driver does not automatically bump it down in levels. Only if the safety driver is required to perform some part of the DDT or DDT fallback would it be L2 or L3. But if the ADS is capable of performing the entire DDT and DDT fallback, then it is L4/L5 depending on the ODD. There can still be a safety driver present, but they are optional.

From the SAE document:

At levels 4 and 5, the ADS must be capable of performing the DDT fallback and achieving a minimal risk condition. Level 4 and 5 ADS-equipped vehicles that are designed to also accommodate operation by a driver (whether conventional or remote) may allow a user to perform the DDT fallback if s/he chooses to do so. However, a level 4 or 5 ADS need not be designed to allow a user to perform DDT fallback and, indeed, may be designed to disallow it in order to reduce crash risk (see 8.9).

So L4/L5 must be capable of performing the DDT fallback but a user (ie a safety driver) can still perform the DDT fallback by choice.

In the case of Waymo, the ADS is capable of performing the entire DDT and DDT fallback in its ODD. We've seen driverless rides and we know most rides, the safety driver does not need to do anything. Obviously, Waymo is not perfect L4 yet. I am not claiming that it is. But I believe Waymo would still be L4.
 
My understanding was:

L1: Cruise control. Car can maintain a set speed.

L2: The driver must be constantly aware and ready without notice to take over the driving on his/her own discretion. The car may turn over control to the driver but is not required to. The driver has ultimate responsibility. At a minimum, the car can perform two driving functions, such as lane keeping and traffic-aware speed control, but may be able to perform more.

L3: The driver must be in the driver's seat and available to take over driving upon being alerted by the car. The car is responsible for making the decision to hand over control to the driver. The driver need not remain alert to road conditions.

L4: The driver need not be in the driver's seat. The car must be able to stop safely out of traffic if it cannot handle the driving conditions safely, so that the driver can move to the driver's seat and take control. The driver may be in the driver's seat but is not required to be. The essential point is that if the driver ever needs to take control without significant advance warning from the car, then it's not L4. Geofencing is permitted.

L5: No driver is required ever. The car may have driving controls for people who prefer to do the driving themselves, but does not need them.
 
My understanding was:

L1: Cruise control. Car can maintain a set speed.

One correction: Adaptive CC is also L1. For example, TACC is L1.

L1 means that the car can only automate one driving control (steering OR braking/acceleration) but not both. So in theory, L1 could be a system that only does lane keeping and the driver has to handle braking and accelerating. But I don't think we see many cars like that. In practice, I think most companies have settled on adaptive cruise control as the 1 control that the car handles so that is the most common way we see L1.

L2: The driver must be constantly aware and ready without notice to take over the driving on his/her own discretion. The car may turn over control to the driver but is not required to. The driver has ultimate responsibility. At a minimum, the car can perform two driving functions, such as lane keeping and traffic-aware speed control, but may be able to perform more.

L2 means that the car can handle both controls (steering AND braking/accelerating) at the same time. L2 is like double L1. But L2 cannot fully monitor the environment. For example, L2 usually won't be reliable at responding to hazards on the road. Hence, why the driver needs to pay attention at all times.

L3: The driver must be in the driver's seat and available to take over driving upon being alerted by the car. The car is responsible for making the decision to hand over control to the driver. The driver need not remain alert to road conditions.

Correct.

L4: The driver need not be in the driver's seat. The car must be able to stop safely out of traffic if it cannot handle the driving conditions safely, so that the driver can move to the driver's seat and take control. The driver may be in the driver's seat but is not required to be. The essential point is that if the driver ever needs to take control without significant advance warning from the car, then it's not L4. Geofencing is permitted.

That sounds about right. L4 means that the car is capable of handling all the driving but its ODD is limited (the most common limitation is geofencing). And by "all driving", I mean everything from steering/braking to monitoring the environment, obeying traffic laws, and pulling over safely if it needs to.

I would add that a car can be operating at Level 4 and the safety driver can still choose to take control in certain circumstances. For example, I've heard of cases where an FSD car was taking a bit too long to decide when to make an unprotected left turn, so the safety driver decided to take over. The car might have successfully completed the unprotected left turn if the safety driver had waited some more, but the safety driver chose to interrupt the ADS since it was taking longer than they wanted. The car is still L4 since it is capable of performing unprotected left turns and probably would have done so successfully.

L5: No driver is required ever. The car may have driving controls for people who prefer to do the driving themselves, but does not need them.

Correct.
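The level distinctions discussed above can be summarized in a small decision function. This is a simplification of SAE J3016 for illustration only; the capability flags are my own shorthand, not the standard's formal terms:

```python
# A simplified decision function for the SAE levels as described above.
# Illustrative shorthand, not the J3016 standard itself.
from dataclasses import dataclass

@dataclass
class SystemCapabilities:
    automates_steering: bool
    automates_speed: bool        # braking/acceleration
    monitors_environment: bool   # full object and event detection & response
    performs_fallback: bool      # can reach a minimal risk condition itself
    unlimited_odd: bool          # no geofence or condition limits

def sae_level(c: SystemCapabilities) -> int:
    if not (c.automates_steering or c.automates_speed):
        return 0
    if not (c.automates_steering and c.automates_speed):
        return 1   # one control axis only, e.g. adaptive cruise control
    if not c.monitors_environment:
        return 2   # driver must supervise at all times
    if not c.performs_fallback:
        return 3   # driver is the fallback when the car alerts them
    return 5 if c.unlimited_odd else 4   # geofenced -> L4

# A geofenced robotaxi that can pull over on its own is L4:
assert sae_level(SystemCapabilities(True, True, True, True, False)) == 4
```

Note how the presence of a safety driver appears nowhere in the function: only what the system itself is capable of matters, which is the point made above about optional safety drivers in L4 vehicles.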
 
Tesla's front radar should in theory detect the person but would AP brake in time?

Radar will probably ignore them. Its extremely low resolution means that to be useful it has to filter out anything that is moving very slowly, like pedestrians. Otherwise it would constantly think it was about to crash and brake because of things like people and signs beside the road. Phantom braking is enough of an issue as it is.
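For illustration, here is a sketch of the kind of slow-target filtering described above. The thresholds and data structure are made up for the example, not Tesla's actual radar pipeline:

```python
# Illustrative only (made-up thresholds, not any real radar pipeline):
# a filter that drops radar returns with near-zero ground speed, the kind
# of stationary-target rejection that also hides pedestrians and signs.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float
    radial_velocity_mps: float   # measured relative to the ego vehicle

def keep_return(r: RadarReturn, ego_speed_mps: float,
                min_ground_speed_mps: float = 2.0) -> bool:
    """Keep only targets that are clearly moving relative to the ground.

    A stationary object seen from a moving car has a radial velocity of
    about -ego_speed. Dropping those avoids constant false braking on
    signs and parked cars, at the cost of missing slow-moving hazards.
    """
    ground_speed = r.radial_velocity_mps + ego_speed_mps
    return abs(ground_speed) > min_ground_speed_mps

ego = 20.0  # ~45 mph
sign = RadarReturn(range_m=50.0, radial_velocity_mps=-20.0)    # stationary sign
walker = RadarReturn(range_m=30.0, radial_velocity_mps=-19.0)  # slow pedestrian
print(keep_return(sign, ego), keep_return(walker, ego))  # False False
```

Both the sign and the slowly walking pedestrian fall below the ground-speed threshold and get filtered out, which is exactly the failure mode described above.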
 
Radar will probably ignore them. Its extremely low resolution means that to be useful it has to filter out anything that is moving very slowly, like pedestrians. Otherwise it would constantly think it was about to crash and brake because of things like people and signs beside the road. Phantom braking is enough of an issue as it is.

Thanks. And that just confirms that the current hardware is not good enough for reliable L5, because there will be cases where the cameras won't see well enough and the radar will be useless, and our cars will hit objects if the driver is not alert and paying attention. So our cars won't be able to respond appropriately to these cases on their own.