Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autonomous Car Progress

Turns out there's a big range in "FSD." On the one end, a car that can drive itself on closed streets with actors occasionally crossing in front of it. Then there are cars that can drive themselves in real-life situations, when they have centimeter-scale maps, in neighborhoods with good streets and not too much traffic, in perfect weather. At the other end is what I think most of us really think of as FSD: A car that can drive itself anywhere, and in any conditions, that a typical human driver could.

Full Self Driving is not defined by any government or agency, but I like the way Elon Musk described what my car would some day be able to do if I paid for the "FSD" package: Drive itself unattended across country, function as a robotaxi, take the kids to school and then come home, drive to the school to pick the kids up and take them home or to soccer practice, etc.

The "Full" in Full Self Driving is the sticking point. Is my car "fully" driving itself when in EAP mode on the highway and my only input is to keep some pressure on the wheel so it knows I'm paying attention, and disengage it when I decide it's necessary? Or is it only "Full" self-driving when the car can do everything a typical human driver could?

I say that only Level 5 is really full self-driving.

I am encouraged by all the progress being made, and I applaud the makers of cars that can operate in a couple of square miles of Phoenix. But IMO it is hyperbole to call that "Full" self-driving. I just want truth in advertising. Tell us clearly what your car can and cannot do today. And then, in a separate, clearly-marked paragraph, tell us what your goals for your car are. I get angry when they describe their concept car as if it were a fait accompli.

I think you are downplaying Zoox and Waymo.

Zoox is capable of much more than just driving on a closed street with actors; that is just one part of how they test their system.

For example, here is Zoox doing an hour of autonomous driving on public roads, both city streets and highways, in Las Vegas, with zero interventions:


And here is Zoox driving autonomously in SF for 1 hour on public roads, with zero interventions:


Likewise, Waymo has done millions of miles of autonomous driving all over the US. The geofenced area in Phoenix is just where they have launched a public ride-hailing service so far. Waymo's FSD is capable of more than just driving a few miles in Phoenix.

Yes, I consider them FSD because Waymo and Zoox cars perform the entire dynamic driving task (DDT) and the fallback within their ODD. The human can be a passenger in the back seat. That's FSD to me. I define FSD as L4 or L5. But it is also possible to define FSD as only L5 if we take a stricter definition of FSD and argue that "full" must mean both the entire DDT and an unconditional ODD.

But this is why we probably should not use terms like FSD because there is no formal definition of FSD. Everybody can have their own made up definition of FSD and it causes confusion. We should stick with the SAE levels because they are the formal industry definitions for autonomous driving.
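For readers less familiar with the SAE framework, the distinctions above can be summarized in a small lookup table. This is a simplified sketch paraphrasing SAE J3016, not the SAE's exact language; the field names and the `is_fsd` helper are my own, illustrating the two competing definitions of "FSD" debated in this thread:

```python
# Simplified paraphrase of the SAE J3016 driving-automation levels.
# "sustained_ddt" = who performs the sustained dynamic driving task,
# "fallback" = who handles the fallback when the system can't continue.
SAE_LEVELS = {
    0: {"name": "No Automation",          "sustained_ddt": "driver",                      "fallback": "driver",                            "odd": "n/a"},
    1: {"name": "Driver Assistance",      "sustained_ddt": "driver + system",             "fallback": "driver",                            "odd": "limited"},
    2: {"name": "Partial Automation",     "sustained_ddt": "system (driver supervises)",  "fallback": "driver",                            "odd": "limited"},
    3: {"name": "Conditional Automation", "sustained_ddt": "system",                      "fallback": "fallback-ready user (on request)",  "odd": "limited"},
    4: {"name": "High Automation",        "sustained_ddt": "system",                      "fallback": "system",                            "odd": "limited"},
    5: {"name": "Full Automation",        "sustained_ddt": "system",                      "fallback": "system",                            "odd": "unconditional"},
}

def is_fsd(level: int, strict: bool = False) -> bool:
    """'FSD' under the two definitions above: L4+ (loose) or L5 only (strict)."""
    return level == 5 if strict else level >= 4

print(is_fsd(4))               # loose definition: True
print(is_fsd(4, strict=True))  # strict definition: False
```

The point of the `strict` flag is exactly the disagreement in this thread: an L4 robotaxi performs the whole DDT plus fallback, but only within a limited ODD.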
 
I am, indeed, very impressed by what they have done. The videos above are amazing. But until I can buy one, and sleep in the back while it drives me anywhere I'd have driven myself, then I still regard it as a work in progress. Maybe the reason they won't sell a consumer car yet is that they had to go through many hours of trials to find one solid uninterrupted hour?

Why only robotaxis and not consumer cars? Because the tech is not yet ready for prime time. Yes, it's extremely impressive, but there's a very long way yet to go. I wonder if the Zoox car could drive on South Kihei Rd. I'm inclined to think it could not. And that is a major road. It runs the length of Kihei, and is one of only two roads that does, the other being Pi'ilani Highway. Depending on where I'm going, it's often the best choice. It's also more scenic. But in places along the northern half of the road there are no shoulders, and there are cyclists and pedestrians who are, of necessity, in the traffic lanes.

I'd be interested to know, also, if we're watching an L4 system, or an L2 one? They've got an hour without interventions, but was there a safety driver in the driver's seat who was responsible for taking over if needed? Or was the human (if present) in the back seat where s/he could only take over after the car parked safely?

Amazing, amazing, amazing! But for me, true full self driving means I can buy one and sleep in the back seat. (Or I can buy one and it has no driver's controls.)
 
I am, indeed, very impressed by what they have done. The videos above are amazing. But until I can buy one, and sleep in the back while it drives me anywhere I'd have driven myself, then I still regard it as a work in progress. ... I'd be interested to know, also, if we're watching an L4 system, or an L2 one? ... But for me, true full self driving means I can buy one and sleep in the back seat.

Zoox is L4.

Yes, I get that you want a full self-driving car you can actually buy. I think we all want that. But nobody has a true full self-driving car that the consumer can buy yet. Nobody has solved FSD yet. So the best we have is autonomous driving that works in some cases and not in other cases. Companies are working on solving those remaining cases. Robotaxis have the advantage that you can release them sooner because you can geofence them to where the autonomous driving works reliably while you work to solve the rest. FSD consumer cars kinda have to work everywhere since that is what consumers want. So you can't really release FSD cars until you've solved FSD. That is why the consumer cars we have so far are L2 or partial self-driving.
 
Zoox is L4.

Thanks for the clarification. Of course, as I've commented before, the problem with the L4 classification is that the ODD can be arbitrarily limited.

... Nobody has solved FSD yet. ...

My point exactly. They've made extraordinarily impressive progress, but they're not there yet. "Real" FSD does not yet exist. I hope they succeed! I'd like Tesla to be first because Tesla makes the best cars. But whoever is first, I'll buy one, if I'm still upright when it happens.

And I do appreciate your updates, since I'm too lazy to search out the latest news myself. So, thanks.
 
Turns out there's a big range in "FSD." ... I say that only Level 5 is really full self-driving. ... I just want truth in advertising. Tell us clearly what your car can and cannot do today.
Overall, agree. However, I believe that Level 4 would satisfy the vast majority of consumer expectations, if the design limitations were based solely on SITUATIONAL parameters as detected by the vehicle, such as no private/dirt/gravel roads; no road surfaces visible due to sand, snow or other reduction in visibility; or whiteout conditions such as snowstorms, sandstorms or extremely heavy rain.

Artificial map boundaries, however, would not meet consumer expectations.
 
Thanks for the clarification. Of course, as I've commented before, the problem with the L4 classification is that the ODD can be arbitrarily limited.

I should clarify that the SAE levels are probably best used to describe deployment, IMO. Zoox, Cruise, Waymo and others are L4 because that is how the autonomous driving is currently deployed on public roads. But they are certainly trying to solve all of FSD (L5). They are just not there yet.

And I do appreciate your updates, since I'm too lazy to search out the latest news myself. So, thanks.

You are so welcome.
 
I do love all of the progress being made by all of the companies, but at the end of the day, people only care when it affects them. In my small city in Ohio, we'll be one of the last ones to get access to robotaxis that require HD mapping etc. There is, however, a decent chance that within 6 months from now, my Tesla will be able to drive me to work, to the store, and home with zero interventions regularly (at L2, of course).
 
Overall, agree. However, I believe that Level 4 would satisfy the vast majority of consumer expectations, if the design limitations were based solely on SITUATIONAL parameters as detected by the vehicle, such as no private/dirt/gravel roads; no road surfaces visible due to sand, snow or other reduction in visibility; or whiteout conditions such as snowstorms, sandstorms or extremely heavy rain.

Artificial map boundaries, however, would not meet consumer expectations.

Again, it depends on what situations are excluded. That's the problem with the L4 designation. It allows the builder to exclude the majority of roads and still claim L4. L4 robotaxis can be made to operate only where the conditions are favorable to the system.

Why only robotaxis and not consumer cars?

I've thought of one possible answer to my own question: These cars are so expensive to build that they're not marketable to the public, and do not make any money for the operators. They're actually not really a taxi service: They're a development platform. The technology is actually so far away from being practical that it's a mistake to view robotaxis as a sign that autonomous cars are on the horizon.

As for Tesla being able to drive some distance without interventions, Level 2 is qualitatively different from Level 4: At Level 4 you can take a nap, work on the computer, or read a book: your time in the car becomes productive. At Level 2 you're still 100% involved with driving, even if it's only in a monitoring capacity. On the highway I find it relaxing to be relieved of the steering and speed control, and for the most part there's plenty of time to react to situations. But in the city everything happens much more suddenly and you have to be ready to react at an instant's notice. In my opinion, Level 2 is not suitable in the city. (Which is why I've quit engaging autosteer in the city.)
 
Overall, agree. However, I believe that Level 4 would satisfy the vast majority of consumer expectations, if the design limitations were based solely on SITUATIONAL parameters as detected by the vehicle, such as no private/dirt/gravel roads; no road surfaces visible due to sand, snow or other reduction in visibility; or whiteout conditions such as snowstorms, sandstorms or extremely heavy rain.

Artificial map boundaries, however, would not meet consumer expectations.
I strongly agree with this basic philosophy for L4, and I believe that, particularly for personally-owned cars, it could reasonably be left up to the user to set the acceptable predicted risk level for an L4 disengagement causing delay, return-home, or white-flag surrender, i.e., the need for third-party rescue to get the vehicle back into operation. Note that I did not say "risk of collision/injury," nor "risk of the vehicle blocking traffic or unsafe stopping," all of which must by definition be very small risks for L4.

The trip-planning application would consider weather, time of day, reported traffic conditions, reported construction activity etc. (as e.g. Google Maps does today) and tell the user, in simple terms, the likelihood of a normal successful trip. If a storm is brewing or there's a rush-hour tie-up, the probability of delays and problems goes up, and the user can accept or postpone the trip - the same kind of decisions we make all the time.

I feel that there's a tendency on TMC and elsewhere to overemphasize the rarest edge cases (where, in reality, humans put their plans and sometimes themselves at risk quite frequently), and then deem Self-Driving impossible or a general failure, if it can't guarantee success in all these cases. As long as we're not talking about crashing or creating a major disturbance, I say let the customers decide how adventurous they want to be.

Again, it depends on what situations are excluded. That's the problem with the L4 designation. It allows the builder to exclude the majority of roads and still claim L4. L4 robotaxis can be made to operate only where the conditions are favorable to the system.
The solution to overuse of exclusions etc. is simply competition. An L4 car (or a robotaxi provider) with more trip-planning restrictions and/or a higher incidence of disengagements will fail in the market vs. a more capable competitor. There's no need for the government or even an industry body to mandate a highly granular standard; an attempt at that would inevitably be unsatisfying and would soon fail to reflect fast-evolving technology. Again, the market will decide, and customers will learn and swap information re the pros and cons of various competitors.

I don't even have my Tesla yet, but it didn't take me long to figure out that I probably will use Autosteer but not downtown, probably won't be using automated parking, and even less likely Smart Summon. These are "advanced L2" features, but the same expectation-feedback mechanisms will apply to L4 features.
 
The solution to overuse of exclusions etc. is simply competition. ... Again the market will decide and customers will learn and swap information re the pros and cons of various competitors.

This is true, once we have available L4 cars. I was referring to the present day, when "L4" comes up, not in reference to cars you can buy or even in robotaxis that most of us could actually take, but in hype from developers regarding their progress. "We are operating L4 robotaxis" could mean cars capable of driving most anywhere under most conditions where a human would drive, or it could mean a car that only operates in a one square mile area of suburban Phoenix.

The L4 designation tells us very little. We need a huge amount of additional information beyond that designation if we are to assess the actual capabilities of a car. When it comes time to buy an autonomous car (and I do believe that time will come, but not as soon as some think it will) the L4 designation will be almost meaningless. We will need much more information to make a purchasing choice.
 
There are some city streets where autopilot can be used without serious problems.

Sure, if there are city streets where it's absolutely impossible for a pedestrian to suddenly step into the street. Because at L2 you, the driver, are responsible for reacting instantaneously, just as if there were no autopilot.

L2 can be as safe in the city as if you were driving yourself, provided that you are as alert and ready to react as you would be if you were driving. My point is that I can relax (while remaining alert) with autosteer on the highway. I cannot relax with autosteer in the city.

Everything changes once we have L3 or better, because then the car is responsible. And if the system is actually capable of being responsible, at a safety level equal to or better than a human driver, then you can relax.
 
Everything changes once we have L3 or better, because then the car is responsible.
No, the driver will be responsible. Liability gets hazier at true L4 or L5, tending to move towards the vehicle manufacturer being responsible.

We are a long way from the latter cases, at least with Tesla. I believe Tesla may get to L4, but not for a good long while. L4 would likely satisfy any reasonable person's expectations.

I would be happy with L3 in city driving. And I will take the responsibility of using it correctly. As I have, uneventfully, through AP1 and AP3 hardware, following the instructions on taking over and being alert.

Those who claim it's the car that's responsible for driving under trailers or into emergency equipment are copping out and excusing irresponsible drivers.
 
  • Disagree
Reactions: Daniel in SD
No, the driver will be responsible. Liability gets hazier at true L4 or L5, tending to move towards the vehicle manufacturer being responsible. ... Those who claim it's the car that's responsible for driving under trailers or into emergency equipment are copping out and excusing irresponsible drivers.
 
Quoting a Waymo evangelist doesn't carry much credibility with me. So you disagree, and claim that in L3 applications the driver is NOT responsible? Please provide direct evidence (e.g. from the SAE documents).
 
Quoting a Waymo evangelist doesn't carry much credibility with me. So you disagree, and claim that in L3 applications the driver is NOT responsible? Please provide direct evidence (e.g. from the SAE documents).
Table on page 19. The system is responsible for object and event detection and response for L3-L5. The user must be ready to take over if the system requests it at L3, but is not responsible for knowing when to take over. For example, you can watch a movie or read and write emails in an L3 vehicle, but you can't fall asleep or be drunk.
 

Attachment: SAE J3016.pdf
Quoting a Waymo evangelist doesn't carry much credibility with me. So you disagree, and claim that in L3 applications the driver is NOT responsible? Please provide direct evidence (e.g. from the SAE documents).

The Primer is not me "evangelizing" anything. It consists of highlights from the SAE document itself; it is simply a reference guide.

The SAE says that when L3 is engaged, the driver is not responsible for performing the DDT, but is responsible for the fallback when the L3 system requests it.

From the SAE Document, Table 2 on page 28:

[Image: SAE J3016 Table 2]
 
Table on page 19. System is responsible for object event detection and response for L3-L5. ... For example you can watch a movie or read and write emails in a L3 vehicle but you can't fall asleep or be drunk.
Please read Section 3.14. Carefully. Note 1: "Thus, a level 3 ADS, which is capable of performing the entire DDT within its ODD, may not be capable of performing the DDT fallback in all situations that require it and thus will issue a request to intervene to the DDT fallback-ready user when necessary"

You can be reading or watching a movie and be fallback-ready? Is that what you're saying? I certainly don't interpret the clear examples to mean you can be totally distracted with an L3 ADS. Can you show me where the words "inattentive," "writing emails," or "watching movies" are described within the parameters of L3 in that document?

3.14, Note 3: "At level 3, an ADS is capable of continuing to perform the DDT for at least several seconds after providing the fallback-ready user with a request to intervene. The DDT fallback-ready user is then expected to achieve a minimal risk condition if s/he determines it to be necessary."

You assume that one engaged in watching a movie or hands-on with their cell phones can within "several seconds" become situationally aware and take corrective action?

4.0, bullet point 2: "If the driving automation system performs the entire DDT, the user does not do so. However, if a DDT fallback-ready user is expected to take over the DDT when a DDT performance-relevant system failure occurs or when the driving automation system is about to leave its operational design domain (ODD), then that user is expected to be receptive and able to resume DDT performance when alerted to the need to do so. This division of roles corresponds to level 3."

I don't see that as being able to read or be deeply distracted. Why do you?

And who, if not the fallback-ready user, is liable for that user not taking over due to inattention?
 
Please read Section 3.14. Carefully. Note 1: "Thus, a level 3 ADS, which is capable of performing the entire DDT within its ODD, may not be capable of performing the DDT fallback in all situations that require it and thus will issue a request to intervene to the DDT fallback-ready user when necessary"

You can be reading or watching a movie and be fallback-ready? Is that what you're saying? ... You assume that one engaged in watching a movie or hands-on with their cell phones can within "several seconds" become situationally aware and take corrective action?

... I don't see that as being able to read or be deeply distracted. Why do you?
You've identified the biggest problem many people see with Level 3: how long does it take the driver to regain situational awareness? I think this issue is overblown, because most emergency situations resolve themselves within seconds, so the system can't rely on the driver to handle them. I'd like to hear an example of a situation where the vehicle gives the driver 5 seconds of warning and the driver prevents a collision.

Practically speaking, I think that handoffs will only occur when the vehicle leaves its ODD (i.e. leaving the freeway or, in the case of currently announced L3 vehicles, traffic exceeding "traffic jam" speeds), or after the vehicle has already completed an emergency stop (which takes about 3 seconds!).
Once the system is activated, a driver can also watch movies or use the navigation on the screen, helping to mitigate fatigue and stress when driving in a traffic jam, Honda said in a statement.
Your initial claim was that the driver is responsible for collisions while an L3 system is active, which is clearly not the case. You're thinking of an L2 system like Autopilot.
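The handoff sequence being debated here can be sketched as a toy model. This is purely my own illustration: the function name and the 5-second grace period are assumptions for the sake of the example, not values from the SAE document or any real vehicle:

```python
# Toy model of the SAE L3 handoff discussed above: the system performs the
# DDT, issues a "request to intervene" (e.g. when about to leave its ODD),
# and continues driving for a grace period of at least several seconds.
# If the fallback-ready user does not take over in time, the vehicle is
# expected to achieve a minimal risk condition (e.g. a stop in lane).
# The 5-second grace period is an assumed value, not an SAE requirement.

def l3_handoff(user_response_time_s: float, grace_period_s: float = 5.0) -> str:
    """Return who ends up handling the fallback after a request to intervene."""
    if user_response_time_s <= grace_period_s:
        return "user resumes DDT"
    return "vehicle achieves minimal risk condition"

print(l3_handoff(2.0))   # attentive user takes over within the window
print(l3_handoff(9.0))   # distracted user misses the window
```

On this toy model, the liability question raised in the thread is exactly the second branch: who answers for the stop (or worse) when the user misses the window.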
 
You've identified the biggest problem many people see with Level 3. How long does it take the driver to regain situational awareness? ... Your initial claim was that the driver is responsible for collisions while a L3 system is active which is clearly not the case.
And when a fallback-ready user does not take over in time? Who is liable?
 