
What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year and to whom?

  • None - on Jan 1 'later this year' will simply become end of 2020!

    Votes: 106 55.5%
  • One or more major features (stop lights and/or turns) to small number of EAP HW 3.0 vehicles.

    Votes: 55 28.8%
  • One or more major features (stop lights and/or turns) to small number of EAP HW 2.x/3.0 vehicles.

    Votes: 7 3.7%
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners!

    Votes: 8 4.2%
  • One or more major features (stop lights and/or turns) to all FSD owners!

    Votes: 15 7.9%

  • Total voters
    191
I find these discussion boards very informative, and the autonomy discussions like this especially so.

However, I always fall back (as I am not an engineer) on what is in my garage and what it does. As in: my Model 3 is parked next to a whole range of devices, including old computers, in a house with significant electronics and electric motors.

A geofenced self-driving car is very much, to me, a pure research project as of today. If the research project that is Waymo gets to a certain level, I suppose their next move will be to expand their geofenced area. I don't know of any plans to sell Waymo cars to the public. And ride-hailing companies like Uber and Lyft depend on such an inexpensive cost structure that I can't see how adding a $500,000 car to the mix can make up for a minimum-wage human driver. Considering how difficult it is for established auto manufacturers to even make an EV, let alone produce a Waymo car in numbers, I don't think the geofenced autonomous research is really comparable, on an apples-to-apples basis, to the Tesla in my garage.

So many of these discussions end up comparing existing Teslas to, well, NOT to another actual car, but to what Tesla publicly hopes to achieve.

I think that is FAIR, because Tesla has asked us to pay for both the car and the system. So you pays your money and you gets your opinion, or something like that.

But it's not analytically rigorous. Each feature that has been rolled out that I have seen improves the overall system. In particular, Smart Summon, the first and only publicly available self-driving feature in which the car can come to the end of a row in a parking lot, look to see if cars or objects are in the way, and then proceed, is just an amazing development. I don't think pointing out that it is not yet foolproof is useful. Similarly, recognizing traffic cones and distinguishing them from fire hydrants, bollards, and other similar objects is a game-changing first step. If you can recognize orange cones, stop signs are surely next.
 
I'm curious, what successful technology product wasn't originally a research project?
To me it makes sense that the first successful self-driving cars will be very expensive prototypes just like any other invention.

Don't get me wrong, I'm not saying anything is wrong with either research projects in general or Waymo in particular. I was only saying that comparing Waymo to Tesla at this point, when as a consumer I can neither buy a Waymo car nor use it as a service, is analytically lacking.

As a consumer, research is great, but the difference between an actual product you can get and a product which someday you may be able to get is really so large that comparing the two has its limits.

There is also something about making cars for profit that seems way, way more difficult than it would appear. If you look at how "nimble" legacy automakers are when it comes to fundamentally adjusting their products, the only conclusion is that they are not nimble in any way.
 
No, only L3 and L4 have limited ODD. L5 has no limits on ODD. So a system that can drive anywhere and any time a human can drive is L5. So what Tesla hopes to build is L5.



L3 is defined as full self-driving when the system is on, meaning that the driver does not need to pay attention. So when the system is on, the "driver" can take their hands off the wheel, eyes off the road, read a book, do whatever. But if the system encounters a problem, it must give the "driver" enough time to reengage with what is going on and take back control. I don't think the SAE defines a specific amount of time; it just says enough time. So the system just needs to give the human a reasonable amount of time to stop what they were doing, reengage with what is going on, and take back control.

This is one reason why many auto manufacturers, I think, are mostly ignoring L3 and trying to go straight to L4 or L5. The idea is that it is safer to cut out the human completely and just have an autonomous car that can either handle everything itself in its ODD or safely pull over if it can't handle something.



No, what you are describing is exactly L3.



If it can drive anywhere and any time with no geofencing and can handle its own fallback by pulling over to the side of the road like you describe when it encounters a problem, then it is L5.

In basic terms, the levels of autonomy are this:

Level 3: Full self-driving in a limited ODD and human may be asked to take over if needed.

Level 4: Full self-driving in a limited ODD but car does not need to ask the human to take over except when exiting its ODD. Car can safely pull over on its own if it needs to.

Level 5: Full self-driving with no limits on ODD and car does not need to ask the human to take over. Car can safely pull over on its own if it needs to.
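
If it helps to see those distinctions side by side, here's a rough Python sketch (just my own illustration, not anything official from SAE) of how "is the ODD limited?" and "is the driver the fallback?" separate the levels:

# Rough illustration only: two yes/no questions separate L3, L4 and L5.
def sae_level(odd_is_limited: bool, driver_is_fallback: bool) -> str:
    """Classify a full self-driving system by the two distinctions above."""
    if driver_is_fallback:
        # May ask the human to take over, with adequate warning (L3 always has a limited ODD).
        return "L3"
    if odd_is_limited:
        # Handles its own fallback, but only works inside its ODD (e.g. a geofence).
        return "L4"
    # Handles its own fallback and has no ODD limits.
    return "L5"

print(sae_level(odd_is_limited=True, driver_is_fallback=True))    # L3
print(sae_level(odd_is_limited=True, driver_is_fallback=False))   # L4
print(sae_level(odd_is_limited=False, driver_is_fallback=False))  # L5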

Hope that helps.

Thanks. Yes, that helps.

My question is about "limited ODD" (Operational Design Domain). I read that as including the possibility of geofencing. Am I misunderstanding the term? As I understand it, ODD refers to the conditions and geographical range where the system will function. Waymo is limited geographically. Tesla is not. My system is only EAP, but it can operate on any street or road with well-marked lane lines. Waymo can do far more, but it cannot do anything at all where I live. How does the terminology deal with this? Can a car company promise me an L3 or L4 car and then when it's delivered it turns out that Maui is outside its ODD and the autonomy system will never engage?

For many years into the future I don't expect to have a car that drives on every kind of road in all weather conditions, but I want terminology that differentiates between a system that's geofenced and one that's not, for any given list of features.
 
Thanks. Yes, that helps.

My question is about "limited ODD" (Operational Design Domain). I read that as including the possibility of geofencing. Am I misunderstanding the term? As I understand it, ODD refers to the conditions and geographical range where the system will function.

Yes, ODD includes geofencing. But it is important to understand that ODD is not limited to just geofencing. ODD is the entire spectrum of conditions AND geo locations in which an autonomous car can operate.

ODD includes:
1) Geo location
2) Road types (e.g. highways or city streets)
3) Speed
4) Time of Day (day or night)
5) Weather conditions (clear weather, rain, snow for example)
6) Other constraints

So for example, you could have an autonomous car that is not geofenced at all but can only operate at certain speeds, or only operate during the day or only operate in clear weather. Those would also be restrictions on the car's ODD that don't involve geofencing.
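
To make that concrete, here's a little Python sketch (my own illustration with made-up field names, not any official schema) of an ODD as a bundle of constraints, only one of which is a geofence:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ODD:
    # None on a field means "no restriction" along that dimension.
    geofence: Optional[str] = None          # e.g. "Phoenix metro area"
    road_types: Optional[List[str]] = None  # e.g. ["highway"] or ["highway", "city street"]
    max_speed_mph: Optional[int] = None
    daytime_only: bool = False
    clear_weather_only: bool = False

# Not geofenced at all, yet still a limited ODD:
slow_daylight_only = ODD(max_speed_mph=35, daytime_only=True, clear_weather_only=True)

# Geofenced, but otherwise broad:
phoenix_only = ODD(geofence="Phoenix metro area")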

Waymo is limited geographically. Tesla is not. My system is only EAP, but it can operate on any street or road with well-marked lane lines. Waymo can do far more, but it cannot do anything at all where I live. How does the terminology deal with this? Can a car company promise me an L3 or L4 car and then when it's delivered it turns out that Maui is outside its ODD and the autonomy system will never engage?

Don't confuse capabilities with ODD. Waymo's capabilities are greater than EAP's: the cars can do more, like yielding for emergency vehicles, which EAP cannot do. But Waymo's ODD is arguably more restrictive than EAP's ODD, since Waymo cars only operate in small geofenced areas whereas our cars can operate in much wider areas.

I think the terminology deals with it by using the levels. So for example, I would describe a Waymo car as an L4 autonomous car with an ODD restricted to the Phoenix area. L4 tells me that it is a fully autonomous car, so that tells me what it is capable of. The ODD tells me that it is geofenced. I would describe a Tesla as an L2 driver-assist car with an ODD of every road, speeds from 0-90 mph, day and night. The L2 tells me that the capabilities are less since it is only a driver assist, but the ODD tells me where it can operate.

Presumably, car manufacturers should disclose any geofencing or other ODD restrictions when they describe the driving features of the car. So they might say something like "autonomous driving features not available in Maui" if that were the case so that you would know that the features won't work there.
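
Or, putting the level and the ODD together in the same kind of rough sketch (my own illustration; the figures are just the ballpark numbers above, not specs):

# Describe a vehicle by capability level plus ODD, not by level alone.
waymo_one = {
    "level": "L4",
    "odd": {"geofence": "Phoenix area"},
}
tesla_eap = {
    "level": "L2",
    "odd": {
        "geofence": None,  # not geofenced
        "roads": "any road with well-marked lane lines",
        "speed_mph": (0, 90),
        "time_of_day": "day or night",
    },
}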
 
Yes. And that is my point. Assigning the same label to a car that can drive itself only in Phoenix and to a car that can drive itself anywhere is, in my opinion, not useful.

It is not the same label. A car like Waymo that can only drive in Phoenix is L4. A car that can drive anywhere is L5. Different labels!
 
So, what's the difference between L3 and L4? And how would they distinguish a car that's eyes-off-the-road (L3) in Kalamazoo Michigan only, from a car with the same capabilities but no geographical limitations? I always thought L4 meant that you could sleep in the back seat and the car would park and wake you if you were needed. But how do they distinguish a car that can do that only in Idaho vs. one that can do that anywhere?

My shorthand definitions were:
L2 - Driver must be constantly aware and responsible for taking over when needed;
L3 - Driver need not be constantly aware, but must be awake and ready to take over when alerted by the car;
L4 - Driver can be asleep in the back seat;
L5 - No driver need be present.

But then a car is claimed to be L3 when it can only operate on the freeways of certain cities, which is less useful than Tesla's L2 EAP. And it takes them 35 pages of tech-speak to lay out the levels, and they don't even address the significant difference between a severely geofenced system and one that's not geofenced at all.
 
And FWIW, Tesla used to define FSD as Level 5, but now seems to define FSD as a limited set of features that will operate at Level 2 for the foreseeable future and which they expect will reach Level 5 some day.

I'm not complaining about what Tesla is doing. I think what they're doing and have done is wonderful. I'm complaining about unrealistic promises and selling something they cannot deliver within a time frame in which the buyer will be able to use it. They promised a car that needs no driver, and it's become clear that the cars they promised this for will never reach that level.
 
The SAE Levels of Automation admit all kinds of weird edge cases. What about an autonomous tractor that can drive up and down a single dirt road, relying on high-precision GPS beacons? Would that technically be Level 4? What about an autonomous shuttle at an office park that is fenced off from pedestrians? Is that Level 4?

Also, if you take Level 5 to the extreme, it would seem to imply a Level 5 vehicle should be able to drive in places like narrow mountain roads that the average human driver couldn't drive. (Is it anywhere a typical human can drive? Or anywhere any human can drive?)

What about a car that can drive autonomously without human supervision on 99.9%+ of public roadways and parking lots in the contiguous United States? Isn't that technically Level 4? But it's what most people picture in their heads as Level 5. A geofence that spans a continent won't feel to users in the contiguous U.S. like much of a geofence at all.
 
Yes, I believe Tesla will eventually do away with maps completely when the camera vision is perfect enough.

How can that ever happen? I can't imagine a time when every road is imaged well enough that there is no need for maps. Heck, there are still plenty of places where I have trouble seeing what is going on.

You know how people slow down passing an accident scene because they can't resist looking as they go by? Will the car also do that, but looking not at the people but at the cars?
 
So, what's the difference between L3 and L4?

L3 will ask the driver to take over with enough advance warning whereas L4 can handle its own fallback (like pulling over to the side) and therefore does not need to ever ask the driver to take over.

And how would they distinguish a car that's eyes-off-the-road (L3) in Kalamazoo Michigan only, from a car with the same capabilities but no geographical limitations? I always thought L4 meant that you could sleep in the back seat and the car would park and wake you if you were needed. But how do they distinguish a car that can do that only in Idaho vs. one that can do that anywhere?

The car that can only do it in Idaho would be L4 since it is geofenced. The car that can do it everywhere would be L5 since it is not geofenced.

My shorthand definitions were:
L2 - Driver must be constantly aware and responsible for taking over when needed;
L3 - Driver need not be constantly aware, but must be awake and ready to take over when alerted by the car;
L4 - Driver can be asleep in the back seat;
L5 - No driver need be present.

You are using the wrong definitions for the levels. Thinking of the driver being "eyes off" or asleep is the wrong way to look at levels.

The correct way to look at levels according to SAE is this:

L3 = full self-driving but limited ODD and driver is asked to take over.
L4 = full self-driving but limited ODD and driver is NOT asked to take over.
L5 = full self-driving with no limits on ODD and driver is NOT asked to take over.
 
From the SAE:

[Image: SAE J3016 Levels of Driving Automation chart] (source)
 
Here's an alternate classification system I invented just for fun. Instead of L0, L1, L2, L3, L4, and L5, there's D1, D2, D3, D4, and D4R (D4 with remote monitoring). I don't think this is a better system or that it should replace the SAE system. It's just an alternate way of categorizing things.

[Image: table defining the D1–D4R categories]

Notes:
  • This classification scheme is agnostic to driving environments. An autonomous vehicle system can be D2, D3, or D4 on the highway, but D1 everywhere else. A system can be D4 in a geofenced area.

  • A higher number isn’t necessarily better or more autonomous. A D4 farming vehicle on remote dirt roads probably needs less advanced technological capabilities than a D3 vehicle in a city’s downtown.

  • A fifth category is D4R: Driving 4 (Remote). This refers to when an otherwise D4 system is monitored by a remote human operator, or when a remote human operator is available to help upon request.

  • This classification scheme is intended to supplement others like the SAE Levels of Driving Automation, not replace them.
Examples:

D1: Autopilot.

D2: Navigate on Autopilot (with lane change confirmations).

D3: Navigate on Autopilot (with unconfirmed lane changes); a feature complete version of Tesla’s Full Self-Driving Capability product.

D4: Full Self-Driving Capability with the only occupant sleeping in the backseat; Waymo One without safety drivers (D4R); Cruise Anywhere without safety drivers; Nuro grocery delivery (D4R).
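
If it's easier to read than an image, here's the same example mapping as a quick Python sketch (purely illustrative, using only the examples listed above):

from enum import Enum

class D(Enum):
    # My own alternate categories, not SAE's; D4R is D4 with remote monitoring.
    D1 = "Driving 1"
    D2 = "Driving 2"
    D3 = "Driving 3"
    D4 = "Driving 4"
    D4R = "Driving 4 (Remote)"

examples = {
    "Autopilot": D.D1,
    "Navigate on Autopilot (confirmed lane changes)": D.D2,
    "Navigate on Autopilot (unconfirmed lane changes)": D.D3,
    "Feature complete Full Self-Driving Capability": D.D3,
    "Full Self-Driving with the only occupant asleep in the backseat": D.D4,
    "Waymo One without safety drivers": D.D4R,
    "Cruise Anywhere without safety drivers": D.D4,
    "Nuro grocery delivery": D.D4R,
}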
 
And FWIW, Tesla used to define FSD as Level 5, but now seems to define FSD as a limited set of features that will operate at Level 2 for the foreseeable future and which they expect will reach Level 5 some day.

Elon predicts that by the end of 2020, HW3 Teslas will be technically capable of safely driving anywhere with no one in them. He predicts that sometime after that (he's not specific) regulators will approve their use as robotaxis.

What Elon's envisioning might be limited to, say, the contiguous U.S., so that would make it technically Level 4. But not geofenced to a particular city.

Of course, Elon's predictions have been wrong before and he could very well be wrong this time too. I'm just conveying what he's said.
 
Your "low" goal is not as easy as you think. To make a car drive between two points you choose, the car needs to be able to drive in all situations without human intervention, in all conditions.

That might be a little too strong a statement. I think I can safely say that to make a car drive between San Francisco and Los Angeles by any reasonable route, my car does not need to know how to, for example, install snow chains, nor handle European-style highway speed signs. :)

But I get your point.
 
Tesla will also perform a DDT fallback. Let go of the wheel while on AP and see what it does. Once it's done screaming at you, it performs a fallback function that turns on the hazard indicators and brings the vehicle to a stop.

No, that's not a valid DDT fallback, for three reasons:
  1. "Human driver paying attention" is not a legitimate autonomous ODD; it is, in fact, the opposite of an ODD.
  2. Stopping in the middle of the highway is not generally safe, just safer than the alternative.
  3. The car emits "Take Over Immediately" disconnects when it exceeds the limits of its ODD, and does this frequently; when it does, it does not do any sort of DDT fallback.
If Tesla could cut the "Take Over Immediately" disconnects to approximately zero by pulling the car to the side of the road in the event of a failure, and could at least detect all conditions that exceed their ODD (e.g. construction, emergency vehicles, inability to find the lanes, and other situations that AP doesn't handle) and consistently pull over safely when those conditions occur, then Tesla AP would qualify as Level 4 with an ODD of "highways only". It's a long way from there, though, I think.
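
Put another way, here's a rough sketch (my own framing in Python, not SAE's wording) of what each level is expected to do at the edge of what it can handle; detecting that edge reliably is of course the hard part:

def on_cannot_continue(level: int) -> str:
    """Illustrative only: the expected response when the system can no longer handle the drive."""
    if level <= 2:
        # The supervising human driver is the fallback at all times.
        return "disengage and alert the driver, who is already supposed to be watching"
    if level == 3:
        # The human is still the fallback, but gets adequate warning to re-engage.
        return "request a takeover with enough lead time for the driver to re-engage"
    # L4/L5: the system performs the DDT fallback itself.
    return "reach a minimal risk condition, e.g. pull over and stop somewhere safe"

print(on_cannot_continue(2))
print(on_cannot_continue(4))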


L3 will ask the driver to take over with enough advance warning whereas L4 can handle its own fallback (like pulling over to the side) and therefore does not need to ever ask the driver to take over.

The car that can only do it in Idaho would be L4 since it is geofenced. The car that can do it everywhere would be L5 since it is not geofenced.

To be pedantic, assuming the passenger does not want to get out of the car and walk the rest of the way, even an L4 car could still ostensibly ask a human driver to take over when it exits a geofenced or highway-only ODD. The key difference is that it asks the human driver to take over in a safe way, having already placed the vehicle in a location where it is unlikely to get struck by another vehicle if the human driver fails to take over in a timely manner, as opposed to doing so while the vehicle is rolling down the highway at 75 MPH. :)
 