Welcome to Tesla Motors Club

What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year and to whom?

  • None - on Jan 1 'later this year' will simply become end of 2020!
    Votes: 106 (55.5%)
  • One or more major features (stop lights and/or turns) to a small number of EAP HW 3.0 vehicles.
    Votes: 55 (28.8%)
  • One or more major features (stop lights and/or turns) to a small number of EAP HW 2.x/3.0 vehicles.
    Votes: 7 (3.7%)
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners!
    Votes: 8 (4.2%)
  • One or more major features (stop lights and/or turns) to all FSD owners!
    Votes: 15 (7.9%)

  Total voters: 191
This has been an ongoing argument. I think that city NoA will be a beta Level 5 system because that is clearly the design intent. That doesn't mean it can be used without a safety driver! Just like every other self-driving prototype (other than Waymo's), it needs a safety driver.

You are correct that a safety driver is not the critical difference between a driver assist and an autonomous system. From the SAE document, the critical difference between a driver assist (L2) and an autonomous system (L3, L4, L5) is whether it can handle the entire OEDR (monitor the environment outside the car). So that is what will determine if Tesla's "city NOA" is a driver assist or an autonomous system. Specifically, if Tesla finishes the NN such that "city NOA" can do the entire OEDR, then it will be a beta autonomous system. As you say, it will still require driver supervision. But if Tesla leaves out part of the OEDR (for example, leaves out detecting hazardous road debris) then it will still be a driver assist and be L2.

At this point, I don't think we really know the answer to that yet. We know that Karpathy's team is working on the entire OEDR. Karpathy has talked about it many times. So I think we can surmise that Tesla's intent is certainly to have a complete OEDR and therefore to have a fully autonomous prototype. We just don't know yet what Tesla will actually be able to deliver to customers or when.
 
You are correct that a safety driver is not the critical difference between a driver assist and an autonomous system. From the SAE document, the critical difference between a driver assist (L2) and an autonomous system (L3, L4, L5) is whether it can handle the entire OEDR (monitor the environment outside the car). So that is what will determine if Tesla's "city NOA" is a driver assist or an autonomous system. Specifically, if Tesla finishes the NN such that "city NOA" can do the entire OEDR, then it will be a beta autonomous system. As you say, it will still require driver supervision. But if Tesla leaves out part of the OEDR (for example, leaves out detecting hazardous road debris) then it will still be a driver assist and be L2.

At this point, I don't think we really know the answer to that yet. We know that Karpathy's team is working on the entire OEDR. Karpathy has talked about it many times. So I think we can surmise that Tesla's intent is certainly to have a complete OEDR and therefore to have a fully autonomous prototype. We just don't know yet what Tesla will actually be able to deliver to customers or when.
Not sure we need to have this argument again. Even Waymo is missing features though. Just look at the example of the moving truck above. If a vehicle can't properly react to a moving truck (is it double parked? about to move? unloading? prediction of the actions of other vehicles and people around a moving truck that are dependent on its state?) does that mean it's not Level 4? I think the answer has to be somewhat subjective.
 
I have not seen anyone argue that Tesla has L4 autonomy now.

Exactly my point, but by the criteria being laid out here, they easily qualify for Level 4.

In fact, we don't need to guess. The SAE document that you are using has a very nice logic flow diagram (Fig 9 on page 20) for determining the level of autonomy of a vehicle:
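
For those without the document handy, the flow that diagram encodes can be paraphrased roughly like this, in Python. This is a hedged sketch; the predicate names are mine, not official J3016 terminology:

```python
# Rough sketch of the J3016 "level assignment" decision flow (Fig 9).
# Predicate names are illustrative, not official SAE terminology.

def sae_level(sustained_lateral: bool, sustained_longitudinal: bool,
              complete_oedr: bool, ddt_fallback: bool,
              unlimited_odd: bool) -> int:
    """Return an SAE driving-automation level (0-5) for a feature."""
    if not (sustained_lateral or sustained_longitudinal):
        return 0  # no sustained motion control: Level 0
    if not (sustained_lateral and sustained_longitudinal):
        return 1  # lateral OR longitudinal, not both: Level 1
    if not complete_oedr:
        return 2  # the driver still performs OEDR: Level 2
    if not ddt_fallback:
        return 3  # system relies on a fallback-ready user: Level 3
    return 5 if unlimited_odd else 4  # full DDT + fallback: Level 4 or 5
```

Under this sketch, a Waymo-style system (complete DDT, its own fallback, limited ODD) comes out at Level 4, and a system without complete OEDR comes out at Level 2.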

You really need to stop using diagrams and images to make your points, because they miss a lot of the nuance and details. The phrase "The devil's in the details" exists for a reason, after all. I'll demonstrate.

Does Tesla's Autopilot perform the entire DDT? No.

By the SAE definition of DDT, yes it absolutely does. AP controls lateral and longitudinal movement, monitors the driving environment, and performs OEDR. That's the actual definition of DDT. OEDR is satisfied by pedestrian detection, cone detection, and occasionally detecting traffic control devices. If we use the flow charts instead of the words, we could in effect say that any recognition of and reaction to any object satisfies the requirement. See: Figure 1, Section 3.13.

Does a Waymo car perform the complete DDT and DDT fallback within a limited ODD? Yes. So Waymo cars are L4.

Tesla will also perform a DDT fallback. Let go of the wheel while on AP and see what it does. Once it's done screaming at you, it performs a fallback function that turns on the hazard indicators and brings the vehicle to a stop.

"So it's not really a command to the car, it's just adding information."

When the vehicle needs intervention while driving, it also requires remote operator intervention. Not just when it's stuck behind a vehicle and can't decide whether it should pass, but also when it encounters a traffic situation it can't handle. Basically, Waymo's DDT fallback is human intervention by a remote operator. How they give control input into the vehicle isn't really the important part here. I don't care if they have a wheel, joystick, or keyboard to tell the car to force a maneuver or continue retrying; the fact is that the control input is there.

That doesn't mean it can be used without a safety driver!

I think the sticking point between us is the role of that "safety driver". Certainly L1-L4 are clearly defined as allowing a human presence for intervention in some or all cases, depending on how low on the autonomy scale you go. But for actual L5, the role of the safety person appears to be about development. It is my opinion, based on reading the entire recommended practice, that this applies only to the development phase. I believe, personally, that it would be disingenuous to call a product you sell to people Level 5 autonomy (something Elon says Tesla will not be delivering anyway) while it is under active development. I concede that your quote above about level definitions makes room for this, but to be frank, that's bullshit. That's clearly referencing a traditional factory development process, and not actual users using the actual product. At that point, in my opinion, that's Level 4.
 
Exactly my point, but by the criteria being laid out here, they easily qualify for Level 4.

What criteria?

You really need to stop using diagrams and images to make your points, because they miss a lot of the nuance and details. The phrase "The devil's in the details" exists for a reason, after all. I'll demonstrate.

So SAE's own diagrams are not good enough?

By the SAE definition of DDT, yes it absolutely does. AP controls lateral and longitudinal movement, monitors the driving environment, and performs OEDR. That's the actual definition of DDT. OEDR is satisfied by pedestrian detection, cone detection, and occasionally detecting traffic control devices. If we use the flow charts instead of the words, we could in effect say that any recognition of and reaction to any object satisfies the requirement. See: Figure 1, Section 3.13.

Our cars do not have complete OEDR. We don't even have traffic light or stop sign response yet. And our cars can't make turns at intersections. So no, our cars do NOT have a complete DDT. So they can't be autonomous yet.
 
When the vehicle needs intervention while driving, it also requires remote operator intervention. Not just when it's stuck behind a vehicle and can't decide whether it should pass, but also when it encounters a traffic situation it can't handle. Basically, Waymo's DDT fallback is human intervention by a remote operator. How they give control input into the vehicle isn't really the important part here. I don't care if they have a wheel, joystick, or keyboard to tell the car to force a maneuver or continue retrying; the fact is that the control input is there.
The distinction is that the car is responsible for driving and only requesting input from the remote operator when it gets stuck. That's very different from the job of a safety driver.
I think the sticking point between us is the role of that "safety driver". Certainly L1-L4 are clearly defined as allowing a human presence for intervention in some or all cases, depending on how low on the autonomy scale you go. But for actual L5, the role of the safety person appears to be about development. It is my opinion, based on reading the entire recommended practice, that this applies only to the development phase. I believe, personally, that it would be disingenuous to call a product you sell to people Level 5 autonomy (something Elon says Tesla will not be delivering anyway) while it is under active development. I concede that your quote above about level definitions makes room for this, but to be frank, that's bullshit. That's clearly referencing a traditional factory development process, and not actual users using the actual product. At that point, in my opinion, that's Level 4.
Deployed L3-L5 systems do not require a safety driver. A Level 3 system requires a driver to be in the car but they are not responsible for monitoring the system. Just because Tesla plans on having consumers monitor the system does not mean it is a Level 2 system in my opinion.
Elon said FSD will be feature complete "level 5 no geofence" by the end of this year, so he is promising Level 5, at least to investors. Note that doesn't mean it can be operated without a safety driver. That will happen by the end of next year (robotaxis :p).
 
I don't think it's terribly useful to argue about the definitions of L3, L4, and L5. What matters to me is what the car can do. The Waymo car is very severely geofenced. This makes it a gimmick. I do think that Waymo is the company to watch for significant milestones in autonomy. But a geofenced system that only operates on certain streets in a few mainland cities is useless to me. The great thing about Tesla is that they're trying to develop a system so generalized that it's limited only by the kind of roads/conditions, and not limited to a short list of routes that have been carefully mapped and hardwired into the system. My car only has EAP, but that works anywhere the car can see clearly-marked lane lines.
 
Not sure we need to have this argument again. Even Waymo is missing features though. Just look at the example of the moving truck above. If a vehicle can't properly react to a moving truck (is it double parked? about to move? unloading? prediction of the actions of other vehicles and people around a moving truck that are dependent on its state?) does that mean it's not Level 4? I think the answer has to be somewhat subjective.

It is not subjective, since Level is a manufacturer definition. Either the manufacturer defines the Level or they do not. Prototypes are specifically included in the Level. Waymo designates their cars as Level 4, so their prototypes are Level 4.

Tesla has stated goal of Level 5 but so far have not stated any definition beyond Level 2. I guess in theory they might have Level 5 prototypes in the labs though.
 
The distinction is that the car is responsible for driving and only requesting input from the remote operator when it gets stuck. That's very different from the job of a safety driver.

Deployed L3-L5 systems do not require a safety driver.

But Level 4-5 prototypes can require safety drivers and still be Level 4-5.

SAE is specific about this.

It is all in the standard. Just read it.
 
@tomc603 @diplomat33

Whether or not Teslas perform the DDT in the ODD, they do not perform an MRC, and that is why they are in no way Level 4-5 functional at this stage. DDT/ODD without MRC is Level 3 at best, and not even that, given that the DDT cannot be performed without driver monitoring.

Whether or not Tesla has a Level 4-5 designated prototype somewhere is an open question, though, given that prototypes are not actually required to have all the functionality as long as manufacturer designates them at the Level.
 
Your "low" goal is not as easy as you think. To make a car drive between two points you choose, the car needs to be able to drive in all situations without human intervention, in all conditions. Basically what I'm saying here is Level 4/5 autonomy may not actually be a solvable problem. We simply do not know, because we do not know how complex a system would need to be to solve this massive problem set.

We know the upper bound... it's the thing between your ears. I'm not saying we know, even approximately, how much smarts we need to throw at such a system. Just that it's technically solvable even if not economically viable. And I'm not saying Level 4/5, just a basic "I can cope with the ordinary driving, and stop in emergencies, but anything beyond that I'm handing back to you" type of thing.
 
But Level 4-5 prototypes can require safety drivers and still be Level 4-5.

SAE is specific about this.

It is all in the standard. Just read it.
That's why I said "deployed" systems don't require a safety driver; perhaps there's a better word? I would like to have a copy of SAE J3016, but as far as I've been able to tell, you've got to pay for it.
It is not subjective, since Level is a manufacturer definition. Either the manufacturer defines the Level or they do not. Prototypes are specifically included in the Level. Waymo designates their cars as Level 4, so their prototypes are Level 4.

Tesla has stated goal of Level 5 but so far have not stated any definition beyond Level 2. I guess in theory they might have Level 5 prototypes in the labs though.
I think that their "production design intent" for city NoA is level 5.
 
There is going to be a point at which insurers won't want self-driving, because with so few accidents it will drive down the premiums they can charge, and their revenue will drop due to competition. We could actually see them working against self-driving in a kind of behind-the-back-of-the-hand way.

The end-game (assuming we DO one day get to level 4/5, with a nod to the unknowns here), is that most cars will be self-driving, and at some point many people won't even bother getting a driving license (who knows how to ride a horse these days?). And yes, THEN the insurance companies will try to cling onto a dead business model with every legal/political trick they can think of.
 
I was on the Edens going southbound where the Wilson Street ramp merges. There are no dashes at this merge. A pickup truck raced down the ramp with the idea of passing me on the right and continuing - illegally - along the shoulder. At about the same time the truck was passing me, my car decided it was time to automatically center itself in the big fat merge lane, contrary to any normal human behavior. I held the wheel to maintain the current trajectory so as not to run into the truck nor freak out the truck driver and cause him to do something weird, and that caused AS to disengage. It SO BADLY wanted to veer to the right and hit that truck and get in the center of that big, wide, fat, juicy lane!

What level is that? Stupid level? I love my car, but the thing doesn't actually know how to drive. It can go straight in a line and maybe pull a couple of tricks out here and there, but let's reel it back a bit. This thing is nowhere close to actually understanding how to drive.
 
So SAE's own diagrams are not good enough?

Engineering by PowerPoint has cost aircraft, spacecraft, and human lives. This is why they work so hard on the words in these documents, and are sure to define and detail all terms. Diagrams may be good enough for a C-level meeting, but we're talking definitions here, so no, pictures aren't worth a single word.

I also don't get my news or political views in meme form, in case you're curious. :D

Our cars do not have complete OEDR.

Certainly not. But the SAE definitions don't detail to what level they need to handle OEDR. And specifically, I'll say that they cannot define that requirement, because there's currently no computer that will identify and classify every object it could possibly come upon. This is the argument I have against the concept that a NN will ever be 100% Level 5 capable in the first place.

Note that doesn't mean it can be operated without a safety driver. That will happen by the end of next year (robotaxis :p).

BaZING! lol Yeah, we should start a betting pool on robotaxi release date.

What matters to me is what the car can do.

That's literally what the SAE definitions are telling you. That's the entire purpose of the document, and the levels it describes.

Just that it's technically solvable even if not economically viable.

See, this is the problem. If you're comparing a computer to a human brain, then no. We have no idea how complex, nor what would be required, nor how to make it work. The idea that the human (or any) brain is just a large neural network, or even a large collection of large neural networks was debunked in the 1950s. We do not know how large a network would need to be to be able to handle the task of object and event detection and classification.

And the actual control output to the components of the car is all traditional code. Not just in Tesla's case, but in all of these companies. The only task(s) that the NNs do is detection and prediction.
 
Engineering by PowerPoint has cost aircraft, spacecraft, and human lives. This is why they work so hard on the words in these documents, and are sure to define and detail all terms. Diagrams may be good enough for a C-level meeting, but we're talking definitions here, so no, pictures aren't worth a single word.

I also don't get my news or political views in meme form, in case you're curious. :D

I think you are being a little silly. A technical diagram is not a meme. And I am sure you are aware that academic and technical papers do use diagrams or charts to illustrate or summarize information. They are visual aids. They don't replace word definitions.

Certainly not. But the SAE definitions don't detail to what level they need to handle OEDR. And specifically, I'll say that they cannot define that requirement, because there's currently no computer that will identify and classify every object it could possibly come upon. This is the argument I have against the concept that a NN will ever be 100% Level 5 capable in the first place.

I don't think anyone is suggesting that a computer needs to identify and classify every single object. But there are typical objects that are critical to identify if you want your autonomous car to operate safely. And I think we can pretty easily figure out what those objects might be:
1) Moving vehicles in your lane or adjacent lanes such as other cars, trucks, motorcycles, cyclists etc...
2) Static vehicles in your path such as a stopped car, stopped truck, etc...
3) Pedestrians including adults or children, moving or static, in the vicinity of your car.
4) Static non-vehicle objects in your path such as road debris.
5) Lane Markings
6) Road Markings
7) Traffic Lights
8) Road signs
9) Environmental factors (rain, ice, etc)

I think if your autonomous vehicle can identify and track objects in these 9 categories, you'd have a pretty complete OEDR, wouldn't you say?
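
As a thought experiment, those nine categories could be written down and scored as a checklist. This is purely illustrative; the names and the coverage function are mine, not anything from the SAE document:

```python
# Purely illustrative OEDR checklist built from the nine categories above.
OEDR_CATEGORIES = [
    "moving vehicles",             # cars, trucks, motorcycles, cyclists
    "static vehicles in path",     # stopped car, stopped truck
    "pedestrians",                 # adults or children, moving or static
    "static non-vehicle objects",  # e.g. road debris
    "lane markings",
    "road markings",
    "traffic lights",
    "road signs",
    "environmental factors",       # rain, ice, etc.
]

def oedr_coverage(detected):
    """Fraction of the checklist a perception stack claims to cover."""
    return len(set(detected) & set(OEDR_CATEGORIES)) / len(OEDR_CATEGORIES)
```

A stack that only handles lane markings, traffic lights, and road signs would score 3/9 on this list; the argument above is that anything below 9/9 leaves you at L2.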
 
It is not subjective, since Level is a manufacturer definition. Either the manufacturer defines the Level or they do not. Prototypes are specifically included in the Level. Waymo designates their cars as Level 4, so their prototypes are Level 4.

I don't understand how you can say that a car is whatever level a manufacturer defines it to be. The levels are defined by the SAE, not by the car maker. A car must do what the SAE says to be able to call itself a particular level.

The end-game (assuming we DO one day get to level 4/5, with a nod to the unknowns here), is that most cars will be self-driving, and at some point many people won't even bother getting a driving license (who knows how to ride a horse these days?). And yes, THEN the insurance companies will try to cling onto a dead business model with every legal/political trick they can think of.

I highly doubt it. There will always be need for insurance. Collisions are only one aspect of insurance. There's theft, vandalism, hailstorms, and there will continue to be collisions for a very long time. There will be court battles over who is liable, and therefore who must pay the insurance premiums, but insurance will be needed and insurance companies will do just fine. Note that you have home insurance even though you never drive your home into the side of another home.

The folks who will suffer when cars drive themselves are the owners and employees of body shops. And undertakers will have to wait a bit longer for your business because a lot of people will live a little longer because they're not killed prematurely in auto accidents.

Here it is.

Wow. Thanks for posting that, but it is more than I can digest. As near as I can tell, levels 3, 4, and 5 can have a limited ODD (be geofenced), so we need a term to describe what Tesla is trying to build: a system that can drive anywhere a car is capable of going.

Also, the definitions include that level 3 (which I've been describing as eyes off the road) must alert the driver "in a timely manner" when it is necessary for the driver to intervene. I was not able to read the whole document so I don't know if "timely" is defined. But I cannot tell if this means the driver can watch a movie, and the system will alert her/him in time to take over, or if "timely" allows the system to alert the driver to a hazard so close that the driver has to have his/her hands on the wheel at all times in order to respond in time.

The car I'd like, realistically, is one that does what my EAP does now, but would allow me to take my eyes off the road, and rather than me being responsible for knowing when I need to disengage the system, the car will alert me, say ten seconds before I need to take over. I've been calling this Level 3. Is level 3 something different now, and if so, how do I refer to a car that could do this? A 35-page document is more than I can digest to try to answer this.

How about a car that can drive anywhere on-road while the driver sleeps in the back. No geofencing, but it may occasionally be necessary for the car to park and wake up the driver to take over. I was calling this Level 4, but apparently Level 4 is now much more broadly defined. What do I call a car that could do this?

I was on the Edens going southbound where the Wilson Street ramp merges. There are no dashes at this merge. A pickup truck raced down the ramp with the idea of passing me on the right and continuing - illegally - along the shoulder. At about the same time the truck was passing me, my car decided it was time to automatically center itself in the big fat merge lane, contrary to any normal human behavior. I held the wheel to maintain the current trajectory so as not to run into the truck nor freak out the truck driver and cause him to do something weird, and that caused AS to disengage. It SO BADLY wanted to veer to the right and hit that truck and get in the center of that big, wide, fat, juicy lane!

What level is that? Stupid level? I love my car, but the thing doesn't actually know how to drive. It can go straight in a line and maybe pull a couple of tricks out here and there, but let's reel it back a bit. This thing is nowhere close to actually understanding how to drive.

AP/EAP is not intended to "know how to drive." AP/EAP is a driver assist feature that keeps your car centered in the lane and adjusts its speed to the flow of traffic, within the maximum you set, and subject to speed limits on most roads. That's all it is and all it does, and is why it's only Level 2 and requires your constant attention. I have to disengage EAP from time to time for things (very roughly) like you describe. Pedestrians or bicycles very close to or in the lane, poorly-marked lanes, too-wide lanes at merge points, etc.

Public service announcement: Tesla cars today perform a certain small number of driver-assist tasks, some of them pretty well and others just so-so. They are not "self-driving" by any common-sense definition.
 
Wow. Thanks for posting that, but it is more than I can digest. As near as I can tell, levels 3, 4, and 5 can have a limited ODD (be geofenced), so we need a term to describe what Tesla is trying to build: a system that can drive anywhere a car is capable of going.

No, only L3 and L4 have limited ODD. L5 has no limits on ODD. So a system that can drive anywhere and any time a human can drive is L5. So what Tesla hopes to build is L5.

Also, the definitions include that level 3 (which I've been describing as eyes off the road) must alert the driver "in a timely manner" when it is necessary for the driver to intervene. I was not able to read the whole document so I don't know if "timely" is defined. But I cannot tell if this means the driver can watch a movie, and the system will alert her/him in time to take over, or if "timely" allows the system to alert the driver to a hazard so close that the driver has to have his/her hands on the wheel at all times in order to respond in time.

L3 is defined as full self-driving while the system is on, meaning that the driver does not need to pay attention. So when the system is on, the "driver" can take their hands off the wheel, eyes off the road, read a book, do whatever. But if the system encounters a problem, it must give the "driver" enough time to reengage with what is going on and take back control. I don't think the SAE defines a specific amount of time; it just says enough time. So the system just needs to give a reasonable amount of time for the human to stop what they are doing, reengage with what is going on, and take back control.

This is one reason why many auto manufacturers, I think, are mostly ignoring L3 and trying to go straight to L4 or L5. The idea is that it is safer to cut out the human completely and just have an autonomous car that can either handle everything itself in its ODD or safely pull over if it can't handle something.

The car I'd like, realistically, is one that does what my EAP does now, but would allow me to take my eyes off the road, and rather than me being responsible for knowing when I need to disengage the system, the car will alert me, say ten seconds before I need to take over. I've been calling this Level 3. Is level 3 something different now, and if so, how do I refer to a car that could do this? A 35-page document is more than I can digest to try to answer this.

No, what you are describing is exactly L3.

How about a car that can drive anywhere on-road while the driver sleeps in the back. No geofencing, but it may occasionally be necessary for the car to park and wake up the driver to take over. I was calling this Level 4, but apparently Level 4 is now much more broadly defined. What do I call a car that could do this?

If it can drive anywhere and any time with no geofencing and can handle its own fallback by pulling over to the side of the road like you describe when it encounters a problem, then it is L5.

In basic terms, the levels of autonomy are this:

Level 3: Full self-driving in a limited ODD and human may be asked to take over if needed.

Level 4: Full self-driving in a limited ODD but car does not need to ask the human to take over except when exiting its ODD. Car can safely pull over on its own if it needs to.

Level 5: Full self-driving with no limits on ODD and car does not need to ask the human to take over. Car can safely pull over on its own if it needs to.
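
In code form, the two properties doing the work in those summaries (whether the ODD is limited, and whether the car may ask the human to take over) might be sketched like this. The names are mine, not SAE's:

```python
# Illustrative sketch of the L3/L4/L5 distinctions summarized above.

def describe_level(limited_odd: bool, may_ask_human_to_take_over: bool) -> str:
    """Map the two distinguishing properties to a level name."""
    if may_ask_human_to_take_over:
        return "Level 3"  # self-driving in a limited ODD, human on standby
    if limited_odd:
        return "Level 4"  # handles its own fallback, but only inside its ODD
    return "Level 5"      # no ODD limits; handles its own fallback anywhere
```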

Hope that helps.
 