Autonomous Car Progress

The ability to drive on its own is not in the software? Then what drove my car to lunch today, making the turns?

The way I see it, L3 is anything above lane centering and Adaptive Cruise Control (L2).

Automatically turning at an exit sounds like L3 to me.

My 2018 (refreshed model) Leaf had lane centering and Adaptive Cruise Control. It wouldn't drive me to lunch.
I kind of agree. If, via FSDb, the car drives itself from point A to point B, and say we could remove the human who takes over and watches, then FSDb without a human in the car (if that were possible by tricking it, or if the software didn't care, and it were legal) technically got from A to B in one piece. That makes it at least L4, no matter how bad it is, does it not?
 
Then you should probably read the actual J3016 document to understand the several ways in which your statement is factually wrong.

Sorry, I was reading the "obviously wrong" infographic that SAE put out.

Since you have such an in-depth understanding of J3016, why not point out some quotes and your explanations?

About the only thing I'm seeing is around the term "DDT fallback-ready user": the user need not supervise a Level 3 ADS while it is engaged, but is expected to be prepared to either resume DDT performance when the ADS issues a request to intervene, or to perform the fallback and achieve a minimal risk condition if a failure precludes continued vehicle operation.

And in that case, it tends to become a definition of "need not monitor."
Remember, Tesla does NOT classify the current operation as L3, and therefore users are required to constantly monitor.
But Tesla could classify it as L3 and then change the disclaimer to just "must be able to accept control."

But they aren't going to do this. They seem to have no desire to do this. But that has little to do with whether the software can perform the function.

Tesla could have an L2 car that can drive you anywhere around the country without your ever being required to touch the wheel, brakes, or accelerator. But you are required to monitor it, ONLY because they don't call it L3, L4, or L5. Just because it can do it doesn't mean that Tesla has to classify it that way.
 
Sorry, I was reading the "obviously wrong" infographic that SAE put out.

No, you were reading the "obviously a very high level summary" infographic.

You'll need to read J3016 to understand why you reached wrong conclusions based on only having looked at the graphic though.


Since you have such an in-depth understanding of J3016, why not point out some quotes and your explanations?

I have repeatedly done so in the past. It never seems to help though.

Folks actually reading and understanding J3016 before trying to comment on it WOULD help though.


About the only thing I'm seeing is around the term "DDT fallback-ready user": the user need not supervise a Level 3 ADS while it is engaged, but is expected to be prepared to either resume DDT performance when the ADS issues a request to intervene, or to perform the fallback and achieve a minimal risk condition if a failure precludes continued vehicle operation.

Well, yes, that's certainly part of it.

Also the fact that to PERFORM the complete DDT you need an OEDR capable of doing so. Tesla does not have one, and explicitly says so in the CA DMV emails.

And they add that the final version of FSDb won't have one either... and thus it cannot be higher than L2.


Remember, Tesla does NOT classify the current operation as L3, and therefore users are required to constantly monitor.
But Tesla could classify it as L3 and then change the disclaimer to just "must be able to accept control."

No, that is NOT possible with the current code. Per Tesla.

It's also not possible with FSDb even in a final version. Again per Tesla.

Because it lacks features and abilities required in an L3 or better system.


Tesla could have an L2 car that can drive you anywhere around the country without your ever being required to touch the wheel, brakes, or accelerator.

Sure.

They don't though.

And they tell you they don't. They have a system that does a really good job performing some, but not all, of the required elements of the DDT, and it is not capable of performing the others.

Tesla themselves said:
we do not expect significant enhancements in OEDR or other changes to the feature that would shift the responsibility for the entire DDT to the system. As such, a final release of City Streets will continue to be an SAE Level 2, advanced driver-assistance feature.

Why do you not believe them?
 
Answer my question then. Who drove my car to lunch?

You did.

Using an L2 ADAS to control lateral position and velocity, which are only two parts, but not ALL parts, of the dynamic driving task.

Tesla, SAE, and the actual law all make this super clear.

Anything above L2 requires the car to be capable of performing the entire DDT. Which Tesla's system is incapable of doing. By Tesla's own admission.

They hope to develop and test ANOTHER system in the future that CAN do the whole DDT. FSDb isn't it.




I kind of agree. If, via FSDb, the car drives itself from point A to point B, and say we could remove the human who takes over and watches, then FSDb without a human in the car (if that were possible by tricking it, or if the software didn't care, and it were legal) technically got from A to B in one piece. That makes it at least L4, no matter how bad it is, does it not?

No, it's not.

"Managed to make one trip without crashing" isn't the SAE definition of anything.

If I roll a regular car down a hill in neutral with no driver, and it safely gets from point A (the top) to point B (the bottom) and stops due to gravity and friction, that's ALSO not L4.


Again please read J3016 (or even what Tesla themselves say in the CA DMV emails) for the many reasons why.
 
ewoodrick said:
Answer my question then. Who drove my car to lunch?


You did.

Using an L2 ADAS to control lateral position and velocity, which are only two parts, but not ALL parts, of the dynamic driving task.

No I didn't. I want to hear you say, in very specific words, who drove the car.

Come on, stop dodging the question. We suspect we know why you are dodging: because you can't bring yourself to say that the car did, because you know that L2 can't drive the car. As the "robot" was known to say, "It does not compute."

Who drove my car???
 
Well, yes, that's certainly part of it.

Also the fact that to PERFORM the complete DDT you need an OEDR capable of doing so. Tesla does not have one, and explicitly says so in the CA DMV emails.

And they add that the final version of FSDb won't have one either... and thus it cannot be higher than L2.


OEDR (Object and Event Detection and Response)

The subtasks of the DDT (Dynamic Driving Task) that include monitoring the driving environment (detecting, recognizing, and classifying objects and events and preparing to respond as needed) and executing an appropriate response to such objects and events (i.e., as needed to complete the DDT and/or DDT fallback).

Can FSD Beta monitor the driving environment?
  • detecting? Yes
  • recognizing? Yes
  • classifying objects? Yes
  • and events? Yes
  • preparing to respond as needed? Yes
  • executing an appropriate response to such objects and events? Yes
What part of OEDR does Tesla not have?
 
Who said one trip? I didn't see it in the OP's post. There are numerous instances where FSD has taken trips from A to B without intervention or crashing.
You would have to take thousands of trips.

Let’s say you want to claim that, on that route, FSDb is better than humans. You need to show there are no more than 10 “small accidents” in some 100,000 miles of driving. To prove it’s better at more serious accidents, you need to drive even more.

That’s how statistics work.
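
To put rough numbers on that, here's a minimal sketch of the sample-size math. I'm assuming the round figures used in this thread (a human baseline of one small accident per 10,000 miles), and the 95% Poisson bound is a standard textbook interval, not anything Tesla publishes:

```python
# Sketch: how many miles with at most 10 "small accidents" would be needed
# to claim, at 95% confidence, a crash rate below an assumed human baseline?
from scipy.stats import chi2

def rate_upper_bound(crashes: int, miles: float, confidence: float = 0.95) -> float:
    """Exact Poisson (chi-squared) upper confidence bound on crashes per mile."""
    return chi2.ppf(confidence, 2 * (crashes + 1)) / (2 * miles)

HUMAN_RATE = 1 / 10_000  # assumed: one small accident per 10,000 miles

for miles in (10_000, 100_000, 1_000_000):
    bound = rate_upper_bound(crashes=10, miles=miles)
    verdict = "better than human" if bound < HUMAN_RATE else "inconclusive"
    print(f"{miles:>9,} miles, 10 crashes -> upper bound {bound:.1e}/mile ({verdict})")
```

Even 100,000 miles with 10 crashes comes out inconclusive against that baseline; you need on the order of a million miles on that route before the claim holds up.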
 
You would have to take thousands of trips.

Let’s say you want to claim that, on that route, FSDb is better than humans. You need to show there are no more than 10 “small accidents” in some 100,000 miles of driving. To prove it’s better at more serious accidents, you need to drive even more.

That’s how statistics work.
No problem, FSD beta has done millions of miles.
I think they released newer numbers this week; these are older.


Inasmuch as FSD Beta is about five times safer than the most recently available US average, the system’s safety stats are not as impressive as Autopilot’s recent results. As per Tesla’s Vehicle Safety Report for Q3 2022, the company reported one crash for every 6.26 million miles driven in which drivers were using Autopilot. Teslas operating with FSD Beta seem to be safer than cars not using Autopilot, however, as such vehicles recorded one crash for every 1.71 million miles driven.

And that's the result of statistics.
 
No problem, FSD beta has done millions of miles.
Not on that route. Not without disengagements, interventions, etc.

In fact, the overall crowdsourced disengagement rate is about 1 in 10 miles. The human accident rate is about 1 in 10,000 miles. Let's chat again when the disengagement rate is in the thousands of miles per disengagement, instead of single/double digits.
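
For scale, here's a back-of-envelope comparison of those two round numbers (both are this thread's claims, not measured data):

```python
# Gap between the claimed crowdsourced disengagement rate and the claimed
# human accident rate (both per mile, both rough round numbers from above).
disengagement_rate = 1 / 10       # ~1 disengagement per 10 miles
human_accident_rate = 1 / 10_000  # ~1 accident per 10,000 miles

gap = disengagement_rate / human_accident_rate
print(f"Disengagements are ~{gap:,.0f}x more frequent than human accidents.")
# -> ~1,000x, which is why per-route anecdotes can't settle the comparison.
```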
 
Tesla, SAE, and the actual law all make this super clear.

Anything above L2 requires the car to be capable of performing the entire DDT. Which Tesla's system is incapable of doing. By Tesla's own admission.

They hope to develop and test ANOTHER system in the future that CAN do the whole DDT. FSDb isn't it.

You're misunderstanding the definitions again. I'll post the explanation, although there's no real point, because you're the type of person who never admits defeat, especially on topics you're fully invested in.

DDT = SAE's breakdown of all the tasks related to driving (including human driving)

Levels 1 and 2: requiring the driver to constantly monitor the environment means, by definition, that SOME part of the OEDR is left to the driver. That's why it's limited OEDR: for Levels 1 and 2, the driver is EXPECTED to intervene for any missing OEDR he/she deems important.

Levels 3-5: the OEDR is "complete" *NOT* because the system can respond to every environmental situation known in the universe. It means the driver can EXPECT the system to respond to every OEDR item while the feature is enabled.

You have to always look at the levels through the lens of AUTOMATION and EXPECTATION, not some level 5 super-human singularity driver.

It's really crazy how subtle and unclear the levels are for 95% of people, even people who've read that damn thing multiple times.

Some text from the definition below:

A driving automation system feature equipped on a conventional vehicle that either:
  1. Supports the driver by executing a limited set of lateral and/or longitudinal vehicle motion control actions sufficient to fulfil a specific, narrowly defined use case (e.g., parking maneuver), while the driver performs the rest of the DDT and supervises the Level 1 or Level 2 feature’s performance (i.e., Level 1 or Level 2 driver support features);
    or
  2. Executes a limited set of lateral and longitudinal vehicle motion control actions, as well as associated object and event detection and response (OEDR) and all other elements of the complete DDT in order to fulfil a specific, narrowly defined use case without human supervision (Level 3 or 4 ADS features).
....

NOTE: A Level 2 driver support feature is capable of only limited OEDR, meaning that there are some events that it is not capable of recognizing or responding to. Therefore, the driver supervises the feature’s performance by completing the OEDR subtask of the DDT. See Figure 2 (discussing the three primary subtasks of the DDT).
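
If it helps, here is that split encoded as a tiny data structure (my own rough sketch, not SAE's wording; the field names are invented for illustration):

```python
# Rough encoding of the OEDR split the quoted J3016 text describes:
# at L1-L2 the driver completes the OEDR subtask; at L3+ the ADS is
# expected to perform the complete OEDR within its ODD.
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    system_does_complete_oedr: bool  # does the feature perform the complete OEDR?
    driver_must_supervise: bool      # must the user supervise while engaged?

J3016_LEVELS = [
    AutomationLevel(1, system_does_complete_oedr=False, driver_must_supervise=True),
    AutomationLevel(2, system_does_complete_oedr=False, driver_must_supervise=True),
    AutomationLevel(3, system_does_complete_oedr=True, driver_must_supervise=False),  # fallback-ready user
    AutomationLevel(4, system_does_complete_oedr=True, driver_must_supervise=False),
    AutomationLevel(5, system_does_complete_oedr=True, driver_must_supervise=False),
]

for lv in J3016_LEVELS:
    oedr = "system (complete)" if lv.system_does_complete_oedr else "driver completes it"
    print(f"L{lv.level}: OEDR -> {oedr}; supervision required: {lv.driver_must_supervise}")
```

The entire disagreement in this thread comes down to that second field: whether FSDb's OEDR can ever be treated as complete.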
 
Answer my question then. Who drove my car to lunch?
You did.

If the car had crashed, you’d be legally responsible. If you had gone too fast, you’d be fined.

Same thing as when an FSD or AP car crashes and Tesla says, "Tesla is not responsible; it is very clear per the license agreement that the driver must remain attentive at all times and ready to take control." You cannot have it both ways.
 
Got a chance to try out the driverless Cruise inside of the nighttime public-availability geofence over in San Francisco. Thought I'd jot down a few impressions that stood out, especially with FSDb's typical behavior in mind:
* Came across several 4-way stop interactions with other cars; the Cruise hesitated just about as much as FSDb would in the same situations. One of the times the other car got impatient and took the right of way. One time there was literally no other car traveling in the same direction (making a right turn) and the Cruise still hesitated an extra 5-10 seconds past what was reasonable.
* I counted a few other times when the car slowed down or hesitated for no obvious reason: once at a "keep clear" sign, and a couple of times on random turns where the road was completely clear in all directions.
* Speed bumps are pretty bad - the current software brakes hard and takes every speed bump at about 8mph.
* One unprotected left with a couple oncoming cars - the Cruise kept moving into the intersection even as the first oncoming car approached. The oncoming car was too close for comfort to roll through the turn so the correct action would've been to stop fully and wait. The oncoming car honked and rightfully so. The Cruise stopped only after the first car passed, and waited correctly for the second car to pass.
* At one red light the Cruise kept jerking forward as if it was really eager to go - pretty strange
* The computer in the trunk is very loud. Also no climate control accessible to the rider (that I could find?) and the overall atmosphere inside was kind of chilly even with the 45 degree weather outside.
* Currently no way to end the trip after you get out of the car. I accidentally skipped that step and then had to mess with the route to get the car to unlock just so I could press the button on the screen (there's also no "unlock" button after pickup). Not sure what would've happened if I just left the car in that state.

Beyond the few awkward interactions, the overall steering/braking/acceleration was reasonable (other than the sudden slowdowns for speed bumps which were very rapid). Steering remained well-damped even while making jerky mid-turn corrections once in a while. Probably fine if you tune out the road and ignore what the car is doing, but definitely feels like a beta if you pay any attention.
 
Got a chance to try out the driverless Cruise inside of the nighttime public-availability geofence over in San Francisco. Thought I'd jot down a few impressions that stood out, especially with FSDb's typical behavior in mind: ...

Thanks for the report. I do think the issues you report, plus the stalls we've seen, show that Cruise's autonomous driving is just not super reliable yet. I think Cruise rushed to do driverless because they did not want to appear too far behind, and I am sure there was a lot of pressure from GM to show results. So as soon as they could do semi-reliable driverless in a limited ODD, they deployed, so they could tout that they have driverless. But I think the software still has a lot of room for improvement.
 
You're misunderstanding the definitions again

I'm not, of course.


DDT = SAE's breakdown of all the tasks related to driving (including human driving)

Levels 1 and 2: requiring the driver to constantly monitor the environment means, by definition, that SOME part of the OEDR is left to the driver. That's why it's limited OEDR: for Levels 1 and 2, the driver is EXPECTED to intervene for any missing OEDR he/she deems important.

So far so good (and no different than what I wrote)


Levels 3-5: the OEDR is "complete" *NOT* because the system can respond to every environmental situation known in the universe. It means the driver can EXPECT the system to respond to every OEDR item while the feature is enabled.

Again, not any different from what I wrote.

Tesla's FSDb system, per Tesla, lacks an OEDR that could ever be expected to respond to enough things for the system to be above L2.

And that is not expected to change in the final release of fsdb.



You have to always look at the levels through the lens of AUTOMATION and EXPECTATION, not some level 5 super-human singularity driver.

You appear to have made up an argument I didn't actually make- then told me this imaginary argument was wrong- all while agreeing with everything I actually said.

Fascinating.


NOTE: A Level 2 driver support feature is capable of only limited OEDR, meaning that there are some events that it is not capable of recognizing or responding to. Therefore, the driver supervises the feature’s performance by completing the OEDR subtask of the DDT. See Figure 2 (discussing the three primary subtasks of the DDT).


Yes. And that is literally what Tesla told us as the reason FSDb was, is, and will remain, in its final release, L2.

It must be exhausting telling me I'm wrong then writing 10 paragraphs all proving me right :)