
Autonomous Car Progress

You're correct, obviously the current product is intended to be L2 (as per all the regulatory filings, etc), but ... the software changes to make it 'L5' are pretty trivial and consist mainly of turning off the nags, etc.
Maybe. I think a lot of people would agree with you and believe that, once "safe enough," moving from L2 to L5 is just turning off steering wheel nags. But my intuition is that it's not that easy.

The structure of a system that must take complete responsibility for the driving task in all environments and conditions, including all failure modes, seems to me very likely to be different from one designed to operate as an L2 driver assist or even an L3 autonomous system. Specifically, L2 and L3 systems are designed to have the driver available and ready to take responsibility in failure modes and out-of-ODD conditions. Accordingly, they can operate at lower tolerances and drive parametrically, because they know another rules-based system (the driver) is there to take over if necessary. Even if the error rate (measured, e.g., by interventions per mile) becomes incredibly low, there will always be a mathematical possibility of error for which the system will depend on driver intervention. An L5 system, on the other hand, must have all the non-parametric (i.e., hard-and-fast) rules built in, in order to drive safely and retain responsibility in all normal and failure modes, and therefore the fundamental design seems different, IMO.
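To make the structural difference concrete, here's a toy sketch (my own illustration, with invented names and an invented threshold -- not any real vendor's architecture) of the two control loops. The point is simply that the L2 loop has a branch where responsibility exits the software, while every branch of an L5 loop must itself end in a safe vehicle state:

```python
# Toy sketch only -- invented names/threshold, not a real driving stack.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # hypothetical "safe enough" tolerance

@dataclass
class Perception:
    confidence: float  # how sure the stack is about the current scene
    in_odd: bool       # still inside the operational design domain?

def l2_step(p: Perception) -> str:
    if p.confidence < CONFIDENCE_THRESHOLD or not p.in_odd:
        return "alert driver: take over now"  # the human is the fallback
    return "execute planned trajectory"

def l5_step(p: Perception) -> str:
    if p.confidence < CONFIDENCE_THRESHOLD or not p.in_odd:
        # No human available: the system itself must reach a minimal
        # risk condition (e.g., pull over and stop).
        return "execute minimal risk maneuver"
    return "execute planned trajectory"

print(l2_step(Perception(confidence=0.5, in_odd=True)))  # alert driver...
print(l5_step(Perception(confidence=0.5, in_odd=True)))  # minimal risk...
```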
 
Bill Gates got a demo in the Wayve autonomous car. For those who don't know, Wayve is a small company developing autonomous driving based on "end-to-end" learning.


And a Reddit post by Bill Gates about the demo, confirming it did have multiple "disengagements".

How out of touch must Bill Gates be if he thinks Wayve's approach is new and different? Tesla has had this approach since 2015, and they aren't the only one. Mobileye and NVIDIA both have systems that are based on general classification and driving models, not the HD maps and pre-planned routes that Waymo and Cruise use.
 
Maybe. I think a lot of people would agree with you and believe that, once "safe enough," moving from L2 to L5 is just turning off steering wheel nags. But my intuition is that it's not that easy.

The structure of a system that must take complete responsibility for the driving task in all environments and conditions, including all failure modes, seems to me very likely to be different from one designed to operate as an L2 driver assist or even an L3 autonomous system. Specifically, L2 and L3 systems are designed to have the driver available and ready to take responsibility in failure modes and out-of-ODD conditions. Accordingly, they can operate at lower tolerances and drive parametrically, because they know another rules-based system (the driver) is there to take over if necessary. Even if the error rate (measured, e.g., by interventions per mile) becomes incredibly low, there will always be a mathematical possibility of error for which the system will depend on driver intervention. An L5 system, on the other hand, must have all the non-parametric (i.e., hard-and-fast) rules built in, in order to drive safely and retain responsibility in all normal and failure modes, and therefore the fundamental design seems different, IMO.
I don’t think that’s that big a difference. Firstly, the objective will be to get the number of scenarios that the system can’t handle as low as possible - certainly by the time the system is end-to-end feature complete you’d expect that to occur much less frequently than it does at the moment.

Then all you need to work out is what the car should do when it encounters that scenario. TBH I think the SAE taxonomy has washed its hands of this a bit, because it seems to imply that a lvl 5 system can drive everywhere under all conditions, but that’s impossible - or certainly impossible to prove, because you can’t create every single scenario, and there are lots of scenarios that no human could drive in.

So assuming that we need to temper that to ‘safely handle all scenarios’, then all you need to do is pull the car over safely and wait for the condition that prevented driving to clear - and that’s a problem that lies somewhere between Smart Summon and whatever NASA calls what its rover was doing on Mars in 2013 (i.e., purely driving the terrain with no regard for rules or road markings/signage). Smart Summon may not be working properly yet, but I don’t think there’s anything in that problem space that is very different from what FSD is now demonstrating, with the possible exception of high-granularity mapping of parking environments (and I’d posit that this is the genesis of the decision to move to vision-based parking).
 
I disagree. I'm still running 2022.44.30.10 and it has to be one of the worst regression updates I've had in a long time. I've had four sudden hard brakes on the highway at 70 mph on trips both from Atlanta and to Miami. I had friends on board, and it was so embarrassing when all this happened. Also, on one drive I wanted to show how the car reacts when there are bikers or people walking on the side of the road, but the car never veered away from them as it did in the past; it never even tried, and not just once but in three incidents with the same result. I'm not sure what is happening, or if it is my computer failing; I've had no errors, and a reboot didn't help.

A question here for the board: I have always had lag on my repeater camera, and even in reverse the picture sometimes seems to freeze for a second. I've also had issues with Spotify freezing, where I have to go to Streaming and then back to Spotify for it to work. I have hardware 3 on a 2019 Standard Range Plus. Any comments would be helpful to decide whether I have to put in a ticket to Tesla service. PS: I don't remember the last time my car recognized an open parking space.
 
Then all you need to work out is what the car should do when it encounters that scenario. TBH I think the SAE taxonomy has washed its hands of this a bit, because it seems to imply that a lvl 5 system can drive everywhere under all conditions, but that’s impossible - or certainly impossible to prove, because you can’t create every single scenario, and there are lots of scenarios that no human could drive in.

This is a misunderstanding of SAE L5.

It does not require the system to be able to drive in ALL conditions.

It requires the system to be able to drive in all conditions a human could.



So assuming that we need to temper that to ‘safely handle all scenarios’, then all you need to do is pull the car over safely and wait for the condition that prevented driving to clear - and that’s a problem that lies somewhere between Smart Summon and whatever NASA calls what its rover was doing on Mars in 2013 (i.e., purely driving the terrain with no regard for rules or road markings/signage). Smart Summon may not be working properly yet, but I don’t think there’s anything in that problem space that is very different from what FSD is now demonstrating, with the possible exception of high-granularity mapping of parking environments (and I’d posit that this is the genesis of the decision to move to vision-based parking).

If there are situations it can't handle (but it CAN entirely drive in others), then you're at L3 or L4.

The primary difference being that L3 requires a human to handle the "exiting the ODD where the system can drive" part, while L4 has a system capable of handling that and failing safely to a minimal risk condition.

I agree the failing-safely part isn't THAT hard if you trust the system to be able to normally do the entire DDT -- but you probably need more hardware redundancy than exists in Tesla's system in that case. For example, right now the entire system (both nodes) is needed to run a single instance of the full (still only L2) stack -- so a driving computer crash of a single node is an absolute failure of the system if there's no human to take over. Likewise, if you're someplace where you need to change across several lanes to be able to safely pull over, and you lost the ONE side-facing cam on the side you need to move toward -- how do you fail safely?
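As a rough sketch of that last failure case (all zone names hypothetical, purely to illustrate the logic): a minimal-risk pull-over needs sensor coverage of every zone the car must cross, so losing one non-redundant side camera can make the "fail safely" branch itself unreachable:

```python
# Hypothetical illustration: zone names and coverage rules are invented.
REQUIRED_FOR_RIGHT_PULLOVER = {"front", "right_side", "right_rear"}

def can_pull_over_right(healthy_sensors: set) -> bool:
    """True only if every zone needed to cross lanes rightward is covered."""
    return REQUIRED_FOR_RIGHT_PULLOVER <= healthy_sensors

all_cams = {"front", "left_side", "right_side", "left_rear", "right_rear"}
print(can_pull_over_right(all_cams))                   # True
print(can_pull_over_right(all_cams - {"right_side"}))  # False after one failure
```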
 
How out of touch must Bill Gates be if he thinks Wayve's approach is new and different? Tesla has had this approach since 2015, and they aren't the only one. Mobileye and NVIDIA both have systems that are based on general classification and driving models, not the HD maps and pre-planned routes that Waymo and Cruise use.

No, Tesla has not had this approach since 2015. Tesla does not do "end-to-end". In fact, according to Elon, they still have some code that needs to be moved over to NNs, so Tesla has not been doing full NN for the entire stack since 2015. AFAIK, Wayve is the only one trying to do pure end-to-end for autonomous driving. Doing general classification and driving models and not using HD maps is not the same thing as "end-to-end". "End-to-end" is a very specific approach where you have just one NN that controls the car directly from vision, with no intermediate NNs. Companies like Tesla, Mobileye, or Nvidia do general classification and driving models and don't use HD maps, but they have multiple NNs for different tasks, so it is not "end-to-end".

Also, what do you mean by "pre-planned routes"? Waymo and Cruise use HD maps, so routes are pre-mapped, but they do not pre-plan any routes; the car can pick any route within the geofence. The routes are not predetermined ahead of time.
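For anyone following along, here's a toy sketch of the distinction I'm drawing (stub functions and shapes invented purely for illustration): end-to-end is a single learned function from pixels to controls, while the modular approach chains separate networks through hand-defined interfaces:

```python
# Illustration only: stubs stand in for real (very large) networks.
import numpy as np

def end_to_end(pixels: np.ndarray) -> np.ndarray:
    """One NN: pixels in, [steering, accel] out (zero weights as a stub)."""
    weights = np.zeros((pixels.size, 2))
    return pixels.reshape(-1) @ weights

def perceive(pixels):    return []            # NN #1: detection/segmentation
def predict(objects):    return {}            # NN #2: behavior prediction
def plan(scene):         return [(0.0, 0.0)]  # planner (NN and/or classic code)
def control(trajectory): return np.array(trajectory[0])  # low-level controller

def modular(pixels: np.ndarray) -> np.ndarray:
    """Several NNs/modules, each with an interpretable intermediate output."""
    return control(plan(predict(perceive(pixels))))

frame = np.zeros((4, 4, 3))
print(end_to_end(frame), modular(frame))  # both yield [steering, accel]
```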
 
This is a misunderstanding of SAE L5.

It does not require the system to be able to drive in ALL conditions.

It requires the system to be able to drive in all conditions a human could.
I’ll grant you I based that on the summary chart and didn’t look any deeper, so you’re probably right.

That’s still a massive margin though. Which human? A rally driver at the top of their field, or grandma who probably shouldn’t be on the road but is anyway? Lots of humans start voluntarily limiting their ‘ODD’ as they get older: ‘I don’t drive at night’, etc.

Re the hardware, again you might be right (though I note that humans do not have redundant hardware either, and periodically have collisions as a result of a medical issue). I think this is where I draw a separation between the current Teslas that are on the road and the Tesla methodology and direction of travel. I have high confidence that Tesla are going to succeed with their objective at some point. I have far less confidence that a software update will turn a 2022 Y into a robotaxi. Not impossible, but much less certain.

I think a very advanced L2/3 is pretty certain; L5 requires that gambles made several years ago on hardware prove to have been correct. Of course, if it did come down to something simple like HW3 not having enough processing grunt, then Tesla could build an upgraded processing unit as a drop-in replacement.
 
I’ll grant you I based that on the summary chart and didn’t look any deeper, so you’re probably right.

That’s still a massive margin though. Which human? A rally driver at the top of their field, or grandma who probably shouldn’t be on the road but is anyway? Lots of humans start voluntarily limiting their ‘ODD’ as they get older: ‘I don’t drive at night’, etc.

We get the answer if we just read the text. J3016 defines the human as a "typically skilled human". It also lists white-out snow storms, flooded roads, and glare ice as examples of conditions that would be outside the ODD of L5. So the SAE is specifically talking about extreme conditions that most drivers would admit are not driveable. Your grandma would not be a "typically skilled driver". Also, the night is not an extreme environment that is undriveable for typically skilled humans.

Here is the relevant paragraph on page 32:

NOTE 1: “Unconditional/not ODD-specific” means that the ADS can operate the vehicle on-road anywhere within its region of the world and under all road conditions in which a conventional vehicle can be reasonably operated by a typically skilled human driver. This means, for example, that there are no design-based weather, time-of-day, or geographical restrictions on where and when the ADS can operate the vehicle. However, there may be conditions not manageable by a driver in which the ADS would also be unable to complete a given trip (e.g., white-out snow storm, flooded roads, glare ice, etc.) until or unless the adverse conditions clear. At the onset of such unmanageable conditions the ADS would perform the DDT fallback to achieve a minimal risk condition (e.g., by pulling over to the side of the road and waiting for the conditions to change).
 
No, Tesla has not had this approach since 2015. Tesla does not do "end-to-end". In fact, according to Elon, they still have some code that needs to be moved over to NNs, so Tesla has not been doing full NN for the entire stack since 2015. AFAIK, Wayve is the only one trying to do pure end-to-end for autonomous driving. Doing general classification and driving models and not using HD maps is not the same thing as "end-to-end". "End-to-end" is a very specific approach where you have just one NN that controls the car directly from vision, with no intermediate NNs. Companies like Tesla, Mobileye, or Nvidia do general classification and driving models and don't use HD maps, but they have multiple NNs for different tasks, so it is not "end-to-end".

Also, what do you mean by "pre-planned routes"? Waymo and Cruise use HD maps, so routes are pre-mapped, but they do not pre-plan any routes; the car can pick any route within the geofence. The routes are not predetermined ahead of time.
I guess I should have listened to the whole thing.
 
We get the answer if we just read the text.
Well, I did try but 'just reading the text' doesn't appear to be super obvious on their website, unless you want to sign up to the fan club. I don't suppose you have a freely downloadable link to it, do you? Seems like there was plenty of space in the box on that diagram for them to qualify 'all' a little more clearly if it was that significant.

Also, the night is not an extreme environment that is undriveable for typically skilled humans.
In light (excuse the pun) of the context you've added above, no disagreement. I think it's plausible that the cameras in the car are already capable of handling darker conditions than the software currently allows, and probably a higher degree of weather interference too. An NN can be trained to recognise what an object looks like through raindrops just the same as it can with no rain - the only hard-and-fast limitation would be where the viewable distance is limited by very heavy weather or by the camera getting fully obscured (e.g. snow settling on it).

Stepping back to where this tangent started:
Even if the error rate (measured, e.g., by interventions per mile) becomes incredibly low, there will always be a mathematical possibility of error for which the system will depend on driver intervention. An L5 system, on the other hand, must have all the non-parametric (i.e., hard-and-fast) rules built in, in order to drive safely and retain responsibility in all normal and failure modes, and therefore the fundamental design seems different, IMO.

If L5 just requires that a car handle all 'normally drivable' conditions to a standard of a typical human, but gives you the bail out that extreme scenarios do NOT have to be handled, then I remain of the opinion that it's fundamentally the same problem that Tesla are working towards now and their route forwards is fairly simple (once the totally trivial task of getting FSD finished is taken care of :D).

All you need is for the car, instead of the 'take over now' fallback, to do a 'reverse smart summon' and put itself somewhere safe if it does find itself exiting the very-broad-but-not-unlimited L5 ODD. Then the certification process (internal to Tesla at least) is just having telemetry that shows the vehicles have covered a significant number of miles in sufficiently diverse environments with no interventions.
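As a back-of-the-envelope on what "a significant number of miles" means (my numbers, purely illustrative): with zero interventions observed, the statistician's rule of three says the approximate 95% upper confidence bound on the intervention rate is 3/N per mile, so the claim you can make only tightens linearly with miles driven:

```python
# Rule of three: zero events in N independent trials gives an approximate
# 95% upper confidence bound of 3/N on the per-trial event probability.
def rate_upper_bound_95(intervention_free_miles: float) -> float:
    return 3.0 / intervention_free_miles

for miles in (1e5, 1e7, 1e9):
    print(f"{miles:.0e} clean miles -> rate <= {rate_upper_bound_95(miles):.1e}/mile")
```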

There are interesting questions about regression testing though. Let's say you certify v1 based on massive miles covered with no interventions, but then you come up with v2. What's your testing platform now that all the vehicles are driving around on v1? Automated testing only works if you can simulate the full range of inputs the system is going to face, and that (speaking as an IT professional) is really hard for systems that take complex inputs. I know they do a certain amount of testing in simulators; I wonder if they are intending to digitally replicate every disengagement scenario to enable new versions to be regression tested.
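If they did go that way, I'd imagine something like the sketch below (a hypothetical harness with invented names, not Tesla's actual rig): every logged disengagement becomes a replayable simulator case that a candidate build must clear before it can claim parity with the version that certified:

```python
# Hypothetical regression harness -- names and simulator are stand-ins.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    sensor_log: list = field(default_factory=list)  # inputs before disengagement

def simulate(candidate_build: str, scenario: Scenario) -> bool:
    """Replay logged inputs against the candidate; True = no intervention.
    Stubbed here; a real version would drive an actual simulator."""
    return True

def regression_failures(candidate_build: str, scenarios: list) -> list:
    return [s.name for s in scenarios if not simulate(candidate_build, s)]

suite = [Scenario("unprotected_left_2023_01"), Scenario("faded_lane_lines_I85")]
print(regression_failures("v2-candidate", suite))  # [] means v2 cleared the suite
```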
 
Well, I did try but 'just reading the text' doesn't appear to be super obvious on their website, unless you want to sign up to the fan club. I don't suppose you have a freely downloadable link to it, do you? Seems like there was plenty of space in the box on that diagram for them to qualify 'all' a little more clearly if it was that significant.

My apologies if I came across as a bit rude. Yes I do have a link to the full text. See if this link works: J3016_202104.pdf

I have also attached the file.
 

Attachments

  • J3016_202104.pdf (922.6 KB)
Re the hardware, again you might be right (though I note that humans do not have redundant hardware either

Sure they do.

Humans have two eyes and, more importantly, a neck that can point them in any direction (and several mirrors that help too, while we're at it).

Losing any single sensor for a human does not create a blind spot making up 1/4 or more of the drivable space around the car - but it does for these cars. Not to mention, simple rain does not create a blind spot like that for a human, because all their sensors are in the dry car interior -- while the car's cameras, all except the forward ones, are outside, exposed to the elements, and frequently get blinded by rain on their tiny non-redundant lenses.



I have high confidence that Tesla are going to succeed with their objective at some point. I have far less confidence that a software update will turn a 2022 Y into a robotaxi. Not impossible, but much less certain.

I think we mostly agree here... current HW is insufficient, but it's certainly possible the overall approach with better HW could get you there eventually. One major issue remains their promise that CURRENT HW would do it, and it's clear to many that simply won't happen at this point.


I think a very advanced L2/3 is pretty certain; L5 requires that gambles made several years ago on hardware prove to have been correct. Of course, if it did come down to something simple like HW3 not having enough processing grunt, then Tesla could build an upgraded processing unit as a drop-in replacement.

Many expected exactly that -- just as FSD owners got HW3 upgraded for free... The issue being Elon already told us upgrading people to HW4 was cost prohibitive (and the form factor we've seen for it isn't backward compatible either).
 
One major issue remains their promise that CURRENT HW would do it, and it's clear to many that simply won't happen at this point.
I sympathise with that, but all those promises predate my involvement with Tesla by a long margin and aren't technically interesting, so I leave that to others. I wish them luck though - I think there's a strong chance Tesla are going to be a highly significant player in the car market for a generation, and I don't like the precedent it sets if they can fail to deliver on commitments and get away with it.

Many expected exactly that -- just as FSD owners got HW3 upgraded for free... The issue being Elon already told us upgrading people to HW4 was cost prohibitive (and the form factor we've seen for it isn't backward compatible either).
Well, there's a middle ground where an upgrade to full HW4 might not be feasible but an in-place upgrade to a 'HW3+' might be. I know that's not on the agenda now, but if it suddenly became a make-or-break factor in delivering FSD to the millions of cars already in circulation, it is feasible. Mind you, nobody seems 100% certain exactly what hardware a HW4 3/Y/X/S is going to have out of the possibilities hinted at by the examples spotted in the wild.
 
My apologies if I came across as a bit rude. Yes I do have a link to the full text. See if this link works: J3016_202104.pdf

I have also attached the file.
Thank you, not rude at all. And I could have just signed up and downloaded it, to be fair; it just pokes one of my buttons when websites do this 'this is free and we promise that we won't abuse your data... but we still want to collect it all anyway' thing.
 
Well, there's a middle ground where an upgrade to full HW4 might not be feasible but an in-place upgrade to a 'HW3+' might be. I know that's not on the agenda now, but if it suddenly became a make-or-break factor in delivering FSD to the millions of cars already in circulation, it is feasible. Mind you, nobody seems 100% certain exactly what hardware a HW4 3/Y/X/S is going to have out of the possibilities hinted at by the examples spotted in the wild.


I think another issue is -- if we believe Tesla was ever honest -- that they really DID think HW2.x was "enough"... and then later thought HW3 was "enough"... and now think HW4 is "enough". The lesson we learn, though it has been obvious from the start, is that nobody knows how much is enough for actual self-driving until someone actually achieves it.

Which would present some basis for choosing not to continually upgrade existing FSD-bought cars until you have a real answer to that question.
 
There are so many threads about FSD, Autopilot, EAP, and autonomous cars that I'm not really sure where the best place to post this is, but here goes.

Bought my first Tesla in December. I've had a Kia Niro EV for one year. It does basic lane keep assist (worse than Tesla's, IMO) and adaptive cruise, so AFAIK it's pretty similar to Tesla's AP.

A couple of things I'd like to see changed/improved with Tesla's basic Autopilot. It is my understanding that some other companies are currently capable of these things:

1) It's very inconvenient that I can't change lanes without aggressively disengaging AP. If I want to pass someone, even if I use the turn signal, it takes so much pressure on the wheel to disengage that the car gets very jerky AND cruise control disengages too. So I then either have to reengage CC to maintain speed after making the lane change, or reengage AP for the short time it takes to pass. The whole sequence is: yank the wheel to disengage AP, lose CC, manually maintain speed, make the lane change, and reengage AP again. It's super clunky and doesn't match the overall Tesla experience. IMO it drastically detracts from the ease of driving that AP provides. I'm not asking for FSD here; bracing for the "man up and buy FSD or EAP" comments.

Seems like it shouldn't be *that* hard to allow a manual lane change after turn signal activation, while keeping CC engaged, with automatic handover back to AP once the lane change is executed. Maybe Tesla fears this would detract too much from EAP revenue. I haven't researched the market extensively, but I think Mercedes has this capability.
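To spell out the behaviour I'm asking for, here's a tiny state sketch (invented states and events, obviously not Tesla's code): the turn signal lets the driver steer the lane change while cruise stays engaged, then steering assist resumes on its own:

```python
# Invented state machine, illustrating the requested behaviour only.
TRANSITIONS = {
    ("AP_ENGAGED", "turn_signal_on"): "MANUAL_STEER_CC_HELD",    # CC stays on
    ("MANUAL_STEER_CC_HELD", "lane_change_done"): "AP_ENGAGED",  # auto handback
    ("MANUAL_STEER_CC_HELD", "brake_pressed"): "ALL_OFF",        # full cancel
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS.get((state, event), state)

state = "AP_ENGAGED"
for event in ("turn_signal_on", "lane_change_done"):
    state = next_state(state, event)
print(state)  # AP_ENGAGED: assist resumed with no re-engagement dance
```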
 
There are so many threads about FSD, Autopilot, EAP, and autonomous cars that I'm not really sure where the best place to post this is, but here goes.

Bought my first Tesla in December. I've had a Kia Niro EV for one year. It does basic lane keep assist (worse than Tesla's, IMO) and adaptive cruise, so AFAIK it's pretty similar to Tesla's AP.

A couple of things I'd like to see changed/improved with Tesla's basic Autopilot. It is my understanding that some other companies are currently capable of these things:

1) It's very inconvenient that I can't change lanes without aggressively disengaging AP. If I want to pass someone, even if I use the turn signal, it takes so much pressure on the wheel to disengage that the car gets very jerky AND cruise control disengages too. So I then either have to reengage CC to maintain speed after making the lane change, or reengage AP for the short time it takes to pass. The whole sequence is: yank the wheel to disengage AP, lose CC, manually maintain speed, make the lane change, and reengage AP again. It's super clunky and doesn't match the overall Tesla experience. IMO it drastically detracts from the ease of driving that AP provides. I'm not asking for FSD here; bracing for the "man up and buy FSD or EAP" comments.

Seems like it shouldn't be *that* hard to allow a manual lane change after turn signal activation, while keeping CC engaged, with automatic handover back to AP once the lane change is executed. Maybe Tesla fears this would detract too much from EAP revenue. I haven't researched the market extensively, but I think Mercedes has this capability.

Thanks for sharing, but you should really put this in a different thread since it is off-topic. You can start your own thread entitled "What I would like to see changed with AP". This thread is only for discussing autonomous driving from other companies (not Tesla).
 