Welcome to Tesla Motors Club

Predictions - "Automatic driving on city streets."

If the regulators follow some so far unknown logic, then of course we don’t know when that might be.

If they follow the SAE logic, then the moment would be when Tesla is testing something that has an autonomous production design intent (car responsible driving).

As long as what they are designing and testing for production are semi-autonomous driver’s aids (driver responsible driving), they are not autonomous products.

The game changer is when the software is being designed and tested for car responsible driving in production.

Personally, I think that could happen whenever Tesla reaches "L5 feature complete nogeofence" as you like to say. After all, when the software reaches that stage of being feature complete for an autonomous driving system, it will have all the required pieces needed to be an autonomous driving system prototype. So, at that point, I think it will be hard for Tesla to argue that it is still not a production intent autonomous prototype. I mean, how do you argue that it is not a production intent autonomous prototype when it has all the functioning pieces for one?
 

It depends.

If Uber really were to turn their Level 4 prototype into a subset, say a Level 2 driver’s aid for Ford, would testing that particular Level 2 product require autonomous testing? Probably not, as it would be a semi-autonomous driver’s aid in production.

Uber would still have to follow the autonomous rules for testing its Level 4 prototype, but not for testing its Level 2 driver’s aid subset of that system, as regulations for that are less strict.

So, yes, I can see Tesla internally having to follow autonomous testing rules for some of their work already today or even in 2016 as we know, but at the same time not needing to do the same for a lot of other stuff they do because it is intended for a Level 2 release only.

Of course all this could change on a dime if regulators change how things are perceived. This is simply based on SAE and current understanding of the average regulations.
 
If the regulators follow some so far unknown logic, then of course we don’t know when that might be.

If they follow the SAE logic, then the moment would be when Tesla is testing something that has an autonomous production design intent (car responsible driving).

As long as what they are designing and testing for production are semi-autonomous driver’s aids (driver responsible driving), they are not autonomous products.

The game changer is when the software is being designed and tested for car responsible driving in production.
Tesla is already testing “full autonomy with production design intent”.

They are testing cars that have exactly the same hardware as cars being sold, but just different software, so that’s not an enforceable edict.
 

I don’t think it is that simple, because in between today and that goal there are other production design intents (unlike in the case of Uber, where their Level 2 story was not plausible).

Tesla has been selling Level 5 capable hardware since late 2016 in their own words but clearly the software so far for it has been production design intended for Level 2... hence the production design intent so far is Level 2 for the entire system.

Now, I agree Tesla probably has been testing “Level 4/5” intended software internally, and for that indeed they would need the appropriate autonomous testing permits etc. But not for testing Level 2 intended features.
 
Then tell me: when does it become enforceable? They already have advanced driver assist for the city, because it lane-follows and does longitudinal acceleration on city streets. So, multiple choice:

A) Is it when they add their very next feature (e.g. stoplights)?
B) Is it when they add the very last driver assist feature?
C) Something else (you specify what that would be)?
 

Well, first of all, let me stress that it might become enforceable at any time if regulators change their mind or regulate so. :)

But assuming things go down the current path and follow the SAE logic, here is when it would become enforceable:

A) Maybe. If that system is intended for Level 5 production then yes. If that system is intended for Level 2 production driver’s aid with a comprehensive suite of semi-autonomous features then no.

B) Maybe. If Tesla turns away from developing driver’s assists, and are no longer planning driver’s aid production releases, but move entirely to autonomous development, then yes.

C) Anything developed and tested for autonomous driving is already applicable today. It is possible the Autonomy Investor Day system for example was already enforceable. Then again testing and developing Level 2 Smart Summon for example would not be — it is not intended to be autonomous in production.
 
Tesla has been selling Level 5 capable hardware since late 2016 in their own words but clearly the software so far for it has been production design intended for Level 2... hence the production design intent so far is Level 2 for the entire system.

I am not sure if "production intent" really applies to Tesla because of how they do OTA software updates. If Tesla produced cars like the traditional auto makers with yearly models, then yes, you could look at a particular model and go "it was produced with X driver assist features". For example, the traditional auto makers are selling "production intent L2 cars". The cars are produced with a particular set of clearly marked driver assist features that the customer gets when they buy the car. But Tesla does not do that. Our cars get new features with OTA software updates. And it is entirely possible that our cars could go from L2 to L3 or L4 with just software updates. Are our cars still production intent L2 if they started as L2 but are now L4 because of software update(s)? How do you define "production intent" if our cars do become L4 with software updates?

I think a better way is to simply look at the features we have compared to "feature complete". Our cars are L2 right now because we are not "feature complete". Once we are "feature complete", the cars will not be L2 anymore but will transition to autonomous prototypes. The reason "feature complete" is the border between the two is because when the car is not "feature complete" then by necessity, the driver needs to fill in the blanks, hence why it is still a driver assist. But once the cars are "feature complete", then the car will have all the pieces needed to be an autonomous prototype since it will have all the features needed to do the driving without a driver. At that point, the driver will just need to monitor for safety but won't need to actively drive anymore.
 

I disagree.

Production intent here refers to the OTA update that makes the system something. Not the hardware, which in itself means nothing.

Once Tesla develops production OTA updates intended to be autonomous, those require autonomous testing. As long as they are developing and testing semi-autonomous production OTA updates, those are not regulated as autonomous systems.

We know Tesla has internally tested software intended to be autonomous, e.g. the 2016 video, no matter how fabricated it may have been. That software so far has not been planned for any release we know of, nor is it complete at all, but its production intent is of course still clear and requires autonomous testing.

Feature complete has a completely different meaning (software development) and is irrelevant where SAE and likely regulation is concerned.
 

Well, first of all, let me stress that it might become enforceable at any time if regulators change their mind or regulate so. :)

But assuming things go down the current path and follow the SAE logic, here is when it would become enforceable:

A) Maybe. If that system is intended for Level 5 production then yes. If that system is intended for Level 2 production driver’s aid with a comprehensive suite of semi-autonomous features then no.

B) Maybe. If Tesla turns away from developing driver’s assists, and are no longer planning driver’s aid production releases, but move entirely to autonomous development, then yes.

C) Anything developed and tested for autonomous driving is already applicable today. It is possible the Autonomy Investor Day system for example was already enforceable. Then again testing and developing Level 2 Smart Summon for example would not be — it is not intended to be autonomous in production.

A) That makes no sense. Everything they’ve done is intended for Level 5 Autonomy. By that definition, lane keeping should have already been enforced because, using your words, “it is intended for Level 5 autonomy”.

That’s the whole point: Level 5 Autonomy is a combination of capabilities and increasing miles between interventions.

There’s no hard point where you can say “Ok NOW they’re shipping enough features or NOW they have so many miles between interventions that we call this Level 5 Autonomy intent”, or whatever you’re calling it.
 
Well, first of all, let me stress that it might become enforceable at any time if regulators change their mind or regulate so. :)

This is a good point - no new laws are needed in states like California - manufacturers test at the discretion of the regulators. The rules can be modified at any time by the regulators.

A) That makes no sense.

Yes, it does.

There’s no hard point where you can say “Ok NOW they’re shipping enough features or NOW they have so many miles between interventions that we call this Level 5 Autonomy intent”, or whatever you’re calling it.

The regulators can decide whatever they want, whenever they want. In California, they don’t need to wait for laws or anything else. They simply create regulations to close any gaps that exist. It’s called rule-making - the law is broad (it is intended to maintain public safety) and the regulators flesh out the details as they see fit and adopt new rules as they so choose.

In the end, the regulators decide what is the design intent and what is the vehicle autonomy level.
 
A) That makes no sense. Everything they’ve done is intended for Level 5 Autonomy. By that definition, lane keeping should have already been enforced because, using your words, “it is intended for Level 5 autonomy”.

Absolutely it makes sense.

Everything Tesla has done (AP-wise) is not intended for Level 5 autonomy. Indeed one could argue most of what they’ve done is not intended for Level 5 autonomy, or any autonomy for that matter.

Yes, I agree Tesla has stated their long-term (or next-year, pick your poison) goal is to deliver autonomous software to their car hardware platform. Level 5 no geofence and all that. Once this autonomous software is in testing, it will be treated as autonomous. We know Tesla has a permit to test it too. They have probably done some such testing, maybe even on Autonomy Investor Day.

But so, far, today, Tesla on the other hand has delivered only semi-autonomous driver’s aid software to the same car hardware platform. Level 2 features. When this software is in testing or in production, it is not treated as autonomous. See the recent comments about Smart Summon from California regulators for example.

And the same applies to everything in between that situation today and the long-term goal: as long as the software Tesla is developing is intended to be released into production as a semi-autonomous driver’s aid only (aka Level 2), it will not be treated as autonomous. (Unless regulators regulate otherwise, of course.)

So even if their distant goal is Level 5, if they are developing and testing the Automatic city driving driver’s aid for Tesla V10.1 for December 2019, that is not autonomous production design intent but Level 2 semi-autonomous, and is treated as such (unless V10.1 really is autonomous Level 5 no geofence, but that’s a different discussion).

That’s the whole point: Level 5 Autonomy is a combination of capabilities and increasing miles between interventions.

There’s no hard point where you can say “Ok NOW they’re shipping enough features or NOW they have so many miles between interventions that we call this Level 5 Autonomy intent”, or whatever you’re calling it.

You are partially mistaken here, which may explain the confusion. There absolutely is a very hard point where we can call this Level 5 autonomy intent: the manufacturer makes that determination (though of course it must be believable to the regulators, as in many regulated industries).

While Level 5 autonomy is also about capabilities, it is first and foremost about who is responsible for the drive: car or driver. And this is something the manufacturer gets to announce. As long as they state the driver is always responsible in the intended production release, they are not developing an autonomous car — and of course they cannot release that car as autonomous then either. In Tesla’s case this basically applies to each OTA update separately, as the production design intent varies from release to release.

If you make a theoretically Level 5 capable product, yet where the driver is always responsible, that is a Level 2 car — a semi-autonomous car.

Now, as I said, regulators may introduce granularity of their own over time. And there may be regional differences. But this is how SAE basically defines it and broadly speaking regulators seem to have followed those cues.
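The SAE logic argued for in this post can be condensed into a small decision rule: the level follows the declared responsibility split (who performs the driving task, who handles the fallback) and whether the design domain is unlimited, not what the hardware could theoretically do. Here is a rough sketch in Python; the class, field, and function names are invented for illustration, and the mapping is a simplification of SAE J3016, not the standard's text:

```python
from dataclasses import dataclass

@dataclass
class DesignIntent:
    """Manufacturer's declared production design intent (simplified)."""
    car_drives: bool       # system performs the sustained dynamic driving task
    car_is_fallback: bool  # system, not a human driver, handles failures
    unlimited_odd: bool    # no operational design domain (geofence) limits

def sae_level(intent: DesignIntent) -> int:
    # Rough mapping of SAE J3016 levels; the real taxonomy has more nuance.
    if not intent.car_drives:
        return 2           # driver-responsible driver's aid (L0-L2 bucket)
    if not intent.car_is_fallback:
        return 3           # car drives, but a human remains the fallback
    return 5 if intent.unlimited_odd else 4

# "Theoretically L5-capable hardware" running driver-responsible software
# still classifies as Level 2: capability alone does not set the level.
l2_release = DesignIntent(car_drives=False, car_is_fallback=False,
                          unlimited_odd=True)
assert sae_level(l2_release) == 2
```

The point of the sketch is that the classification is a function of the declared intent, so two products on identical hardware can land at different levels.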
 
The reason why Uber got into trouble claiming to be testing a Level 2 car was that it was not believable. Where was the Level 2 project at Uber and for what? They were trying to skirt autonomy testing regulation by claiming their robotaxi prototype was a Level 2 project.

On the other hand, it is perfectly believable for Tesla to be developing and testing Level 2 production releases for semi-autonomous software updates — and indeed they have done it since 2014 (starting at Level 1 though). Just now half of this sub-forum is discussing Tesla’s latest semi-autonomous release V10.

As a funny counter-example to Uber, one might argue that Tesla’s robotaxi project is in actual fact more a Level 2 project than a robotaxi. ;) But what matters more is that Tesla has several “believable” projects: shorter-term Level 2 production release intended projects (like V10) as well as longer-term Level 5 development projects (like the “2020” driverless robotaxi). Only the latter require autonomous status.
 
I disagree.

Production intent here refers to the OTA update that makes the system something. Not the hardware, which in itself means nothing.

Once Tesla develops production OTA updates intended to be autonomous, those require autonomous testing. As long as they are developing and testing semi-autonomous production OTA updates, those are not regulated as autonomous systems.

We know Tesla has internally tested software intended to be autonomous, e.g. the 2016 video, no matter how fabricated it may have been. That software so far has not been planned for any release we know of, nor is it complete at all, but its production intent is of course still clear and requires autonomous testing.

I am not sure it is that straightforward, because individual OTA features can be semi-autonomous yet at some point the whole becomes autonomous. For example, would traffic light response be an autonomous feature by itself even though our cars as a whole would not be fully autonomous? And then at what point do all the features add up to an autonomous system?

So Tesla could very well release individual OTA updates and say "this feature is not autonomous" and yet, at some point the whole will be autonomous.

Feature complete has a completely different meaning (software development) and is irrelevant where SAE and likely regulation is concerned.

It is not irrelevant at all. If "feature complete" means that the car has all the features needed to handle all dynamic driving tasks within a specified ODD, then based on the SAE definition, that "feature complete" system is an autonomous driving system.
 
Now, I agree Tesla probably has been testing “Level 4/5” intended software internally, and for that indeed they would need the appropriate autonomous testing permits etc. But not for testing Level 2 intended features.
I think the feature intent "as deployed" should be the guideline.

I think Tesla can continue to develop AP and push features to customers (things like City NOA) that are intended to be deployed as L2. But City NOA would be a good test case to see whether regulators would intervene. As long as Tesla says you have to keep your hands on the wheel, my best guess is they won't.
 
Everything Tesla has done (AP-wise) is not intended for Level 5 autonomy. Indeed one could argue most of what they’ve done is not intended for Level 5 autonomy, or any autonomy for that matter.

Claim without evidence.

Intent won’t be a factor.

I’m sure they’re going to go to an automaker and ask about every feature, e.g.:
1) “Do you ever intend for your dynamic cruise control feature to be in a production autonomous vehicle, or are you going to redesign it?”

It can’t be based on intent. Tesla has always intended their vehicles to be fully autonomous as shipped by gradually adding feature after feature. There’s no magic point where a regulator can step in and say “This is no longer driver assist. This is now an autonomous vehicle”. And it can’t be based on how they’re implementing the features, or future “intent” of that feature.

You’re trying to argue “Well they’re not going to use the same neural net architecture for lane keeping, so it’s not intent yet. But boy as soon as they change to a different neural net architecture, then the regulators will come down.”

I cannot see it happening that way.
 
This is a good point - no new laws are needed in states like California - manufacturers test at the discretion of the regulators. The rules can be modified at any time by the regulators.



Yes, it does.



The regulators can decide whatever they want, whenever they want. In California, they don’t need to wait for laws or anything else. They simply create regulations to close any gaps that exist. It’s called rule-making - the law is broad (it is intended to maintain public safety) and the regulators flesh out the details as they see fit and adopt new rules as they so choose.

In the end, the regulators decide what is the design intent and what is the vehicle autonomy level.
Laws and rules have to have logical and enforceable phrasing. For a rule to be enforceable, there would have to be a very clear demarcation between driver assist and autonomy. It cannot be the intent.

How would it even work?

Nobody’s given an answer: at what point does Tesla violate the regulation? I mean, precisely when would they violate it according to your understanding of the regulation?
 
Finally something we agree on. ;)
I think the feature intent "as deployed" should be the guideline.
I think Tesla can continue to develop AP and push features to customers (things like City NOA) that are intended to be deployed as L2.

Indeed that is what I am saying as well. If the production design intent is not autonomous (e.g. making of semi-autonomous OTA update V11 with City NoA), then it is not covered by autonomous regulation.
But City NOA would be a good test case to see whether regulators would intervene. As long as Tesla says you have to keep your hands on the wheel - my best guess is they won't.

Agreed. This touches on my second point: even with the SAE-based guidelines being clear that autonomy is a manufacturer production design intent question, it is possible regulators will intervene beyond those definitions if they see fit. This risk definitely exists. It already exists with Smart Summon; even though so far it is clearly considered semi-autonomous and not covered by autonomy regulation, a spectacular accident or two might change things.
 
Everything Tesla has done (AP-wise) is not intended for Level 5 autonomy. Indeed one could argue most of what they’ve done is not intended for Level 5 autonomy, or any autonomy for that matter.
Claim without evidence.

I would argue reality is our best evidence. Tesla so far has not produced anything autonomous, and as far as we know the next feature in the pipeline, Automatic city driving, is not autonomous either, given that Tesla expressly says so in the Design Studio. So obviously these production releases do not have an autonomous design intent.
Intent wont be a factor.

I’m sure they’re going to go to an automaker and ask for every feature E.g.:
1) “Do you ever intend for your dynamic cruise control feature to be in a production autonomous vehicle, or are you going to redesign it?”

It can’t be based on intent.

For SAE it definitely is: production design intent on who is responsible for the drive and when (car or driver). Just read their position paper. And most of the current U.S. regulation at least follows their cue. I would agree regulators may change their mind of course.
Tesla has always intended their vehicles to be fully autonomous as shipped by gradually adding feature after feature. There’s no magic point where a regulator can step in and say “This is no longer driver assist. This is now an autonomous vehicle”. And it can’t be based on how they’re implementing the features, or future “intent” of that feature.

Yes, Tesla’s long-term intent for Autopilot since late 2016 certainly has been Level 5. I agree with that. But that is not the production design intent for their current production software. So far these have not been regulated as autonomous features, except possibly some in-lab testing like the 2016 FSD video that showed autonomous production design intent, and maybe stuff like Autonomy Investor Day. We know NoA, for example, is not tested as an autonomous system, and neither is Smart Summon... why not? Because there is no current production design intent for them to be autonomous. They are driver’s aids as released to production.

It doesn’t matter currently that some future Smart Summon of 2025 could be autonomous; what matters in testing the current system is that the current system’s production design intent is not autonomous.

I also agree there is no magic point where a regulator can step in and say, from features alone, this is no longer driver’s assist. That is why the magic point currently is the manufacturer’s (believable) declaration and intent: if the manufacturer is developing and testing something intended to be produced as a car-responsible, autonomous vehicle, then the autonomous rules generally apply. If the manufacturer is developing and testing something intended to be produced as a driver-responsible driver’s aid, then the rules generally do not apply.

So if Tesla is developing and testing a Level 2 driver’s aid called Automatic city driving, intended for a V11 OTA release in, say, early 2020, that is semi-autonomous and the autonomous testing rules do not apply.

If Tesla on the other hand is testing a robotaxi prototype software intended to be driverless, the autonomous rules apply, since the eventual production design intent for that software is Level 5.

This really is how it is currently based on the SAE system.
You’re trying to argue “Well they’re not going to use the same neural net architecture for lane keeping, so it’s not intent yet.

While regulators may change their mind, and I would agree they might, that is pretty much how it is happening so far with Tesla. Of their intertwined Autopilot projects, only certain internal testing has been counted as autonomous, while other portions — that are intended to be released into production as semi-autonomous — have not been and in my view that is within the rules.
But boy as soon as they change to a different neural net architecture, then the regulators will come down.”

I cannot see it happening that way.

This would be taking it too far. The demarcation point is not the NN architecture. It is what the developed and tested system is intended to do in production as a whole.

So if Tesla puts the same NN into two different branches of development:

1. V11 OTA update software with City NoA driver’s aid
2. 2020 driverless robotaxi software Level 5

Only in the latter branch would autonomous rules apply, even if both use the same NN.
 
I am not sure it is that straightforward, because individual OTA features can be semi-autonomous yet at some point the whole becomes autonomous. For example, would traffic light response be an autonomous feature by itself even though our cars as a whole would not be fully autonomous? And then at what point do all the features add up to an autonomous system?

Very simply put: When the manufacturer declares a production car responsible for the drive (and that declaration is approved by regulators) it will become autonomous within manufacturer’s declared design domain, not a moment before that.

Same for testing: When the manufacturer declares the product in development as autonomous production design intended (and that declaration is approved by regulators), it is considered an autonomous test vehicle within the declared design domain, even if it requires a safety driver.

Otherwise semi-autonomous development, testing and production is not regulated as autonomous (again, as long as approved by regulators — the declarations have to be believable, as Uber learned the hard way, but for Tesla Level 2 production design intents are very believable).
So Tesla could very well release individual OTA updates and say "this feature is not autonomous" and yet, at some point the whole will be autonomous.

Sure and I guess some of you are arguing they are doing that already, though personally I’m not too sure anything running on HW2/2.5 today bears much if any resemblance to any future autonomous Tesla software.
If "feature complete" means that the car has all features needed to handle all dynamic driving tasks within a specified ODD then based on the SAE definition, that "feature complete" is an autonomous driving system.

No, it is not. It is an autonomous driving system once the manufacturer declares it to be. And for testing it is the manufacturer production design intent that makes the distinction.
 
To evolve the theme, having said all that about how SAE works and about regulators following its cue...

Do I think there could be a time when Tesla is so close to autonomous driving with their driver’s aids, yet still declaring them Level 2, that some regulator might take action? Of course. If they determined Tesla was trying to cheat by playing loose with production design intent, absolutely they could, as in the case of Uber, which was clearly not making a Level 2 product at all. But it seems to me we are still very far away from anything like that with Tesla. There are plenty of absolutely believable semi-autonomous production software releases in Tesla’s future where they can follow the semi-autonomous rules, like, say, that hypothetical V11 with City NoA driver’s aid.

Do I think other factors could cause regulators to take action? For sure. Some noteworthy accidents, for example, might change the conversation entirely and encourage regulators to come up with completely new rules.
 