Welcome to Tesla Motors Club

Tesla testing self driving in Arizona and Texas, not so much CA

The problem with this definition is someone could just leave out a single module and test their system as "L2" and then add the module in right before deploying.
But then they wouldn't have tested as L4 and wouldn't get regulatory approval.

Well, I could be wrong. The reason I said that is because I consider pulling over for emergency vehicles to be part of the OEDR. And based on the definition of L2, if the system is missing some OEDR, then it is L2. So it would seem to fall under L2.
Yes - it's very clear from the feature list I've posted earlier that Tesla "FSD" is missing a LOT of features needed for L4. It is L2 - and city NOA is no different from freeway NOA. Unlike Waymo/Cruise - whose design intent has been robotaxi - for all practical purposes Tesla's design intent is L2. They have diligently worked on each feature NHTSA prescribes. There is no attempt to even "follow all state laws".

The software is literally called "Full Self Driving," and robotaxi-level reliability is planned for late next year.
"FSD" is just a marketing term - should not be confused with actual design intent or functional practicality. "robotaxi level reliability" being planned for next year would be news to Karpathy :p

ps : More seriously - I'm sure the plan is to get to robotaxi level reliability - but on features that are already in Beta. We have not seen any evidence to suggest they are actually working on covering all the needed features yet.

It's a bit like Model 3. Yes, Tesla's long-term strategy was to produce Model 3 (or Bluestar) as the third iteration of their cars. But when designing the Roadster - that was not the design intent.
 
Hmm ... no. You didn't give a definition. A definition is something you can write down without examples - and it shouldn't be made to fit theories you are proposing.
Because you are correct that there isn't a clear definition except design intent. Tesla's beta FSD is the same as the prototype AV software developed by dozens of other AV companies. If you listen to the users of beta FSD it's clear that they believe they are using prototype AV software that will ultimately be capable of autonomous operation.
But again all that matters is if it's safe. Discussion of semantics and legality is silly because if there is a problem then state DMVs or the NHTSA will have no problem cracking down on testing on public roads.
"FSD" is just a marketing term - should not be confused with actual design intent or functional practicality. "robotaxi level reliability" being planned for next year would be news to Karpathy :p
I've never heard Karpathy say that. It's crazy that people can watch Tesla presentations and interviews with the CEO and come away thinking that the design intent of beta FSD is not robotaxis, or that there's some super-secret other build of the software that is the actual robotaxi software. Beta FSD is robotaxi software (that needs a lot of work!).
 
Tesla's beta FSD is the same as the prototype AV software developed by dozens of other AV companies.
It's not. I just provided the reasoning in my post above.

It's crazy that people can watch Tesla presentations and interviews with the CEO and come away thinking that the design intent of beta FSD is not robotaxis ...
It's crazy to treat what Musk says as the design intent rather than a long-term strategic goal/objective.

To me "design" is a very specific term - it says exactly how a piece of sw/hw will be made to work to output some functionality. So, when even basic functionality like pulling over when you hear/see an emergency vehicle is not on the feature list - the design intent at this time is definitely not "FSD". The current design intent is City NOA - i.e. do the bare minimum needed to go from place A to place B - and let the driver handle all "edge cases" like emergency vehicles, different speed limits based on the time of day, etc. that are listed in the feature list I've posted earlier.
 
It's crazy to treat what Musk says as the design intent rather than a long-term strategic goal/objective.

To me "design" is a very specific term - it says exactly how a piece of sw/hw will be made to work to output some functionality. So, when even basic functionality like pulling over when you hear/see an emergency vehicle is not on the feature list - the design intent at this time is definitely not "FSD". The current design intent is City NOA - i.e. do the bare minimum needed to go from place A to place B - and let the driver handle all "edge cases" like emergency vehicles, different speed limits based on the time of day, etc. that are listed in the feature list I've posted earlier.
I guess we have very different definitions of design intent. You really think all the companies officially testing autonomous vehicles have emergency vehicle response in their initial test software?
I'm curious what type of situation the writers of the SAE spec had in mind when they talk about misclassification of prototype L4-5 systems as L2?
 
Here is the key distinction: a test vehicle that cannot do the full OEDR is testing an L2 system. It was L2 to begin with. It is not an L4 system that became L2 for testing purposes.

Agreed.

This is a slippery-slope argument, and I don't think there is a bright line. NoA is certainly pushing the limits of what I would consider driver assist.

Have to disagree here. Why would it be pushing the limits? You don't give a reason why.

I would say there is always a very very clear and obvious line between driver assist and not... it's pretty dang simple.

Let's take a step back. If a company develops and releases a product that offers hands-free driving address to address, including intersections and parking and everything, but the product is very clearly designed as an L2 system that requires a driver at the wheel and is clearly advertised only as a driver-assist system - would you say such a system is pushing the limits of what you would consider driver assist?


So you're arguing that all autonomous-vehicle testing regulations are completely unnecessary and can be easily circumvented? If you look at Uber's AV testing saga in CA, they made the exact same argument and were shut down by the DMV. Of course, after that Uber killed someone in Arizona while testing their AV.

The problem with this definition is someone could just leave out a single module and test their system as "L2" and then add the module in right before deploying.

I think we are getting to the heart of this. No, I do not think regulation as such is unnecessary. But I do think particular regulations are often unnecessary, very flawed, misunderstood, and yes, easily circumvented like you say.

For Uber, that had nothing to do with regulation or lack of regulation in CA or AZ. Now, say Cali had regulations that required use of DMS, or regulations relating to the safety drivers, or time between interactions while safety driving, or the like. Then I would think differently, and I am all for regulations relating to this, for testing or for production vehicles too. However, that is entirely unrelated to the regulation we were discussing.

And you are right, a company could just remove a single module to test their system as "L2"... and this is because the idea never made any sense to begin with. It's like saying a complete system that has all the necessary modules must be regulated because it's potentially dangerous... but an incomplete system that is missing important safety modules gets a free pass and regulation is not necessary. Makes no sense.

Let's go back to my description from above: say a company is developing a driver-assist product, and has no plans to ever turn it into L4. And they clearly market it as such. And this company wants to test their system as L2... because it is L2. Would you have a problem with this?

I am starting to think that maybe you don't care whether something is driver assist or not... but simply care about testing automation in a highway environment or urban environment? Yes? Or is this an incorrect assumption?

Consider this spectrum:

Say a company wants to test a fully self-driving stack, but in shadow mode, where the vehicle does not even have by-wire control and a human is driving... should this be regulated / require a permit?
How about an ADAS system that only provides collision warnings, but is otherwise fully manual - should this be regulated and require a permit? How about an ADAS system that performs ACC? Or a system that does ACC plus stopping for stop lights? Or an ADAS system that performs lane changing, or a system like NoA? Or a system like NoA on city roads?

Where would you draw the line, and for what reason?

I am not saying there should be no regulation, but I think differentiating regulation based on feature set or long-term design intention certainly makes no sense.

To me the only thing that really matters is if it improves safety. NoA might improve safety (I'm not convinced but it is plausible) but I seriously doubt beta FSD improves safety.
All this is moot if testing AVs is no more dangerous than regular driving. Hopefully that is the case but I suspect that Tesla knows it is not and will only release a very scaled back version of the software (i.e. autosteer on city streets).

So I think our opinions here are quite aligned... I mean, I will point out that companies testing robotaxi systems with safety drivers in city environments have far better safety records than just manual driving in city environments. Tesla FSD beta being released to end users is a little different than trained safety drivers, I agree.

Hopefully that is the case but I suspect that Tesla knows it is not and will only release a very scaled back version of the software (i.e. autosteer on city streets).

I agree with you. But your statement could also be rewritten as... "I suspect Tesla knows that IT IS [less dangerous than regular driving] as long as they release it with certain constraints and limitations and other tools to keep the driver in check."
Which I think we agree they may do. However, that is the same as what they have done with every L2 feature they have launched over the years, i.e. nags, confirmations, geofence, etc.

(i.e. autosteer on city streets).

Can you clarify what you mean by this? To me, Tesla has had autosteer on city streets ever since AP1.

But it is worth noting that the SAE document clearly says that levels are assigned, not measured, based on the design intent of the manufacturer:

Page 30: "Levels are assigned, rather than measured, and reflect the design intent for the driving automation system feature as defined by its manufacturer."

Yes, thank you, this is an important point. Another reason why it makes no sense to regulate based on L2/L4... because that just lets companies decide to opt in or opt out of regulation.

"FSD" is just a marketing term - should not be confused with actual design intent or functional practicality. "robotaxi level reliability" being planned for next year would be news to Karpathy :p

Yes exactly

If you listen to the users of beta FSD it's clear that they believe they are using prototype AV software that will ultimately be capable of autonomous operation.
It's crazy that people can watch Tesla presentations and interviews with the CEO and come away thinking that the design intent of beta FSD is not robotaxis, or that there's some super-secret other build of the software that is the actual robotaxi software. Beta FSD is robotaxi software (that needs a lot of work!).

I don't think the opinions of users of FSD beta on what the system is or isn't are that relevant. They are not the ones designing the system.

and again, I still disagree and do not think Tesla 'FSD' is for robotaxis.

Sure, the company may someday try to create a robotaxi and leverage what they have learned and some data from this 'FSD'... but that doesn't matter. They would also be leveraging what they learned from AP1 and from Tesla fleet vehicles without Autopilot.

It's crazy to treat what Musk says as the design intent rather than a long-term strategic goal/objective.

This! Exactly!
Thank you
 
I guess we have very different definitions of design intent. You really think all the companies officially testing autonomous vehicles have emergency vehicle response in their initial test software?
I'm curious what type of situation the writers of the SAE spec had in mind when they talk about misclassification of prototype L4-5 systems as L2?

You're right. They certainly do not have emergency vehicle response in their initial test software. Most of these test vehicles more closely fit L2, just like Tesla FSD. But these companies, again, just do the regulatory song and dance because it's the easiest thing to do and provides them the best publicity.

The creators of the SAE spec created the SAE levels to describe products and services... not to describe testing or test protocols.
Again, this is why it makes no sense to regulate testing based on SAE-level design intent.
 
To me, Tesla has had autosteer on city streets ever since AP1.

"Autosteer on city streets" is specifically listed as an FSD feature on the website. So it is more than just using the "autosteer" part of AP on city streets. It refers to a special set of FSD features. It refers to "autosteer" being able to do more than what it does now. FSD Beta basically shows us what "autosteer on city streets" is. It is the ability of "autosteer" to do things like make turns at intersections, maneuver around double-parked cars, move over into an empty parking spot to let an oncoming car pass by, etc.
 
"Autosteer on city streets" is specifically listed as an FSD feature on the website. So it is more than just using the "autosteer" part of AP on city streets. It refers to a special set of FSD features. It refers to "autosteer" being able to do more than what it does now. FSD Beta basically shows us what "autosteer on city streets" is. It is the ability of "autosteer" to do things like make turns at intersections, maneuver around double-parked cars, move over into an empty parking spot to let an oncoming car pass by, etc.

Thanks, yeah, I knew it was listed as an upcoming feature in the FSD feature list, so I knew it meant something different, but it was always vague, and I figured it may imply those things you mention. But I guess I was wondering what features specifically Daniel meant to describe in his post, or what would and would not be included in a 'scaled-back FSD release'.
 
Let's take a step back. If a company develops and releases a product that offers hands-free driving address to address, including intersections and parking and everything, but the product is very clearly designed as an L2 system that requires a driver at the wheel and is clearly advertised only as a driver-assist system - would you say such a system is pushing the limits of what you would consider driver assist?
Yes, I would. Again, the only reason any of this matters is safety. If testing an L4 system is safe (relative to regular driving) then it's fine. I happen to believe that testing autonomous vehicles requires skilled test drivers whose performance is monitored to achieve that level of safety. I could be wrong. We'll see!
Not every automation feature is equally safe. I think that a feature that automates unprotected left hand turns would reduce overall safety unless its performance was extremely good. An adaptive cruise control feature doesn't need to work very well at all to be safe when monitored by the average human.
I am starting to think that maybe you don't care whether something is driver assist or not... but simply care about testing automation in a highway environment or urban environment? Yes? Or is this an incorrect assumption?
Yes. I think testing in an urban environment is potentially much more dangerous.
So I think our opinions here are quite aligned... I mean, I will point out that companies testing robotaxi systems with safety drivers in city environments have far better safety records than just manual driving in city environments. Tesla FSD beta being released to end users is a little different than trained safety drivers, I agree.
Yeah because the safety drivers are monitored and will get fired if they screw up. I've seen a lot of beta FSD users letting the car do illegal maneuvers just to see what will happen. I'm pretty sure test drivers of other manufacturers don't do that.
Can you clarify what you mean by this? To me, Tesla has had autosteer on city streets ever since AP1.
"autosteer on city streets" is currently the only promised future feature of FSD. I meant "autosteer on city streets" that actually works well and is officially supported (technically the owner's manual tells you not to use the current autosteer on city streets).
Where would you draw the line, and for what reason?
You're saying there is no hard line and I agree. To me it all comes down to whether extra vigilance is required to safely test the system or the system is good enough that it causes the driver to lose vigilance (my bigger long term concern).
Yes, thank you, this is an important point. Another reason why it makes no sense to regulate based on L2/L4... because that just lets companies decide to opt in or opt out of regulation.
Uber tried to claim their system was L2 in California. The DMV decided that it was subject to AV testing regulations and forced them to comply. So, no, companies can't opt out.
 
I guess we have very different definitions of design intent. You really think all the companies officially testing autonomous vehicles have emergency vehicle response in their initial test software?
I'm curious what type of situation the writers of the SAE spec had in mind when they talk about misclassification of prototype L4-5 systems as L2?
I don't care about SAE specs. It's all just theoretical.

So it's a practical question. People raising money talking about robotaxis should have their design intent at the L4/L5 level. We all know who they are. They have a design intent of autonomy - their cars are in "production" when they achieve that autonomy (whether they have safety drivers or not). So Waymo has been in production at L4 for some time. Cruise apparently just got there (and a Chinese company recently too?).

But companies that are clearly targeting higher and higher levels of autonomy - like Tesla, Mobileye (Nissan etc.), GM - have a design intent of driver assistance, maybe eventually leading to full autonomy some day. But these companies are in "production" when the features are available in the private cars of their customers.
 
Anyone think that CA might lose big if they don't make their regulations more AV-friendly? It seems we are already seeing AV companies looking at other states as possible areas for deploying AVs. We've got Tesla looking at AZ and TX for testing, and not reporting data in CA. We've got Zoox looking at Las Vegas for deploying robotaxis. Cruise is also testing in Phoenix. And of course, we know Waymo has a ride-hailing service in Phoenix.
 
By testing, I mean that Waymo has vehicles equipped with their FSD package, with a safety driver, that are driving around in autonomous mode on public roads, collecting data, refining the software etc... Those rides are not open to the public. They are strictly in-house testing. So we do not have videos of those drives.

Here is the map and info I am referring to. This is from the Waymo Safety Report released in Sept 2020:

[Map of Waymo's testing locations, from the Sept 2020 Waymo Safety Report]


Source: https://storage.googleapis.com/sdc-prod/v1/safety-report/2020-09-waymo-safety-report.pdf

PS: If you have not done so already, I would recommend reading the Safety Report. It gives great detail on all the testing Waymo does and how their FSD works.
So only locked into Cities... tsk tsk. Too bad it can't cross over into another city via interstate.
 
Stop with the "Waymo is geofenced to a tiny desert sandbox" strawman. You know full well that Waymo is testing FSD in several States across the US, almost as many as Tesla, if you look at the map Waymo put out.

And yes, there is a massive chasm between Tesla and Waymo. Waymo has true driverless, Tesla does not. As Krafcik put it, the Waymo Driver is a complete replacement for a licensed human driver. Tesla's FSD is not.

The problem is people don't bother to read the actual data.

Geofencing is required for level 4. Per both the SAE standard and CA/AZ rules.

That was the main reason the CA DMV went after Elon and Tesla last October: because Tesla does not have a permit to test self-driving functions.

Tesla legal formally confirmed it's an L2 system. That is also what led the NTSB to warn Tesla on Friday.
 
The problem is people don't bother to read the actual data.

Geofencing is required for level 4. Per both the SAE standard and CA/AZ rules.

That was the main reason the CA DMV went after Elon and Tesla last October: because Tesla does not have a permit to test self-driving functions.

Tesla legal formally confirmed it's an L2 system. That is also what led the NTSB to warn Tesla on Friday.

I read all the data.

Geofencing is not required for L4. L4 just means a limited ODD. That limited ODD can be weather-restricted, traffic-restricted, speed-restricted, etc... Geofencing is just one possible restriction - a very common one, but not the only one.

The difference between L2 and L4 is not whether one is geofenced. The difference is that L2 cannot drive in its ODD without a human. L4 can drive in its ODD without a human.
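For what it's worth, the distinction in the last two paragraphs can be written down as a toy classifier. This is only a sketch of the reading argued above - the class and field names are my own invention, not anything from the SAE J3016 text itself:

```python
# Toy sketch of the L2 vs L4 vs L5 distinction as described above.
# All names here are illustrative assumptions, not SAE terminology.
from dataclasses import dataclass

@dataclass
class DrivingFeature:
    performs_full_oedr: bool   # handles ALL object/event detection & response in its ODD
    needs_human_fallback: bool # a human driver must supervise or take over
    odd_restrictions: tuple    # e.g. ("geofence",) or ("weather", "speed") or ()

def sae_level(f: DrivingFeature) -> str:
    """Rough level assignment per the design-intent reading in this thread."""
    if not f.performs_full_oedr or f.needs_human_fallback:
        return "L2"   # driver assistance: the human completes the OEDR
    if f.odd_restrictions:
        return "L4"   # full OEDR, no human needed, but a limited ODD
    return "L5"       # full OEDR, no human needed, unrestricted ODD

# A weather- and speed-restricted system with no geofence is still L4,
# because any ODD restriction suffices - geofencing is just one kind:
robotaxi = DrivingFeature(True, False, ("weather", "speed"))
assert sae_level(robotaxi) == "L4"
```

Note that geofencing never appears in the level logic itself - it only ever shows up as one possible entry in `odd_restrictions`, which is the point being made.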

The reason Tesla does not have a permit to test FSD in CA has nothing to do with Tesla not geofencing. Tesla could easily get a permit for FSD now if they were willing to submit disengagement reports per CA regulations. Tesla says they don't need a permit because they say they are not testing FSD.