FSD rewrite will go out on Oct 20 to limited beta

It's also not legal to crash into other motorists, and I imagine some autonomous vehicles have done that before.

SAE doesn't say anything about the vehicle needing to operate in a legal manner, does it?
It's true the SAE doesn't specify how well the system has to work. It's a silly hypothetical though. If some company wanted to throw away a huge amount of money and possibly be criminally liable then I suppose they could try it.
 
It's true the SAE doesn't specify how well the system has to work. It's a silly hypothetical though. If some company wanted to throw away a huge amount of money and possibly be criminally liable then I suppose they could try it.

SAE is useful for measuring how autonomous a vehicle is, and that's pretty much it. It's definitely not useful for measuring how useful an autonomous vehicle is, or even how far along in development an autonomous vehicle is. And some corporations treat SAE as a box-ticking exercise, which is why we tend to argue over it on this forum.

I think almost everyone (except Consumer Reports) agrees that AP/FSD is more useful, and more technologically advanced than Chevy Supercruise. But because FSD beta is Level 2, and Supercruise is Level 3, a lot of people argue that Supercruise is somehow superior.
 
How does a manufacturer of self-driving technology convince regulators that their technology is Level 4/5? They won't do it just by showing lines of code; you need real-world data to convince them.

Tesla introduces FSD and tells everyone they need to keep an eye on the system, so it's Level 2. Over time the system gets updated and refined, so it needs less and less human intervention, up to the point where in practice no human intervention is recorded. Tesla collects all of this data and shows it to the authorities, who can analyse it. Tesla convinces them that the system is now so much better than a human driver that in practice no intervention has been made over many millions of miles without any accidents. The authorities, convinced by the data, tell Tesla the system no longer requires human oversight, and FSD becomes Level 4/5.

I don't see any other way a (random) Level 4/5 system will ever get approved.
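The statistical case sketched above could, purely illustratively, look something like this. Every number below is a made-up placeholder, not real fleet or crash data:

```python
# Illustrative sketch of the data-driven argument: compare a fleet's
# miles-per-intervention to a human crash-rate baseline.
# All figures here are hypothetical placeholders.

def miles_per_event(total_miles: float, events: int) -> float:
    """Miles driven per recorded event; infinite if no events occurred."""
    return total_miles / events if events else float("inf")

# Hypothetical fleet data
fleet_miles = 500_000_000   # miles driven with the system engaged
interventions = 800         # safety-relevant human takeovers recorded

# Hypothetical human baseline: one reported crash per N miles
human_miles_per_crash = 500_000

system_mpe = miles_per_event(fleet_miles, interventions)
print(f"System: one intervention per {system_mpe:,.0f} miles")
print(f"Human baseline: one crash per {human_miles_per_crash:,} miles")
print("Exceeds baseline" if system_mpe > human_miles_per_crash else "Below baseline")
```

The real argument would of course need far more care (interventions are not crashes, and the counterfactual outcome of each one matters), but this is the shape of the comparison.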
 
SAE is useful for measuring how autonomous a vehicle is, and that's pretty much it. It's definitely not useful for measuring how useful an autonomous vehicle is, or even how far along in development an autonomous vehicle is. And some corporations treat SAE as a box-ticking exercise, which is why we tend to argue over it on this forum.

I think almost everyone (except Consumer Reports) agrees that AP/FSD is more useful, and more technologically advanced than Chevy Supercruise. But because FSD beta is Level 2, and Supercruise is Level 3, a lot of people argue that Supercruise is somehow superior.
I don't understand why people downplay the significance of even Level 3. If Supercruise were Level 3 (it's not), it would be FAR superior to Autopilot for many users (including myself). I could ride from here to Los Angeles while watching a movie.
No company has ever ticked the Level 3 box. Many of us think it will be a big deal if any automaker actually does. I guess I'm in the minority around here in thinking that FSD beta looks completely useless until it's Level 3.
It's also potentially dangerous if it makes people complacent, but that's a separate issue. We'll see what happens.
 
How does a manufacturer of self-driving technology convince regulators that their technology is Level 4/5? They won't do it just by showing lines of code; you need real-world data to convince them.

Tesla introduces FSD and tells everyone they need to keep an eye on the system, so it's Level 2. Over time the system gets updated and refined, so it needs less and less human intervention, up to the point where in practice no human intervention is recorded. Tesla collects all of this data and shows it to the authorities, who can analyse it. Tesla convinces them that the system is now so much better than a human driver that in practice no intervention has been made over many millions of miles without any accidents. The authorities, convinced by the data, tell Tesla the system no longer requires human oversight, and FSD becomes Level 4/5.

I don't see any other way a (random) Level 4/5 system will ever get approved.
Level 4/5 does not require approval in many US states. It's the Wild West here.
Waymo is operating Level 4 vehicles in Arizona without approval; it's not required. Here is their recent paper on safety: https://storage.googleapis.com/sdc-...Waymo-Public-Road-Safety-Performance-Data.pdf
Basically you drive a bunch of miles with safety drivers, and then for every disengagement you simulate the counterfactual to see what would have happened. Tesla will probably do the same thing once they're ready to go driverless.
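The counterfactual loop described above could be sketched roughly like this. The outcome labels and `Disengagement` record are invented for illustration; Waymo's actual methodology is in the linked paper:

```python
# Rough sketch of counterfactual evaluation of disengagements: each
# takeover is replayed in simulation *without* the safety driver's
# intervention to see what the system would have done on its own.
# Outcome labels here are hypothetical.

from dataclasses import dataclass

@dataclass
class Disengagement:
    scenario_id: str
    counterfactual_outcome: str  # e.g. "no_contact", "minor_contact", "serious_contact"

def summarize(events: list) -> dict:
    """Tally counterfactual outcomes across all replayed disengagements."""
    counts: dict = {}
    for e in events:
        counts[e.counterfactual_outcome] = counts.get(e.counterfactual_outcome, 0) + 1
    return counts

events = [
    Disengagement("d1", "no_contact"),
    Disengagement("d2", "no_contact"),
    Disengagement("d3", "minor_contact"),
]
print(summarize(events))  # {'no_contact': 2, 'minor_contact': 1}
```

The point of the exercise: a disengagement only counts against the system if the replay shows it would actually have led to contact.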
Cruise and Waymo are approved to operate Level 4 (no safety driver!) in California.
 
How does a manufacturer of self-driving technology convince regulators that their technology is Level 4/5?
I would guess that, at least for initial steps, it would be similar to Level 2 features, with regulators wanting a demo. It looks like the California DMV wanted demos, then improvements, then more demos, according to the dialog in Emails between Tesla and CA DMV on Smart Summon, FSD.

Even if things are legally allowed without pre-approval, regulators may be quick to shut things down, especially if they feel they were not part of the discussion. From Tesla's Full Self-Driving mode under the watchful eye of NHTSA - Roadshow: NHTSA "will not hesitate to take action to protect the public against unreasonable risks to safety." And probably because Tesla is headquartered in California, with many Teslas on the roads there, the CA DMV will probably be ready to cancel things if necessary.
 
Tesla introduces FSD and tells everyone they need to keep an eye on the system, so it's Level 2. Over time the system gets updated and refined, so it needs less and less human intervention, up to the point where in practice no human intervention is recorded. Tesla collects all of this data and shows it to the authorities, who can analyse it. Tesla convinces them that the system is now so much better than a human driver that in practice no intervention has been made over many millions of miles without any accidents. The authorities, convinced by the data, tell Tesla the system no longer requires human oversight, and FSD becomes Level 4/5.

I don't see any other way a (random) Level 4/5 system will ever get approved.


So there are a few misunderstandings here.

For one, a system doesn't magically "become" L3 or higher when it needs few or no interventions.

It becomes the higher level when the designer/manufacturer of the system attains and attests to the functionality and requirements of that level.


Right now, Tesla's system isn't L2 merely because Tesla "says" it is.

It's explicitly programmed to check for driver interaction with the wheel... and it's specifically programmed under some conditions to command the driver to immediately take full control of the car.

Even if it drove around with zero interventions for 5 years in that state, it would still be L2, because it would still be programmed to expect and require a human to always be able to take over instantly. Any system that ever requires instant takeover from a human, no matter how long between requests, is at most L2 by definition of the standard.

Only when they remove that requirement could they classify it any higher. (A system that ever requires human takeover after some non-immediate but brief warning would be L3, for example.)

For L4 specifically they'd need to remove any need ever for a human to take over- including the vehicle being able to always safely park itself if it finds itself unable to drive for some reason.

L5 would be L4, but never unable to drive (short of mechanical failure of the vehicle, the road collapsing, etc.).
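The distinctions above can be written as a simple decision rule over a system's design properties. This is a paraphrase for illustration, with invented field names; SAE J3016 itself is the authoritative source:

```python
# Toy decision rule paraphrasing the J3016 distinctions described above.
# Field names are invented; this is not the standard's own wording.

from dataclasses import dataclass

@dataclass
class DesignProperties:
    demands_immediate_takeover: bool     # driver may have to take over instantly, ever
    demands_takeover_with_warning: bool  # takeover needed, but after a brief warning
    limited_operational_domain: bool     # only drives in some conditions (its ODD)

def sae_level(p: DesignProperties) -> int:
    if p.demands_immediate_takeover:
        return 2  # driver must supervise at all times -> at most L2
    if p.demands_takeover_with_warning:
        return 3  # driver is the fallback, with notice
    if p.limited_operational_domain:
        return 4  # no human fallback needed, but restricted domain
    return 5      # drives anywhere a human could

# A system that can ever demand instant takeover is at most L2,
# no matter how rarely it actually does so:
print(sae_level(DesignProperties(True, False, False)))   # 2
print(sae_level(DesignProperties(False, False, True)))   # 4
```

Note that the rule depends only on what the design requires of the human, not on how often interventions actually happen, which is exactly the point being made above.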

The other misunderstanding is around "the regulators"- more on that below-

I would guess that, at least for initial steps, it would be similar to Level 2 features, with regulators wanting a demo

The "regulators" thing is a red herring though. It's already legal in half a dozen states to run an L5 car.

Nobody does- because nobody has one.

Waymo has an L4, and they do run it in one place.


Even if things are legally allowed without pre-approval, regulators may be quick to shut things down especially if they feel they were not part of the discussion

That makes no sense.

This is regulated at the state level.

Those states were part of the discussion when they passed laws approving self-driving cars to operate right now, with no further approvals needed.


These are states that went out of their way to write, and pass, laws specifically to permit this. Immediately when such cars are ready.

They wished to get out ahead of the Californias of the world, which want to regulate things to a crawl.


If one actually had a self-driving car and wished to prove it worked and was safe, there are no hoops to jump through at all.

Just put them on the road in those states where it's already legal. And wait.

After a while of them working great (assuming you were smart enough not to sell them before they worked great), you'd have people in OTHER states demanding THEIR states make them legal.

That's a problem that solves itself.



And probably because Tesla is headquartered in California with many Teslas on the roads there, CA DMV will probably be ready to cancel things if necessary.

The CA DMV has literally no authority of any kind to "cancel" anything outside the state's borders.
 
No, my examples were of extreme capability differences within a single level

Which is why it's clear you don't understand, and appear not even to have read, the SAE standards.

They are levels of automation.

Not "how extreme is your capability for other things"


You keep being mad at a standard without having made any apparent effort to understand what it's even measuring.

Waymo's cars in AZ are Level 4 because they offer a higher, more advanced level of automation than Tesla's Level 2 offering.


If you want to measure something other than automation, stop being irrationally upset that a measure of automation isn't doing it for you.



…so level alone is meaningless for gauging progress. That's my point.


And your point continues to be clearly and factually wrong.

Level alone has meaning for gauging the progress of how automated the system is.

That's literally what the standard is for. And it does that just fine.

Each level is significantly more automated than the one below it, and inherently conveys specific information about what being at that level means if you care to understand it.
 
Level alone has meaning for gauging the progress of how automated the system is.

Yes, that's the only thing the levels are good for, and perhaps all they were designed for. Again and again: they're not good for gauging the developmental progress of FSD, but that's what a lot of people use the levels for. Maybe I'm being too harsh in saying the levels are stupid, since they're only designed for one aspect of autonomy, which is who's responsible.
 
Level alone has meaning for gauging the progress of how automated the system is.

But it really doesn't. Just look at how Tesla does their FSD testing in California. They turn the driver monitoring/nags on, making it an L2 system, so they don't have to report disengagements to the state. Then, when they're ready or want to film a demo, they turn the driver monitoring/nags off, and it's suddenly an L4/L5 system and they have to report disengagements. (It's the same software, likely with just a single configuration bit changed.)

So the level can't be used to gauge progress. (An L2 system can be a perfect L5 system with just the nag bit turned on.)
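If it really is just a configuration switch, which is the poster's speculation rather than a documented Tesla fact, the claimed situation amounts to something like this. Nothing here reflects Tesla's actual code:

```python
# Hypothetical illustration of the "single configuration bit" claim:
# the same driving stack, with only the supervision requirement toggled.
# This only makes the poster's speculation concrete; it is not Tesla's code.

from dataclasses import dataclass

@dataclass
class AutonomyConfig:
    require_driver_monitoring: bool  # the hypothetical "nag bit"

def effective_sae_level(cfg: AutonomyConfig) -> int:
    # Identical driving software either way; only the declared role of
    # the human (and with it, the reporting obligations) changes.
    return 2 if cfg.require_driver_monitoring else 4

print(effective_sae_level(AutonomyConfig(require_driver_monitoring=True)))   # 2
print(effective_sae_level(AutonomyConfig(require_driver_monitoring=False)))  # 4
```

Whether the level really flips with the bit is contested later in the thread: the J3016 quote below argues classification follows design intent, not the presence of a supervising test driver.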
 
Yes, that's the only thing the levels are good for and perhaps designed for.


Right!

So your continual railing against the fact that they are good for literally the thing they were designed to measure continues not to make any sense.

It's like being mad at a yardstick because it doesn't tell time very well.


Again and again, it's not good for gauging developmental progress of FSD

If you're looking for progress of FSD in terms of the advance of autonomy specifically, it's a perfectly good measure.

Again that's literally what it's for.

If you're trying to gauge something entirely different, like how few times a definitely-Level-2 system that ALWAYS requires a human to be available and alert needs disengagement, then it's a poor measure of that. But it was never meant to be a measure of that in the first place.


Personally I care drastically less about how many L2 disengagements there are than "Is it L3 on highways yet"

Because that first one doesn't really benefit me. Like, at all.

In L2 I'm ALWAYS having to actively pay attention no matter how "good" the system is, so the amount of good beyond "pretty decent" isn't significant.

But being able to read a book while the car is driving me somewhere is a huge leap in progress.


So the L2 vs L3 distinction is a perfect gauge of progress for the thing I'm interested in.

Likewise L3->L4 is huge, because now I can sleep while it's driving me someplace. I don't even need to be in the front seat now, I can have a bed in the back with the seats folded down.

That's far more significant progress than "Only disengages on L2 left turns 0.1% of the time now instead of 0.5%!" in my book.





Maybe I'm being too harsh for saying the levels are stupid, since they're only designed for one aspect of autonomy, which is who's responsible.


I think that might be correct :)




But it really doesn't. Just look at how Tesla does their FSD testing in California.

You mean virtually not at all?

The only autonomous testing Tesla has ever done in CA was the 500+ miles over the same route, again and again, to make the Fake News FSD video from 2016 (with a TON of disengagements), plus the much smaller amount done in 2019 for the Autonomy Day video.

Both were L3 (we know that both from the Tesla/CA DMV communications and from the "for legal reasons" disclaimer about the driver being needed... L3 is the highest level where a driver is still needed).



They turn the driver monitoring/nags on making it a L2 system so they don't have to report disengagements to the state

Nope.

We now know that for a fact, having seen the FSD beta pushed to the fleet, which Elon claimed was only "days" behind the most advanced version he personally runs.

And it's not remotely close to L3-or-higher capability yet over anything but a planned/fixed (possibly pre-gamed for the second demo video) route.

In fact, it's so far from that that Elon specifically told folks like 3 days ago that they need to be even more careful with the latest build than folks using regular FSD.
 
SAE J3016 said:
The level of a driving automation system feature corresponds to the feature’s production design intent. This applies regardless of whether the vehicle on which it is equipped is a production vehicle already deployed in commerce, or a test vehicle that has yet to be deployed. As such, it is incorrect to classify a level 4 design-intended ADS feature equipped on a test vehicle as level 2 simply because on-road testing requires a test driver to supervise the feature while engaged, and to intervene if necessary to maintain safe operation.
Having safety-driver monitoring means nothing for the classification.
I still say it's a level 5 prototype. Robotaxis = Level 4. I will die on this hill. :p

But it really doesn't. Just look at how Tesla does their FSD testing in California. They turn the driver monitoring/nags on making it a L2 system so they don't have to report disengagements to the state. Then when they are ready, or want to film a demo, they turn the driver monitoring/nags off, and it is suddenly a L4/L5 system and they have to report disengagements. (It is the same software with likely just a single configuration bit changed.)

So the level can't be used to gauge progress. (A L2 system can be a perfect L5 system just with the nag bit turned on.)
Uber tried the exact same thing and failed.
Uber dismissed warnings about its illegal self-driving test for months, emails show
This is why I keep saying that Tesla needs to monitor their beta testers more carefully before there's an issue and the DMV cracks down.
 
Uber tried the exact same thing and failed.
Uber dismissed warnings about its illegal self-driving test for months, emails show
This is why I keep saying that Tesla needs to monitor their beta testers more carefully before there's an issue and the DMV cracks down.

That's not accurate, even going by your own link.

For one, Uber was picking up public passengers in company-owned/run cars.

Tesla is not.

For another, Uber was demoing the autonomous capabilities of the cars (including to the press) while telling the CA DMV there were no such capabilities in the first place and that all those cameras and other sensors were just for safety systems.


Tesla, in contrast, is going well out of its way to tell everyone involved, including the drivers themselves, that this is NOT an autonomous system and that you need to be even more alert and prepared to control the car than with previous versions of the system... (and indeed many of the videos posted make it clear the driver needs to remain alert at all times)


That said, the CA DMV's definitions of autonomy are not precisely the same as the SAE's, so there's a bit of topic drift there.
 
That's not accurate, even going by your own link.

For one, Uber was picking up public passengers in company-owned/run cars.

Tesla is not.

For another, Uber was demoing the autonomous capabilities of the cars (including to the press) while telling the CA DMV there were no such capabilities in the first place and that all those cameras and other sensors were just for safety systems.


Tesla, in contrast, is going well out of its way to tell everyone involved, including the drivers themselves, that this is NOT an autonomous system and that you need to be even more alert and prepared to control the car than with previous versions of the system... (and indeed many of the videos posted make it clear the driver needs to remain alert at all times)


That said, the CA DMV's definitions of autonomy are not precisely the same as the SAE's, so there's a bit of topic drift there.
Picking up passengers has nothing to do with it. That's an entirely different regulatory body in California (the CPUC allows robotaxis but did not allow them to charge; that may have changed, though).
It doesn't matter who owns the vehicle; people beta testing FSD are designees.
The presence of a natural person who is an employee, contractor, or designee of the manufacturer in the vehicle to monitor a vehicle's autonomous performance shall not affect whether a vehicle meets the definition of autonomous test vehicle.
https://www.dmv.ca.gov/portal/uploads/2020/06/Adopted-Regulatory-Text-2019-1.pdf
Every autonomous vehicle company tells their safety drivers to be alert and prepared to control the car!
Testing autonomous vehicles is tricky to do safely; that's why Tesla and other AV companies emphasize vigilance.
Anyway, I think it meets the SAE definition too, since the design intent is clearly Level 5. It requires monitoring only because it's still in beta.
 
Picking up passengers has nothing to do with it.

It does though, because that's where all the videos came from showing it operating autonomously, putting the lie to Uber's claims.



https://www.dmv.ca.gov/portal/uploads/2020/06/Adopted-Regulatory-Text-2019-1.pdf
Every autonomous vehicle company tells their safety drivers to be alert and prepared to control the car!
Testing autonomous vehicles is tricky to do safely; that's why Tesla and other AV companies emphasize vigilance.
Anyway, I think it meets the SAE definition too, since the design intent is clearly Level 5. It requires monitoring only because it's still in beta.


The design intent is absolutely not level 5 for Tesla.

This was already called out pages ago- challenging anyone to cite anything from Tesla, ever, that made this claim.

And nobody came up with anything. Not only is there nothing to come up with; Tesla's own FSD page explicitly makes clear that their aspirational intent is L4, not L5.

Beyond that, the current system can't even meet the definition of SAE L3 capability: there is no time when you can safely NOT be actively monitoring the vehicle, let alone the higher levels.

Folks keep confusing "intent" as SAE uses it (what the system you are currently testing is intended to be capable of RIGHT NOW) with future aspirational intent, as in "we hope that some day, after vastly changing and improving it, it might do other things it can't do now."




Further, from your CA doc:


Your source said:
An autonomous test vehicle does not include vehicles equipped with one or more systems that provide driver assistance and/or enhance safety benefits but are not capable of, singularly or in combination, performing the dynamic driving task on a sustained basis without the constant control or active monitoring of a natural person


Tesla's current system, including FSD Beta, is not capable of performing the dynamic driving task on a sustained basis without the constant control or active monitoring of a natural person.

Therefore it is not included in the definition of an autonomous test vehicle per your own source.

They'd LIKE it to be capable of those things someday.

Today is not that day.

Tomorrow isn't looking good either.
 
The design intent is absolutely not level 5 for Tesla.

This was already called out pages ago- challenging anyone to cite anything from Tesla, ever, that made this claim.

Perhaps Tesla didn't make the exact claim you had in mind, but this is close enough for the layperson.

Autopilot

Full Self-Driving Capability
All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.

All you will need to do is get in and tell your car where to go. If you don’t say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar. Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed. When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself. A tap on your phone summons it back to you.

The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving capabilities are introduced, your car will be continuously upgraded through over-the-air software updates.
 