Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Any communication from Tesla that doesn't use the correct terminology is truly just spouting marketing mumbo-jumbo. It's in stark contrast to something like Mercedes' page about Drive Pilot, which has the correct language everywhere you'd expect it.

 
No OEM offers anything remotely like that other than Tesla.
Thank god. L2 on city streets is a fairly useless value proposition. Less safe, less comfortable, more stressful, and slower than driving yourself.
You can twist this how you like, but "supervised autonomy" is an oxymoron. Especially in an automotive context. If something needs constant supervision it's not autonomous.
 
You can twist this how you like, but "supervised autonomy" is an oxymoron. Especially in an automotive context. If something needs constant supervision it's not autonomous.
I'll play Elon's silly definition game for a minute. In a broad sense a baby has autonomous movement but requires constant supervision.

Just because it's autonomous doesn't mean it's any good. Babies are also feature complete, they just need more training.

Of course it's a stupid analogy, but who knows how Elon's mind works.
 
No it doesn't. It's an oxymoron. Autonomy literally means it doesn't need any supervision. The correct industry jargon is "supervised automation", which is a fancy term for L2.
Autonomy is about the ability to make decisions.

Otherwise you'd have to say that FSD is operating autonomously when the driver is distracted, but not when they are watching even though the vehicle behavior is unchanged.
 
Thank god. L2 on city streets is a fairly useless value proposition. Less safe, less comfortable, more stressful, and slower than driving yourself.
You can twist this how you like, but "supervised autonomy" is an oxymoron. Especially in an automotive context. If something needs constant supervision it's not autonomous.
OK, Capt. Semantics. After numerous posts about terminology, I think we got your point.
 
I'll play Elon's silly definition game for a minute. In a broad sense a baby has autonomous movement but requires constant supervision.

Just because it's autonomous doesn't mean it's any good. Babies are also feature complete, they just need more training.
I love this analogy, since the baby won't be able to drive without supervision for 14 years. :)
 
I'll play Elon's silly definition game for a minute. In a broad sense a baby has autonomous movement but requires constant supervision.

Just because it's autonomous doesn't mean it's any good. Babies are also feature complete, they just need more training.

Of course it's a stupid analogy, but who knows how Elon's mind works.
Holy sh$$, Dan, you made me do a spit take - I LOVE that analogy. 😂
 
The SAE levels DO NOT describe capability, reliability, error rate or lack thereof. It does not assume a particular path of development.

SAE level describes who is responsible for what and when.

Is the human responsible for driving at all times? L0 to L2
Is the car responsible for driving within the manufacturer-described ODD, but requires the driver as a fallback when requested? L3
Is the car responsible for ALL driving within the manufacturer-described ODD, including the safety fallback? L4
Is the car responsible for ALL driving in ALL reasonably expected conditions in which an average driver could also drive? L5

That is all the SAE levels talk about. A door-to-door L2 system can in practice be as reliable and capable as an L4 system, but as long as the manufacturer states you are responsible for driving at ALL times, it is still an L2 system. What you as an end user think does not matter; the manufacturer has to state what the role of the end user is when the system is in operation.
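To make the post above concrete: the levels can be read as a simple lookup from level to responsible party, with capability nowhere in the mapping. A minimal Python sketch (the wording is paraphrased from this post, not quoted from SAE J3016):

```python
def responsible_party(level: int) -> str:
    """Return who is responsible for the driving task at a given SAE level.

    Descriptions paraphrase the post above; they are not SAE J3016 text.
    """
    if level < 0 or level > 5:
        raise ValueError("SAE levels run 0-5")
    if level <= 2:
        # L0-L2: the human supervises and is responsible at all times
        return "human driver, at all times"
    if level == 3:
        return "system within its ODD; human is fallback when requested"
    if level == 4:
        return "system within its ODD, including fallback"
    # L5: no ODD restriction
    return "system in all conditions an average driver could handle"
```

Note that nothing here measures reliability or features; two systems with identical behavior can land on different levels purely because the manufacturers assign responsibility differently.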


They clearly give and have always given examples of increasing capability by level.
 
Autonomy is about the ability to make decisions.

Otherwise you'd have to say that FSD is operating autonomously when the driver is distracted, but not when they are watching even though the vehicle behavior is unchanged.
No. Most people would say autonomy is 99% about reliability. In a supervised setting like L2, it's 95% about capability. Going from "holy sh$$, this mostly works 80% of drives" to "sigh, when are we there?" IS the problem. It might be hard to understand, but that's how it is.

What do you think Waymo has been up to the last eight years?
 

They clearly give and have always given examples of increasing capability by level.
Examples of features. The levels and SAE J3016 do not tell you how capable the system is; that is up to the manufacturer to define. The higher you go in the taxonomy, the less human supervision. That is literally what it is about. The example features are emblematic of who is responsible for what.

That is why BMW can have Advanced Traffic Jam Assist and Mercedes can have Traffic Jam Pilot, both functionally using the same Lane Keep Assist and Adaptive Cruise Control features, yet BMW's system is L2 while Mercedes' is L3.

SAE J3016 (everyone talking about this should read the document)

Page 1
As in the previous version, it provides a taxonomy describing the full range of levels of driving automation in on-road motor vehicles and includes functional definitions for advanced levels of driving automation and related terms and definitions. This document does not provide specifications, or otherwise impose requirements on, driving automation systems (for further elaboration, see 8.1). Standardizing levels of driving automation and supporting terms serves several purposes, including:

1. Clarifying the role of the (human) driver, if any, during driving automation system engagement
The point of the levels is to define who is responsible for performing the DDT; hence, from L3 and up the manufacturer must take responsibility when the system is engaged. The levels don't describe how capable the feature is, don't define a development path, and aren't a technical specification.

Page 36
8.1 This document is not a specification and imposes no requirements. This document provides a logical taxonomy for classifying driving automation features (and ADS-equipped vehicles), along with a set of terms and definitions that support the taxonomy and otherwise standardize related concepts, terms and usage in order to facilitate clear communications. As such, it is a convention based upon reasoned agreement, rather than a technical specification.

By itself, this document imposes no requirements, nor confers or implies any judgment in terms of system performance. Therefore, while it may be appropriate to state, for example, that a given ADS feature does not meet the definition of Level 4 because it occasionally relies on a remote fallback-ready user to perform the fallback (and is therefore a Level 3 feature), it is not appropriate to conclude that the feature in question is therefore “non-compliant” or “unsafe.”

8.2 Levels are Assigned, Rather than Measured, and Reflect the Design Intent for the Driving Automation System Feature as Defined by its Manufacturer

The level of a driving automation system feature corresponds to the feature’s production design intent. This applies regardless of whether the vehicle on which it is equipped is a production vehicle already deployed in commerce, or a test vehicle that has yet to be deployed. As such, it is incorrect to classify a Level 4 design-intended ADS feature equipped on a test vehicle as Level 2 simply because on-road testing requires a test driver to supervise the feature while engaged, and to intervene if necessary to maintain operation.

8.3 Level Assignments are Nominal, Rather than Ordinal, and are Never Fractional

While numbered sequentially 0 through 5, the levels of driving automation do not specify or imply hierarchy in terms of relative merit, technology sophistication, or order of deployment. Thus, this taxonomy does not specify or imply that, for example, Level 4 is “better” than Level 3 or Level 2
 
Calling it "Supervised Autonomy" is another slight tweak to the narrative that should be interpreted as a fancy way of saying "get ready to deal with FSD as a L2 ADAS for the foreseeable future." This has been the gradual and frankly sneaky shift away from Robotaxis right around the corner while trying to not rile up too many customers.

Tesla's goal has always been to get this out to the full fleet as a L2 ADAS, then some new similarly-iterative process will begin with the goal of producing something L3+. Autosteer on City Streets will hit the wider fleet, it'll have a Beta tag, it may or may not still have the safety score, and it'll require constant supervision regardless of capabilities. Tesla has no idea what a L3+ system/vehicle or any form of "unsupervised autonomy" looks like right now.


I think you'd be surprised how many other companies are at a similar level but just don't deploy the capabilities while making the driver responsible; they're far more conservative in their approach. But turning on the functions, like having the system attempt to steer through sharp curves, is likely as easy as going into a dev menu and clicking some buttons. Autopilot/FSD have these menus; other systems have them too.

Tesla tries to do everything passably and makes the driver responsible, while believing that mass data collection on public roads is the path to solving this. Other companies don't want that; they want the car to perform flawlessly, so they limit the functionality to the narrow range of functions it can do all but perfectly.
 
No. Most people would say autonomy is 99% about reliability. In a supervised setting like L2, it's 95% about capability. Going from "holy sh$$, this mostly works 80% of drives" to "sigh, when are we there?" IS the problem. It might be hard to understand, but that's how it is.

What do you think Waymo has been up to the last eight years?
Going from "My car can steer itself on the highway. But it's not perfect so I have to pay attention so I can stop it making mistakes, and I have to drive it when I'm not on a highway." to "I put in my destination and my car drives me all the way there. But it's not perfect so I have to pay attention so I can stop it making mistakes" also represents a huge leap in functionality from the user's perspective, especially as error rates decrease. Not society-changing like L4 but certainly life-changing.
 
On the bright side, I'm sure Tesla will eliminate the "FSD will do the worst thing at the worst possible time" warning by 12/31/2022, the date by which Elon has said FSD will be "completed". Surely he would not consider something that does the worst thing at the worst time to be "completed", right?

"Tesla CEO Elon Musk said during the Q4 2021 Earnings Call that he is confident the company’s Full Self-Driving suite will be finished by 2022."
 
Calling it "Supervised Autonomy" is another slight tweak to the narrative that should be interpreted as a fancy way of saying "get ready to deal with FSD as a L2 ADAS for the foreseeable future." This has been the gradual and frankly sneaky shift away from Robotaxis right around the corner while trying to not rile up too many customers.

Tesla's goal has always been to get this out to the full fleet as a L2 ADAS, then some new similarly-iterative process will begin with the goal of producing something L3+. Autosteer on City Streets will hit the wider fleet, it'll have a Beta tag, it may or may not still have the safety score, and it'll require constant supervision regardless of capabilities. Tesla has no idea what a L3+ system/vehicle or any form of "unsupervised autonomy" looks like right now.


I think you'd be surprised how many other companies are at a similar level but just don't deploy the capabilities while making the driver responsible; they're far more conservative in their approach. But turning on the functions, like having the system attempt to steer through sharp curves, is likely as easy as going into a dev menu and clicking some buttons. Autopilot/FSD have these menus; other systems have them too.

Tesla tries to do everything passably and makes the driver responsible, while believing that mass data collection on public roads is the path to solving this. Other companies don't want that; they want the car to perform flawlessly, so they limit the functionality to the narrow range of functions it can do all but perfectly.
Shifting the naming would have the benefit of eliminating the whining about the name Full Self-Driving. If "Supervised" is in the name, there's no confusion that it's L2. (People even took issue with the name Autopilot, even though it acted like an autopilot system, because people say "I was on autopilot" when they actually mean "I was the autopilot".)
 
On the bright side, I'm sure Tesla will eliminate the "FSD will do the worst thing at the worst possible time" warning by 12/31/2022, the date by which Elon has said FSD will be "completed". Surely he would not consider something that does the worst thing at the worst time to be "completed", right?

"Tesla CEO Elon Musk said during the Q4 2021 Earnings Call that he is confident the company’s Full Self-Driving suite will be finished by 2022."
You understand you're not quoting a quote, right?