There would be no reason to redefine SAE levels just because Tesla can’t do FSD Beta successfully in those tiers.

While L2 and L3 may be impractical for general-purpose AVs, they're well-defined for limited-scope systems. L2 highway assist features are available on many cars, and L3 seems to be coming first to traffic-jam scenarios.
I think the problem is that the transition from level 2 to level 3 is rather fraught for humans. The machine is capable enough to do many/most things well and thereby give a false sense of trust that in turn leads to complacency. Then it makes a mistake.

Related to that is the human tendency to look at mistakes and say “a person never would have made that mistake!” while ignoring all the things the machine did right and all the mistakes it avoided.
 
I think the problem is that the transition from level 2 to level 3 is rather fraught for humans. The machine is capable enough to do many/most things well and thereby give a false sense of trust that in turn leads to complacency. Then it makes a mistake.

Related to that is the human tendency to look at mistakes and say “a person never would have made that mistake!” while ignoring all the things the machine did right and all the mistakes it avoided.
L3 does not require driver attention or supervision.
 
There would be no reason to redefine SAE levels just because Tesla can’t do FSD Beta successfully in those tiers.

While L2 and L3 may be impractical for general-purpose AVs, they're well-defined for limited-scope systems. L2 highway assist features are available on many cars, and L3 seems to be coming first to traffic-jam scenarios.
I think the point he was trying to make is that NHTSA may say that humans cannot be relied on to focus properly for L2 and L3, no matter which manufacturer creates it. The argument is that when the car starts driving itself, humans may naturally tune out and over-rely on it. We'll treat it like we're in an L4 car when we're really not, and if the car suddenly needs us to take over, which is very possible with L2 and L3, we won't be paying enough attention and an accident may occur.
 
I think the point he was trying to make is that NHTSA may say that humans cannot be relied on to focus properly for L2 and L3, no matter which manufacturer creates it. The argument is that when the car starts driving itself, humans may naturally tune out and over-rely on it. We'll treat it like we're in an L4 car when we're really not, and if the car suddenly needs us to take over, which is very possible with L2 and L3, we won't be paying enough attention and an accident may occur.
Again, L3 does not require driver attention.
 
Again, L3 does not require driver attention.
My bad, Dan. I didn't read the SAE levels correctly; L3 indeed does not require constant human attention:

Level 0 – No Driving Automation: The performance by the driver of the entire DDT. Basically, systems under this level are found in conventional automobiles.
Level 1 – Driver Assistance: A driving automation system characterized by the sustained and ODD-specific execution of either the lateral or the longitudinal vehicle motion control subtask of the DDT. Level 1 does not include the execution of these subtasks simultaneously. It is also expected that the driver performs the remainder of the DDT.
Level 2 – Partial Driving Automation: Similar to Level 1, but characterized by both the lateral and longitudinal vehicle motion control subtasks of the DDT, with the expectation that the driver completes the object and event detection and response (OEDR) subtask and supervises the driving automation system.
Level 3 – Conditional Driving Automation: The sustained and ODD-specific performance by an ADS of the entire DDT, with the expectation that the human driver will be ready to respond to a request to intervene when issued by the ADS.
Level 4 – High Driving Automation: Sustained and ODD-specific ADS performance of the entire DDT, carried out without any expectation that a user will respond to a request to intervene.
Level 5 – Full Driving Automation: Sustained and unconditional performance by an ADS of the entire DDT without any expectation that a user will respond to a request to intervene. Please note that this performance, since it has no conditions to function, is not ODD-specific.
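
Reading those again, the whole ladder boils down to two duties that flip as you climb: must you constantly supervise, and must you answer a takeover request. Here's my own paraphrase of the quoted definitions as a little lookup table (field names are mine, not SAE's, so take it as a sketch rather than the standard itself):

```python
# Unofficial paraphrase of the SAE J3016 levels, not the standard's own wording.
# "supervise"    = driver must continuously monitor (the L0-L2 duty)
# "respond"      = driver must be ready to take over when the ADS asks (the L3 duty)
# "odd_specific" = only operates inside its operational design domain
from dataclasses import dataclass

@dataclass(frozen=True)
class Level:
    name: str
    supervise: bool
    respond: bool
    odd_specific: bool

SAE = {
    # For levels 0-2 the "respond" flag is trivially true: the driver is the fallback the whole time.
    0: Level("No Driving Automation", supervise=True, respond=True, odd_specific=True),
    1: Level("Driver Assistance", supervise=True, respond=True, odd_specific=True),
    2: Level("Partial Driving Automation", supervise=True, respond=True, odd_specific=True),
    3: Level("Conditional Driving Automation", supervise=False, respond=True, odd_specific=True),
    4: Level("High Driving Automation", supervise=False, respond=False, odd_specific=True),
    5: Level("Full Driving Automation", supervise=False, respond=False, odd_specific=False),
}

# The crux of this whole argument: L3 drops supervision but keeps the takeover duty.
assert not SAE[3].supervise and SAE[3].respond
```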
 
I think the problem is that the transition from level 2 to level 3 is rather fraught for humans. The machine is capable enough to do many/most things well and thereby give a false sense of trust that in turn leads to complacency. Then it makes a mistake.

Related to that is the human tendency to look at mistakes and say “a person never would have made that mistake!” while ignoring all the things the machine did right and all the mistakes it avoided.
L3 doesn't require paying attention but does require being ready to take over, so no napping. Just watching videos. L3 basically says, "I can't handle this. Please help." The issue with Tesla is not about some transition from L2 to L3. The issue with Tesla is that it's solidly, solidly, solidly in the L2 camp in capabilities but it makes promises of L4 and above. L2 Tesla says, "I can handle this," and then drives straight off the road into a ditch, or straight into the back of a truck, or straight into the back of an ambulance. Tesla is L2 in an L4 dress, and that's dangerous as hell.
 
L3 doesn't require paying attention but does require being ready to take over, so no napping. Just watching videos. L3 basically says, "I can't handle this. Please help." The issue with Tesla is not about some transition from L2 to L3. The issue with Tesla is that it's solidly, solidly, solidly in the L2 camp in capabilities but it makes promises of L4 and above. L2 Tesla says, "I can handle this," and then drives straight off the road into a ditch, or straight into the back of a truck, or straight into the back of an ambulance. Tesla is L2 in an L4 dress, and that's dangerous as hell.
Nowhere does the system say it's L4, nor hint at being L4. When you get your car and enable Autopilot and NoA (if you have the FSD package), there are warning pages that detail exactly what to expect, and that you must remain in control at all times (L2). If you're invited into FSD Beta, the warning screens are even more intense and require a second level of acceptance (a checkbox), telling you that you must be in complete control, and that the system can do the wrong thing at the wrong time.

I'm not sure where in any of those warning screens it indicates you can relax and let the car drive itself. If you're referring to comments made on Twitter, those don't apply, as they are not company policy or legally binding for Tesla. Treat them like marketing hype or campaign promises from politicians. If you're saying that people don't read those warning screens and simply press "Accept," like they do with Apple EULAs on iPhones, then I can't help those people. There's a massive, Grand Canyon-wide difference between blindly accepting a EULA on an iPhone and in a big, heavy, moving car that can kill you or others around you if you use it improperly.
 
NHTSA may say that humans cannot be relied on to focus properly for L2 and L3, no matter which manufacturer creates it.

Except humans can be relied on to focus properly. At least for L2...since L3 doesn't exist. Tesla themselves claim drivers on Autopilot (L2) have fewer accidents than drivers without. We can argue the merits of their testing, but with many other manufacturers offering L2 highway lane-centering and adaptive cruise systems, the rate of accidents has not been high enough to raise any eyebrows over at the NHTSA.

The majority of L2 systems in production are limited in scope to just adaptive cruise and lane keeping and are pretty safe. Where has anyone said that lane keep assist is too dangerous for humans?

FSD Beta, on the other hand, takes L2 to an extreme that other manufacturers do not, and now that they've created a huge and complex set of scenarios that the driver is supposed to watch out for, THAT is what becomes dangerous. So yes, the NHTSA could come and say FSD Beta is too complex for an L2 system and humans can't be expected to nanny it safely. But Autopilot and other lane-keep L2 systems from other manufacturers are fairly simple and have been shown to be safe.
 
L3 doesn't require paying attention but does require being ready to take over, so no napping. Just watching videos. L3 basically says, "I can't handle this. Please help." The issue with Tesla is not about some transition from L2 to L3. The issue with Tesla is that it's solidly, solidly, solidly in the L2 camp in capabilities but it makes promises of L4 and above. L2 Tesla says, "I can handle this," and then drives straight off the road into a ditch, or straight into the back of a truck, or straight into the back of an ambulance. Tesla is L2 in an L4 dress, and that's dangerous as hell.
That's funny. After 9 months of driving almost exclusively using AP/NOA/FSDb, I've not yet had my car attempt to drive into a ditch, or straight into the back of a truck or even straight into the back of an ambulance. I'm sure that I would have noticed had it attempted to do any of these things as I keep pretty close watch over it.

Am I doing something wrong? I can't find any setting for these things. I can't even get the car to curb a wheel. I used to reliably get PB, but seem to have forgotten how to do even that.
 
That's funny. After 9 months of driving almost exclusively using AP/NOA/FSDb, I've not yet had my car attempt to drive into a ditch, or straight into the back of a truck or even straight into the back of an ambulance. I'm sure that I would have noticed had it attempted to do any of these things as I keep pretty close watch over it.

Am I doing something wrong? I can't find any setting for these things. I can't even get the car to curb a wheel. I used to reliably get PB, but seem to have forgotten how to do even that.
So you're saying FSD Beta is flawless and you've never had a safety disengagement since you started using it? The majority of FSD Beta users would find that very hard to believe.
 
You are 100% sure about that, correct? In all places? No exceptions?

(Careful how you respond...my answer is already waiting) LOL
If you took Tesla to court with a cause of action and said that the car does not do what Elon said on Twitter, Tesla's lawyers would have you produce the website materials and the contract you signed from the website as part of your ordering process. They would bring into evidence what's on the screen and what you accepted on the screen. The court would likely dismiss your case.

The SEC is different, as their rules on manipulation of the market price do extend to social media comments, or comments to the press, or any other public remarks.

I'm referring to a cause of action against a company because what an officer of the company said on social media contradicts what is publicly stated on the company's official website, in the contract language you agreed to when you made a purchase, and in the agreements during use (such as the text you accepted when you enabled AP/NoA/FSD Beta).

If I'm wrong, why haven't you sued Tesla yet? :)
 
Since people seem to have a hard time comprehending what a transition from level 2 to 3 means, let me try to spell it out. (Honestly, is it really that tough or is everyone really that pedantic?…never mind)

Level 2 encompasses everything from adaptive cruise control to a system that is on the verge of being approved for level 3. As the system progresses in capabilities, the human driver does and needs to do less and less, leading to complacency. If a level 3 system is such that it virtually never needs intervention, you get someone falling asleep while watching a movie; they don't wake up when the car needs them to (or in time to properly assume control), and there's an accident.

Humans do poorly at tasks that require vigilance for rare events and no action the rest of the time. That’s what the transition from 2-3 involves.
 
If you took Tesla to court with a cause of action and said that the car does not do what Elon said on Twitter, Tesla's lawyers would have you produce the website materials and the contract you signed from the website as part of your ordering process. They would bring into evidence what's on the screen and what you accepted on the screen. The court would likely dismiss your case.

The SEC is different, as their rules on manipulation of the market price do extend to social media comments, or comments to the press, or any other public remarks.

I'm referring to a cause of action against a company because what an officer of the company said on social media contradicts what is publicly stated on the company's official website, in the contract language you agreed to when you made a purchase, and in the agreements during use (such as the text you accepted when you enabled AP/NoA/FSD Beta).

If I'm wrong, why haven't you sued Tesla yet? :)

Uh...

No comment....

Thanks
 