Emails between Tesla and CA DMV on Smart Summon, FSD

I think a driver-facing camera monitoring system is going to become more and more appealing for Tesla as they roll out FSD Beta. It would help prevent abuse, since people could not use anti-nag devices or film silly videos where they climb into the back seat while the Tesla is driving. It would also allow hands-free operation, which would be great for responsible Tesla owners. FSD Beta could drive us around town and we would just need to keep our eyes on the road. We would no longer need to tug the wheel or hit stalk confirmations all the time.

Unfortunately, it would mean that those of us with pre-2021 Model X and S vehicles will either be stuck with the wheel nag or won't get FSD Beta, because our vehicles don't have a cabin-facing camera. That is, unless Tesla figures out how to retrofit one.
 
Not sure if you read that article, but L2+ is basically highway NoA, plus optional driver monitoring features.

These include adaptive merging for when vehicles are entering or exiting the highway, and various types of enhanced automatic emergency braking (AEB) aimed at improving vehicle-to-pedestrian safety and car-to-cyclist (and motorcyclist) safety. Also in the development program for production are interior-monitoring technologies to ensure driver attention.

I didn't see it as highway-specific. It mentions traffic sign recognition, automated parking, and vehicle-to-pedestrian safety.

The article mentions "multiple (in some cases triple) redundancies in the vehicle’s sensor suite and related actuators" as an obstacle between Level 2 and Level 3.
 
Why wouldn't Tesla want to push Level 2 all the way to 2x human performance, or whatever they're targeting?
Oh, I totally agree that Tesla will try to push Level 2 as much as they can, but it's unclear how far they can get away with reducing nags at the same time: while this avoids CA DMV oversight, I believe NHTSA might start pushing back. In the summary of NHTSA's investigation of the Autopilot fatality, they did note the existence and improvement of nags:
The Autopilot system is an Advanced Driver Assistance System (ADAS) that requires the continual and full attention of the driver to monitor the traffic environment and be prepared to take action to avoid crashes. Tesla's design included a hands-on the steering wheel system for monitoring driver engagement. That system has been updated to further reinforce the need for driver engagement through a 'strike out' strategy. Drivers that do not respond to visual cues in the driver monitoring system alerts may 'strike out' and lose Autopilot function for the remainder of the drive cycle.
 
Do you think reducing nags will be necessary for Tesla to achieve their FSD safety / performance targets?
It probably isn't necessary to reduce nags, but the data collection could get messy with accidental disengagements from trying to satisfy nags, especially with FSD turning the steering wheel more than highway Autopilot does. Even with the flawed average-miles-per-disengagement metric, it can be used to measure progress within a system if miles and disengagements are counted relatively consistently. I believe the current nag interval is roughly 1/2 mile on NoA highways and 1/6 mile elsewhere, so if Tesla decreases the nag frequency, it could reveal the software's increasing confidence level. Overall, reducing nags is more of a feature for the owner, as Tesla should be able to track "true disengagement" progress toward their targets.
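To put rough numbers on that (using the 1/2-mile and 1/6-mile intervals above; everything else here is my own made-up illustration, not Tesla telemetry), here's a quick sketch of how nag-satisfying fumbles could pollute a miles-per-disengagement metric:

```python
# Back-of-the-envelope sketch (my own illustration, not Tesla's actual
# telemetry): how distance-based nag intervals translate into nag counts,
# and why nag-induced accidental disengagements can drag down a
# miles-per-disengagement metric.

HIGHWAY_NAG_INTERVAL_MI = 0.5   # rough NoA highway interval from the post
CITY_NAG_INTERVAL_MI = 1 / 6    # rough city interval from the post

def expected_nags(highway_miles: float, city_miles: float) -> float:
    """Approximate number of wheel nags over a mixed trip."""
    return highway_miles / HIGHWAY_NAG_INTERVAL_MI + city_miles / CITY_NAG_INTERVAL_MI

def miles_per_disengagement(miles: float, true_diseng: int, accidental: int) -> float:
    """Naive metric: accidental (nag-satisfying) disengagements count too."""
    return miles / (true_diseng + accidental)

# Example: a 10-mile city drive with 1 real intervention.
nags = expected_nags(highway_miles=0, city_miles=10)   # ~60 nags
print(f"~{nags:.0f} nags; a 5% fumble rate adds ~{nags * 0.05:.0f} accidental disengagements")
print(f"metric with accidents: {miles_per_disengagement(10, 1, round(nags * 0.05)):.1f} mi/disengagement")
print(f"true metric:           {miles_per_disengagement(10, 1, 0):.1f} mi/disengagement")
```

On those made-up numbers, the measured metric drops from 10 to 2.5 miles per disengagement even though the software only needed one real intervention, which is the "messy data" problem in a nutshell.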
 
the data collection could get messy with accidental disengagements from trying to satisfy nags, especially with FSD turning the steering wheel more

This may be true once more people have the FSD beta, but based on the current videos, most of the accidental disengagements are from having hands on the wheel while the car was changing lanes, intending to turn, or making minor adjustments. I don't remember seeing anyone accidentally disengage to satisfy a nag.

Even with a reduced nag frequency, Tesla would probably want FSD beta users to keep their hands on the wheel. It'd be interesting to see how they transition from that to hands-free.
 
This may be true once more people have the FSD beta... I don't remember seeing anyone accidentally disengage to satisfy a nag.

If it's nagging you in the middle of a lane change, or during an adjustment, and you grab the wheel, then that counts as an accidental disengagement due to a nag. Nags are quite frequent in the city; they don't only happen when you're going straight.
 
Again, this doesn't really describe any of the capabilities of FSD beta. To me that quote translates to "We're not confident enough in the performance of the object detection to hand the entire driving task over to the system, yet. And as such when we release this feature to the public, the driver will still be responsible for monitoring and taking over when necessary."

Is achieving Level 3+ just a matter of improving the accuracy of the object detection? We don't know; this letter doesn't tell us that.
Like others have said... it's not just about improving the accuracy of object detection, but about increasing the accuracy and reliability of everything! L3 means getting everything 100x-1000x more reliable, so reliable that the driver can take their eyes off the road and the OEM takes responsibility. Tesla is not doing this, but it is a real, gradual, slow process that is happening in the world.

The bit I don't understand is this: how is Tesla going to increase the price of FSD as features improve, if L3+ is being developed completely behind closed doors now?

It seems there is a hard fork going on, which doesn't line up with the expectation that users will be able to experience the iterations themselves.

No. There is a hard fork between their old L2 and their new L2 (current production AP stack and FSD beta)

There is no additional fork that goes to anything above L2

It'll be interesting to see how far Tesla can / wants to push the limit of Level 2. As you suggest, Tesla will have a lot of their own data knowing where FSD performs better, and potentially Tesla could relax the nag interval dynamically depending on the situation while officially still requiring driver supervision. This could even be combined with driver monitoring to get pseudo-Level-3 effectively-hands-free still without needing to report to CA DMV.
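Just to illustrate what "relax the nag interval dynamically" could look like (purely hypothetical; the signals, thresholds, and multipliers below are all invented, and nothing here reflects Tesla's actual software):

```python
# Hypothetical sketch of situation-dependent nag relaxation. None of this
# reflects Tesla's actual implementation; the signals and thresholds are
# invented purely to illustrate varying driver-supervision pressure while
# officially remaining Level 2.

from dataclasses import dataclass

@dataclass
class DrivingContext:
    model_confidence: float    # 0.0-1.0, planner's self-assessed confidence
    driver_eyes_on_road: bool  # from a hypothetical cabin-camera DMS
    road_type: str             # "highway" or "city"

def nag_interval_miles(ctx: DrivingContext) -> float:
    """Return how far the car drives between wheel nags."""
    base = 0.5 if ctx.road_type == "highway" else 1 / 6  # intervals from earlier in the thread
    if ctx.driver_eyes_on_road and ctx.model_confidence > 0.95:
        return base * 4        # pseudo-L3: eyes verified, software confident
    if ctx.model_confidence < 0.5:
        return base / 2        # low confidence: nag twice as often
    return base

print(nag_interval_miles(DrivingContext(0.97, True, "highway")))  # 2.0
print(nag_interval_miles(DrivingContext(0.4, False, "city")))     # ~0.083
```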

Tesla is NOT pushing the limits of L2... making the FSD beta L2 is just a standard and normal use case of L2.

You comment about adding driver monitoring to enable hands-free and call it pseudo-Level-3... while this is absolutely nothing like real L3 and does not provide the value to the end user that L3 does, I agree with you that it does greatly improve the value and user experience of the product over hands-on L2.

Not sure if you read that article, but L2+ is basically highway NoA, plus optional driver monitoring features.

No. L2+ is not "basically highway NoA, plus optional driver monitoring features."

Highway NoA plus DMS is one common use case or example of L2+

L2 can be address-to-address, LA to NY, without ever touching the wheel... (You don't even need to add the silly "+")

 
L2 can be address-to-address, LA to NY, without ever touching the wheel... (You don't even need to add the silly "+")

I agree. From what I know, L2+ is marketing jargon that originated from Mobileye.

I get that you can use L2 for "FSD features," but what I was saying is that if Tesla haphazardly rolls out the FSD beta, regulators could limit L2 features to exclude intersection turns. That would mess up FSD beta development.
 
Tesla is NOT pushing the limits of L2... making the FSD beta L2 is just a standard and normal use case of L2.

If we're speaking purely from the SAE definition of L2, I'd say that's correct, and it's likely because the SAE never anticipated a case where a car company would put so much capability into an L2 system.

But, when it comes to regulatory oversight along with other interested parties there is a lot more to consider.

There is a limit to what humans can oversee, and this is something that is well studied. We know from numerous studies that humans have a hard time overseeing things that work really well. We grow restless, and our attention wanders.

Is it really fair to put ALL the responsibility on the person behind the wheel when they do little to none of the actual driving?

Different regions will have different answers.

Texas might say "That sounds great to me."
California might say "No, can you put some limits on it?"
Europe might say "Are you out of your friggen minds?"

Tesla themselves might decide to take a more careful approach than what's in the FSD beta. The way Tesla introduced the traffic light/sign response shows that even Tesla tries to play it a bit safe when it comes to a general release. The current version won't allow the car to proceed through a traffic light without some kind of confirmation: either there has to be a lead car going through a green, or the driver has to confirm.

The "go on green" by itself would be a pretty monumental thing. The reason is that the existing SW is likely around 98% correct, if not more so. This means it's pretty easy for a driver to become complacent, and then two months later the angle of the sun throws it off and they find themselves running a red light going "ugh, I hope... oh crap".
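To put rough numbers on that complacency trap (my arithmetic, using the ~98% figure above and assuming each light is an independent trial, which is a simplification since failure modes like sun angle are correlated in reality):

```python
# Rough arithmetic on the complacency trap, using the post's ~98% figure.
# Assumes each traffic light is an independent trial, which is a
# simplification (failure modes like sun angle are correlated in reality).

p_correct = 0.98

for n_lights in (10, 50, 200):
    p_all_ok = p_correct ** n_lights
    print(f"{n_lights:>3} lights: {p_all_ok:6.1%} chance of zero misses "
          f"(~{n_lights * (1 - p_correct):.0f} expected misses)")

#  10 lights:  81.7% chance of zero misses (~0 expected misses)
#  50 lights:  36.4% chance of zero misses (~1 expected misses)
# 200 lights:   1.8% chance of zero misses (~4 expected misses)
```

In other words, a driver can easily go weeks without seeing a miss, and then get one right when their attention has drifted.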

The danger of improving and adding capabilities to an L2 system is that there is a point where the human driver's capacity starts to diminish. When that starts to happen, the overall safety of the combined whole will suffer. Despite our shortcomings, humans are still much better than robots.
 
But, when it comes to regulatory oversight along with other interested parties there is a lot more to consider... Despite our shortcomings, humans are still much better than robots.

Yea I understand all of this... and figured this is what you were getting at. I just wanted to say "pushing the limits of L2" is a poor way of describing it... but I guess most people probably would have understood what you meant, and that is a lot fewer words than your explanation above... So I'll drop the issue.
 
I think a driver-facing camera monitoring system is going to become more and more appealing for Tesla as they roll out FSD Beta... We would no longer need to tug the wheel or hit stalk confirmations all the time.
I'm going to have to disagree here. My professional opinion, and I have almost 40 years of civilian and military aviation experience monitoring autopilot-controlled flight, is that eye monitoring of L2 driver assist systems is dangerous. Your hand should be on the wheel. Add up reaction time plus the time to get your hand on the wheel from your lap, and it's just too long. Not to mention the probability that in a time-critical situation some people are going to miss the wheel or grab it wrong on their first try.

Yes, the current system can be defeated. So what? Red light traffic signals can be defeated by the driver's right foot, as can speed limits. The number of accidents caused by idiots using nag-defeat devices will inevitably be dwarfed by well-meaning folks who just can't react fast enough or well enough with their hands in their lap.

It's just not a big deal to rest your hand on the wheel. If you do that, no nags, and you're being a safer driver too. Yes, I know some other systems use eye monitoring, and people will die because of it. Tesla has got this one right. Take your hands off the wheel when we're L3.
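To put some rough numbers on that hands-in-lap delay (the delay figures below are my own illustrative assumptions, not measured values):

```python
# Rough illustration of the hands-in-lap penalty. The delay figures are my
# assumptions for illustration, not measured values: ~0.75 s perception and
# reaction, plus ~0.75 s to move a hand from lap to wheel and grip it.

MPH_TO_FT_PER_S = 5280 / 3600  # 1 mph = 1.4667 ft/s

def distance_traveled_ft(speed_mph: float, delay_s: float) -> float:
    """Feet covered before the driver can even begin steering."""
    return speed_mph * MPH_TO_FT_PER_S * delay_s

for speed in (30, 45, 65):
    hands_on = distance_traveled_ft(speed, 0.75)          # react, hand already on wheel
    hands_off = distance_traveled_ft(speed, 0.75 + 0.75)  # react + reach for wheel
    print(f"{speed} mph: {hands_on:5.0f} ft hands-on vs {hands_off:5.0f} ft hands-in-lap")

# 30 mph:    33 ft hands-on vs    66 ft hands-in-lap
# 45 mph:    50 ft hands-on vs    99 ft hands-in-lap
# 65 mph:    72 ft hands-on vs   143 ft hands-in-lap
```

At highway speed that extra reach roughly doubles the distance covered before any correction starts, and that's before anyone fumbles the grab.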
 
I'm going to have to disagree here. My professional opinion, and I have almost 40 years of civilian and military aviation experience monitoring autopilot-controlled flight, is that eye monitoring of L2 driver assist systems is dangerous. Your hand should be on the wheel...
Tesla FSD is doing way more steering than the other US driver assistance systems, so I agree you had better have your hands on the wheel. Things like SuperCruise are mostly lane-keeping and involve little other abrupt maneuvering. Plus, Tesla FSD is rather risky at the moment, and split-second reactions, including instantly gripping the wheel, are completely essential.

Putting aside defeat mechanisms, which are (obviously) hard to detect, Tesla should keep the hand monitoring. (I would argue that for city driving in this version of the Beta, your hands should always be on the wheel with no time delay, in which case the current system of hand monitoring isn't quite sufficient.) As for eye monitoring, that adds an additional layer of ensuring attention and is a good idea as well. It's all too easy to have your hand on the wheel and not be paying attention. It should be about safety and redundancy at this point.

Look at the Uber crash in Arizona in 2018. The "driver" didn't have hands on the wheel and wasn't looking or paying attention. The end result: the death of a pedestrian, a charge of negligent homicide for the "driver", and zero punishment for the rest of the Uber designers and decision-makers. Does anyone think Tesla would defend us any better than Uber defended their driver?

For highway driving, and for city FSD if it gets much, much better than it is now, perhaps hands-free will make sense one day, but not yet. As for whether SuperCruise and the others should be allowing hands-free: at least they are not instantly dumping control or requiring intervention like Tesla FSD Beta, but they also don't handle nearly the same range of driving situations. Right now is no time to be reducing anything; they should be layering on the safety.

(While they're at it, get rid of California stops. Come on, it's a developing system; obey the law.)
 
@Dan D. I agree that having two monitoring systems would be better than one, absolutely. That said, if you're going to have only one, it should be hands on the wheel. As for the other systems, yes, they'll do less, but you can still be on the freeway with your hands in your lap. Just my highly experienced opinion, but I think that's dangerous. As for FSD Beta, it seems to be improving rapidly. Other than that, I won't speak to a system I haven't personally tested.
 
Speaking of NHTSA potentially pushing back on Tesla Autopilot, NTSB wrote to NHTSA:
The NTSB remains concerned about NHTSA’s continued failure to recognize the importance of ensuring that acceptable safeguards are in place so that vehicles do not operate outside their ODDs and beyond the capabilities of their system designs. … For example, Tesla recently released a beta version of its Level 2 Autopilot system, described as having full self-driving capability. By releasing the system, Tesla is testing on public roads a highly automated AV technology but with limited oversight or reporting requirements. Although Tesla includes a disclaimer that “currently enabled features require active driver supervision and do not make the vehicle autonomous,” NHTSA’s hands-off approach to oversight of AV testing poses a potential risk to motorists and other road users.

NTSB refers to SAE J3016 and mentions how drivers can misuse Level 2 Autopilot, and how the ability to "operate the vehicles outside the intended operational design domain" could "pose an unreasonable risk to safety." However, "Operational Design Domain" is used in the context of an "Automated Driving System," a term used specifically to describe a Level 3, 4, or 5 driving automation system.

Some of the recommendations from the letter:
H-17-41: To the manufacturers of vehicles equipped with Level 2 vehicle automation systems (Volkswagen Group of America, BMW of North America, Nissan Group of North America, Mercedes-Benz USA, Tesla Inc., and Volvo Group of North America)—Incorporate system safeguards that limit the use of automated vehicle control systems to those conditions for which they were designed. (Status: Open—Acceptable Response; Tesla Status: Open—Unacceptable Response)

H-20-2: To the National Highway Traffic Safety Administration—Evaluate Tesla Autopilot-equipped vehicles to determine if the system’s operating limitations, the foreseeability of driver misuse, and the ability to operate the vehicles outside the intended operational design domain pose an unreasonable risk to safety; if safety defects are identified, use applicable enforcement authority to ensure that Tesla Inc. takes corrective action. (Status: Open—Initial Response Received)
 
NTSB refers to SAE J3016 and mentions how drivers can misuse Level 2 Autopilot, and how the ability to "operate the vehicles outside the intended operational design domain" could "pose an unreasonable risk to safety." However, "Operational Design Domain" is used in the context of an "Automated Driving System," a term used specifically to describe a Level 3, 4, or 5 driving automation system.

I could be wrong, but I think ODD can apply to any SAE level, not just L3+. ODD simply means all the conditions where a feature is designed to operate. So an L2 feature or L2 automated driving system would have an ODD too.
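To make "all the conditions where a feature is designed to operate" concrete, here's a toy ODD written out as data for a hypothetical L2 feature; the fields and values are invented for illustration and don't come from J3016 or any Tesla document:

```python
# Toy example of an ODD as a data structure, for a hypothetical L2 feature.
# Fields and values are invented for illustration only; J3016 defines the
# concept, not this schema.

hypothetical_l2_odd = {
    "road_types": ["limited-access highway"],  # e.g. excludes city streets
    "speed_range_mph": (20, 85),
    "weather": ["clear", "light rain"],        # excludes snow, heavy fog
    "time_of_day": ["day", "night"],
    "geofence": None,                          # no geographic restriction
}

def within_odd(odd: dict, road_type: str, speed_mph: float, weather: str) -> bool:
    """The kind of safeguard NTSB's H-17-41 asks for: refuse to engage outside the ODD."""
    lo, hi = odd["speed_range_mph"]
    return (road_type in odd["road_types"]
            and lo <= speed_mph <= hi
            and weather in odd["weather"])

print(within_odd(hypothetical_l2_odd, "limited-access highway", 65, "clear"))  # True
print(within_odd(hypothetical_l2_odd, "city street", 30, "clear"))             # False
```

A check like that second print is essentially what the H-17-41 recommendation wants: the feature simply refuses to engage on a city street.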
 
ODD simply means all the conditions where a feature is designed to operate. So an L2 feature or L2 automated driving system would have an ODD too.
Ah yeah, I think you're right. I found this diagram showing some example ODDs for Levels 1 through 4. I guess NTSB wants Tesla to explicitly say that the currently released Autopilot's ODD is, say, limited-access highways, and if Tesla continues to allow enabling the feature on city streets, NHTSA should force Tesla to disable it?

[Attachment: odd.png, a diagram of example ODDs for SAE Levels 1 through 4]


I suppose Tesla could then say the ODD is "any roads," in which case NTSB would want NHTSA to force Tesla to disable the feature, as it's not actually safe for "any roads"?
 