Government regulations of L2 driver assist systems

But has anyone made any attempt to compare the heavily publicized deaths against the number of times AP has in fact saved someone? Of course not!

First, the headline "No-one dies as Tesla drives safely on freeway" doesn't sell newspapers (or get click-through, in modern terms) compared to lurid headlines about an AP-caused death. Second, it's far harder to document occasions when AP saves the day, by its very nature. It's easy to see where seat belts or airbags save someone, since by their nature they prevent injury in an accident. But what about AP simply avoiding that accident in the first place?

Let's look at the most extreme case, where Tesla are forced to disable AP on all cars. And all those miles that would have been driven by AP are now driven manually. Would that actually cause more accidents/deaths than if AP were doing the driving?

I don't know the answer to that, but I'm pretty sure the NTSB doesn't either. And until they or someone else does, some of the pronouncements they are making seem ill-advised.

It's not about some calculus of lives saved versus lives lost. Even if AP was much safer than the average driver, we would still need to investigate if an accident happens. And yes, I am aware of Tesla's stats about AP being safer than the average driver. But that does not mean that we should do nothing about accidents and just accept deaths. It's the job of the NTSB to investigate every accident, not just Tesla, to see what caused the accident and if anything could be done to prevent it from happening again. The goal is to try to prevent every accident if possible.

And remember that ultimately, the goal is not to permanently disable driver assist systems like AP, but to make them better. Ultimately, we want safe fully autonomous vehicles that don't need drivers at all. So if an accident identifies a potential flaw in Autopilot, we want to know about it and fix it. And now, we've had several serious and fatal crashes with Autopilot that have identified that the lack of a proper driver attention system in AP is a serious flaw.
 
Let's look at the most extreme case, where Tesla are forced to disable AP on all cars. And all those miles that would have been driven by AP are now driven manually. Would that actually cause more accidents/deaths than if AP were doing the driving?

I don't know the answer to that, but I'm pretty sure the NTSB doesn't either. And until they or someone else does, some of the pronouncements they are making seem ill-advised.
That is also what I want to know. It seems like a tricky thing to determine.
Which pronouncements do you disagree with? It all seems reasonable to me.
 
That is also what I want to know. It seems like a tricky thing to determine.
Which pronouncements do you disagree with? It all seems reasonable to me.

The claim about the steering wheel nag system seemed questionable. I have a lot of respect for the NTSB, but the claim that it is ineffective didn't seem to be backed up by any substantial analysis, comparative or otherwise. There wasn't even a discussion about whether it was really the car's job at all to make sure the driver was attentive; it was just assumed.

And why should it be the car's job? We've had cruise control for decades, and I don't recall anyone ever claiming those cars had to make sure the driver paid attention.

What if AP is off, and the driver is texting? Should the car nag the driver to stop that and focus on driving? If it doesn't do this nag, and the inattentive driver causes a crash, who is liable? At what point do we draw the line and simply admit that no amount of technology is going to make an irresponsible driver into a responsible one?

In the US in particular there is an uneasy balance between holding car makers accountable for making safe cars, and drivers (and the lawyers behind them) trying to blame car makers for their own bad driving (witness all the "uncontrolled acceleration" claims, when in fact it's almost always the driver hitting the wrong pedal in a panic).

People tend to deflect blame away from themselves, especially if deep down they realize something was their fault. In the case cited, the driver was on AP and playing a game on his iPhone. The NTSB said Apple should not have let that happen, and the car should not have let that happen. But, really, the driver was at fault here; he did something stupid, and paid a high price. The Tesla manual told him it was stupid. The Tesla warning screens when he enabled AP told him it was stupid. The Tesla nag screens kept telling him it was stupid. Common sense should have told him it was stupid. No car maker can protect against that level of recalcitrance.
 
It's not about some calculus of lives saved versus lives lost. Even if AP was much safer than the average driver, we would still need to investigate if an accident happens. And yes, I am aware of Tesla's stats about AP being safer than the average driver. But that does not mean that we should do nothing about accidents and just accept deaths. It's the job of the NTSB to investigate every accident, not just Tesla, to see what caused the accident and if anything could be done to prevent it from happening again. The goal is to try to prevent every accident if possible.

And remember that ultimately, the goal is not to permanently disable driver assist systems like AP, but to make them better. Ultimately, we want safe fully autonomous vehicles that don't need drivers at all. So if an accident identifies a potential flaw in Autopilot, we want to know about it and fix it. And now, we've had several serious and fatal crashes with Autopilot that have identified that the lack of a proper driver attention system in AP is a serious flaw.

To an extent I agree. But, in reality, it is a calculus of lives saved vs lives lost, callous though that may sound. It's easy to "prevent every accident if possible" -- you just ban cars and driving and voila, no accidents. Of course, this is absurd. But the NTSB report, paraphrased, said "the car should have made sure the driver was paying attention". Really? Since when did responsibility for safe driving shift from the driver to the car? And the answer isn't "when he turned on AP", as Tesla makes clear in the manual, in the warning when he enabled AP, and in the nags when he doesn't touch the wheel. In fact, by definition, the responsibility does not shift until we have true Level 5 cars (if ever).

The "serious flaw" in this accident was the driver was playing a game on his iPhone. Where should we apportion the blame? The car? Apple? The maker of the game he was playing for making it too addictive? Oh, what about the driver? What about drunk drivers? Should we blame the brewers of beer?

I'm not saying we should not investigate accidents, apportion blame, and look for ways to improve safety. I'm saying the NTSB implied that AP also created a shift in responsibility to the car and away from the driver. Which, imho, is dangerous and wrong.
 
The "serious flaw" in this accident was the driver was playing a game on his iPhone. Where should we apportion the blame? The car? Apple? The maker of the game he was playing for making it too addictive? Oh, what about the driver? What about drunk drivers? Should we blame the brewers of beer?

The driver is to blame for failing to prevent the crash since the driver was not paying attention and did not correct AP's mistake. But AP is also to blame for causing the crash. AP was "on" and it was the one controlling the car and the one that directly drove the car into the crash attenuator. So it is the one that most directly caused the crash. When AP is "on" and is actively controlling the car, it is entirely fair to look at whether AP made a mistake or not.

Also, AP is an L2 system, which by design requires driver attention. Yet AP does not have a good driver attention system. So that is a design flaw right there.

The claim about the steering wheel nag system seemed questionable. I have a lot of respect for the NTSB, but the claim that it is ineffective didn't seem to be backed up by any substantial analysis, comparative or otherwise. There wasn't even a discussion about whether it was really the car's job at all to make sure the driver was attentive; it was just assumed.

And why should it be the car's job? We've had cruise control for decades, and I don't recall anyone ever claiming those cars had to make sure the driver paid attention.

The difference is that cruise control is L1, so the driver is still controlling the steering. With L2, the car is handling all the controls, so the driver does not need to touch them; the driver is more passive. When the driver does not need to touch the controls, there is no built-in mechanism to make sure the driver is paying attention. Yet, since L2 is not self-driving, the driver is still required to pay attention. That is the issue with L2: the driver is passive but still needs to pay attention.

I hope you can see the distinction between a system where the driver is active versus a system where the driver can be passive but still needs to pay attention.

That's the issue with L2: the car is "driving" but is not self-driving. That's an inherent contradiction baked into L2. Since the car is "driving", it can be confused for self-driving when it is not. That's one reason why some companies are skipping L2 and trying to go straight to autonomous driving.
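To make the distinction concrete, here is a minimal sketch of which control axes are automated at the lower SAE levels (Python, with illustrative names; SAE J3016 is the real reference):

```python
# Sketch of the control split described above. Names are illustrative.
CONTROL_AXES = {"steering", "speed"}

def automated_axes(level: int, l1_axis: str = "speed") -> set:
    """Which control axes the car handles at SAE L0-L2."""
    if level == 0:
        return set()              # L0: the driver does everything
    if level == 1:
        return {l1_axis}          # L1: one axis, e.g. cruise control = speed
    return set(CONTROL_AXES)      # L2: the car steers AND manages speed

def driver_physically_engaged(level: int) -> bool:
    """At L1 (cruise control) the driver still steers, which keeps them in
    the loop mechanically. At L2, nothing forces physical engagement."""
    return "steering" not in automated_axes(level)

assert driver_physically_engaged(1)      # cruise control: hands stay on wheel
assert not driver_physically_engaged(2)  # L2: the driver can go fully passive
```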

What if AP is off, and the driver is texting? Should the car nag the driver to stop that and focus on driving? If it doesn't do this nag, and the inattentive driver causes a crash, who is liable? At what point do we draw the line and simply admit that no amount of technology is going to make an irresponsible driver into a responsible one?

If AP is off, then the driver is clearly driving. So it's obvious who is responsible and who is liable. If AP is on, then the system is partially doing some of the driving. So it is fair to assign some blame to AP since it is controlling the car.
 
The driver is to blame for failing to prevent the crash since the driver was not paying attention and did not correct AP's mistake. But AP is also to blame for causing the crash. AP was "on" and it was the one controlling the car and the one that directly drove the car into the crash attenuator. So it is the one that most directly caused the crash. When AP is "on" and is actively controlling the car, it is entirely fair to look at whether AP made a mistake or not.

Completely agree, which is why the NTSB findings are so odd. A lot of focus on the driver attention aspect, much less on the car making a fatal mistake.
 
I hope you can see the distinction between a system where the driver is active versus a system where the driver can be passive but still needs to pay attention.

I can, and that is why I think the Tesla system is a good model for attention checking, compared to camera-based attention systems (e.g. Cadillac's). The steering wheel nudge is a "challenge-response" system that forces drivers to prove they are attentive, rather than a system that attempts to passively determine their attention level. Such "active" systems have been shown, over decades of use, to be inherently safer than passive systems (in areas such as medical and flight systems). Hence my concern about the NTSB's blanket statement that the Tesla system is bad.
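For illustration only, the challenge-response idea boils down to a deadline loop: issue a challenge, and treat silence as inattention. A rough sketch, with all names, timings, and callbacks hypothetical (this is not Tesla's actual implementation):

```python
import time

NAG_INTERVAL_S = 30      # hypothetical: how often to issue a challenge
RESPONSE_WINDOW_S = 15   # hypothetical: how long the driver has to respond

def challenge_response_loop(torque_detected, warn, disengage):
    """Active attention check: the driver must periodically *prove* attention
    by applying steering-wheel torque, rather than the car passively inferring
    attention. torque_detected is a () -> bool stub for the wheel sensor;
    warn and disengage are callbacks into a hypothetical vehicle interface."""
    while True:
        time.sleep(NAG_INTERVAL_S)
        warn("Apply slight steering torque")           # the challenge
        deadline = time.monotonic() + RESPONSE_WINDOW_S
        while time.monotonic() < deadline:
            if torque_detected():                      # the response: proof
                break
            time.sleep(0.1)
        else:                                          # window expired
            disengage()                                # treat as inattentive
            return
```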
 
If AP is off, then the driver is clearly driving. So it's obvious who is responsible and who is liable. If AP is on, then the system is partially doing some of the driving. So it is fair to assign some blame to AP since it is controlling the car.

Again, I think we are agreeing but from a different angle. I'm not being an apologist for the car, or Tesla. But apportioning blame is tricky when legal aspects are introduced into the equation. Yes, the car did a BAD job, very bad, and Tesla should work hard to make sure it doesn't happen again (or at least is far less likely). But the responsibility still sits with the driver, and in that sense it is like cruise control.

A car on cruise control, left to its own devices, will happily drive straight into the car in front of it (setting aside the newer smart ones). It's been that way for decades. But we don't blame the manufacturers for that; we assume the driver should have braked. This is entirely analogous to the Tesla situation: the car was performing an autonomous function that required active driver intervention to prevent an accident. In one case it's braking, in the other, steering. Yes, we might perceive the car as being "more" in control, and in some technical sense "more" to blame, but you have to be very careful before you shift responsibility away from the driver, or even hint at that.
 
I can, and that is why I think the Tesla system is a good model for attention checking, compared to camera-based attention systems (e.g. Cadillac's). The steering wheel nudge is a "challenge-response" system that forces drivers to prove they are attentive, rather than a system that attempts to passively determine their attention level. Such "active" systems have been shown, over decades of use, to be inherently safer than passive systems (in areas such as medical and flight systems). Hence my concern about the NTSB's blanket statement that the Tesla system is bad.

I do agree that challenge-response sounds good in principle, but I think camera-based driver attention monitoring is still far superior to Tesla's torque system.

Tesla's system is horrible in many ways. It can be defeated with hacks. It only checks for hands on the wheel, so as long as it thinks my hands are on the wheel, it is happy even if I am not paying attention. It is entirely possible to rest one hand on the wheel or tug the wheel without ever looking at the road. I can tug the wheel with one hand and play on my phone with the other, never watching the road, and Tesla's system won't care. It is just checking for a challenge response, real or faked; it is not checking whether I am actually paying attention to the road, which is bad.

The camera-based driver attention system is superior because it cannot be defeated with hacks, and it checks for eyes on the road, so it verifies that I am actually watching the road. With eyes on the road, I can see when something is wrong, and grabbing the wheel is a quick response. That's why it is better to check the driver's eyes than to check whether the hands are on the wheel.

Ideally, I think an L2 car should have both systems: a torque system for checking hands on wheel AND an inside camera for checking eyes on road. That way, you ensure both hands on wheel and eyes on road. By doing this, you make sure that the driver sees what is happening on the road AND has hands on the wheel, ready to intervene. You really need both for a quick reaction in case of a problem.
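As a rough sketch of what combining the two checks could look like (the sensor interface and escalation policy here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class AttentionState:
    hands_on_wheel: bool   # from a torque or capacitive wheel sensor
    eyes_on_road: bool     # from an interior driver-facing camera

def driver_ready(state: AttentionState) -> bool:
    """Require BOTH signals: eyes on the road to notice a problem early,
    and hands on the wheel to react to it quickly."""
    return state.hands_on_wheel and state.eyes_on_road

def monitoring_step(state, warn, disengage, strikes: int) -> int:
    """One tick of a combined monitor: warn on a lapse, escalate to
    disengagement after repeated failures. Thresholds are made up."""
    if driver_ready(state):
        return 0                  # attentive: reset the strike counter
    strikes += 1
    if strikes < 3:
        warn("Hands on wheel and eyes on road, please")
    else:
        disengage()               # repeated lapses: end the L2 session
    return strikes
```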

A car on cruise control, left to its own devices, will happily drive straight into the car in front of it (setting aside the newer smart ones). It's been that way for decades. But we don't blame the manufacturers for that; we assume the driver should have braked. This is entirely analogous to the Tesla situation: the car was performing an autonomous function that required active driver intervention to prevent an accident. In one case it's braking, in the other, steering. Yes, we might perceive the car as being "more" in control, and in some technical sense "more" to blame, but you have to be very careful before you shift responsibility away from the driver, or even hint at that.

You are missing the key difference. With cruise control, the human driver is still steering. With Autopilot, the driver is not steering. So with cruise control, we don't blame the manufacturer because the driver was still steering, i.e., the driver was still in control. But with AP, the driver is not steering. So it is possible for the driver to be completely disengaged even though they are supposed to pay attention.
 
Ideally, I think an L2 car should have both systems: a torque system for checking hands on wheel AND an inside camera for checking eyes on road. That way, you ensure both hands on wheel and eyes on road. By doing this, you make sure that the driver sees what is happening on the road AND has hands on the wheel, ready to intervene. You really need both for a quick reaction in case of a problem.

Agreed. Though it's interesting that Mercedes announced today that their next-gen system will use a capacitive wheel-touch sensor.
 
Agreed. Though it's interesting that Mercedes announced today that their next-gen system will use a capacitive wheel-touch sensor.

Yes, a capacitive wheel-touch sensor is better than torque because it measures actual hands on the wheel rather than just a motion. Plus, you can simply hold the wheel normally and don't need to do the little tug motion.
 
You are missing the key difference. With cruise control, the human driver is still steering. With Autopilot, the driver is not steering. So with cruise control, we don't blame the manufacturer because the driver was still steering, i.e., the driver was still in control. But with AP, the driver is not steering. So it is possible for the driver to be completely disengaged even though they are supposed to pay attention.

But my point is that it's not a key difference, it's one of degree. A car w/o cruise control always has the driver in total control. Cruise control removes some of that direct control, AP removes more. I suspect a lot of the "it's fundamentally different" argument comes from familiarity; we are used to cruise control, while AP remains a novelty. But my point remains: a car on cruise control can drive itself into an accident without driver intervention, which is no different from AP. I don't see how they are any different as far as responsibility is concerned.

But what about attentiveness? You could argue that almost all accidents are caused by lack of attention, regardless of automatic systems. People space out when driving. Almost all of us have driven home on that daily commute with almost no true attention to the road at all. Does AP make that more prone to happen? Probably. Does that make any difference to responsibility? Nope. But is it the car's job to assure that the driver does pay attention? That's really what this discussion is about.

On the one hand, of course any system that helps keep the driver attentive, and thus makes driving safer for everyone, is a good idea. I'm not disagreeing with that, and I hope people continue to innovate in that space for all our sakes.

But the dark side says that, thanks to the machinations of the legal system (at least in the US), when you add a system that monitors awareness, you (the maker of the system) are in some way assuming liability for making sure the driver is aware. And that does worry me; because that path leads to all sorts of lawsuits, and bad drivers trying to shift blame to car makers even when they were responsible ("the car SHOULD HAVE reminded me not to play a game on my phone!"). And the end result? Makers back off the whole thing, Tesla shuts down AP for fear of litigation, and we all get stuck in the Stone Age of manual driving forever.

So this is a tightrope we are all walking. I just don't want us all falling off because of a few greedy lawyers and a few idiot drivers who try to game the system.
 
But my point is that it's not a key difference, it's one of degree. A car w/o cruise control always has the driver in total control. Cruise control removes some of that direct control, AP removes more. I suspect a lot of the "it's fundamentally different" argument comes from familiarity; we are used to cruise control, while AP remains a novelty. But my point remains: a car on cruise control can drive itself into an accident without driver intervention, which is no different from AP. I don't see how they are any different as far as responsibility is concerned.

This is where we disagree. It is not a matter of degree. Cruise control removes some direct control; AP removes ALL direct control, not just more of it, because the car does both the steering AND the braking.

But in terms of responsibility, driver responsibility is the same for both L1 and L2. The key difference, though, is that L1 cannot be confused for self-driving, while L2 can be. So while the driver is responsible in both cases, with L1 the driver clearly knows that he is responsible, but with L2 the driver can mistakenly think that he is not.

But what about attentiveness? You could argue that almost all accidents are caused by lack of attention, regardless of automatic systems. People space out when driving. Almost all of us have driven home on that daily commute with almost no true attention to the road at all. Does AP make that more prone to happen? Probably. Does that make any difference to responsibility? Nope.

Yes it makes a difference because driver attention is a fundamental part of how L2 works. Without driver attention, L2 cannot work.

But is it the car's job to assure that the driver does pay attention? That's really what this discussion is about.

If it is L1, no; if it is L2, yes. With L2, it is the job of the car to make sure the driver is paying attention, because L2 cannot work properly if the driver is not paying attention.
 
The claim about the steering wheel nag system seemed questionable. I have a lot of respect for the NTSB, but the claim that it is ineffective didn't seem to be backed up by any substantial analysis, comparative or otherwise. There wasn't even a discussion about whether it was really the car's job at all to make sure the driver was attentive; it was just assumed.

The guy was playing a game on his phone and crashed; how much proof do you need that it is ineffective?

Of course it's the car's responsibility. The car offers a driver assistance feature that removes the need for the driver to perform one action: steering. However, the system cannot operate by itself; it requires human oversight. We know through decades of research that humans are easily distracted in situations like this, where the system works fine by itself most of the time and only occasionally needs intervention.

Other similar systems account for that. For example on Japanese high speed trains there is a system for automatically reading signals, since the train is moving so fast the driver can't reliably see them. However, the driver must confirm each signal by pressing a button or the train brings itself to a stop. You will note that in nearly 60 years of operation they have had 0 serious or fatal accidents.
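For illustration, the train system described is essentially a confirm-or-stop interlock: every safety-relevant event demands an explicit acknowledgement, and the default on silence is the safe state. A toy sketch, with the deadline and interfaces invented:

```python
import time

CONFIRM_WINDOW_S = 5.0   # hypothetical acknowledgement deadline

def handle_signal(button_pressed, apply_brakes):
    """Confirm-or-stop interlock: the driver must acknowledge each signal
    within the window, or the train brings itself to a stop. The safe state
    is the default, so inattention cannot be silently absorbed.
    button_pressed: () -> bool, stub for the confirmation button
    apply_brakes:   () -> None, stub for the braking system"""
    deadline = time.monotonic() + CONFIRM_WINDOW_S
    while time.monotonic() < deadline:
        if button_pressed():
            return True            # acknowledged: continue at speed
        time.sleep(0.05)
    apply_brakes()                 # no acknowledgement: fail safe
    return False
```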
 
The guy was playing a game on his phone and crashed; how much proof do you need that it is ineffective?

Of course it's the car's responsibility. The car offers a driver assistance feature that removes the need for the driver to perform one action: steering. However, the system cannot operate by itself; it requires human oversight. We know through decades of research that humans are easily distracted in situations like this, where the system works fine by itself most of the time and only occasionally needs intervention.

Other similar systems account for that. For example on Japanese high speed trains there is a system for automatically reading signals, since the train is moving so fast the driver can't reliably see them. However, the driver must confirm each signal by pressing a button or the train brings itself to a stop. You will note that in nearly 60 years of operation they have had 0 serious or fatal accidents.

So your argument is that one crash means a system is "ineffective"? Or, put another way, any system that is not perfect is ineffective. Good luck with that argument in the real world. Systems on trains are an example of "active" attention verification, which, actually, is exactly what the Tesla system is. So what is your point here?

So let's take your analogy and run with it. A Japanese train crashes. The investigators find that the driver had rigged a system that automated pressing that confirmation button so he could play a game on his mobile phone. Who is the culprit here? Your argument seems to be that the train makers are at fault for having a system that could be bypassed. Really?

There is no such thing as a safety system that cannot be bypassed somehow or other. Human ingenuity, sadly, sees to that. If you require that the system be perfect, then it never will be. So, in effect, you are arguing that Autopilot should be banned.
 
But in terms of responsibility, driver responsibility is the same for both L1 and L2. The key difference, though, is that L1 cannot be confused for self-driving, while L2 can be. So while the driver is responsible in both cases, with L1 the driver clearly knows that he is responsible, but with L2 the driver can mistakenly think that he is not.

In which case it's the driver who is at fault. I think the Tesla manual, and the warning screens that you have to click through when enabling AP, make it very clear that AP is Level 2. The nag system consistently reminds you of this, every time you use AP. Yet this guy played a game on his phone. How anyone can say the driver isn't to blame here is not clear to me.

Anyway, beating a dead horse and all that.
 
There is no doubt that Tesla took advantage of the lack of regulations. They released a beta driver-assist system with all kinds of known issues and a poor driver monitoring system, but trusted that they could improve it through OTA updates fast enough. And any accidents could simply be blamed on the driver not paying attention.

I don't think the answer is disabling AP completely because then you go back to no driver assist at all. But I do think limiting AP to strictly operating only in its ODD would be a good start. And I think Tesla should implement a proper attention monitoring system immediately.

I would like to see a more rigorous regulatory system for all automated driving systems regardless of SAE level, since we are clearly entering the era of advanced L2, and the first autonomous systems (L3, L4, and eventually even L5) will be coming in a few years. It's not just L2. When automakers start putting L3, L4, or even L5 autonomous systems in their consumer cars, we can't just trust the automakers that the systems are safe enough. There needs to be some oversight.

I would do the following:

1) Have the NHTSA test all automated driving systems for a basic standard of safety and competence. This would not just be hands-on testing but also collecting data and documentation from the automaker. They need to test for phantom braking, lane keeping, responding to stopped vehicles at highway speeds, etc. Issues like frequent phantom braking, getting confused by faded lane lines and hitting crash barriers, hitting stopped fire trucks, or hitting semi-trucks crossing in front, as we saw in the Tesla crashes, are serious safety issues whether the driver is paying attention or not. A system with those kinds of issues should not be allowed on the roads until the automaker fixes them.

2) The NHTSA would also be responsible for classifying the SAE level and determining the proper ODD.
- Based on the SAE level and ODD, the NHTSA would mandate how the system can be used and when. If the system is L0-L3, it must have a robust driver attention system that includes, but is not limited to, a driver-facing camera, and it must be able to reliably detect inattention, distraction, fatigue, drowsiness, and loss of consciousness.
- The NHTSA should also require that the system only be able to be turned on in its designated ODD (see the sketch after this list).

3) The NHTSA should require some basic sensor redundancy based on the SAE level and ODD. To be clear, I am not suggesting that every car be forced to have, like, 30 cameras, 30 lidars, and 30 radars. The sensor suite should match the SAE level and ODD; L2 does not need the same sensor suite as, say, L4. But I think the NHTSA should mandate some basic sensor redundancies, like a rear radar or a forward lidar for example. This would help in those critical safety cases like responding to a stopped vehicle on the highway. Just depending on cameras alone is not good enough. There are too many ways that camera vision can fail: being blinded by the sun, poor visibility, confusing lane markings, or even defaced signs.
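As a sketch of what point 2's "only able to be turned on in its designated ODD" could mean in software (all fields and thresholds here are invented for illustration, not taken from any regulation):

```python
from dataclasses import dataclass

@dataclass
class ODD:
    """Operational design domain as certified by the regulator (illustrative)."""
    road_types: set
    max_speed_mph: float
    daylight_only: bool

@dataclass
class Conditions:
    road_type: str
    speed_mph: float
    is_daylight: bool

def engagement_allowed(odd: ODD, now: Conditions) -> bool:
    """Hard gate: the system simply refuses to engage outside its ODD,
    rather than trusting the driver to use it only where intended."""
    return (now.road_type in odd.road_types
            and now.speed_mph <= odd.max_speed_mph
            and (now.is_daylight or not odd.daylight_only))

# Hypothetical example: certified for divided highways up to 85 mph, day or night.
highway_odd = ODD({"divided_highway"}, 85.0, False)
assert engagement_allowed(highway_odd, Conditions("divided_highway", 65.0, False))
assert not engagement_allowed(highway_odd, Conditions("city_street", 30.0, True))
```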

I think these changes would greatly help with safety but also increase consumer confidence. This way, for example, if a car is labeled as L4 autonomous on the highway by the NHTSA, you would know it was fully validated and tested and it's not just some marketing claim by the automaker.
 
A lot of good thought & info in this discussion, but I end up feeling that the "only solution" is to have the government save us all from ourselves... What happened to personal responsibility?