Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla Working on Driver Monitoring System

The basic problem with this discussion is how the various driving modes rank safety-wise:

(a) Manually driving the car
(b) FSD/AP with driver paying attention
(c) FSD/AP with driver distracted/inattentive

At some point, (b) is going to be safer than (a) (so I claim, though I don't know when). But what about (c)? All this discussion basically assumes that (a) is safer than (c). If not, then (as has been noted) disabling FSD/AP when the driver is not paying attention is detrimental to overall safety. Why switch from (c) to (a) when (c) is safer? This isn't an argument about SAE levels; it's just about the measured safety of the system, regardless of declared capabilities.

Humans are, on average, crappy drivers. Statistically, we all think we are above average, which of course is a contradiction. It really won't take much for a car to, overall, be better than most humans. At that point, driver attention becomes moot.
 
It really wont take much for a car to, overall, be better than most humans.
That's actually an open question: exactly how much it will take in the general case. I think it will happen eventually, but saying "it won't take much" understates the challenge. (And I'm assuming arbitrary amounts of hardware here.)

Off topic here though. Sorry.

More on topic:
You did not include the case of a human driver with safety and driver assist features backed up by driver monitoring. That might be very safe.
 
The basic problem with this discussion is how the various driving modes rank safety-wise:

(a) Manually driving the car
(b) FSD/AP with driver paying attention
(c) FSD/AP with driver distracted/inattentive

At some point, (b) is going to be safer than (a) (so I claim, though I don't know when).

Honestly I think it already is.

Using NoA on a long highway trip, for example, I end up [B]far[/B] less tired and distracted as the hours add up compared to manually driving entirely by myself, and it provides various active safety features on top of that (I've had my car move over on its own in a lane as someone next to me came over the line before I even noticed it- thanks, ultrasonics!)

But what about (c)? All this discussion basically assumes that (a) is safer than (c). If not, then (as has been noted) disabling FSD/AP when the driver is not paying attention is detrimental to overall safety. Why switch from (c) to (a) when (c) is safer? This isn't an argument about SAE levels; it's just about the measured safety of the system, regardless of declared capabilities.

Humans are, on average, crappy drivers. Statistically, we all think we are above average, which of course is a contradiction. It really won't take much for a car to, overall, be better than most humans. At that point, driver attention becomes moot.


One problem here is that right now, when (c) fails, it fails in a way a typical human finds unbelievably stupid and dangerous.

Like running straight under a sideways truck the system could clearly see from a long way off.

There are technological reasons that happens right now. But most folks aren't interested in hearing them.

So a system that's only a little better, if it still kills people stupidly, is not going to fly.

Now, if you can prove with data that (c) is many times safer than humans, like 5-10x safer or more, it's a much easier lift to get people over those stupid accidents.

It's not yet, though... (and I'm not sure it CAN be- in the sense that once it's that good, it ought to be able to drive without the human at all)


More on topic:
You did not include the case of a human driver with safety and driver assist features backed up by driver monitoring. That might be very safe.


The EU certainly thinks so- hence requiring driver monitoring systems in new vehicles by the middle of next year.

Remains to be seen if the US ever gets around to doing so.
 
One problem here is that right now, when (c) fails, it fails in a way a typical human finds unbelievably stupid and dangerous.

Like running straight under a sideways truck the system could clearly see from a long way off.

Completely agree; self-driving systems in general are going to exhibit a very different accident pattern. They won't fail through inattention like humans, but instead will do insane things that make no sense, right out of nowhere. Oh wait, yeah, humans do that too.

Ultimately, though, my feeling is it's only a matter of time until the cars beat out humans even when it comes to the stupid stuff.
 
That's actually an open question: exactly how much it will take in the general case. I think it will happen eventually, but saying "it won't take much" understates the challenge. (And I'm assuming arbitrary amounts of hardware here.)

There are two ways to look at this: one is accident avoidance, the other is accident mitigation. Humans are overall good at avoidance; we know the rules of the road and can navigate through unusual situations safely (a blocked road, for example). But we're not good at mitigation; once a bad thing starts to happen, we panic, or freeze, or over-react. The result is a messy accident.

A self-driving car will tend to be a different mix. Generally, it's going to be good at mitigation simply because its response time will be so much better: faster braking, computing just enough braking to avoid a collision, or swerving around a sudden obstacle. Stuff humans get badly wrong so often. But something as simple as a garbage can in the road might well cause the car no end of trouble. It won't hit it, but it might just stop and not know what to do.

As others have noted, what this means is that the car will most of the time do much better than humans, but then for no apparent reason make dumb mistakes. Mostly those mistakes will be annoying ("why did it need to stop there???"), sometimes deadly. But I stand by my point that, humans being as bad as they are, the number of deadly mistakes will go down rapidly. Particularly as, once an accident scenario is identified, it can be mitigated rapidly and the fix deployed to an entire fleet, something impossible to do with humans.
 
They're using this now, apparently, for driver monitoring, if you agree to enable it. HW3 only, though; HW2.5 isn't supported.

Hmm, I wonder if they'll upgrade us to HW3 with AP only IF we agree to give up privacy. I hate the nag, and if GM/Caddy can do eye monitoring, so can TSLA.

OT: For the record, I waited till the very last moment for HW3 per Elon's tweet and still got the tax credit for my delivery (6/30/19). Multiple SA emails confirmed that it would be HW3, and of course it's HW2.5.
 
Even an "appropriately located" camera (I guess you mean something like GM's Super Cruise?) is far from foolproof. Here's a case that would fool it:

Driver eyes open? Yes.
Driver looks forward? Yes.
Conclusion: driver is paying attention.

I was just saying that there are more appropriate places for the camera if the intent is driver monitoring (not what this camera was intended for).

But I would also argue that since a human could easily detect that this driver was playing a video game (unless he hid the controller from view, and even then a human looking from behind the steering wheel could figure it out), an appropriately powerful and capable vision system could also detect it, so this particular activity would be easy to catch. In theory. Eventually. I sure hope monitoring systems are looking for more than the most basic of information. They might start that way, of course, but if you have perception capabilities equivalent to a human (a must for self-driving - the system needs to be able to read head movements, mouth movements, hand signals, etc. to be as safe and capable as a human, since these are things human drivers do every day), you would presumably use them.
 
But I would also argue that since a human could easily detect that this driver was playing a video game (unless he hid the controller from view and even then a human looking from behind the steering wheel could figure it out), an appropriately powerful and capable vision system could also detect that
You keep forgetting that people are really inventive. A determined person would still find a way to do what they want and fool a stupid computer program.

perception capabilities equivalent to a human (a must for self-driving - needs to be able to read head movements, mouth movements, hand signals, etc to be as safe and capable as a human! - these are things human drivers do every day when driving),
This statement is contradictory. If you have a perfect general artificial intelligence, it could just concentrate on driving instead of watching the driver. If you must watch the driver, the system is by definition not smart enough.
 
You keep forgetting that people are really inventive. A determined person would still find a way to do what they want and fool a stupid computer program.

No, I am not forgetting that. My original point, again, was just that the positioning could be better (as I think we all realize), not that it could not be defeated.

Of course nearly any system can be defeated, but I don't see that as nearly as big a deal as not being able to deal with common use cases. There are going to be accidents due to people defeating the systems! And that's OK. It's the case for pretty much anything.

If you have a perfect general artificial intelligence, it could just concentrate on driving instead of watching the driver. If you must watch the driver, the system is by definition not smart enough.

Not really. I wasn't claiming perfect general artificial intelligence was required. There's a middle ground too. My point was that it didn't seem too difficult to detect that particular case. And I am pretty sure you would be able to easily detect that case well before you were capable of full self-driving.

I mean, the Tesla camera already seems to be trying to detect phone use (though it's not clear how good it is at it). So that seems relatively "easy."

I guess I am just saying that the well-constrained monitoring problem seems a lot easier to solve than the autonomous driving problem (and will typically, though not always, have a substantially lower cost in the cases where it is defeated or fails).
 
What is it with this obsessive need to override safety systems and prove you are smarter than a passive sensor?

As a mechanical engineer, I am aware of how we design systems with a certain set of assumptions about the environment in which they will be used. If the system is designed to require an attentive driver, then that is what must happen. The reason it requires an attentive driver is that situations are going to happen where you will have x seconds to respond or you will die/kill someone. If someone is so clever that they can fool the system and break that set of assumptions, then they will die/kill someone. Well done.

Then the outcome is that a new set of sensors and safety systems has to be developed to prevent further misuse, and the cycle continues. Is it so difficult to just obey the rules? Yes, I already know the answer to that - for certain types of people.
 
Even an "appropriately located" camera (I guess you mean something like GM's Super Cruise?) is far from foolproof. Here's a case that would fool it:

Driver eyes open? Yes.
Driver looks forward? Yes.
Conclusion: driver is paying attention.

Could be partially fixed with a new category:

* Both hands visible

Other categories might be:

* Driver Is Alive <-- looks for signs of life, like blinking, shifting position in the seat
* Too Much Seat Visible <-- iPad stuck on the headrest with a looped video of someone paying attention? Nope!
* Driver Cannot See Road Ahead <-- seat is reclined too far back; driver is a child

The great thing about this solution is that new categories can be added to address common issues, and when all else fails, they still have the torque sensor to fall back on.

Of course, nothing will stop those most determined. The best hack would probably be to simply replace the (unencrypted?) cam feed with your own.
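The category idea above amounts to a simple checklist where any failing check falls back to the torque-sensor nag. A minimal sketch of that logic, assuming hypothetical detector outputs from the cabin camera (none of this is actual Tesla code):

```python
from dataclasses import dataclass

@dataclass
class CabinObservation:
    """Hypothetical per-frame outputs from a cabin-camera vision model."""
    eyes_open: bool
    gaze_forward: bool
    both_hands_visible: bool
    signs_of_life: bool          # blinking, shifting position in the seat
    too_much_seat_visible: bool  # e.g. a screen propped on the headrest
    can_see_road: bool           # seat not reclined too far; not a child

def driver_attentive(obs: CabinObservation) -> bool:
    """Every category must pass; any failure triggers the nag fallback."""
    return (
        obs.eyes_open
        and obs.gaze_forward
        and obs.both_hands_visible
        and obs.signs_of_life
        and not obs.too_much_seat_visible
        and obs.can_see_road
    )

# The video-game case from earlier in the thread: eyes open, looking
# forward, but hands on a controller out of the camera's view.
gamer = CabinObservation(True, True, False, True, False, True)
print(driver_attentive(gamer))  # False: the hands-visible check catches it
```

The appeal of a checklist like this is exactly what the post says: new categories can be bolted on as new defeat tricks show up, without redesigning the system.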
 
I guess I am just saying that the well-constrained monitoring problem seems a lot easier to solve (and will typically, though not always, have a substantially lower cost in the cases where it is defeated or fails) than the autonomous driving problem
Given that you freely admit any such system could be defeated relatively easily, why even bother improving it?

Personally, I think current applications of DMS are wrong to a big degree. What I hope they'd do is detect a distracted driver when being distracted is unsafe - in particular, when ADAS is not enabled. Enabling ADAS in those cases seems to be a net benefit (in addition to trying to bring the driver back into compliance, of course).

What is it with this obsessive need to override safety systems and prove you are smarter than a passive sensor?
People want to get distracted. And they will get distracted. But they want to get distracted in a "safe" manner, which means on ADAS. And if ADAS refuses to engage when people want to be distracted, you get one of two outcomes: people will drive distracted not on ADAS, or they'll try to find ways to fool ADAS into thinking they are not distracted.

Could be partially fixed with a new category:
But what's the end goal? To drive distracted drivers to disable driver assist? Really? Is this a worthwhile outcome? Safe?
 
People prefer not to die, mostly.

We can hopefully agree on that much at least?

A lot of folks think they can turn on AP and then play on their phone "safely" for long periods.

This isn't true, of course (indeed, the few AP deaths we know of appear to have ALL involved a heavily distracted driver who had plenty of time to avoid the accident if they'd been looking at the road in front of them).


So the idea is: throw a bunch of nags at a heavily distracted driver, and if those keep being ignored, disable the driver aids. Since drivers don't wish to die, and no longer have the crutch they THINK makes it safe to be heavily distracted, they will stop being heavily distracted.


That's the idea, anyway.


Now... is it safer for them to drive manually, "only" looking down at their phone for a few seconds at a time and then looking at the road for a few seconds?

Or is it safer to drive on an assist system while going minutes without looking at the road?


Probably depends on how good the assist system is.
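The escalating-nag idea described above could be sketched roughly like this. The thresholds, strike count, and action names are all invented for illustration; they are not Tesla's actual values:

```python
# Hypothetical sketch of the escalating-nag policy: short distraction
# gets a visual nag, longer distraction an alarm, and enough ignored
# warnings remove the driver aid for the rest of the drive.

def nag_action(seconds_distracted: float, ignored_nags: int) -> str:
    """Map the current distraction state to a response (made-up thresholds)."""
    if ignored_nags >= 3:
        # The crutch is removed: the driver must drive manually.
        return "disable_assist_for_drive"
    if seconds_distracted > 10:
        return "audible_alarm"
    if seconds_distracted > 4:
        return "visual_nag"
    return "none"

# A brief glance at the phone draws no response; a driver who keeps
# ignoring warnings loses the assist entirely.
print(nag_action(2, 0))    # "none"
print(nag_action(6, 0))    # "visual_nag"
print(nag_action(12, 1))   # "audible_alarm"
print(nag_action(12, 3))   # "disable_assist_for_drive"
```

Whether removing the assist at the last step makes the drive safer is exactly the open question in the post above: it depends on how good the assist system is.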
 
Given that you freely admit any such system could be defeated relatively easily, why even bother improving it?

Because currently some such systems don't provide the basic functionality needed to detect distraction. Defeats will cause accidents (some of which would happen anyway), but a high-functioning system will probably prevent a lot more.

Personally, I think current applications of DMS are wrong to a big degree. What I hope they'd do is detect a distracted driver when being distracted is unsafe. In particular that means when ADAS is not enabled. Enabling ADAS in those cases seems to be a net benefit (in addition to trying to bring the driver back into compliance, of course).

Yeah, that seems reasonable. Something like having more aggressive lane keeping, etc., in those cases makes sense. It's kind of a way of determining driver intent. If the driver is intently steering off the road, perhaps they mean to (to avoid some obstacle): no intervention (perhaps... at least wait and see). If the driver is clearly asleep and drifting off the road, best to apply aggressive lane keeping and a warning.

But in all these cases you need a system that can detect driver distraction and a variety of other states. Hence the need for improvement. I think of it as a human observing the driver and providing appropriate warnings and assistance in case of problems. That's the goal (and it's easier to achieve than a fully autonomous driving system).

In a way, it's an ADAS!
 