I agree with this, but it is not about “always.” It’s about understanding the driver’s attentiveness overall and what they are likely doing with their hands. A human can do this easily in nearly all cases, especially if they are told when the car is turning, etc. Even if at a particular instant they can’t tell, overall they will have a very good idea. If the system is not sure for long enough, it issues a nag.
More to the point for this thread: what if the camera sees both of the driver’s hands off the wheel while the car is also detecting consistent wheel torque? That is a really easy way to detect a defeat device in a very short observation window! Where else would the consistent torque be coming from? (Someone should try it, to see whether Tesla takes advantage of simple ways of doing things…)
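The cross-check described above can be sketched in a few lines. This is a hypothetical illustration, not anything Tesla is known to run: the signal names, the torque threshold, and the window length are all assumptions, and real per-frame hand detection and torque sensing would feed the two input lists.

```python
def flags_defeat_device(hands_off_wheel, wheel_torque,
                        torque_threshold=0.1, window=10):
    """Flag a likely steering-wheel defeat device (e.g. a clamped weight).

    hands_off_wheel: per-sample booleans from the cabin camera, True when
        both of the driver's hands are visibly off the wheel (assumed signal).
    wheel_torque: per-sample steering torque readings (assumed units/scale).

    Returns True if, for `window` consecutive samples, torque stays above
    the threshold while both hands are off the wheel -- consistent torque
    with no hands on the wheel has to be coming from somewhere.
    """
    if len(hands_off_wheel) != len(wheel_torque):
        raise ValueError("signals must be sampled together")
    streak = 0
    for hands_off, torque in zip(hands_off_wheel, wheel_torque):
        if hands_off and abs(torque) >= torque_threshold:
            streak += 1
            if streak >= window:
                return True
        else:
            # Any sample where hands are on the wheel, or torque drops out,
            # resets the suspicious streak.
            streak = 0
    return False
```

For example, twelve consecutive samples of hands-off-wheel plus steady torque would trip the flag, while the same torque with hands on the wheel, or hands off with no torque, would not.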
I encourage watching the long videos here and trying to gauge how you would grade the driver at any point in time, and how accurate you would be. I think the camera can give excellent information on how attentive the driver is, given a sophisticated enough system to figure out what is happening (like a human brain). It’s much easier than using a still image (which also is not that hard, usually, though it can be).
Could there be better camera positions (designed for this task)? Sure. But raw image data (at least in daytime) seems good enough to me as long as the system is smart enough.
Tesla Model 3 and Model Y come with an integrated cabin-facing camera; Tesla hacker green has unlocked the recorded clips (videos, details).