I understand the desire for "leave me alone and let me drive", but I also appreciate the intent behind monitoring drivers for attentiveness.
The link below, dated February 2022, speaks to driver attentiveness at different times of day among a few different car models: a Tesla Model 3, Cadillac Escalade, Subaru Forester, and Hyundai Santa Fe. Some of the parameters from the test are these (direct quotes):
Three 2021 model year vehicles, the Cadillac Escalade, Subaru Forester, and Hyundai Santa Fe, and the 2020 Tesla Model 3 were tested by AAA over two days for 10-minute segments along a 24-mile loop of a limited-access toll road in California.
The Escalade, equipped with “Super Cruise,” and the Forester with “EyeSight” and “DriverFocus” are categorized by AAA as direct ADAS with driver-facing infrared cameras. The Santa Fe, with “Highway Driving Assist,” and no driver-facing camera as well as the Model 3 with “Autopilot” and a cabin camera not used for driver monitoring are categorized as indirect ADAS.
This is the link to the study:
AAA study finds all ADAS-equipped vehicles should have driver-facing cameras for distraction detection
The interesting part of the study, a joint effort initiated by AAA, is also commented upon (direct quote from the study):
A Feb. 1 Consumer Reports article states that it along with the National Transportation Safety Board, the Insurance Institute for Highway Safety (IIHS), and the European New Car Assessment Program have found that when automation, such as adaptive cruise control and lane assist, are engaged at the same time drivers are more likely to not pay attention to their surroundings. About 50% of new vehicle models allow drivers to engage adaptive cruise control and lane-centering at the same time, according to an analysis by Consumer Reports last fall. Because of that correlation, they believe ADAS features should always be paired with driver monitoring using computers and cameras to keep an eye on attentiveness.
The point of the study, apparently, is that when drivers engage any type of automated driving assistance they tend to pay less attention and are less aware of immediate traffic situations, many of which, as the study points out, may require immediate intervention by a real person driving. For that reason the study concludes that some type of driver monitoring system is appropriate.
In 2017 my wife and I visited the Fremont Tesla site and I had a very interesting 20-minute conversation with a Tesla employee. I learned that he was a dual-degreed PhD engineer/Doctor of Divinity, and his official title was "ethicist". His job at Tesla was to "be a neutral arbitrator for the 'greater good'". I asked him to describe his job. He told me that Tesla hired him, and many others on the team covering a wide range of specialties, to determine "programming rules based upon all aspects of the greater good and responsibility for protecting Tesla's customers".
For example, in a situation where the Tesla automated driving system determines that an accident is unavoidable, what is the correct response? And how will it, if at all, change as technology changes? He explained it like this: the Tesla you are driving is about to be in an accident with serious potential for harm to the occupants of the Tesla AND to the occupants of other cars. Should Tesla minimize the potential for harm to the Tesla owner, or attempt to minimize overall potential harm across everyone involved? For example, what if the accident would likely be a multi-car event, and the Tesla cameras recognize that the cars involved are smaller, less structurally sound vehicles? What is the priority? Keep in mind all of these decisions become part of the code/programming for FSD/Autopilot and other Tesla products.
Taking it a step further, what happens when FSD capability improves so that car-to-car communication in real time is possible? We are in the early stages of FSD in the industry, with Mercedes-Benz certified at Level 3 (Conditional Automation) using "Drive Pilot". What happens when cars routinely provide Level 4, High Automation (casually defined as "the car drives, you ride"), and eventually Level 5, Full Automation?
If you are a solo Tesla passenger and the car you are about to impact is occupied by a large family, or by elderly passengers, does that affect the way the car should act? He went on to describe an incredible array of circumstances, all of which need to be considered, evaluated, and either incorporated into FSD or not.
He emphasized that he had no solution at the time, nor was one imminent, but I was beyond impressed that Tesla assembled and hired such a team to evaluate the way they should view and prioritize these aspects.
So, to finally answer the OP's question, I guess there is no single answer, just a probability distribution of outcomes, some of which will invariably involve driver attentiveness and response times, feeding into a flexible "risk minimization" evaluation of possible outcomes. And my assumption is that whatever is initially adopted will morph over time to include newer technology that allows for deeper evaluation of risk and attention.
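To make the "risk minimization over a probability distribution of outcomes" idea concrete, here is a purely hypothetical toy sketch. Nothing here reflects Tesla's actual code or any real ADAS logic; the outcomes, harm scores, and weights are made up for illustration. The point is just that the ethicist's dilemma (protect the owner vs. minimize overall harm) shows up as a single weighting choice in an expected-harm calculation:

```python
# Toy illustration only -- NOT Tesla's actual logic or any real ADAS.
# Hypothetical sketch of ranking candidate maneuvers by expected harm.
from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float        # chance this outcome occurs for a given maneuver
    harm_to_occupants: float  # made-up harm score for the car's own occupants
    harm_to_others: float     # made-up harm score for everyone else involved

def expected_harm(outcomes, occupant_weight=1.0, others_weight=1.0):
    """Probability-weighted harm for one candidate maneuver.

    The weights encode the ethical choice: occupant_weight > others_weight
    prioritizes the owner; equal weights minimize overall harm.
    """
    return sum(
        o.probability * (occupant_weight * o.harm_to_occupants
                         + others_weight * o.harm_to_others)
        for o in outcomes
    )

def choose_maneuver(candidates, **weights):
    """Pick the (name, outcomes) pair with the lowest expected harm."""
    return min(candidates, key=lambda item: expected_harm(item[1], **weights))

# Hypothetical scenario: brake hard vs. swerve toward a smaller vehicle.
brake = [Outcome(0.7, 3.0, 2.0), Outcome(0.3, 6.0, 4.0)]
swerve = [Outcome(0.9, 1.0, 7.0), Outcome(0.1, 2.0, 1.0)]
candidates = [("brake", brake), ("swerve", swerve)]

overall_best = choose_maneuver(candidates)                     # equal weights
owner_best = choose_maneuver(candidates, others_weight=0.0)    # owner only
```

With these invented numbers, weighting everyone equally favors braking, while weighting only the owner's harm favors swerving, which is exactly the tension the ethicist described: the "right" maneuver changes with the weighting rule, and that rule is a policy decision, not an engineering one.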
Incidentally, it's interesting that most manufacturers include some type of driver monitoring system as a default in their self-driving features, whether in an EV or an ICE vehicle.