That is far from the truth, because in Autopilot mode the driver can always instantly brake, steer, and accelerate. The human driver is always in full control.
Whom have we sacrificed? Has anybody been killed by the autopilot? Or have people killed themselves by misusing the autopilot? These are two very different things.
I don't think you have read up on the "Swiss cheese model" of safety. The idea is that you have different layers (slices of cheese), each with smaller or bigger risks (holes in the cheese): human, technical, weather, etc. Humans make mistakes, and that is hard to reduce. A catastrophe is rarely the result of one single action; it is the combination of many different risks lining up at the same time. Therefore, by identifying risks in the other layers, you can design them to mitigate the risk, so that human mistakes (or misuse) do not result in a catastrophe.
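To make that concrete: if the layers are roughly independent, the probability of a catastrophe is the product of each layer's failure probability, so defeating one layer multiplies the risk carried by the rest. A toy sketch; the layer names and numbers below are invented for illustration, not real figures:

```python
from math import prod

# Hypothetical failure probabilities per layer ("holes in the cheese").
# These numbers are made up for the example, not real Tesla/industry data.
layers = {
    "driver_attention": 0.05,    # driver is distracted at the critical moment
    "hands_on_detection": 0.20,  # nag system fails to catch the distraction
    "perception": 0.01,          # AP misreads the scene (wrong lane line, missed obstacle)
    "emergency_braking": 0.10,   # AEB fails to catch the resulting error
}

# Assuming independence, a catastrophe requires every layer to fail at once.
print(f"P(all layers fail) = {prod(layers.values()):.2e}")  # 1.00e-05

# Defeating one layer (e.g. tricking the nag with a water bottle)
# sets its failure probability to 1 and removes that slice of cheese.
layers["hands_on_detection"] = 1.0
print(f"P with defeated hands-on check = {prod(layers.values()):.2e}")  # 5.00e-05
```

Misusing AP (tricking the nag, sleeping) doesn't cause a crash by itself, but it removes one slice of cheese, so every remaining layer has to hold.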
We know people have fallen asleep in a Tesla on AP and survived to tell the story. Add one more incident at that moment and the outcome could have been the opposite.
Concretely, here is what Tesla could change in AP to reduce the risk of disaster:
- Tesla AP's hands-on detection can be tricked with an apple or a water bottle (a capacitive sensor that responds only to actual hand contact could prevent this).
- The old nags were way too infrequent (they fixed this, but they could be even more frequent).
- The engagement chime (pling-plong) says "the car is in control" (it could instead make no sound, with a slow onset of steering assist).
- The disengagement chime (plong-pling) says "the driver has control" (it could be totally silent).
- If you try to steer mildly, the car pushes the steering wheel in the other direction (a blended driver/system steering approach could mitigate this).
- If you try to steer harder, the car firmly resists for a split second, making the vehicle jerk a bit (it could be designed to never resist steering-wheel input).
- The car tries to steer through turns it is not yet designed for (other cars disengage much earlier, or could even disengage without warning rather than attempt the turn).
- It allows engagement on roads unsuited to the system's capabilities (it could be strictly limited to highways only).
- It will go really fast even when the radar's range is too short to handle an obstacle in front (a speed limit tied to sensor range could be enforced; see the rough sketch after this list).
- It will keep going straight if you put on the blinker (it could disengage while the blinker is on, letting the driver change lanes manually; plain AP only).
- EAP will attempt automatic lane changes, which is cool but slow and implies "the car is in total control" (it could let the driver do manual lane changes).
- It will not warn when there is a conflict, i.e. when it picks the wrong lane marker or a crack to follow (a sensitivity threshold on "system confidence" could give frequent warnings).
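On the sensor-range point above, the arithmetic is worth spelling out. A back-of-the-envelope sketch, where the 160 m range, 0.5 s reaction time, and 6 m/s² deceleration are all illustrative assumptions rather than Tesla specs:

```python
from math import sqrt

SENSOR_RANGE_M = 160.0   # assumed effective forward detection range
REACTION_TIME_S = 0.5    # assumed system reaction/latency
DECEL_MPS2 = 6.0         # assumed emergency deceleration (~0.6 g)

def stopping_distance(v_mps: float) -> float:
    """Distance covered during the reaction time plus braking to a stop."""
    return v_mps * REACTION_TIME_S + v_mps ** 2 / (2 * DECEL_MPS2)

def max_safe_speed(range_m: float) -> float:
    """Largest speed v with stopping_distance(v) <= range_m.

    Positive root of v^2/(2a) + v*t - d = 0.
    """
    a, t, d = DECEL_MPS2, REACTION_TIME_S, range_m
    return a * (-t + sqrt(t ** 2 + 2 * d / a))

v = max_safe_speed(SENSOR_RANGE_M)
print(f"max safe speed ~ {v:.1f} m/s (~{v * 3.6:.0f} km/h)")
print(f"stopping distance at 150 km/h: {stopping_distance(150 / 3.6):.0f} m")
```

With these numbers the crossover is around 147 km/h, and at 150 km/h the stopping distance (~166 m) already exceeds the assumed sensor range, so an enforced cap in that region would be a real mitigation rather than a nanny feature.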
Then you have the FSD narrative, the old 2016 video, robotaxi promises, the fan crowd, Tesla's autonomy leadership, etc., all of which make owners even more complacent, adding to the risk. Making a bigger UI distinction between AP/EAP and FSD could be a smart move.