Level 3 autonomy, the first truly autonomous stage, must give the user at least 5 seconds to take back control. AP can disengage and hand over control instantly because it's an L2 semi-autonomous system.
I still argue that 5s is not always enough for an inattentive driver to become attentive and regain situational awareness. 95% of the time it is, but not 100%. Remember that if this happens, presumably something unexpected is going on that the car can't deal with itself -- some kind of complex situation. And you've had your nose in a book, eyes off the road; maybe you've even fallen asleep, because an L3 system doesn't have steering-wheel nags. Or maybe your eyes are on the road but you're under "road hypnosis". The point is, you have no idea what's happening outside the car. Maybe you've even had a heart attack.
However, the long-range camera can resolve out to 250m, which is enough for over 5 seconds of warning at most reasonable speeds -- longer than 10 at the speeds most people travel, especially in suburban or urban areas. The AP2.5 radar also resolves to 250m, vs 160m in prior AP.
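To make the arithmetic explicit, here is a quick sketch of how many seconds of warning a 250m sensing range buys at a constant speed (the range figure is from the claim above; the speed values are just illustrative):

```python
SENSOR_RANGE_M = 250  # claimed long-range camera / AP2.5 radar range

def warning_time_s(speed_kmh: float) -> float:
    """Time to cover the full sensing range at a constant speed."""
    return SENSOR_RANGE_M / (speed_kmh / 3.6)  # km/h -> m/s, then range/speed

for kmh in (50, 80, 100, 130, 160):
    print(f"{kmh:>3} km/h -> {warning_time_s(kmh):4.1f} s")
```

Even at 160 km/h this gives about 5.6 s, and at typical suburban speeds it's well over 10 s -- consistent with the numbers above, but only if the hazard really appears at the edge of the sensing range.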
What if you're cresting a hill or going around a bend? None of the sensors on board can see around corners or over hills. The car will have to react to whatever it sees when it crests the hill or rounds the bend, and it may have less than a second to do so -- not enough time to alert the driver. The sensors can't even see up a hill if you're headed down into a valley, and the slope doesn't need to be steep to blind them completely: a very gentle downslope followed by a very gentle upslope will put the road beyond the valley out of range of both cameras and lidar. A sag is just as bad as a crest for the car, but not for humans -- we can swivel our cameras.
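A rough geometric sketch shows how little it takes. Modeling a crest as a parabola, the farthest road point a sensor at height h can see is about sqrt(2*R*h), where R is the crest's radius of curvature. (The curve model is standard road geometry; the sensor height and grades below are illustrative assumptions, not Tesla specs.)

```python
import math

def crest_sight_distance(curve_radius_m: float, sensor_height_m: float) -> float:
    """Distance at which a line of sight grazes a parabolic crest.

    Road modeled as y = -x^2 / (2R) with the sensor at height h over
    the high point; the tangent ray touches the road at x = sqrt(2*R*h),
    beyond which the surface is hidden.
    """
    return math.sqrt(2.0 * curve_radius_m * sensor_height_m)

# A "gentle" crest: grade changes by 2% over 200 m -> R = 200 / 0.02 = 10000 m.
# Assume a camera mounted about 1.4 m up (illustrative figure).
print(round(crest_sight_distance(10_000, 1.4)))  # ~167 m -- short of 250 m
```

So even a crest gentle enough that a driver barely notices it can cut forward visibility to well under the nominal 250m range, and a sag hides the road the same way.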
L3 systems are hard. I would argue that even if an L3 system sees something it can't handle at the edge of its sensing range (like Tesla's long-range forward camera), where in principle it has 5s for the driver to become attentive and take over, it basically needs to handle that situation by immediately starting to pull over and come to a stop, because once you allow the driver to be inattentive you can't rely on them becoming attentive again. The system must be able to come to a safe stop itself, and it might need to start executing that right away. That's not going to be a pleasant experience for the occupants.

But also, things can happen much closer than the boundary of your sensing range; they can happen right in front of you. The system needs to react to that autonomously, because there will be no time to alert the driver. This level of performance is possibly within reach for restricted-access highway driving in the near future, but local roads? Forget it. (And Elon has implied that "FSD" will handle local roads, without a driver in the seat even, never mind attentive or not.)
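The takeover logic being argued for can be sketched as a tiny decision function. Everything here (the class names, the 5s budget, the action labels) is hypothetical scaffolding to make the argument concrete, not any real system's API:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Action(Enum):
    CONTINUE = auto()
    ALERT_AND_BEGIN_MRM = auto()   # minimum-risk maneuver: pull over and stop
    EMERGENCY_AUTONOMOUS = auto()  # no time for any handover at all

@dataclass
class Hazard:
    distance_m: float  # distance at which the unhandled situation was detected

def decide(hazard: Optional[Hazard], speed_mps: float,
           takeover_budget_s: float = 5.0) -> Action:
    """Decide the system's response to a situation it can't handle.

    Even when the hazard sits at the edge of sensing range, the system
    begins the minimum-risk maneuver immediately while alerting the
    driver, because an inattentive driver can't be relied on to respond.
    """
    if hazard is None:
        return Action.CONTINUE
    time_to_hazard = hazard.distance_m / max(speed_mps, 0.1)
    if time_to_hazard < takeover_budget_s:
        # Hazard is inside the handover window: the system must act alone.
        return Action.EMERGENCY_AUTONOMOUS
    # Hazard at/near the sensing limit: alert, but start pulling over now.
    return Action.ALERT_AND_BEGIN_MRM
```

Note there is no branch that merely alerts and waits: in this model the driver's takeover, if it comes, interrupts a maneuver already in progress.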
L3 doesn't make sense. By the time we have an L3 system I would trust with my life, we will have L4 systems.
Now, on the other hand, a very good L2 system that actually enforces driver attention in a reasonable way (ahem) can save a lot of lives while we wait for L4. I think Autopilot is already preventing a lot of accidents as an L2 system. It's when people start treating it as an L3 system that it starts killing people.