That's roughly what's keeping me from immediately jumping all over the attendant here. I don't think "driver" is even the proper word for her.
I still have a problem with what she appeared to be doing. I don't understand how, given how many troubles the Uber program was having, she wasn't more attentive.
This could run deeper than the attendant, to something systemic. Perhaps she'd "learned" that there weren't any problems until other vehicles came within a certain range, and so assessed that this was a "low alert" area.
It's even possible that Uber program management was pushing to get their "interventions" down. That could create a situation that encouraged attendants to stay further away from the wheel so they didn't create false alarms by intervening when it wasn't truly needed. That sort of thing can be subtle: excessive dressing-down of vigilant attendants, bonuses for attendants with low intervention counts, or outright firing of people with higher ones. Word gets around, and soon you've got nothing but attendants who pay poor attention and are air drumming as the car blows through seconds-old red lights.
This is speculation, but I'm hoping the NTSB does a very thorough inspection of the whole program here, looking for root causes, so this stuff can get addressed.
Uber has had issues with inattentive drivers, and in the past they did fire at least two safety drivers for it. One of those was the air drummer.
But in both cases it was because someone else saw them. They didn't seem to be reviewing the logs to verify that the safety driver was actually paying attention. So it looks like a systemic problem in how Uber operated its program.
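Just to make concrete the kind of check I mean: even a crude automated pass over driver-monitoring logs would surface inattention without waiting for someone to happen to spot it. Here's a minimal sketch in Python; the log fields, the driver-facing camera classifier, and the threshold are all invented for illustration, not anything Uber actually had.

```python
# Hypothetical sketch, not Uber's actual tooling: flag stretches where the
# safety driver's eyes were off the road while the self-driving system was
# engaged. Field names and the 2-second threshold are made up.
from dataclasses import dataclass

@dataclass
class LogRecord:
    timestamp: float      # seconds since the start of the shift
    autonomous: bool      # was the self-driving system engaged?
    eyes_on_road: bool    # output of an assumed driver-facing camera classifier

def flag_inattention(records, max_gap_s=2.0):
    """Return (start, end) spans where eyes were off the road longer than max_gap_s."""
    spans, gap_start, last_t = [], None, None
    for rec in records:
        distracted = rec.autonomous and not rec.eyes_on_road
        if distracted and gap_start is None:
            gap_start = rec.timestamp
        elif not distracted and gap_start is not None:
            if rec.timestamp - gap_start > max_gap_s:
                spans.append((gap_start, rec.timestamp))
            gap_start = None
        last_t = rec.timestamp
    if gap_start is not None and last_t - gap_start > max_gap_s:
        spans.append((gap_start, last_t))  # close out a gap that runs to the end of the log
    return spans
```

A routine report of those spans per driver could have flagged the kind of phone use we're talking about long before a crash did.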
The industry as a whole is well aware that a human safety driver is problematic, so there's no excuse for not implementing checks. It's a weak point that's accepted because I don't believe there is any alternative. You can't rely 100% on simulation because it doesn't recreate every real-life situation. This fatal accident is the exact kind of thing that likely wouldn't have happened in a simulation, mostly because of how the bike and person appeared. The way they looked is likely something the vision neural net was never trained on.
Running in shadow mode on real roads does work for some situations (like this one), but not all. It also can't recreate situations that only happen when a human driver has to deal with a machine that drives very differently from most humans. At some point along the way you need to put the autonomous car on the road with a safety driver behind the wheel.
The culture of loose regulation is likely creating a situation where autonomous cars are being put on the road well before they're ready for it, with too much reliance on the safety driver. While simulation can't recreate every real-life situation, it can create enough of them to evaluate whether a self-driving car company should get a license to operate on the road.
I do think Arizona should take some of the blame for this accident, because of how easily it grants a license to operate.
As far as I know they don't require releasing data like intervention counts. But in some ways that requirement in California has caused an industry-wide obsession with that number.
I share your concern that using intervention counts as a measure of quality is harming the safety of these programs.
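It's worth spelling out why that single number is so easy to lean on, and to game: the headline figure is just miles divided by interventions, so anything that discourages "unnecessary" takeovers, or reclassifies them after the fact, inflates it directly. A toy calculation with made-up numbers:

```python
# Toy illustration with made-up numbers: the same shift looks five times
# better if only "truly necessary" takeovers get counted as interventions.
miles = 10_000
all_takeovers = 50        # every time the attendant actually grabbed the wheel
counted_takeovers = 10    # only the ones later deemed "truly necessary"

print(miles / all_takeovers)       # 200 miles per intervention
print(miles / counted_takeovers)   # 1,000 miles per intervention, same car, same drives
```

Nothing about the car improved between those two lines; only the bookkeeping did.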
In this case I don't think the safety driver had the qualifications to be a safety driver. To me it seems like Uber was hiring generic drivers because they only cared about miles, and they wanted those miles as cheaply as possible. This driver didn't care about driving, or about what the car was doing. Her attitude seemed to be "Hey, I'm being paid just to sit here on my phone".
The road is supposed to be for driving from point A to point B in a safe and prudent manner.
Now, sure, that's a bit laughable, as that's hardly ever the case in real life. Everyone seems to have different agendas.
But autonomous cars aren't supposed to be adding to the problem. They're supposed to be removing the messy human component from the road for those of us who just want to get from point A to point B, without any need to express ourselves along the way.
With Waymo/Cruise, the safety drivers aren't taking over when they know the car is creating an unexpected situation on the road. A lot of the "no-fault" accidents happened because the cars were acting in an unexpected manner.
I think we have to face the fact that the lack of regulatory oversight has created a messy situation.
We have Tesla selling an FSD option on a car where there is no pathway to it, where we're totally blind about how that can be achieved.
With L2 cars we have people overly confident in systems where there is no REAL measurement of what situations any of them can handle. For example, I have no idea whether my AP1 car has a better AEB system than a Subaru EyeSight system. The only thing we really have is user stories, which are all over the place.
This Uber vehicle FAILED what should be a basic test for an L2 car. It failed so badly that I imagine Volvo's City Safety feature would have performed better had it been active on the car.