no one else has corroborated that their AP behaves the same
There was at least one poster here who corroborated exactly this earlier in this thread:
Model X Crash on US-101 (Mountain View, CA)
the bottom line is that you can’t blame AP
I don't think anyone is trying to blame AP or find fault in any single thing. We're only trying to determine if AP was a factor in the collision. I think this is very important for the future of autonomous driving and the public's acceptance of it. If the public, or Tesla, or the state agencies that maintain the roads and highways can learn something from this crash to prevent others like it, that's the preferred outcome of this discussion.
Of course you can't blame AP, because AP is not supposed to be fully autonomous or perfect, and every Tesla driver knows that they have to stay in control. Walter was a very smart man, a career software engineer, and a technology enthusiast. He understood the limitations of AP as well as (if not better than) anyone.
As in most serious crashes, multiple failures contributed both to the collision and to its fatal outcome. I don't think any single factor can be pointed to as "the blame". In this case, knowing the driver and how he typically used AP, I think AP was a factor. The driver may have been distracted or not paying attention; also a factor. The attenuation barrier, which had not been properly reset, was a factor in the severity of the damage and injury to the driver. The lane markings and the lack of stripes, rumble strips, or chevron patterns in the gore point were also a factor. The position of the sun may have been a factor. The position of other cars on the road: also a factor. I think you get my point.
I am also a software engineer (not at Apple). I do not write software for self-driving cars, but the principles are the same regardless of the type of software. Whenever there is a critical failure in a system, you must identify and rank all the factors that contributed to the failure. Autonomous vehicles are a very challenging problem, because many of the factors are external (e.g. roads, other vehicles, weather, lighting). Most well-designed systems require multiple simultaneous failures before they fail in a catastrophic way, and I think that's what happened here. So ... what can we learn from it? We have to identify all the factors and see which of them are practical and feasible to mitigate in similar scenarios in the future.
Could it be that something as simple as some striping or chevron paint in the gore point could have prevented this? Maybe.
If Caltrans had reset the attenuation barrier prior to the crash, would Walter still be alive? Maybe.
If the vehicle's front radar was programmed to alert an inattentive driver to an upcoming stationary object in the path of travel, could that have prevented this? Maybe.
It's 2018 and autonomous driving is in its infancy. I remember when Walter first got his new Model X and I rode in it with him for the first time; I was very impressed with the car in general, and particularly with the technology features. I was jealous. I wanted one, too. Walter was an early adopter of technology like me, and was likely just as excited about the promise of a fully autonomous driving future as I am. Self-driving cars are going to be more and more a part of our lives in the coming years. If one good thing can come out of this tragedy, it would be improvements in the technology or in the physical roads/markings to make it safer and easier for both human and computer drivers in the future.
I think it's understandable that Tesla becomes the poster company in the media for these types of accidents right now.
Last point: I don't think Tesla is being singled out here. It's self-driving technology (and particularly its failures) that has grabbed the public's attention recently. The fatal self-driving Uber crash with the pedestrian seems to be getting a lot more national press coverage than this crash, and that was a Volvo, not a Tesla.