No way they venture into LAX this year without safety drivers. They don't even serve Sky Harbor airport itself (just the SkyTrain station), and Sky Harbor is an order of magnitude easier than LAX.
I think they could maybe do driverless service to the parking lot or whatever facility serves LAX by the end of 2023. That's 11 months away. Don't underestimate the power of ML: 11 months of ML training to get the planner good enough is not that crazy, IMO. We've seen Waymo make very good progress with their ML in 2022.
The immobilized Waymo itself doesn't bother me, but backing traffic up a mile and a half is completely unacceptable. They have to figure out how to unstick these cars within 1-2 minutes.
I agree about needing to unstick the car faster. I said the same thing about Cruise.
But I do have questions about what exactly happened. Did the Waymo's computer crash, like we've seen with Cruise? Did the Waymo suffer a mechanical failure that required a team to come get the car? Or did the Waymo's software just struggle to navigate rush-hour traffic and take too long? The solution will be different depending on the cause.
It looks to me from the pic that the Waymo tried to merge in rush-hour traffic and struggled to complete the maneuver because the path was blocked. If that is the case, then the answer is more ML training on the planner. The fact is that AVs like Waymo still struggle to know how to cut in. They tend to wait until the path is clear before moving, which can cause the AV to freeze if traffic is busy and nobody yields to it. That might be what happened here. There was a video showing a Waymo doing a good job of cutting in, but I think it needs more training to handle cut-ins better and avoid cases where it gets paralyzed when the path is blocked. Just my educated guess on what the problem might be.
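To show what I mean by "freezing," here's a toy gap-acceptance rule. Every name and threshold in it (`min_gap_s`, `assertive_after_s`, the 4-second gap) is made up for illustration; this is a sketch of the behavior, not Waymo's actual planner logic.

```python
# Toy gap-acceptance merge logic (hypothetical thresholds, not Waymo's).
def choose_merge_action(gaps_s, waited_s, min_gap_s=4.0, assertive_after_s=30.0):
    """gaps_s: time gaps (in seconds) to vehicles in the target lane."""
    if any(gap >= min_gap_s for gap in gaps_s):
        return "merge"                    # a comfortable gap exists
    if waited_s < assertive_after_s:
        return "wait"                     # the conservative default...
    return "creep_to_negotiate"           # ...which must eventually give way

# In rush hour every gap is short, so a purely conservative planner
# (one with no assertive fallback) returns "wait" forever -- the
# "frozen Waymo" failure mode:
print(choose_merge_action(gaps_s=[1.2, 0.8, 1.5], waited_s=10.0))  # wait
print(choose_merge_action(gaps_s=[1.2, 0.8, 1.5], waited_s=45.0))  # creep_to_negotiate
```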
Another example: a Waymo was able to stop itself from falling into a construction hole in the street, but then it didn't know what to do next:
"The unoccupied vehicle nearly fell into a construction trench." (sfstandard.com)
I think part of the reason the Waymo did not know what to do next is that it perceived all paths as blocked. Going forward was blocked by the hole; the sides and rear were blocked by people standing in the way. When an AV thinks all paths are blocked, it will likely wait for a path to open up, which can make the AV appear to "freeze" if a path never opens. Like I said above, AVs need more training to know how to "force" their way and create a clear path. One concern is that other vehicles and pedestrians will refuse to yield to AVs, leaving them paralyzed when the path is blocked, so AVs will sometimes need to "force" their way.
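To make that concrete, here's a minimal escalation policy for the all-paths-blocked case. The states, the 60/120-second thresholds, and the remote-assistance fallback are my assumptions for illustration, not how Waymo actually handles it.

```python
# Toy deadlock-escalation logic for "all paths blocked" (hypothetical).
def next_move(paths_blocked: dict, stuck_for_s: float) -> str:
    open_paths = [p for p, blocked in paths_blocked.items() if not blocked]
    if open_paths:
        return f"take {open_paths[0]}"
    if stuck_for_s < 60:
        return "wait"                           # hope a path opens up
    if stuck_for_s < 120:
        return "creep slowly to signal intent"  # gently 'force' a path
    return "request remote assistance"          # don't block traffic for a mile

# The trench scenario: forward blocked by the hole, sides/rear by people.
blocked = {"forward": True, "left": True, "right": True, "rear": True}
print(next_move(blocked, stuck_for_s=30))    # wait
print(next_move(blocked, stuck_for_s=90))    # creep slowly to signal intent
print(next_move(blocked, stuck_for_s=300))   # request remote assistance
```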
You are correct that AVs like Waymo are very sophisticated, but they still lack intelligence in some situations. AVs will only do what their NNs tell them to do. If they encounter a situation the NNs don't know how to handle, they get "stuck".
But I don't think we will need huge supercomputers in the cars themselves. We will have big supercomputers, like DOJO, back at the facility to train the NN. Then the new NN goes in the car. The computers in the cars just need to run the NN; they don't need to train it, so the onboard computers don't need to be as big. I am confident that with more ML training, we will get AVs to handle these stuck situations better.
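A rough sketch of that train-offline / run-onboard split, using PyTorch as a stand-in. The tiny model, the training loop, and the file name are all invented for illustration; real planner networks and training pipelines are vastly bigger, and this is not Waymo's or Tesla's actual stack.

```python
import torch
import torch.nn as nn

# --- At the data center: heavy training on big compute ---
planner = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 8))
optimizer = torch.optim.Adam(planner.parameters())
for _ in range(100):                       # training loop (the expensive part)
    features = torch.randn(64, 128)        # placeholder sensor features
    targets = torch.randn(64, 8)           # placeholder maneuver labels
    loss = nn.functional.mse_loss(planner(features), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
torch.save(planner.state_dict(), "planner_weights.pt")  # ship to the fleet

# --- In the car: inference only, far cheaper than training ---
onboard = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 8))
onboard.load_state_dict(torch.load("planner_weights.pt"))
onboard.eval()
with torch.no_grad():                      # no gradients -> far less compute/memory
    maneuver_scores = onboard(torch.randn(1, 128))
```

The whole point is that line at the end: inference needs no optimizer, no gradients, and no training data onboard, which is why the in-car computer can be a fraction of the size of the training cluster.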
And remember, we don't need AVs to handle everything 100%; we just need to get AVs to enough 9's that they are "good enough".
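For a sense of what the 9's mean in raw numbers, here's the trivial arithmetic. The per-ride framing is my choice for illustration; one could just as well count per mile or per intervention.

```python
# How many failures each level of "nines" allows per million rides.
for nines in range(1, 7):
    reliability = 1 - 10 ** -nines           # e.g. 3 nines = 99.9%
    failures_per_million = 10 ** -nines * 1_000_000
    print(f"{nines} nines ({reliability:.5%} reliable): "
          f"{failures_per_million:,.0f} failures per million rides")
```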
On Reddit, someone who has ridden in Waymos 300 times said that 2 of her 65 fully driverless rides needed roadside assistance to come, get in the car, and drive it to her destination. The car stopped out of traffic both times, which is great, but a 3% failure rate is still several orders of magnitude too high.
I definitely agree that a 3% failure rate would be terrible and unacceptable. But is it really 3%? 65 rides is not a big sample.
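As a quick sanity check on how wide the uncertainty really is, here's a standard Wilson score interval for 2 failures in 65 rides. This is textbook statistics, nothing Waymo-specific.

```python
# Wilson score interval for the true failure rate behind "2 of 65".
from math import sqrt

failures, rides, z = 2, 65, 1.96             # z = 1.96 -> ~95% confidence
p = failures / rides
denom = 1 + z**2 / rides
center = (p + z**2 / (2 * rides)) / denom
half = z * sqrt(p * (1 - p) / rides + z**2 / (4 * rides**2)) / denom
print(f"point estimate: {p:.1%}")                                     # ~3.1%
print(f"95% interval:  {center - half:.1%} to {center + half:.1%}")   # ~0.8% to ~10.5%
```

So with only 65 rides, the true rate could plausibly be anywhere from under 1% to over 10%. The sample really is too small to pin it down.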