I wonder how often it "updates" the LLM portion.
Alex makes an important point: Wayve did not just strap ChatGPT onto their autonomous driving system and ask it to narrate what it sees. Wayve is training a single end-to-end AI model on vision, language, and action together. That is an important distinction.
I can see the application for solving the black-box problem in end-to-end driving. By training a single model on vision, language, and action, it is able to explain what it is "thinking" and doing. So we can see both the what and the why behind the end-to-end model's decisions.
It kept saying "Keeping a steady speed, the road is clear" even while a pedestrian was running across the road around 7:10 (and it continued accelerating).