It's the kind of baby step regulators like to see.
I don't see that. Anguelov's talks are very similar to Karpathy's: NNs for object ID, gradually moving into policy.
$100 billion??? Seriously? I can think of ways to source data for a nickel a mile, and they're smarter than me.
But do you need a billion miles? (As an aside, why doesn't the whole "humans don't have lasers shooting out of their eyes" meme also apply to training mileage? Humans train on <1000 miles, so why can't Musk's SuperChip do it in a million?)
As a Karpathy slide shows, NNs improve a lot when you start adding data, but incremental returns quickly diminish. A failsafe (e.g. LIDAR) to handle edge cases the NN doesn't understand, like airborne cars, sounds like a reasonable way to achieve six 9s vs. collecting thousands (millions?) of instances of airborne cars for NN training.
If you need millions of instances of left turns, don't you also need millions of instances of airborne cars? And millions more of cars skidding on their roofs? Why not?
Not in 2020.
AlphaStar only used imitation learning to bootstrap its agent. That got it to a basic level, quite a bit worse than the expert gamers it was imitating. They then used reinforcement learning, setting up a league of agents with slightly different approaches that played each other in a continuous tournament. It was similar in spirit to AlphaZero, which used zero human game data and reached superhuman strength within hours of training.
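The league idea can be sketched in a few lines. This is a toy stand-in, not the actual AlphaStar setup: here each "agent" is just a softmax policy over rock/paper/scissors, the "slightly different approaches" are different learning rates (my assumption for illustration), and each round every agent nudges its policy toward the best response to every opponent's current mixed strategy.

```python
import math

ACTIONS = ["rock", "paper", "scissors"]
# PAYOFF[i][j] = reward to the row player for action i vs. column action j
PAYOFF = [
    [0, -1, 1],
    [1, 0, -1],
    [-1, 1, 0],
]

class Agent:
    """Toy league member: a softmax policy updated by exponential weights."""

    def __init__(self, lr):
        self.weights = [0.0, 0.0, 0.0]  # log-weights over the 3 actions
        self.lr = lr  # per-agent learning rate = its "different approach"

    def policy(self):
        # Softmax of the log-weights (shifted by the max for stability)
        m = max(self.weights)
        exps = [math.exp(w - m) for w in self.weights]
        z = sum(exps)
        return [e / z for e in exps]

    def update(self, opp_policy):
        # Shift each action's weight by its expected payoff against the
        # opponent's current mixed strategy
        for i in range(3):
            ev = sum(PAYOFF[i][j] * p for j, p in enumerate(opp_policy))
            self.weights[i] += self.lr * ev

def run_league(n_agents=10, rounds=200):
    """Continuous round-robin tournament: everyone updates vs. everyone."""
    league = [Agent(lr=0.05 * (k + 1)) for k in range(n_agents)]
    for _ in range(rounds):
        for a in league:
            for b in league:
                if a is not b:
                    a.update(b.policy())
    return [a.policy() for a in league]

policies = run_league()
```

The point of the league (vs. plain self-play against a single copy) is that the population of differently-parameterized opponents keeps any one agent from overfitting to a single exploitable strategy.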
Waymo reports Phoenix disengagements to the state of CA?