Apologies if this is in the wrong section... I have a few questions about how this fleet learning works at a technical level, and I'm wondering if somebody out there might be able to shed some light. I get how these deep neural networks can be trained to drive by imitating a human driver, but doing that requires a LOT of data (and bandwidth) and a lot of processing power. I doubt the cars have the processing power to do any meaningful training in-car, which implies they must send information back to the mother ship to help refine the training. So...

1. What do the cars send back? Clearly they can't all be streaming continuous raw video and sensor data; the bandwidth demands would be absurd.

2. Is the data limited to the location of, and correct response to, sensed/known obstacles and road features? That would be useful, but relatively minor. It wouldn't seem to help as much when the car is learning more complex behaviors: navigating intersections, construction areas, and parking lots, or dealing with on-road obstacles like birds, junk dropped off of trucks, and so on.

3. Are the cars smart enough to send back detailed video of exceptional circumstances, like dropped objects?

Just curious.
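To make the question concrete, here's a rough sketch of the kind of triage policy I'm imagining: send a few hundred bytes of metadata whenever the model and the human driver disagree, and only ship a raw video clip when the disagreement is large ("exceptional"). Every name, threshold, and size here is a made-up assumption for illustration, not how any real fleet actually works.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """One timestep of driving data (hypothetical)."""
    timestamp: float
    model_steering: float    # what the network would have done
    driver_steering: float   # what the human actually did
    raw_bytes: int = 300_000  # rough size of one raw camera frame

@dataclass
class TelemetryBatch:
    compact_events: List[dict] = field(default_factory=list)   # small: tags + labels
    video_clips: List[List[Frame]] = field(default_factory=list)  # large: exceptions only

def process_drive(frames: List[Frame],
                  disagreement_threshold: float = 0.3,
                  clip_len: int = 5) -> TelemetryBatch:
    """Upload compact metadata for every notable model/driver disagreement;
    attach a short raw-frame clip only when the correction is very large."""
    batch = TelemetryBatch()
    for i, f in enumerate(frames):
        err = abs(f.model_steering - f.driver_steering)
        if err > disagreement_threshold:
            # Cheap, always sent: a few hundred bytes describing the event.
            batch.compact_events.append({
                "t": f.timestamp,
                "model": f.model_steering,
                "driver": f.driver_steering,
                "error": err,
            })
            if err > 2 * disagreement_threshold:
                # Rare and expensive: ship the surrounding raw frames
                # back for human labeling / retraining.
                lo = max(0, i - clip_len)
                batch.video_clips.append(frames[lo:i + 1])
    return batch
```

Under a scheme like this, the compact events cost bytes per event while a clip costs megabytes, so the bandwidth math only works if clips are genuinely rare. Whether the cars actually do anything like this is exactly what I'm asking.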