This makes me wonder what is happening with all the training data Tesla is receiving from the fleet. The fleet video snapshots are most likely used not only to train a given network architecture to its natural learning limits but also to design new architectures that can capture the nuances. In some sense, that could also be why Tesla doesn't need more frequent FSD Beta releases: continuous data collection, even on the "outdated" 10.12 build, still yields useful auto-labeled training data.
While the specific "Chuck Cook style" intersection was the most publicly watched, Tesla now has a huge corpus of unprotected left turns that can be queried to specially train occupancy, e.g., using radiance fields (with neural networks [NeRF] or without [Plenoxels]), and to evaluate whether a new approach correctly determines an appropriate creep limit based on visibility, without even needing to deploy it via shadow mode.
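To make the creep-limit idea concrete, here is a minimal sketch of how a Plenoxels-style per-voxel density grid could be probed for visibility: march a ray from a hypothetical ego-camera position, attenuate transmittance with the standard volume-rendering formula, and report the distance at which the scene becomes effectively opaque. Everything here (the grid, thresholds, and function names) is an illustrative assumption, not Tesla's actual pipeline.

```python
import numpy as np

def occupancy_from_density(sigma_grid, threshold=0.5):
    """Binarize a per-voxel density grid into occupancy.
    Assumption: sigma_grid holds non-negative volume densities,
    as in a Plenoxels-style reconstruction; the threshold is illustrative."""
    return sigma_grid > threshold

def visible_range_along_ray(sigma_grid, origin, direction,
                            step=0.5, max_t=50.0, t_min=0.1):
    """March a ray through the density grid and return the distance at which
    accumulated transmittance drops below t_min -- a toy 'how far can the
    camera see from this creep position' probe."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    origin = np.asarray(origin, dtype=float)
    transmittance = 1.0
    t = 0.0
    while t < max_t:
        p = origin + t * direction
        idx = np.floor(p).astype(int)
        # Stop if the ray leaves the reconstructed volume.
        if np.any(idx < 0) or np.any(idx >= np.array(sigma_grid.shape)):
            break
        sigma = sigma_grid[tuple(idx)]
        # Standard volume-rendering attenuation: T *= exp(-sigma * delta).
        transmittance *= np.exp(-sigma * step)
        if transmittance < t_min:
            return t
        t += step
    return t

# Toy scene: a dense "occluding truck" wall at x = 10 in a 20^3 grid.
grid = np.zeros((20, 20, 20))
grid[10, :, :] = 10.0
print(visible_range_along_ray(grid, (0.0, 10.0, 10.0), (1.0, 0.0, 0.0)))
```

In this setup, a planner could sweep candidate creep positions, run such visibility probes toward oncoming-traffic lanes, and pick the shallowest position whose visible range clears the required sight distance, all evaluated offline against reconstructed intersections rather than on the live fleet.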