I believe this mischaracterizes Tesla's approach. Indeed, Tesla
DID experiment with high-resolution mapping and abandoned the idea. Every time the road changes (or somebody discards an aluminum soda can on the side of the road where the radar will see it), the map needs to change.
That means the database changes need to be propagated to an ever-growing fleet. It also means that 'noise' enters the system. Who monitors for the 'Wile E. Coyote' effect? (i.e., when sightings report things that are unphysical, or worse, deliberate deceptions.)
This is a nightmare of bad data, with no straightforward way to 'winnow out the chaff', except perhaps by building the very computer vision neural net that Tesla is creating now to look at those exceptions.
So why not just move the NN into the car and run it in real time? That is exactly what Elon has decided to do. More and more leading industry experts are beginning to see the wisdom of that approach.
Tesla is not just collecting billions of miles. It is collecting layers of edge cases and sorting them by frequency of occurrence. Once enough examples of a particular pattern have been collected to train the neural net, the fleet moves on to actively selecting and reporting events that match a new set of criteria, via instructions from the AI team.
This last bit is important. This way, the FSD team at Tesla can actively select for, sample, and collect the data they need to solve the next self-driving challenge. In this way, they turn what would otherwise be an unmanageable, exponential deluge of data into a usable, linear progression of real-world examples which they can selectively incorporate into training the NN.
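To make the idea concrete, here's a minimal sketch of that trigger-campaign loop. Everything here (class names, pattern labels, the threshold) is my own illustration, not Tesla's actual system: the fleet only reports events matching active triggers, counts per pattern, and a pattern is retired once enough training examples arrive.

```python
from collections import Counter

TRAIN_THRESHOLD = 3  # illustrative: examples needed before a pattern is "trainable"

class CampaignManager:
    """Hypothetical sketch of a fleet data-collection campaign."""

    def __init__(self, active_triggers):
        self.active_triggers = set(active_triggers)  # patterns the fleet reports
        self.collected = Counter()                   # examples gathered per pattern
        self.ready = set()                           # patterns with enough data to train

    def report(self, pattern):
        """A car reports an event matching an active trigger."""
        if pattern not in self.active_triggers:
            return  # ignored: this is what keeps the deluge linear, not exponential
        self.collected[pattern] += 1
        if self.collected[pattern] >= TRAIN_THRESHOLD:
            # Enough examples collected: retire this trigger, move to the next challenge
            self.ready.add(pattern)
            self.active_triggers.discard(pattern)

    def push_trigger(self, pattern):
        """The AI team activates a new campaign criterion across the fleet."""
        self.active_triggers.add(pattern)
```

The key design point is in `report()`: non-matching events are dropped at the car, so the team only ever receives the examples they asked for.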
This is why Tesla's approach will succeed in delivering FSD, and why the 'Hi-Rez' mapping approach will fail.
Cheers!