willow_hiller
Well-Known Member
I could understand if someone thinks FSD is a fusion of SD maps and vision. We know Tesla uses maps for stop sign anticipation (at least for now). I could imagine that the FSD neural net uses what it knows about intersections from an SD map to "dream" about upcoming intersections, the way Google's DeepDream can start with a photograph and imagine patterns in it. But that would only be used for anticipating things the vision system hasn't seen yet. If anything in those map-based assumptions about the upcoming intersection is incorrect, it would be corrected by the time the car can actually see it.
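To make the idea concrete, here's a toy sketch (purely illustrative, not Tesla's actual code, with made-up field names) of that "anticipate from the map, then let vision override" pattern:

```python
# Toy sketch: blend an SD-map prior about an upcoming intersection with
# live vision, letting vision win once the intersection is in view.

def predict_intersection(map_prior, vision_obs):
    """Return the belief about the intersection ahead.

    map_prior:  dict of attributes from the SD map, e.g. {"stop_sign": True}
    vision_obs: dict from the camera stack, or None while still out of view
    """
    if vision_obs is None:
        # Out of sight: anticipate using the map prior alone.
        return dict(map_prior)
    # In sight: vision overrides anything the map got wrong or stale.
    belief = dict(map_prior)
    belief.update(vision_obs)
    return belief

# Far away: the car "dreams" a stop sign from the SD map.
print(predict_intersection({"stop_sign": True}, None))
# Close enough to see: vision corrects a stale map entry.
print(predict_intersection({"stop_sign": True}, {"stop_sign": False}))
```

The point is just the ordering: the map prior only matters while the intersection is unseen, so a wrong map entry can't survive past the moment vision takes over.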
But in all likelihood, most of these "I can't see the intersection up ahead, how is FSD predicting it?? They must use HD maps!" phenomena can be explained by the wide-angle forward-facing camera being mounted higher than the driver's eye level, so it sees the intersection before any photograph taken from inside the car would.