It's awesome having an MD on the forums!
My opinion on the return of radar...
"It turns out the full self drive problem ended up being far harder than I imagined." -Elon Musk
"It turns out that FSD using vision only ended up being far harder than I imagined." -Elon Musk, probably
The most recent AI presentation focused a lot on occupancy networks. IMO, HD radar (depending on the TX/RX hardware used) is a much better *all around* sensor for filling out occupancy maps of the surrounding environment than LIDAR. Yes, I know each has its own individual strengths, but since radar is much better at piercing through most low-visibility conditions than LIDAR, it takes the *overall* crown. Especially when it isn't being used as the primary sensor for filling out the occupancy map; Tesla Vision has that responsibility. Confused? Good, let me explain...
Let's go back to an old problem the military used to face: how would a sniper estimate the distance to his target? The old method was based on the size of an average human being. You'd fit the target into preset size-vs-distance brackets and get a very rough estimate of the target's range. That's analogous to today's Tesla Vision. So what's the state of the art in figuring out sniper range-to-target? Lasers, NOT vision. Kinda/sorta similar to LIDAR. So technology wins over plain old vision.
So now apply the sniper situation to Tesla Vision. How far away is that dog the object recognition system has labeled "dog"? Well, shoot. What breed is the dog? How can we use average canine size to determine the distance to "dog" if it's a Pomeranian but "average dog" size is set to poodle? How well can vision alone solve this distance-to-target problem? Not very. HD radar has entered the chat.
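To make the size-vs-distance problem concrete, here's a toy pinhole-camera sketch. Everything here (focal length, pixel span, dog heights) is a made-up number for illustration, not anything from Tesla's actual stack:

```python
# Toy illustration of monocular ranging from an assumed object size.
# Pinhole model: distance = focal_length_px * real_height_m / image_height_px.
# All numbers are hypothetical.

def estimate_distance(focal_length_px: float, assumed_height_m: float,
                      image_height_px: float) -> float:
    """Estimate range to an object from its apparent size in the image."""
    return focal_length_px * assumed_height_m / image_height_px

FOCAL_PX = 1000.0   # hypothetical camera focal length, in pixels
PIXELS = 20.0       # the "dog" spans 20 pixels vertically in the frame

# If the system assumes "average dog" = poodle-sized (~0.55 m tall)...
d_poodle = estimate_distance(FOCAL_PX, 0.55, PIXELS)   # 27.5 m

# ...but the dog is actually a Pomeranian (~0.28 m tall):
d_pom = estimate_distance(FOCAL_PX, 0.28, PIXELS)      # 14.0 m

# Same pixels on the sensor, wildly different true ranges -- the range
# error scales directly with the error in the assumed object size.
print(d_poodle, d_pom)
```

That's the whole problem in two lines: get the assumed size wrong by 2x and your range estimate is wrong by 2x.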
Tesla Vision can initially be used to fill out the occupancy map, and HD radar can be used to fill in any distance-to-object questions that the Vision system isn't able to adequately resolve.
So in my opinion, they are using it to answer any "hey, I'm not sure about the distance to this particular object" situations that Tesla Vision might have.
HD radar can also help resolve any speed and direction questions Tesla Vision might have, too.
So it's not really a matter of "fusing sensor data," it's a matter of the primary system saying, "hey, back me up here, I'm having a hard time resolving the distance on this object with xx% confidence, little help?" So in instances where Tesla Vision is able to resolve all objects to the required confidence level, HD radar wouldn't be used at all.
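The "back me up here" idea above can be sketched as a simple confidence gate. To be clear, this is my speculation about the architecture, not anything Tesla has described; the threshold and data shapes are invented:

```python
# Sketch of a confidence-gated radar fallback (speculation, not Tesla's
# actual design). Vision fills the occupancy map; radar is only queried
# for objects that vision can't range to the required confidence.

from dataclasses import dataclass
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # hypothetical required confidence level

@dataclass
class VisionEstimate:
    distance_m: float
    confidence: float  # 0.0 to 1.0

def resolve_distance(vision: VisionEstimate,
                     radar_query: Callable[[], float]) -> float:
    """Trust vision when it's confident; fall back to radar otherwise."""
    if vision.confidence >= CONFIDENCE_THRESHOLD:
        return vision.distance_m      # radar never consulted
    return radar_query()              # "hey, back me up here"

# Vision is sure of the range: radar isn't used at all.
print(resolve_distance(VisionEstimate(30.0, 0.97), lambda: 29.2))  # 30.0
# Vision is unsure: radar answers instead.
print(resolve_distance(VisionEstimate(30.0, 0.60), lambda: 29.2))  # 29.2
```

Note this is deliberately *not* sensor fusion: the two measurements are never blended, one source simply wins per object.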
Again, in my opinion, this would help a lot in the directions where Tesla Vision doesn't have binocular vision to help with its "depth perception", i.e., any direction that isn't forward facing.
Of course, this is all speculation and opinion on my part. But in my admittedly tiny, smooth brain, it makes sense.
Edit: It *could* also be used to help train the Vision system. Vision: "Hey, I think that object is 100 feet in front of us." HD Radar: "Close, it's 99.2 feet." Vision: "Noted. Added for training."
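That "radar as training label" edit could be sketched like this: treat the radar range as ground truth and score vision's guess against it. Again, pure speculation on my part, and a plain squared-error loss is just the simplest stand-in:

```python
# Sketch of using radar range as a training signal for vision depth
# (my speculation). Radar's measurement is treated as the label and the
# vision range error becomes a loss to train against.

def depth_training_loss(vision_estimate_m: float, radar_truth_m: float) -> float:
    """Squared error between vision's guess and radar's measurement."""
    err = vision_estimate_m - radar_truth_m
    return err * err

# Vision: "100 feet." Radar: "close, 99.2." -> small loss, small correction.
loss = depth_training_loss(100.0, 99.2)
print(round(loss, 2))  # 0.64
```

Accumulate that over millions of frames and radar quietly tutors the vision network's depth estimates, even if the shipped cars never use radar at drive time.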