That's not exactly it. The radar almost certainly has the precision to see where everything is. It's the processing that's the challenge - to make the data set manageable, the standard approach is to throw out all the stationary returns, which gets rid of road signs, pavement cracks, and rocks beside the road - and, unfortunately, stopped cars along with them.
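As a rough sketch of what that stationary-return filter amounts to (the names and numbers are mine, not Tesla's or Bosch's; it assumes each radar return carries a Doppler velocity and that we know our own speed):

    # Hypothetical sketch of the "throw out stationary returns" step.
    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float           # distance to the reflector, metres
        bearing_deg: float       # bearing relative to the car's nose
        rel_velocity_mps: float  # rate of change of range (Doppler)

    def moving_targets(returns, own_speed_mps, tolerance_mps=1.0):
        # A stationary object dead ahead closes on us at our own speed,
        # so its relative velocity is ~ -own_speed (ignoring the
        # cos(bearing) correction for off-axis returns). Anything with
        # that signature - signs, cracks, rocks, stopped cars - is dropped.
        return [r for r in returns
                if abs(r.rel_velocity_mps + own_speed_mps) > tolerance_mps]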
To capture the stopped cars, you need some way of separating them from the rest of the clutter - most likely some variation of sensor fusion. Tesla's current architecture has a major bandwidth limit that makes this hard: the AP computer (EyeQ3) is mounted with the camera behind the windshield, and connected to the rest of the systems only by CAN bus.
That means that even if the EyeQ3 could handle the raw take from the radar, there's no way to get it there.
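To put a rough number on that limit (generic figures, not Tesla's actual ones): classic CAN tops out around 500 kbit/s to 1 Mbit/s, while even a modest raw radar take runs tens of Mbit/s.

    # Back-of-envelope comparison; all numbers are generic assumptions.
    can_bitrate_bps = 500_000   # typical classic CAN rate

    # A modest raw radar cube: 256 range bins x 64 Doppler bins x 4 antennas,
    # 16 bits per sample, 20 frames per second.
    raw_radar_bps = 256 * 64 * 4 * 16 * 20

    print(f"raw radar ~{raw_radar_bps / 1e6:.0f} Mbit/s vs "
          f"CAN ~{can_bitrate_bps / 1e6:.1f} Mbit/s "
          f"(~{raw_radar_bps / can_bitrate_bps:.0f}x over budget)")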
I'm not convinced that means there's no answer with the current hardware. The camera is trained to recognize cars visually. What if it passed requests to the radar by bearing - "I see cars at 345 degrees, 356 degrees, and 002 degrees; tell me the relative velocity of each"?
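Something along those lines would fit easily over a narrow link. The message shape below is invented, just to show how small the handshake is:

    # Invented bearing-based query; nothing here is Tesla's actual protocol.
    def query_radar_by_bearing(radar_targets, requested_bearings_deg, window_deg=2.0):
        # For each bearing the camera asks about, return the relative velocity
        # of the nearest radar target within a small angular window, or None.
        answers = {}
        for bearing in requested_bearings_deg:
            candidates = [t for t in radar_targets
                          if abs(((t["bearing_deg"] - bearing + 180) % 360) - 180) <= window_deg]
            if candidates:
                nearest = min(candidates, key=lambda t: t["range_m"])
                answers[bearing] = nearest["rel_velocity_mps"]
            else:
                answers[bearing] = None
        return answers

    # The camera's request from the example above:
    # query_radar_by_bearing(targets, [345.0, 356.0, 2.0])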
Or it might be simpler, actually. We think the radar is currently passing bearings and velocities for all the moving cars it sees, right? If that's the case, then any car the camera sees that isn't on the radar's list must not be moving...
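That cross-check is even cheaper - something like this, with the detection formats being my own assumptions:

    # Any camera-detected car with no moving radar target at roughly the
    # same bearing is presumed stopped.
    def probably_stopped(camera_car_bearings, radar_moving_bearings, window_deg=2.0):
        def has_mover(bearing):
            return any(abs(((b - bearing + 180) % 360) - 180) <= window_deg
                       for b in radar_moving_bearings)
        return [b for b in camera_car_bearings if not has_mover(b)]

    # Camera sees three cars, radar only reports two movers:
    # probably_stopped([345.0, 356.0, 2.0], [345.2, 356.1])  ->  [2.0]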
This might not address the recent incident, though - there's no reason to believe the camera recognized the truck as a truck. For that, you might also need some sort of "I can see under the obstacle / I can't see under the obstacle" logic.
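Very roughly, that logic could look something like the following - assuming a road-segmentation mask is available; the names and thresholds are placeholders, not anything Tesla is known to use:

    import numpy as np

    def can_see_under(bbox, road_mask, strip_px=20, road_fraction=0.7):
        # bbox: (x_min, y_min, x_max, y_max) in image coords, y grows downward.
        # road_mask: 2D boolean array, True where pixels classify as road.
        # If the strip just below the detection still reads as road, the
        # obstacle is raised off the ground - trailer side, not car back.
        x_min, y_min, x_max, y_max = bbox
        strip = road_mask[y_max:y_max + strip_px, x_min:x_max]
        return strip.size > 0 and strip.mean() > road_fraction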