Part of the problem is that, at this point, computers can’t infer. A human driving that stretch of road can see the mirage and recognize it as such, overriding visual input with cognitive processing, whereas a computer takes it at face value. I’m sure people have looked at this issue (and not just at Tesla), but I have no idea what the current best approach to dealing with it is.
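To make the idea concrete (purely illustrative; I’m not claiming this is how Tesla or anyone else actually does it): one obvious family of mitigations is to refuse to hard-brake on a camera signal alone, requiring the detection to persist over time and be corroborated by an independent sensor. Here’s a toy sketch of that kind of gate in Python; every class, name, and threshold below is invented for illustration:

```python
# Hypothetical sketch: gate an emergency-brake decision on agreement
# between two independent signals, so a camera-only artifact (like a
# mirage that "looks" like a wet obstacle) can't trigger braking alone.
# All names and thresholds here are made up for illustration.

from collections import deque
from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    camera_sees_obstacle: bool     # vision stack says "obstacle ahead"
    radar_range_m: Optional[float] # radar return distance, None if no return


class BrakeGate:
    """Brake only if vision persists across frames AND radar corroborates."""

    def __init__(self, persistence_frames: int = 5, max_range_m: float = 60.0):
        self.history = deque(maxlen=persistence_frames)
        self.max_range_m = max_range_m

    def should_brake(self, det: Detection) -> bool:
        self.history.append(det)
        if len(self.history) < self.history.maxlen:
            return False  # not enough evidence accumulated yet
        vision_persistent = all(d.camera_sees_obstacle for d in self.history)
        radar_confirms = (det.radar_range_m is not None
                          and det.radar_range_m < self.max_range_m)
        # A mirage produces a persistent camera signal but no radar return,
        # so it fails the cross-check and we don't phantom-brake.
        return vision_persistent and radar_confirms


gate = BrakeGate()
for _ in range(10):
    # Mirage scenario: camera "sees" something, radar sees nothing.
    assert not gate.should_brake(Detection(True, None))
```

Of course, the trade-off cuts the other way too: a two-factor gate that suppresses phantom braking will also suppress braking for real obstacles the second sensor misses, which leads straight into the second problem.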
The other problem for a car maker is the asymmetry in outcome severity. Phantom braking is bad, but not braking and having an accident is worse. If a human gets in an accident because of a mirage, people say ‘oh, yeah, I can see how that could happen.’ If FSD (or BlueCruise, or…) gets in an accident, people say “How can that happen?!?! Tesla is recklessly putting cars on the road! They need to be recalled NOW!” Not to mention that any nuance that may have been happening ‘under the hood’ as the code tried to distinguish mirage from reality never gets reported in the press.