My argument is that the currently installed sensor hardware, meaning the existing camera placement, radar, and ultrasonics, cannot do stationary object recognition without false positives or false negatives within a few years, which is the timescale relevant to the cars that have already shipped. One argument: this is important enough, and has been a high enough priority for long enough, that if the current sensor suite could do it, or if Tesla knew how to do it, Tesla would already have deployed the software. The fact that the functionality is not here, despite multiple high-profile instances of cars running into stationary objects, implies they don't currently know how to do it, or (in my opinion) won't within a timeframe that's reasonable for current owners.
Yes, strictly speaking, stationary object recognition can be done with the current sensors: take the current camera images and feed them to a system with all the relevant capabilities of a competent human being. Human beings have object detection and scene analysis abilities vastly ahead of the best current systems I have heard of (case in point: the Google "That's a Gorilla" embarrassment), and in most cases they can place objects in a scene without distance data. My opinion is that software won't do this reliably within the timeframe relevant to current car owners.
Upthread I gave my opinion, which is just my opinion, that it would require a single forward-looking LIDAR at around hood level, or stereo cameras plus the compute power to interpret binocular vision, or something like cameras with a projected grid of infrared points that the cameras could see and software could interpret. All of these involve new sensor hardware that is not easy to retrofit. Obviously, your opinion may differ.
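To illustrate why the stereo-camera option leans so heavily on software: with a rectified pinhole stereo pair, depth follows from pixel disparity as Z = f * B / d, so small matching errors blow up at range. This is a minimal sketch with made-up camera numbers (the focal length and baseline are illustrative assumptions, not any real vehicle's specs):

```python
# Sketch: depth from disparity for a rectified pinhole stereo pair,
# Z = f * B / d. All parameter values below are hypothetical.

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters of a point seen at `disparity_px` pixel offset
    between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 30 cm baseline.
# A stationary car at 100 m produces only 3 px of disparity,
# so a 1 px matching error moves the estimate by 50 m.
print(stereo_depth_m(1000.0, 0.3, 3.0))  # 100.0
print(stereo_depth_m(1000.0, 0.3, 2.0))  # 150.0
```

The point of the arithmetic: at highway distances the disparity signal is a handful of pixels, so reliable stationary-object ranging is really a hard software matching problem, not just a hardware add-on.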