It's not a faulty image. Tesla Vision now uses what's called an occupancy network. The cameras map the entire area around the vehicle into a grid of 3-dimensional cubes, then tag each cube as either occupied (something is in that space) or unoccupied (that space is empty). One advantage of this over the ultrasonic sensors is that it can identify overhangs that might contact the vehicle even when the area at ground level is clear.
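To make the idea concrete, here's a minimal toy sketch of a voxel occupancy grid in Python. This is just an illustration of the concept, not Tesla's actual implementation; the grid size, resolution, and function names are all made up for the example. It shows why a 3D grid can flag an overhang that a bumper-height sensor would miss:

```python
import numpy as np

# Toy occupancy grid: a 10 m x 10 m x 3 m volume around the car,
# divided into 0.5 m cubes. Each cell is True (occupied) or False (empty).
RES = 0.5
grid = np.zeros((20, 20, 6), dtype=bool)

def mark_occupied(x, y, z):
    """Tag the voxel containing world point (x, y, z), in metres, as occupied."""
    i, j, k = int(x / RES), int(y / RES), int(z / RES)
    grid[i, j, k] = True

def overhang_at(x, y):
    """True when the ground-level voxel at (x, y) is clear but something
    occupies the space above it -- the case an ultrasonic sensor scanning
    near bumper height would miss."""
    i, j = int(x / RES), int(y / RES)
    column = grid[i, j]  # all voxels stacked above this ground position
    return (not column[0]) and column[1:].any()

# Example: a sign jutting out ~1.5 m above an otherwise clear patch of ground
mark_occupied(2.0, 2.0, 1.5)
print(overhang_at(2.0, 2.0))  # True
```

The real network infers occupancy from camera images with a neural net, but the output it produces is conceptually this kind of 3D grid.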
The images you see are a reinterpretation of the occupancy network's data. The network was designed to feed the FSD computer, not the driver per se, so I suspect the visualization the driver sees will improve over time.
More info:
A Look at Tesla's Occupancy Networks