I don't think it's a mapping or resolution issue. I believe this is a case that requires some higher-level training to better infer the reason a car ahead is stopped.
As a human driver, you have learned to look ahead as you approach a stopped car to see if it is at the end of a string of cars. If so, you conclude that the car is waiting for an obstruction (a train, red light, accident, etc.) to clear before proceeding. You likely saw some indication of the line of cars before you pulled up behind the end of the line.
The car seems not to have sufficient NN training to determine this yet. So the planner sometimes gets it wrong, likely deciding that you are behind a disabled or parked car, and attempts to go around. In this case, the railroad crossing may have been occluded from the cameras, so all the car knew at the time was that there was one stopped car ahead of it.
You can see a related effect by watching the visualization while you are stopped a few cars back at a busy intersection. You'll see cross-traffic cars appear and disappear on the screen. Tesla needs to improve this by developing a capability to infer the continued presence of a vehicle after it has become occluded. Once that works, they can leverage it to infer that the last car in a line at a railroad crossing is part of a line of cars, even if the other cars have become occluded.
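To make the idea concrete, here's a minimal sketch (my own illustration, not anything Tesla actually does) of "object permanence" in a tracker: instead of deleting a track the moment its detection disappears, you coast it forward at its last known velocity for a grace period. The class names and the grace-period constant are all hypothetical.

```python
# Sketch of track persistence through occlusion (illustrative only).
OCCLUSION_GRACE_FRAMES = 5  # hypothetical tuning parameter


class Track:
    def __init__(self, track_id, position, velocity):
        self.track_id = track_id
        self.position = position      # 1-D position along the lane, meters
        self.velocity = velocity      # meters per frame
        self.frames_unseen = 0

    def update(self, measured_position):
        """Detection matched this frame: snap to the measurement."""
        self.velocity = measured_position - self.position
        self.position = measured_position
        self.frames_unseen = 0

    def coast(self):
        """No detection this frame: predict forward instead of deleting."""
        self.position += self.velocity
        self.frames_unseen += 1

    @property
    def alive(self):
        return self.frames_unseen <= OCCLUSION_GRACE_FRAMES


def step(tracks, detections):
    """detections maps track_id -> measured position; occluded ids are absent."""
    for t in tracks:
        if t.track_id in detections:
            t.update(detections[t.track_id])
        else:
            t.coast()  # vehicle is occluded, not assumed gone
    return [t for t in tracks if t.alive]
```

With something like this, a stopped car a few positions up the line stays in the world model for a few frames even while a truck or the car directly ahead blocks the camera's view of it.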
You can't just use a map to conclude that a car several car lengths back from a railroad crossing is waiting on a train. It could be a disabled or parked vehicle that just happens to be near the crossing.
This issue has improved over the last few releases, at least in my experience. But it still needs work.