Tesla has chosen a neural network approach for its FSD functionality. This technology involves creating a network of pseudo-neurons that mimics an animal's brain, then training the network on a series of inputs (video clips from Tesla vehicles, in this case) until it reliably produces the desired output (driving behavior, in this case). One implication of this approach is that unusual inputs (trains, airplanes on the ground, cruise ships) will likely be misinterpreted until the neural network is fed enough of them to learn what to do when it sees them.

Tesla's FSD is something like a toddler in this respect. Parents watch their toddlers carefully because they know that toddlers, not fully understanding the world, are likely to do dangerous things, like touch hot stoves or walk into the street. Once, I was walking with two of my friends and their toddler when a police car zoomed past at high speed, with flashing lights and siren. Amazed, the toddler asked, "What was that?" Had we not been there, it's entirely possible that my friends' child would have been killed by that police car. A Tesla on FSD that doesn't do the right thing when confronted by a train is like that toddler, but it has no way to exclaim, "What was that?"

No, strike that; a Tesla's neural network is much less sophisticated than a human toddler's. I haven't seen estimates in the last few years, but the last I heard, the best neural networks had a complexity comparable to that of an insect's brain. That's what Tesla is trying to do: teach an insect to drive a car. That may sound ludicrous, but insects are pretty good at navigating the world, so it may well be an achievable goal. It's not an easy task, though.
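To make the idea concrete, here's a toy sketch of that train-on-inputs-until-it-produces-the-desired-output loop, using a single artificial pseudo-neuron (a perceptron) in Python. Everything here is hypothetical and vastly simplified -- the obstacle sizes and the "brake or ignore" labels are made-up stand-ins for Tesla's video clips and driving outputs -- but the learning rule (nudge the weights whenever the output is wrong) is the same basic idea, scaled up by many orders of magnitude in a real driving network.

```python
def step(x):
    # The neuron "fires" (output 1) only if its weighted input is positive.
    return 1 if x > 0 else 0

# Hypothetical training data: [width, height] of an obstacle -> 1 ("brake")
# or 0 ("ignore"). These stand in for the video clips a real system learns from.
examples = [
    ([2.0, 1.0], 1),   # car-sized obstacle: brake
    ([0.1, 0.1], 0),   # tiny debris: ignore
    ([1.5, 1.5], 1),   # another car: brake
    ([0.2, 0.05], 0),  # more debris: ignore
]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

# Perceptron learning rule: nudge the weights whenever the output is wrong.
for _ in range(20):
    for inputs, target in examples:
        out = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
        err = target - out
        weights = [w + lr * err * x for w, x in zip(weights, inputs)]
        bias += lr * err

# After training, the familiar inputs all produce the desired output.
print([step(sum(w * x for w, x in zip(weights, inputs)) + bias)
       for inputs, _ in examples])  # [1, 0, 1, 0] -- matches the targets
```

Note what this toy neuron does with an input unlike anything it was trained on: it still produces an answer, but the answer is just wherever the learned boundary happens to put it. That's the mathematical version of the misinterpreted train.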
You've chosen to emphasize the word "TRAIN" in your message as if it were self-evident what one is; but you know what a train is, and you know how dangerous one can be, because you've seen them, in real life and in photos, movies, etc., for your whole life. Your own neural network is much more sophisticated than what's in a Tesla, so you can learn more quickly, and you deeply understand how dangerous it would be to stand in front of a train as it barrels down the tracks. A Tesla's neural network, by contrast, is a simple mapping of visual inputs to steering, braking, and acceleration outputs. It does not understand what a train is, or even what a car is -- although a Tesla can identify a car as such with good reliability, it doesn't really understand what a car is, much less a train. To the Tesla, a train is just an unidentified object, and maybe not even that; it might register as nothing more than an unknown mass of pixels.
As somebody with a background in both human cognitive psychology and computers, I am very impressed with what Tesla has managed to do with its FSD features. I also know, however, that it will take a lot more exposure to corner cases, like trains, before the system can handle a wide enough array of driving situations to be as safe as a human driver. Even then, there will be novel situations that may confuse it -- volcanic flows, helicopters landing on the street, chickens falling off a truck, etc. It will be many years before a neural network like Tesla's can reliably handle such situations.