I have plenty of concerns with FSD Beta, but there are a few very common situations that I feel like they haven't even tried to program or train for:
1. School zones
2. Railroad crossings
3. Lane merging
4. Crossing solid white lines
In each of these, the car seems to be using its vision to just "handle" the situation generically, without any specific programming/training for the context of the actual situation. For example, when merging lanes, whether on on-ramps to highways/interstates or where, say, three highway lanes merge into two, the vision system recognizes that the configuration of the lane lines has changed and that the two lanes have now become one, so it adjusts its position accordingly. But it doesn't seem to know the merge is coming, even when mapping, signage, and/or basic context (i.e., being on an on-ramp) indicates as much, and it doesn't signal the merge or really "yield" except to avoid hitting the car next to it.
Similarly, it can't handle the flashing yellow lights at school zones in all their various configurations. It either seems to ignore them altogether or to treat them as stop lights that happen to be flashing yellow (or sometimes it interprets them as lights that are turning red). And it changes lanes whenever it wants to and doesn't seem to understand that cars are not supposed to cross solid white lines.
Now, these are not uncommon situations, mind you, or in the "long tail of 9s" -- they are everyday driving situations for the most part. And they could use mapping data to be aware of a lot of these situations (just as humans do from signs and experience) and have the car drive in the proper context. But it's like Tesla has just decided that it can rely on the basic driving neural networks to handle these situations, and as long as there are no collisions, they just aren't too concerned with the specifics. I wonder, however, what is going to happen when FSD leaves "beta" status (assuming that ever happens) and is released into the wild.
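For what it's worth, here's a rough sketch of what I mean by using mapping data for context. This is purely hypothetical Python -- the zone names, the hint fields, and the hints_from_map function are all made up for illustration, and it's not a claim about how Tesla's stack actually works -- but it shows the general idea: map annotations priming the planner before the cameras ever see a sign or a lane ending.

```python
# Hypothetical illustration only -- not Tesla's code or architecture.
# The idea: consult map annotations for the upcoming road segment
# instead of reacting purely to what the vision stack sees in the moment.

from dataclasses import dataclass
from enum import Enum, auto


class Zone(Enum):
    SCHOOL_ZONE = auto()
    RAILROAD_CROSSING = auto()
    MERGE_AHEAD = auto()
    SOLID_WHITE_LINE = auto()


@dataclass
class PlannerHints:
    speed_cap_mph: float              # target speed, lowered in advance if needed
    signal_early: bool = False        # start signaling before the lane actually ends
    prepare_to_stop: bool = False     # be ready to stop, not just slow down
    lane_changes_allowed: bool = True # respect solid white lines


def hints_from_map(zones: set[Zone], posted_limit_mph: float) -> PlannerHints:
    """Derive context-specific driving hints from map annotations.

    This is the "know it's coming" behavior the post is asking for: the
    context arrives from the map before the camera ever sees the sign.
    """
    hints = PlannerHints(speed_cap_mph=posted_limit_mph)
    if Zone.SCHOOL_ZONE in zones:
        hints.speed_cap_mph = min(hints.speed_cap_mph, 20.0)  # typical US school-zone limit
    if Zone.RAILROAD_CROSSING in zones:
        hints.prepare_to_stop = True
    if Zone.MERGE_AHEAD in zones:
        hints.signal_early = True
    if Zone.SOLID_WHITE_LINE in zones:
        hints.lane_changes_allowed = False
    return hints


if __name__ == "__main__":
    upcoming = {Zone.SCHOOL_ZONE, Zone.SOLID_WHITE_LINE}
    print(hints_from_map(upcoming, posted_limit_mph=35.0))
```

Obviously the real problem is much harder than a lookup table, but even something this crude would give the car a reason to slow for a school zone or signal a merge before the vision system is forced to react.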