It works for me better than 50% of the time, but not close enough to 100% to be useful. My garage door is fairly narrow -- I've got about 7-8 inches to spare (3-4" on either side) with the mirrors folded in. I also have a retaining wall on either side of the narrow driveway, and the garage itself is narrow (1-car). If the car isn't pretty well lined up to begin with, there's no chance. But even when it is lined up nice and straight, some significant fraction of the time it will freak out and turn itself crooked for no good reason. Once it's even slightly crooked it cannot recover. (I'd estimate this happens 10-20% of the time. I think the feature starts to get useful if it fails <5% of the time; any more than that and it's not worth standing around with your neighbors staring at you while you fiddle with the thing.)
The problem is that with only the ultrasonic sensors it has, this is a very hard problem. The car has blind spots directly to the sides, and the sensors don't have enough resolution to clearly perceive the opening, particularly after it has started to go through the door and the garage door sill sits between two sensors. (Remember, an ultrasonic sensor gives you just a range, not a direction. You don't get a coordinate in 3D space; you get a roughly hemispherical surface centered on the sensor.) A very sophisticated software approach might work here with only the ultrasonics: it would need to build itself a 3D map by accumulating ultrasonic returns over time as it proceeds and (roughly speaking) taking the intersection of those spheres -- though it's not that simple, since every return carries uncertainty. As it stands, the car seems to forget that the door sill is there once it has passed the sensors.
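To make the "intersection of spheres" idea concrete, here's a minimal sketch (in 2D, with made-up numbers -- the sensor positions, the obstacle location, and noiseless ranges are all assumptions for illustration): as the car creeps forward, each known sensor position plus a range reading defines a circle the obstacle must lie on, and subtracting one circle's equation from the others turns the intersection into a linear least-squares problem.

```python
import numpy as np

def trilaterate(sensor_positions, ranges):
    """Least-squares intersection of circles |x - p_i| = r_i.

    Expanding |x - p_i|^2 = r_i^2 and subtracting the first circle's
    equation cancels the quadratic |x|^2 term, leaving the linear system
    2 (p_i - p_0) . x = (|p_i|^2 - |p_0|^2) - (r_i^2 - r_0^2).
    """
    p = np.asarray(sensor_positions, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (r[1:] ** 2 - r[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical door-sill corner, and four car positions as it inches forward;
# each position contributes one range-only "sphere" (circle, in 2D).
obstacle = np.array([1.5, 3.0])
positions = np.array([[0.0, 0.0], [0.0, 0.5], [0.2, 1.0], [0.1, 1.5]])
ranges = np.linalg.norm(positions - obstacle, axis=1)

est = trilaterate(positions, ranges)
print(est)  # close to [1.5, 3.0]
```

With real sensors each range is noisy and each return could come from a different surface, so a practical version would have to fuse many uncertain returns (e.g. into an occupancy grid) rather than solve for a single point -- which is exactly why this is hard.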
I have some hope that they will eventually start using the cameras to help with Summon. They could use a structure-from-motion algorithm to build up a 3D map of the environment. If they kept that map not just for the current parking maneuver but also for next time, so the car isn't starting from scratch, they might be able to do something reasonable. (In particular, if you've just parked in a tight garage, you'd want to remember the map you built up while parking so you have something to start with -- you only get structure from motion after you start moving...)
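The core of structure from motion is that, once the car has moved, the same point seen from two camera poses can be triangulated. Here's a hedged sketch of that one step using standard linear (DLT) triangulation; the intrinsics, the baseline, and the 3D point are all invented for illustration, and a real pipeline would first have to estimate the relative pose from feature matches before triangulating anything.

```python
import numpy as np

def triangulate(P0, P1, x0, x1):
    """Linear (DLT) triangulation of one 3D point from two 3x4
    projection matrices and its pixel coordinates in each view."""
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x0[0] * P0[2] - P0[0],
        x0[1] * P0[2] - P0[1],
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixels."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Assumed intrinsics; view 0 at the origin, view 1 after the car has
# moved 0.5 m sideways (P = K [R | t], with t = -R C for camera center C).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([1.0, 0.2, 4.0])  # hypothetical point on the garage wall
X_est = triangulate(P0, P1, project(P0, X_true), project(P1, X_true))
print(X_est)  # close to [1.0, 0.2, 4.0]
```

Note that with zero baseline (both views from the same spot) the system becomes degenerate and depth is unrecoverable -- that's the "only after you start moving" caveat, and why you'd want to have saved the map from the parking maneuver.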
I see absolutely zero evidence that they have even begun to implement such a sophisticated approach -- despite the fact that what I just described is not some fantastic new innovation requiring breakthroughs. Just google "structure from motion" and you'll find tutorials and pre-built libraries[1]. They have plenty of compute power to do this. Tesla is just so slammed trying to get basic functionality working (and generally has too many balls in the air for such a relatively small company) that they are leaving low-hanging fruit like this to rot on the vine. Instead, as has already been pointed out, they basically spent the past year trying to reproduce the Mobileye system with their own drop-in replacement. Elon obviously expected that to take about 2 months, after which they'd move on to a new architecture with features like the ones I just described that would not have been feasible with the Mobileye system.
[1] LMGTFY: Structure from motion - Wikipedia