Come on, your arguments are usually better than this! This was not the "Teslas can't handle this obscure corner case" argument.
If you look that location up on Google Maps, you'll see that the spot I mentioned is a dead-simple freeway split, with the left lane turning south onto 161. It's about as basic as it gets, yet Teslas consistently aim for the gore point between the lanes, for no discernible reason. The environment offers plenty of cues about which way the road goes, not least the big fat concrete wall separating the splitting lanes, since the left lane is an underpass.
Teslas demonstrate that they take only a very small set of data into account. They consistently make decisions based on what they see over too short a distance ahead, and they fail to recognize what should be super-basic inputs, such as the big concrete wall ahead (or firetrucks, for that matter). On top of that, the software makes ridiculous decisions such as "oh, this must now be a 7-yard-wide lane, let me position myself in the middle of it".
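To make that last point concrete, here's a minimal sketch of the failure mode I mean (plain Python, invented numbers, obviously not Tesla's actual code): a naive "center between whatever lane boundaries you currently see" policy, fed diverging painted lines at a split, happily steers toward the gore point.

```python
# Illustrative sketch only -- NOT Tesla's code. Shows how a naive
# "center between the detected lane boundaries" policy drifts toward
# a gore point when the painted lines diverge at a freeway split.

def naive_lane_center(left_line_x: float, right_line_x: float) -> float:
    """Target lateral position: the midpoint of whatever the perception
    stack currently believes is the lane."""
    return (left_line_x + right_line_x) / 2.0

# Hypothetical lateral positions (meters) of the painted lines as the
# split opens up: the right line stays put, the left line peels off
# toward the underpass.
for meters_ahead, left_x, right_x in [(0, -1.8, 1.8),
                                      (30, -3.0, 1.8),
                                      (60, -4.5, 1.8),
                                      (90, -6.4, 1.8)]:
    target = naive_lane_center(left_x, right_x)
    width = right_x - left_x
    print(f"{meters_ahead:3d} m ahead: 'lane' is {width:4.1f} m wide, "
          f"car aims for x = {target:+.2f} m")

# The target drifts steadily left toward the gore point (and the wall),
# because nothing in this policy knows the "lane" is really two lanes
# splitting apart.
```

A system that actually fused the wall, the map, and a longer look-ahead into the decision wouldn't fall for this; that's exactly the data it appears not to be using.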
Perhaps because of what I do for a living, I can't help but look at things from the "why does it work the way it does" and "how should/could this work" angles. What I see in Teslas is a system in its infancy, sold, however, with the insinuation that it's vastly more capable than it really is. Aside from putting overly trusting drivers in danger, I think a system at this level of capability generally makes things worse by further lowering drivers' awareness.
I'm surely not the only one looking forward to the day when some of them texting housewives in 3-ton SUVs are driven to school and back by a competent piece of software rather than being allowed to do their own steering while occasionally glancing away from Facebook. I think Autopilot is a dangerous and very sloppy milestone on the way to that.