... During this phase, I expect the nags to go away, and you could reasonably expect not to pay attention until the car tells you to, though with some risk, since you'd then have to assess your surroundings and react quickly. I expect it to live in that state for years before it's actually capable of driving on its own in most scenarios, much less approved by regulators to do so. But I'd be happy with that level, and I think it's fair to call it "full self driving": it will drive itself, except when it can't. The leap from Level 2 to 3 seems imminent to me. The leap from 3 to 4 is massive.
In city driving, everything happens more quickly: a kid runs out into the street, a car pulls out of a driveway unexpectedly, a driver swerves in front of you, turns from the wrong lane, or stops suddenly because somebody stepped out in front of them. Until the car reaches Level 3, you will have to pay absolutely continuous attention. And Level 3 is a big leap up from Level 2. We've been at Level 2 for years, with no hint of Level 3 in sight other than speculation from the more optimistic among us that HW3 will be a sea change. Getting from Level 2 to Level 3 will be much harder in the city than on the highway, and they don't even have Level 2 in the city.
It's definitely doable to intervene in time if you are paying attention and thinking ahead.
My point is that in city driving you have much less time to react.
This problem [braking for a cross-turning car] has largely been solved in my driving experience. I find that AP does not brake anymore in those instances when it does not need to brake.
Mine still does. 2019.36.2.1, HW 2.5, EAP (no FSD package).
I remain optimistic that HW3 will be a quantum leap in capability.
I agree. But I think you meant to say a big leap. A quantum leap would be the smallest leap the laws of physics allow.
I expect a quantum (tiny) leap.
If FSD is the cause of those collisions (we don't call them accidents anymore, they're collisions), then that's a problem. Being t-boned by someone running a light is one thing. FSD running a light is a complete failure and means the system is not working. Saying "there will be collisions" is entirely misguided here. FSD must not cause a collision where a human would not have.
This is actually the wrong measure. I've encountered this idea before: that FSD is unacceptable if it ever causes an accident. Accidents (or collisions) will happen, and the kinds of collisions FSD causes will be very different from the kinds humans cause. Just as in chess, the mistakes a computer makes are very different from the mistakes a human makes: a cheap chess program will beat nearly all human players, but when it does make a mistake, it's one a human would never make. FSD will cause collisions and deaths, and it will make different kinds of errors than humans do. The goal is fewer accidents, fewer injuries, and fewer deaths than humans cause. Zero, or even just "none that a human would not have caused," is an impossible goal. Mature FSD will save lives, but there will still be deaths, and some of them will be from accidents a human would not have had.
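The "fewer than humans" standard above is a rate comparison, not a count comparison: what matters is collisions per mile driven, not raw totals. A minimal sketch of that comparison, using entirely hypothetical figures (real comparisons would also need to control for road type, weather, and fleet age):

```python
# Hypothetical illustration of "statistically safer than a human driver".
# All numbers below are invented for the example and carry no real-world meaning.

def collisions_per_million_miles(collisions: int, miles: int) -> float:
    """Normalize a raw collision count to collisions per million vehicle miles."""
    return collisions / miles * 1_000_000

# Invented figures: a human-driven baseline fleet vs. an FSD fleet.
human_rate = collisions_per_million_miles(collisions=2_000, miles=500_000_000)
fsd_rate = collisions_per_million_miles(collisions=300, miles=150_000_000)

print(f"human: {human_rate:.1f} collisions per million miles")
print(f"fsd:   {fsd_rate:.1f} collisions per million miles")
print("statistically safer" if fsd_rate < human_rate else "not safer")
```

On these made-up numbers the FSD fleet comes out ahead on rate even though it still causes hundreds of collisions, which is exactly the point: "safer than humans" and "zero collisions" are very different standards.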
Musk's argument will be "statistically safer than a human driver." But yes, we will hear a lot about the evil regulators whenever Tesla makes substantial improvements. In reality, regulators will be keeping Tesla from assuming massive potential liability by blocking a Level 4/5 release.
Regulators will be 100% on board with FSD, because insurance companies, with their lobbying power, will be fully on board: it will reduce collisions, injuries, and deaths. And Musk will not knowingly release an unsafe product; Tesla has shown how dedicated it is to safety. (But he might use "regulators" as an excuse for missed deadlines. Nobody should ever believe Musk's timelines. He's not lying; he's just way too optimistic.)
Buy a Tesla. It's the best car ever. But buy it for what it does when you buy it, not for what Musk promises it will do in a week or a year.