I think one problem with Musk’s conclusions and statements is that some of them are easier to agree with than others.
I don’t know whether manually driving a car is truly comparable to manually operating an elevator, given that driving cars (and wagons before them) has a much longer history and far more cultural significance than operating elevators. But otherwise the logic is easy to agree with: at some point automated cars will surely be good enough that we’ll look back at manual driving much the way we now look at horse riding. A leisure activity at best.
The complexity starts when you realize that accepting someone’s vision does not automatically translate into belief in their execution of it. For argument’s sake, Musk could be right about everything theoretical (the vision, the pointlessness of Lidar, the pointlessness of maps beyond navigation) and Tesla could still fail to deliver with the current suite, within the current timelines. In the end, Musk cannot control his teams’ ultimate ability to deliver on what remains an unsolved research problem. And reality is messier still, because Musk is probably not right about everything either; he is most likely wrong on some portions of the equation as well.
So what, and whom, to believe? Not so simple.
That's what makes this so much fun! Think about it: we WILL be the generation that witnesses this whole AI / self-driving / EV transition, which in my opinion is as important historically as the internet, possibly more so, and we ALSO saw that one. Once we solve self-driving with AI, other, simpler problems will fall like dominoes, because we'll have the tools and knowledge to accelerate us toward whatever weird AI future we're heading for. Maybe nothing much will change, or maybe we'll merge into a singularity two decades from now... I want to be on the front lines watching this, seeing it develop, seeing the steps forward and backward. I could do without all the rude negativity from some people here and most people outside. My grandkids will ask me about it the way I asked my grandparents about their first car and TV.
Worth some patience and a couple grand in my opinion, but not for everyone. Not for you? Then don't spend the cash...
As for more on-topic stuff:
1) Yes, I think there is a HUGE jump to true level 4, and again to level 5. I believe we'll have level 5 'functionality' for several years at least before they remove the human and go level 4. You almost need level 5 functionality in order to get enough confidence that you've found enough edge cases to go level 3. The first level 4 experiments will be acceptably safe, but will probably require remote human operators fairly often at first.

Level 3 seems like what FSD will actually end up being for my car in the next couple of years. If it gets confused, it just has to be smart enough to pull over, stop, and ask me for help. That will be a fundamentally safer system because it won't be relying on me monitoring it. By my definition, which I think aligns with SAE's, we don't have to worry about how long it takes a human to take over: the car will be safe, it just may get stuck if you ignore it. By that definition I don't see level 3 as a huge step. Highway NoA is actually pretty close: we need more accurate maps or more intelligence to avoid most of the dumb stuff it fails at (being in the wrong lane, speed limits, etc.), plus the ability to ask for help when there's a cop behind you or on the side of the road. The actual driving is pretty close to acceptably safe, and it is improving so quickly that I have no doubt we are close to, or already at, better-than-human safety on a highway. Simple things like not being aggressive, not tailgating, and not wandering out of the lane automatically make an accident around you less likely.
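The fallback behavior I'm describing — ask the driver for help, and if they never respond, just pull over and stop rather than doing anything unsafe — can be sketched as a tiny state machine. This is purely illustrative; every name here is hypothetical and nothing about it reflects Tesla's actual code:

```python
from enum import Enum, auto

class Mode(Enum):
    DRIVING = auto()        # system is driving normally
    NEEDS_HELP = auto()     # confused: alerting the driver, still safe
    STOPPED_SAFE = auto()   # pulled over and stopped; stuck, not dangerous
    HUMAN_DRIVING = auto()  # driver took over

class FallbackSketch:
    """Toy sketch of the 'ask for help, never unsafe' idea.

    The key property: there is no deadline on the human. If the
    driver ignores every alert, the car ends up STOPPED_SAFE,
    never in an unsafe state.
    """

    def __init__(self, alert_limit: int = 3):
        self.mode = Mode.DRIVING
        self.alerts_sent = 0
        self.alert_limit = alert_limit

    def on_confusion(self):
        # e.g. a cop on the shoulder, or an unmappable construction zone
        if self.mode is Mode.DRIVING:
            self.mode = Mode.NEEDS_HELP

    def tick(self, driver_responded: bool):
        if self.mode is Mode.NEEDS_HELP:
            if driver_responded:
                self.mode = Mode.HUMAN_DRIVING
            else:
                self.alerts_sent += 1
                if self.alerts_sent >= self.alert_limit:
                    # pull off, stop, hazards on: stuck, but not unsafe
                    self.mode = Mode.STOPPED_SAFE

car = FallbackSketch()
car.on_confusion()
for _ in range(3):
    car.tick(driver_responded=False)
print(car.mode)  # Mode.STOPPED_SAFE — ignored the alerts, but never unsafe
```

The point of the sketch is just that "safe" and "never stuck" are separate requirements, and the first one doesn't depend on how fast the human reacts.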
2) I know point 1 will make some say 'NoA is nowhere close'. Well, my experience is that if you let the car do what it wants, it does weird stuff, is kind of rude, isn't as smooth as the smoothest drivers, and can frustrate the drivers around you... BUT it almost never hits things. These days most accidents come from inattention and lane changes, two things computers excel at. Some of those issues can be easily solved with HW3 speeds (faster decision making) and some tweaked algorithms, but MOSTLY it's about just accepting that what the car is doing is safe enough, just not what I would do. Still level 3, just not as you envision it. Tesla has chosen not to spend time making existing features more robust, and this makes sense if they believe they are close to turning it all 'on', so they can work on big-picture stuff instead. For instance: why solve the dancing-cars issue on the first releases if they were going to fundamentally change the AI algorithms for V10 anyway, fixing it as a matter of course? I think a lot of the shortcomings fall into that camp. Eight signal ticks before changing a lane? Come on!
3) Which leads me to a very fundamental question. Are there two forks of the code, FSD and EAP? Will they suddenly switch HW3 people with FSD to a completely different code base, or will they keep merging pieces of their FSD code into the existing one? You saw what their FSD demo cars were capable of: obviously a V9/V10 interface, but with a lot more showing on the display and a lot more capability. When the first FSD features come out, we'll know one way or the other pretty quickly, imho.
TL;DR: read it or move on; nothing really added!