Tesla introduces FSD and tells everyone that they need to keep an eye on the system; so this is level 2. Over time the system gets updated and perfected. This means the system needs less and less human intervention, up to the point that in practice no human intervention is recorded. Tesla collects all the data and shows it to authorities, who can analyse it. They convince the authorities that the system is now so much better than a human driver that in practice no human intervention has been needed over many millions of miles without any accidents. The authorities are convinced by the data and tell Tesla that the system does not require human oversight, and FSD becomes level 4/5.
I don’t see any other way that a (random) level 4/5 system will ever get approved.
So there are a few misunderstandings here.
For one, a system doesn't magically "become" L3 or higher when it needs few or no interventions.
It becomes the higher level when the designer/manufacturer of the system attains and attests to the functionality and requirements of that level.
Right now Tesla's system isn't L2 merely because Tesla "says" it is.
It's explicitly programmed to check for driver interaction with the wheel... and it's specifically programmed, under some conditions, to command the driver to immediately take full control of the car.
Even if it drove around with 0 interventions for 5 years in that state, it would still be L2, because it would still be programmed to expect and require a human to always be able to take over instantly. Any system that ever requires instant takeover from a human- no matter how long between requests- is L2 at most, by definition of the standard.
Only when they remove that could they classify it any higher. (A system that only ever requires human takeover after some non-immediate but brief warning period would be L3, for example.)
For L4 specifically they'd need to remove any need ever for a human to take over- including the vehicle always being able to safely park itself if it finds itself unable to drive for some reason. L5 would be L4, but never unable to drive (short of mechanical failure of the vehicle, the road collapsing, etc.)
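The distinction being drawn above is that a level is determined by what the system is *designed to require* of a human, not by how often it actually intervenes. A rough sketch of that logic (an illustration of the argument only, not the actual SAE J3016 text; the field and function names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AutomationSpec:
    """Hypothetical declaration of what a driving system requires of a human."""
    requires_instant_takeover: bool   # may demand control immediately, at any time
    requires_warned_takeover: bool    # may demand control after a brief warning
    can_always_safely_stop: bool      # can always park itself if unable to continue
    drives_everywhere: bool           # no operational-domain limits

def sae_level(spec: AutomationSpec) -> int:
    # Any system that can ever demand instant takeover is at most L2,
    # no matter how rarely it actually does so in practice.
    if spec.requires_instant_takeover:
        return 2
    # Takeover needed only after a non-immediate warning period: L3.
    if spec.requires_warned_takeover:
        return 3
    # Never needs a human, and can always reach a safe stop on its own: L4.
    # The same capability with no domain limits at all is L5.
    if spec.can_always_safely_stop:
        return 5 if spec.drives_everywhere else 4
    raise ValueError("underspecified system")

# A Tesla-style system stays L2 regardless of how few interventions occur:
fsd = AutomationSpec(True, False, False, False)
print(sae_level(fsd))  # 2
```

The point of the sketch: nothing in `sae_level` counts interventions, so five years of zero takeovers never changes the classification; only changing the declared requirements does.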
The other misunderstanding is around "the regulators"- more on that below-
I would guess at least for initial steps, it would be similar to level 2 features with regulators wanting a demo
The "regulators" thing is a red herring though. It's already legal in half a dozen states to run an L5 car. Nobody does- because nobody has one. Waymo has an L4, and they do run it in one place.
Even if things are legally allowed without pre-approval, regulators may be quick to shut things down especially if they feel they were not part of the discussion
That makes no sense. This is regulated at the state level. Those states were part of the discussion when they passed laws approving self-driving cars to operate right now, with no further approvals needed. These are states that went out of their way to write, and pass, laws specifically to permit this, effective immediately once such cars are ready. They wished to be out ahead of the Californias of the world, who wish to regulate things to a crawl.
If one actually had a self-driving car, and wished to prove it worked and was safe, there are no hoops to jump through at all. Just put them on the road in those states where it's already legal. And wait. After a while of them working great (assuming you were smart enough not to sell them before they worked great), you'd have people in OTHER states demanding THEIR states make them legal. That's a problem that solves itself.
And probably because Tesla is headquartered in California, with many Teslas on the roads there, the CA DMV will probably be ready to cancel things if necessary.
The CA DMV has literally no authority of any kind to "cancel" anything outside the state's borders.