Do you really believe that it is difficult for Tesla's AI to normalize vision for dirt and rain? If so, why?
Almost 2 years after release, NoA disables itself in even moderate rain... and in HEAVY rain, even basic AP- which has been around years longer to be "normalized" for rain- turns off.
Isn't a Tesla's visual capability already superhuman?
In that it can look in multiple directions at once- yes.
But it can't see as far (based on the listed specs of the cameras) as a human.
It can't "turn its head" to let a working, unobscured camera see what an obscured camera can't.
And the cameras are vastly lower resolution than the human eye (it's getting pretty far afield to dive into where/when/if that makes a difference, but it's certainly the opposite of superhuman visual capability).
The cameras also can't see low/close to the car at all (this is part of why there's no overhead 360 view- that'd require additional cameras)...
And currently there's no notion of object permanence- if something moves out of one camera's field of view it's GONE, and if it appears in another camera that's treated as a new object. That's part of why surrounding vehicles pop in and out of the display.
That last one is allegedly going to be "fixed" with the AP re-write, but you can't do much about the others since they're physical HW limitations.
Isn't the obvious solution here to make sure the camera functions properly and doesn't get distorted rather than add more cameras?
How do you do that when there's no physical way to keep the lens clear?
Also, it sounds like this is only a problem when reversing in parking lots? Or does this also affect NoA performance on highways in your experience?
See above- NoA turns off in even moderate rain (dropping to basic AP), and even basic AP can turn itself off in very heavy rains.
Rain (and forget snow!) is a BIG problem for camera based self-driving cars.
There's a reason Waymo's main pilot program was in a little suburb in Arizona where it almost never rains.
Speaking of snow BTW- apparently it's a big enough issue Tesla added a front radar heater to the Model Y.
The S/3/X don't have that hardware though.
Backing in a parking lot is done at 1 mph. Tesla's v2.0 HW suite (Oct 2016+) has 12 ultrasonic sensors, with a range of dozens of feet, distributed around the car. An obscured rear-camera view is not a safety issue when there is already another independent data stream usable for parking/backing.
Your Tesla might be going 1 mph... but the cross-traffic in the parking lot is probably going 30.
The ultrasonics are worthless in that situation compared to the rear-radar virtually everyone else in the industry uses for this job.
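To put rough numbers on why the ultrasonics don't help here, a quick back-of-the-envelope sketch. The ~25 ft sensor range is my own assumption (a stand-in for the "dozens of feet" figure quoted above- actual specs vary); the 30 mph cross-traffic speed is the number from the post:

```python
# How much warning do short-range ultrasonics give against 30 mph
# cross-traffic? Sensor range is an ASSUMED figure, not a Tesla spec.

ULTRASONIC_RANGE_FT = 25.0   # assumed max detection range (~"dozens of feet")
CROSS_TRAFFIC_MPH = 30.0

ft_per_sec = CROSS_TRAFFIC_MPH * 5280 / 3600   # 30 mph = 44 ft/s
warning_sec = ULTRASONIC_RANGE_FT / ft_per_sec

print(f"{ft_per_sec:.0f} ft/s -> {warning_sec:.2f} s of warning")
```

With these assumptions that's roughly 44 ft/s and about half a second of warning- less than typical human reaction time, which is why the industry uses longer-range rear radar for cross-traffic alerts instead.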
The reason Tesla didn't add the rear radar is cost.
Their 'solution' is very clear... the car won't "need" it because it's not meant to back out of spots.
Seriously.
Try self-park sometime.
It backs into spots, so that it pulls out forward- where it has multiple cameras, and radar, available to see cross traffic.
That's their solution.
And objectively, backing in is safer- so, great.
But as many have noted, in busy parking lots you're gonna have issues with people getting pissed off when your car pulls past a spot and then slowly backs into it, holding up traffic... and in some cases the pissed-off people following inches behind won't let you back in at all.
Others discussing this very thing also point out that sometimes backing in is NOT a solution at all... for one, some lots make it illegal for various reasons... for another, you might want the rear facing the lot, not backed up against another car, for loading reasons.
Anyway- the above is why I'm very confident Tesla can offer L3 autonomy on the highway with the existing HW in all cars made since late 2016... (and given that'd cover 95% of my driving, I'm happy to let 'em keep my FSD money if I get that).
The car is responsible for the driving in highway situations- the driver is NOT required to be paying active attention or even touching the wheel... so you could be reading a book, watching a movie, playing a game, whatever...
But it MAY need the human to take back over with some amount of warning if, say, the weather is getting really bad- so the driver still needs to be awake, and in the driver's seat, at all times, even if not actively paying attention to the driving.
(and technically, if they can program it to safely pull over to the shoulder when the human ignores that warning, they can probably get away with labeling it L4 even)
But I remain HIGHLY dubious that L5 robotaxis- ones that just work everywhere, all the time- are possible with the current HW, given it can't keep highway NoA on/working in moderate rain... it'd be even worse in non-highway/urban situations, where there are FAR more objects it needs to visually ID and track.
EDIT- having seen evermore's post... based on his numbers, HW3 can likely "derain" a single camera at 3 fps... and the cameras in the car operate at 36 fps... that means you'd need a computer ~12x more powerful than HW3 to handle one camera... or 96x more powerful to handle all eight (which it'd need, especially with the re-write relying on ALL the cameras to build a 360 view of the world).
(even if we assume this isn't needed for the 3 front cameras because of the windshield wipers- which I'm not confident is sufficient in heavy rain or snow- that still means needing 60x more power than HW3 offers for the other 5 cameras)
That's a LOT of additional compute.
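The compute math above in one place, for anyone checking it. The 3 fps de-raining throughput is the claim from evermore's post, the 36 fps camera rate is from the same discussion, and the 8-camera count is the standard Tesla suite:

```python
# Rough multiplier math for de-raining all camera feeds in real time.
# Input figures come from the forum discussion, not official specs.

DERAIN_FPS_ON_HW3 = 3      # claimed de-rain throughput for ONE camera on HW3
CAMERA_FPS = 36            # claimed camera frame rate
TOTAL_CAMERAS = 8          # full camera suite
SIDE_REAR_CAMERAS = 5      # excluding the 3 wiper-cleared front cameras

per_camera = CAMERA_FPS / DERAIN_FPS_ON_HW3          # 12x HW3, one camera
all_cameras = per_camera * TOTAL_CAMERAS             # 96x, all cameras
excluding_front = per_camera * SIDE_REAR_CAMERAS     # 60x, skip front 3

print(per_camera, all_cameras, excluding_front)  # 12.0 96.0 60.0
```

Note this assumes compute scales linearly with frame rate and camera count- probably optimistic, since batching overhead and the rest of the AP workload would also compete for the same chip.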