@Knightshade, I'm curious to know. Do you think that Tesla's current approach will succeed in reaching autonomy? If so, when?
Depends on what you define as their "approach".
If you mean at a high level- using vision and AI- then yes, I do. I don't think stuff like LIDAR and mm-level mapping of the kind Waymo and others use is needed.
If you mean using the exact HW in current cars- no, I don't.
HW3 is already maxed out, and that's using both nodes to run a single instance of the software. For L4+ they'll need the entire instance to fit in a single node, with the second node kept as a redundant spare- and even using both nodes it's still just L2. So at minimum they'll need significantly more compute power in the car.
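To illustrate the redundancy argument above- this is a hypothetical sketch, not Tesla's actual software, and the capacity numbers are made up: if the workload needs both nodes to run one instance, there's no spare left to take over when a node fails, so true fault tolerance requires the whole instance to fit on a single node.

```python
# Hypothetical sketch of the two-node redundancy argument.
# NODE_CAPACITY and the workload numbers are arbitrary illustrative units.

NODE_CAPACITY = 100  # compute available per node


def is_fault_tolerant(workload: int, nodes: int = 2) -> bool:
    """After losing any one node, the survivors must still run the full workload."""
    surviving_capacity = (nodes - 1) * NODE_CAPACITY
    return workload <= surviving_capacity


# One software instance spread across both nodes (HW3 today, per the post):
print(is_fault_tolerant(160))  # False - lose a node and the car loses its "brain"

# Instance that fits on a single node, leaving the second as a hot standby:
print(is_fault_tolerant(90))   # True - either node alone can keep driving
```

The same check generalizes to N nodes: capacity planning for L4+ has to assume the worst single failure, not the healthy total.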
Is HW4, which we know is coming (it was even mentioned at Autonomy Day), going to be enough? We don't know. Tesla doesn't either. They originally thought HW2.0 was enough. Then they thought 2.5 was. Then they thought 3.0 was. None were. Nobody can know until the solution is actually done.
And the cameras are relatively low resolution, too few in number, and have nowhere near enough redundancy or low-light performance to deliver anything remotely approaching an L5-capable vehicle.
What they've done with them is amazing, and I think they can probably get (though likely needing HW4 to do it) to an excellent L2 system that works on city streets. They might even get to an L3 system if they can throw enough compute at it and the takeover warnings are relatively short.
But there are physical limits AI isn't a "fix" for, and those will prevent anything like generalized L4+ use.
I think to deliver L4-or-better city driving they're going to need, at minimum, two more forward/side-facing cameras, probably located around where the front turn signals are. Today the car has to creep most of the way into oncoming cross traffic to see around corners.
It'd probably be smart to add a similar pair at the rear for rear cross traffic, for situations where the car has no choice but to back out of a spot.
And I think all the cameras are going to need higher resolution- currently all that voxel/depth stuff is being done at 160 pixels of resolution, so again it's amazing how much they can do with so little, but there are limits nothing but better cameras can fix. They'll also need better low-light performance, because the current cameras aren't awesome there, and a better design or coating to deal with bad weather- I still get NoA dropping down to regular AP in moderate rain, and "FSD degraded" messages as well.
So their general approach? Yup. The exact one today, with exactly today's in-car HW? Nope. At minimum they'll need more compute, and without camera upgrades as well they'd be limited to the narrow cases I mention (maybe L3 city, requiring driver takeover for things like when it can't see around corners- and maybe L4 highway with weather limits on the ODD).