What?
I've followed @verygreen on Twitter for a while and I thought he said compute usage on HW3 for FSD beta was beyond what would allow redundancy.
Not just FSD Beta.
Tesla, since at least mid-2020 (before there even WAS an FSD Beta), has needed to use the compute on node B to run the full stack. This has gotten worse as they've added more to the code, and it's still pretty far from capable of L5 driving (not just imperfect at what it does, it's literally still missing entire features required for L5; see the DMV emails from Tesla where they confirm the beta lacks fundamental abilities L5 requires).
To say that this precludes HW3 from being able to achieve some level X or Y of self driving takes a logical leap of faith.
It also requires making up something I never said.
I said they couldn't do L5 with redundancy, because they lack the compute to even run L2 redundantly per Green, which you seem to admit you were already aware of.
So I'm baffled what you're actually disagreeing with that I actually said.
They might well be able to get, say, L3 non-redundant on current code. Maybe even L4 (since you can heavily restrict the ODD).
But Tesla, being heavily safety focused, would never release such a system, since a single node failure would be dangerous.
It assumes whatever final code to run FSD prod on HW3 is necessarily going to be too much to allow for redundancy. How can anyone claim to know?
Because the current code, which is deeply insufficient for L5, is already using most of the compute on both nodes.
It's certainly possible (though increasingly unlikely) they will find a way to get L5 working using 100% of the compute available on the entire system, but then a failure of either node means the whole system fails.
So again, Tesla, being heavily safety focused, would never release an actual L5 system without redundancy.
Thus they'd need (at least) HW4 to be able to do that redundantly.
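Just to make the arithmetic concrete, here's a trivial sketch with made-up numbers (nobody outside Tesla has the real figures); only the relationship matters: redundancy requires the full stack to fit on a single node.

```python
# Toy model of the HW3 redundancy argument. All numbers are
# invented -- only the relationship between them matters.

NODE_CAPACITY = 1.0   # compute available on ONE of HW3's two nodes

def can_run_redundantly(stack_load: float) -> bool:
    """Redundancy requires the FULL stack to fit on a single node,
    so the other node can run an identical copy and take over if
    the first one fails."""
    return stack_load <= NODE_CAPACITY

# Per Green's observations, the current stack spills onto node B,
# i.e. it needs more than one node's worth of compute:
current_stack_load = 1.6   # hypothetical value > NODE_CAPACITY

print(can_run_redundantly(current_stack_load))  # False
# Note it still "fits" in the car (1.6 <= 2.0 total), but lose
# either node and only 1.0 of capacity remains -- the stack dies.
```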
It's also possible even 100% of both nodes on HW3 is insufficient. It's also possible HW4 is insufficient.
Tesla was confident HW2 was enough, until it became obvious it wasn't. Tesla was confident HW2.5 was enough, until it became obvious it wasn't. Tesla was confident HW3 was enough; all available evidence says it isn't. Will HW4 be enough? Unknown.
Until they actually get it working, nobody knows how much more compute is needed to reach the goal.
But we can be far more confident that "far less" compute isn't what'll get us there, which is the answer you'd need for HW3 to support L5 safely.
IMO it has never been conclusively proven that what was running on the 2nd HW3 processor was not a NN specifically sourcing good examples of training data for known edge cases.
Yes, it has.
If it were doing what you suggest, it wouldn't need to feed data heavily between nodes, nor borrow compute from one node for things split across them. Green has discussed what's actually split across the nodes in more detail elsewhere if you wanna dig further.
And to avoid mod wrath I'd suggest both of you, as I did to the previous poster, take further/deeper discussion (which has already been covered extensively) to one of the threads here:
Discussion about AI, Tesla Bot, Tesla Autopilot (AP), the promise of Full Self Driving (FSD), as well as other Autonomous Vehicles.
teslamotorsclub.com