For what it's worth, our former member Discoducky has told me that James Douma knows what he's talking about when it comes to Tesla's AI approach. Discoducky actually worked on the Autopilot software development team in 2014-2015, had weekly meetings with Elon, and worked with Ashok Elluswamy to build Tesla's AI team. He has also told me that Green often thinks he knows more than he actually does. James probably gets things wrong sometimes and has biases and blind spots like everyone else, but on the whole I'm personally inclined to find him credible.
I believe @kbM3's rebuttals were valid as well. On what basis can we confidently conclude that necessity is the reason both nodes of the HW3 computer are being used to run the net? With FSD Beta being a Level 2 ADAS that still requires active human oversight, is computer redundancy even a priority right now? In the event of a core failure, the driver is supposed to be the second layer of protection.
Tesla has been redesigning the neural net architecture frequently, and it would not be surprising if they were deliberately tolerating some bloat in order to save engineering time and training compute and thereby speed up iteration cycles. Premature optimization is the root of all evil. Neural nets can certainly be shrunk with optimization techniques such as pruning, quantization, and distillation, but how far the FSD nets can be compressed is uncertain. Since none of us here works for Tesla AI, we are left with no option but handwaving about whether a future Level 4 or 5 version could be squeezed into a single HW3 node.
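To make the compression point concrete, here is a toy sketch of magnitude pruning, one of the standard ways nets get shrunk. This is a generic illustration with made-up layer sizes and sparsity, not anything to do with Tesla's actual pipeline:

```python
import numpy as np

# Toy example: prune the smallest-magnitude weights of one dense layer.
# Sizes and the 90% sparsity target are arbitrary, purely for illustration.
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256))  # pretend weight matrix of one layer

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

pruned = magnitude_prune(weights, sparsity=0.9)
kept = np.count_nonzero(pruned) / weights.size
print(f"fraction of weights kept: {kept:.2f}")  # roughly 0.10
```

The catch, of course, is that pruning (or quantizing) this aggressively without hurting accuracy usually requires retraining and careful tuning, which is exactly the engineering cost the "premature optimization" argument says Tesla may be deferring.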