I am far from an ML expert, but I thought I'd share my newfound perspective on bFSD - I'm turning more bullish:
I've been somewhat critical of the progress of bFSD for a while now. It requires a considerable number of interventions on most of my suburb and city commutes around Southern California. It doesn't drive as "good" as I do on many maneuvers. Honestly, it often embarrasses me when interacting with other vehicles. It still exhibits random phantom braking and often brakes far more abruptly and less smoothly than I do. I could go on, but you get the point. My disappointment is primarily with the gap between what I want it to do (end state) and what it really does (capability today).
I'm beginning to think of it differently now. Seeing the addition of the occupancy network gave me new insight. While Tesla has ultimately been working toward giving me what I want (end state), tactically they've been working more on the building blocks that will ensure their ability to eventually fully solve the problem. They've been adding new NN components like the bird's-eye-view 3D space and then the occupancy network, each of which allows better training to solve problems that were elusive before. The current capability of bFSD is more a testament that the building blocks are approaching "good enough."

I suspect that for many of the things I want bFSD to do better, Tesla has confidence (or even knowledge) that they have the correct building blocks for iterative training to ultimately improve and master them, so they turn their attention to the next problem to ensure it's solvable. I feel like their energy hasn't gone into iterating the current NN to be the best it can be, but rather into iterating just enough to highlight the next building block required. It feels like the ratio of energy spent seeking new building blocks versus iterating the current ones is shifting toward iteration, and that's when we can start to see things improve quickly.