Todd Burch
14-Year Member
Regardless of how the first iteration of v12 looks out of the gate, I think the positive thing to look forward to is that making improvements to the first version should be much easier.
With v11, improving the C++ planning code in an already extremely complex codebase means tiny baby steps and a high likelihood of bugs. It also means a step forward in one area often comes with a step backward somewhere else.
With neural nets, they just need to source more data from their data engine, possibly resize or reparameterize the network, then run it through compute. That is a process Tesla has already proven out, has tons of real-world data for, and one that is far less error-prone than hand-tuning complex C++ code.
So I think this gives hope that we’ll see more steady improvements with each iteration of v12.
The Autopilot team has a lot on their plate right now:
1. They are working on Actually Smart Summon, intending to release late Q1.
2. Unless they’ve determined that data sourced from HW4 cars can also be used for HW3 cars, they need to fork the HW3 and HW4 video clips and train separate neural nets for each. They probably still need to source far more HW4 clips than HW3, particularly for winter conditions and edge cases.
3. They need to get FSD running on Cybertruck and Semi.
4. They need to fully integrate the new Nvidia cluster (maybe already done?) and the new Dojo clusters into their compute framework as they come online.
Because of all this, they may make the first v12 release just “good enough” to beat v11, so they can get everyone focused on v12 from here on.
The real indicator of future progress will be how each iterative version of v12 improves on the previous one. THAT will show how much room v12 has to grow.
We might even find that Tesla iterates their architecture further over time, combining or separating perception modules/networks as they learn more.