I definitely don’t know, but I doubt that even a 10,000x improvement in training compute would get us to autonomy (even fairly wide-ODD L3) on HW3/4.
It just seems like far too complicated a problem to solve with such rudimentary hardware.
But I could be wrong. As you say, nobody knows.
And nobody will ever know whether it's possible with current hardware, because by the time anything is remotely close to solved for actual autonomy, we'll have long since moved on to much more capable hardware, and no one is going to want to go back and make it run on HW3/4!
A compute improvement like that 10,000x would open the way to exploring more approaches to solving the problem on arbitrary inference (and vision) hardware, though! It would help - just not on current hardware.
Just think about the problem for a bit. Think about what you do when driving and how difficult it would be for a computer. It's incredibly complicated and difficult to solve, especially with NN techniques. How could current hardware, or any existing technique, solve it? There is no example anywhere in the AI field of hardware - even at roughly the maximum power currently available, and even on the inference side (let alone the training side) - doing anything close to solving this problem with the reliability required. Even OpenAI's latest models can't solve the (very different) problem they were designed to solve with high reliability!
So I just can't see it happening.
Who knows, maybe there will be a breakthrough, but it just seems like such a huge space of possible inputs that solving it is going to take a lot more compute, and possibly a few breakthroughs as well. People unwisely put a lot of faith in training - remember that training has never been demonstrated to solve this sort of problem (except in humans, of course).
I literally know nothing about training and inference, so it's possible there is major Dunning-Kruger going on with me. Not sure where I fall on that curve. It may even be that I don't fall on the curve at all; given my lack of knowledge, I may be way off the bottom, before ever reaching the peak of Mount Stupid.
Even if you restrict the ODD substantially and forget about robotaxi, the problem is likely still far too difficult for actual meaningful autonomy (wide-ODD L3), even in best-case conditions.
It would probably just be unsafe, with many regressions.
I don’t think it really needs to. Just a year like any other.
Tesla just needs to sell cars, a lot of them, and develop new cars people want. And build a lot of batteries that don't suck. There's not a lot of market value assigned to FSD/autonomy currently (no matter what some cray-cray investment folks may be assigning in their alternate reality), so fortunately it doesn't much matter what happens - unless someone else ends up having something Tesla does not. And that seems unlikely.
This was enabled by a much larger model, so it won't (can't) happen here. ChatGPT has a very open design space, doesn't have to be right (at all!), etc. ChatGPT is a perfect manifestation of AI because it can be very wrong a lot of the time and still be very useful. It's an assistant, but that type of assistant has very different requirements from those of a driving assistant.
v12 will be nothing special. If it is ever released, it will be an incremental improvement, and it may enable more frequent/faster updates (TBD - validation may actually take a lot longer, because it's harder to check for regressions).
Let's keep expectations reasonable so we can be extremely content when v12 comes out:
1) It's not going to be the path to autonomy.
2) It's only going to improve on some of the issues that v11 had.
3) It will introduce new problems we didn't have before.
4) It's going to require careful monitoring to make sure it doesn't run traffic lights, etc.
5) Maybe it'll lead to more frequent updates.
6) Maybe it'll be a bit smoother.
That's a pretty solid update that we can be happy with!
It's not like we're going to get to L3 this year. Let's be reasonable and realize we're just looking for something that will be slightly easier for Tesla to iterate on with less manpower - hopefully it will scale and reduce human capital costs for Tesla. Those cost reductions would have trickle-down benefits for FSD owners. Maybe they'll even be able to script the iterations! Make it end-to-end, with no human intervention anywhere between releases! Even with the hardware costs, that would be cheap.