> HW4 has three camera inputs that are not used on the cars.

On S&X, yes; on 3&Y HW4 has no unused inputs. At least, that is what I thought green said.
This is definitely not my expectation- I think it’s a major rewrite and (mostly) end to end.
That is why my expectations of an early release are still low.
I also think there will be multiple rounds of employee and YT rollouts before we get it.
> On S&X, yes; on 3&Y HW4 has no unused inputs. At least that is what I thought green said.

The 3/Y have the connector locations depopulated. S/X have them, but with no cables plugged in.
> This is definitely not my expectation- I think it’s a major rewrite and (mostly) end to end. That is why my expectations of an early release are still low. I also think there will be multiple rounds of employee and YT rollouts before we get it.

V11, as I recall, first went to employee vehicles in mid-November 2022 and finally reached end users at the end of March 2023. So, almost five months to go from employees to customers. V12 started its rollout a little later in the year than V11 did, so I would expect to get V12 no earlier than April.
> When we get V12 I’m going to take things slowly and test my regular routes in low traffic before starting to use it regularly, and be extra-extra vigilant.

Yeah, I'm expecting end-to-end to drive noticeably differently from 11.x, so extra practice and resetting human-driver expectations is probably a good starting point. Presumably Tesla has accumulated automated test cases from 10.x, as well as from the highway situations leading up to single stack, so hopefully there are plenty related to control to ensure those known failures of earlier FSD Beta do not regress with end-to-end. The trickier aspect is new driving behaviors that might have been "easy" for previous versions or newly learned from examples (e.g., pulling to the side of the street at a destination); those probably didn't have existing tests.
> Surprised nobody picked up on Wholemars saying "it could suck worse than v11" about FSD v12. He's always praising v11 like it's near level 5 and now he's implying it sucks? Lol, what a tool.

No, that is NOT what he said.
> 11/11/22: Tesla FSD Beta V11 rollout confirmed by Elon Musk
> Tesla has now started the rollout of FSD Beta V11. The update's release was confirmed by CEO Elon Musk on Twitter. (www.teslarati.com)

Ah yeah, the original release of 11.x was exactly 11/11 at 11:11. Tesla's two-week development cycle with Tuesday releases happens to line up with 12/12 at 12:12 this year, so that could be another internal deadline for 12.x before the holiday update / end of year.
No fix yet... I'm stuck with .27 FSDj (j for junk), and it will be six months before the beta branch gets to .44!
> This is definitely not my expectation- I think it’s a major rewrite and (mostly) end to end. That is why my expectations of an early release are still low. I also think there will be multiple rounds of employee and YT rollouts before we get it.

Best to manage and set expectations low, given Elon's track record here...
> Back to code

That’s a definite maybe.
My 38.9 vehicle just messaged to download 44.1
If I refuse the download, will I eventually get 38.10 offered?
> That’s a definite maybe.

Going to decline the upgrade.
> I don’t see FSD available in 44.1
> Why?

Because the TeslaFi peeps probably haven't updated their database yet. (Or nobody has told them what FSD version it contains yet.)
You're suggesting that the end-to-end livestream from August is on a separate development path from what we'll get with 12.x, because three months from demo to employee testing is not enough time to make things safe?
> I think it’s a major rewrite and (mostly) end to end.

To me, the above doesn't seem that major. It certainly seems worthy of a revision to v12 Beta, though! V12 Beta very much seems like it is going to be a small incremental change. ...
> I think that demo just shows their “end-to-end” planner in action. Which is an incremental change. They’ve rolled things in like this before - their lanes network, their occupancy network, etc. This just adds another facet (and gets rid of a lot of code). ...

It seems to me that to believe this, you have to specifically deny and disbelieve what Elon and Ashok explained: that there is no code for recognition of stop signs, traffic lights, lanes, and so on.
> But that is very different than just throwing photons at a massive single NN structure and getting driving controls out!
I believe that it is a massive but not formless NN structure that is the starting point for the v12 video training. You don't throw "photons" at it at training time, you throw at it recordings and simulations of video and transducer telemetry.
I'm no ML expert, but the following is my current understanding of why this works and why it is indeed a very significant departure from the previous versions, even while it builds on them:
V12 does very much depend on the prior developments, because that is where the starting point weights come from. Those weights effectively determine the architecture of the neural network. I believe that this is tractable because it then is not, in fact, just a giant mass of software neurons with every output potentially influencing every other neuron's input. Instead there are large, but not impossibly large, lists of dot product weights that reflect the architectural grouping and functionality of various decision centers throughout the system.
There is not much compiled "code" but there is a huge database of tensor weight values that define the interactions among the neurons, and just as importantly define, by their absence, the zero-weighted non-interactions among isolated sub-networks.
In theory it could be implemented as a single homogeneous network where every neuron receives a weighted input from a gigantic list of every other neuron in the whole thing, but most of those weights would be zero, so the efficiency and performance would be dreadful, and the memory requirement would be enormous to no purpose. More fundamentally, I believe the training iterations would quickly become unstable, as backpropagation tried assigning useless finite intercommunication weights among what should be isolated sub-networks: a kind of entropy in which the prior (promising yet imperfect) network actually loses its architectural form and descends into chaos.
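To put rough numbers on the dense-versus-modular point, here is a toy parameter count. All module sizes are invented purely for illustration; this is not Tesla's actual architecture:

```python
# Toy comparison: one homogeneous network where every neuron can see
# every other neuron, vs. the same neurons split into isolated
# sub-networks whose zero cross-module weights are simply never stored.

def dense_weight_count(n_neurons: int) -> int:
    # Fully homogeneous: every neuron weights every other neuron's output.
    return n_neurons * n_neurons

def modular_weight_count(module_sizes: list[int]) -> int:
    # Neurons only interact within their own sub-network.
    return sum(m * m for m in module_sizes)

modules = [4096, 2048, 1024, 512]   # hypothetical vision / occupancy / lanes / planner
total_neurons = sum(modules)        # 7,680 neurons overall

dense = dense_weight_count(total_neurons)
modular = modular_weight_count(modules)
print(f"homogeneous: {dense:,} weights")      # 58,982,400
print(f"modular:     {modular:,} weights")    # 22,282,240
print(f"{dense - modular:,} stored weights would just encode 'no connection'")
```

Even in this tiny made-up example, well over half the homogeneous network's weights exist only to say "these neurons don't talk to each other," which is exactly the memory and training-stability cost described above.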
All this means that the v12 end-to-end network is only made possible and practical by having it train and tune the finite lists of weightings that came out of the prior code-defined NN versions. It is indeed a fundamentally different approach, but it critically depends on the prior work.
Also (and here I'm really just ruminating) the v11 starting-point platform, or future iterations of it, may still have importance as a kind of developmental breadboard for cases where the v12 training is not producing satisfactory outcomes. For example, if it doesn't generatively learn to read and understand signs or crossing-guard actions in school zones, they could go back to v11 and do some old-fashioned module coding that "kind of works" and becomes the foundation of a new sub-network: a needed capability enabler that the end-to-end training can then fine-tune and extract the most performance from.
> Whew…

Too Long? Don't Read!