Poor guy just “upgraded” to a Model Y

The best V12 video out there. It shows a lot of the nuanced, human-like behavior, better than V11, that you may have missed.
Worse, AFAIK he quit his job a while back, and FSD testing is all he's trying to do now.
Yes, I also thought that. I think worst case, they could do some kind of processing to make the cameras' input similar (either up-rez the old cameras or down-rez the new ones to match).
Another thought, regarding NN training: with the new cluster of 10,000 H100s coming online while the old 14,000-A100 cluster is still running, they could use the new, more powerful cluster to train V12 and leave the old cluster to train V11.x or V12 for HW4.
Yeeees, but you'd think the very smart Tesla engineers could figure it out. Ah well, we're just going to have to wait for confirmation from people with HW4 running FSD Beta, be it v11 or v12.
The down res might not work because they are feeding the neural net the raw photon count, not a processed image.
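If the input really is raw photon counts rather than a processed image, a down-rez wouldn't be ordinary image resizing: you'd sum the counts over pixel blocks (binning) instead of interpolating, so the total photon count is preserved. A minimal sketch with made-up values; nothing here reflects Tesla's actual pipeline:

```python
# Hypothetical illustration: "down-rezzing" a grid of raw photon counts by
# summing 2x2 blocks. Summing (not averaging) keeps the total count intact.

def bin_counts(frame, factor=2):
    """Sum `factor` x `factor` blocks of a 2D grid of photon counts."""
    rows = len(frame) // factor
    cols = len(frame[0]) // factor
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in range(factor):
                for dc in range(factor):
                    out[r][c] += frame[r * factor + dr][c * factor + dc]
    return out

raw = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
print(bin_counts(raw))  # [[14, 22], [46, 54]]
```

Whether the NN would behave the same on binned counts as on native lower-resolution sensor data is exactly the open question in this thread.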
Now Tesla just needs to figure out how to provide FSD for those owners who just purchased a new HW4 car and no longer have FSD. Waiting 6 months isn't what owners expected so I'm hopeful Tesla will provide a solution sooner rather than later.
If the market wakes up and realizes the true potential of FSD ($$$$) and $TSLA takes off to the moon, you can buy a HW3 equipped car for fun to use FSD now.
No, the point is to give the neural nets unprocessed video (raw photons) so that they don't carry any training toward a processing stage that isn't needed, and latency is as low as possible.

“Raw photon count”?? That sounds like a simple number with little information. Is it?
Are we saying this FSD never actually renders anything we'd recognize as a video, just some sort of encoding of the video, and yet somehow uses huge numbers of these files to figure out how to drive?
Seriously... that's actually quite interesting. Right now we are in the early stages of transition: mostly human drivers with a few robots, so the robots need to mimic the best humans. Over time, if eventually 100% of cars on the road are self-driving, then driving patterns and rules should change. 100% robot drivers acting like perfect human drivers is good (and better than today), but not optimized. Robotaxis with surround cameras and high-speed computing will have better awareness and reaction times, and be capable of much more: higher speeds, closer spacing, tight merging, passing through narrow gaps in cross traffic, etc. But how can we flip the switch to get there unless all the cars (at least in a given area) upgrade at the same time and/or know and trust each other's capabilities?

Since the Neural Network is trained by watching videos of good human driving, how will Tesla keep it current when there are no human drivers left, only FSD cars?
I have HW4 and FSD is not enabled. It feels like EAP before FSD was introduced, so it's hard to judge it by anything.

Yes, I also thought that.
I am most interested to know if anyone with a HW4 car has FSD and how that is going; I find it very hard to believe HW4 cars don't have FSD.
What I think would work is a down-rez of the HW4 video feed to a standard compatible with NNs trained on HW3. Short term that only means one set of NNs being trained. HW4 data is simply banked for future use.
I might be wrong, but I don't see why; perhaps the down-rez is harder than we think it is, but all of us have done it with photos.
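For what it's worth, the photo-style down-rez described here is just box filtering: average each block of pixels (as opposed to, say, summing raw sensor counts). A toy sketch, with made-up frame sizes standing in for HW4 and HW3 resolutions, since the real camera formats and pipeline are unknown:

```python
def downrez(img, factor=2):
    """Box-filter downscale: average each `factor` x `factor` block of pixels."""
    rows, cols = len(img) // factor, len(img[0]) // factor
    return [
        [
            sum(
                img[r * factor + dr][c * factor + dc]
                for dr in range(factor)
                for dc in range(factor)
            ) // (factor * factor)
            for c in range(cols)
        ]
        for r in range(rows)
    ]

# Made-up sizes: a 6x8 "HW4-like" frame shrunk 2x to a 3x4 "HW3-like" one.
hw4_like = [[100] * 8 for _ in range(6)]
hw3_like = downrez(hw4_like, 2)
print(len(hw3_like), len(hw3_like[0]))  # 3 4
```

One real-world wrinkle: this only works cleanly when the resolutions differ by an integer factor; otherwise you need area-weighted resampling, which is still routine image processing.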
Also this one: Do any HW4 Vehicles have FSD Beta activated?

So we are waiting on 11.4.7 to be released, then slowly rolled out and tested, hoping it will be a stable version (I'm sick of 11.4.4). That means, best case, it will likely be at least a month before new HW4 owners have a shot at getting it. And this assumes we see 11.4.7 SOON, like yesterday, AND that it's stable.
There are actually a few of us on here with HW4 and FSDb. Gigs of data upload frequently; real-world data is definitely getting to them.
No, I don't think they need to learn that at all before speculating.

I have HW4 and FSD is not enabled. It feels like EAP before FSD was introduced, so it's hard to judge it by anything.
I think people should understand what diffusion is before speculating on why the HW4 video feed can or cannot be down-rezzed.
Diffusion takes noisy pixels and works backwards to form an image, so it's more of a predictive model. Basically, a good diffusion model largely eliminates the need for high resolution; in fact, the lowest pixel count that can still predict the correct image is best, since it needs less compute. So if Tesla can achieve extremely good predictive accuracy at 1.3 MP, I suspect they will take the high-resolution feed and convert it down to 1.3 MP to free up compute.
That image is wonderful, what is the source?

Threads of the day:
- FSD discussion: Anyone buying a CT or Highland now will be in the FSD wilderness until Spring '24
- Tesla Network prerequisites: For those thinking beyond FSD, or here, or the Super bull thread
Industrial chic:
View attachment 968828