nativewolf
Active Member
Keep joking by pointing out some of the personal commentary, but when you look into the substantive comments he makes, it's a lot more meaningful.
You're being selective about comments, right out of a political playbook to character-assassinate, when I'm talking about a conversation rooted in technical arguments, which he would make and has made. Do I have to point out all those comments myself?
Tesla uses a modified version of YOLO called HydraNet to identify objects and estimate voxels. Those voxels are only accurate down to about 1 meter. There's a much more advanced algorithm called Complex-YOLO, which takes direct ranging data to generate voxels and is accurate to within about 2-3 centimeters.
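To put rough numbers on why camera-derived voxels degrade with distance while direct ranging doesn't, here's a toy comparison. The baseline, focal length, and error figures below are made-up illustrative assumptions, not Tesla's or anyone's measured specs:

```python
# Toy illustration: depth error from camera-based (stereo/disparity)
# estimation grows roughly quadratically with distance, while a direct
# ranging sensor's error is roughly constant. All numbers are assumptions.

def camera_depth_error(z_m, baseline_m=0.2, focal_px=1000.0, disparity_err_px=0.5):
    """Approximate disparity-based depth error: dz ~ z^2 / (f * b) * d_disp."""
    return (z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

def ranging_depth_error(z_m, range_err_m=0.02):
    """Direct ranging error is roughly distance-independent."""
    return range_err_m

for z in (10, 30, 60):
    print(f"{z} m: camera ~{camera_depth_error(z):.2f} m, ranging ~{ranging_depth_error(z):.2f} m")
```

The point isn't the exact figures, just the scaling: the camera-derived error blows up quadratically with range, the ranging error doesn't.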
…
During AI Day, Karpathy mentioned using point-cloud data as part of the labeling process for their training data. The customer cars don't collect point-cloud data, so it can't actually be used for the training they were describing.
You can't just throw data into a neural net and expect it to learn. You need very precise, high-quality, cleaned, and formatted data. Just grabbing video snippets from customers' cars doesn't work.
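A minimal sketch of what "cleaned and formatted" means in practice: before any training run, every sample gets validated and the junk gets dropped. The sample format here is entirely hypothetical, just to show the kind of filtering raw fleet clips would need:

```python
# Hypothetical cleaning pass: keep only samples that are complete,
# correctly structured, and labeled. Raw fleet video fails these
# checks constantly, which is why labeling/curation is the hard part.

def clean_samples(samples):
    cleaned = []
    for s in samples:
        if not isinstance(s, dict):
            continue                      # malformed record
        frames = s.get("frames")
        label = s.get("label")
        if not frames or label is None:
            continue                      # empty clip or no label
        if any(f is None for f in frames):
            continue                      # corrupted/missing frames
        cleaned.append(s)
    return cleaned

raw = [
    {"frames": [0, 1, 2], "label": "cut-in"},
    {"frames": [], "label": "cut-in"},        # empty clip
    {"frames": [0, None], "label": "merge"},  # corrupted frame
    {"frames": [0, 1], "label": None},        # unlabeled
]
print(len(clean_samples(raw)))  # prints 1: only the first sample survives
```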
…
The problem is, 250,000 scenes could mean anything. Were those all included in training? Were they real world or simulated (in which case there's no need to label)? How long was each scene? Did it include all camera data, or a subset?
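To show how much those unstated assumptions matter, here's a back-of-envelope calculation. Every parameter (clip length, camera count) is an assumption, which is exactly the problem with the headline number:

```python
# How much data "250,000 scenes" implies depends entirely on unstated
# assumptions. Clip lengths and camera counts below are guesses.

def total_camera_hours(n_scenes, seconds_per_scene, n_cameras):
    return n_scenes * seconds_per_scene * n_cameras / 3600

low = total_camera_hours(250_000, 5, 1)    # 5 s clips, one camera
high = total_camera_hours(250_000, 60, 8)  # 60 s clips, all 8 cameras
print(f"~{low:.0f} to ~{high:.0f} camera-hours")
```

Same headline figure, nearly a 100x spread in actual data volume depending on what "scene" means.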
…
Those numbers are wrong. The DGX-H100 gets 8 PFLOPs TF32, 16 PFLOPs FP16/BF16 (TF16 isn't a thing), 32 PFLOPs FP8/INT8. On top of that, the DGX is a complete system, which includes power consumption for the CPUs, network cards, everything. The training tiles are just the DOJO chips, and don't include all the support hardware. In terms of network bandwidth, Tesla later admitted they haven't even started on an interconnect, and those numbers were purely aspirational.
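The system numbers above are just the per-GPU tensor-core figures times eight. A quick sanity check, using what I recall as Nvidia's published with-sparsity per-H100 numbers (treat the per-GPU values as assumptions worth verifying against the datasheet):

```python
# Sanity-check DGX-H100 system throughput from per-GPU figures.
# Per-GPU TFLOPS below are Nvidia's with-sparsity tensor-core numbers
# as I recall them; verify against the official datasheet.
per_gpu_tflops = {"TF32": 989, "FP16/BF16": 1979, "FP8": 3958}
gpus = 8

for fmt, tflops in per_gpu_tflops.items():
    print(f"{fmt}: ~{tflops * gpus / 1000:.0f} PFLOPs per DGX")
```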
But more importantly, the Dojo specs are what they're aiming for. Not only do they still need to finish the actual chip, they still need to develop the interconnect, compiler, and a functional PyTorch build. That's years of work, at minimum. By which time Nvidia will be another generation ahead. This is just another case of Tesla hyping up theoretical tech as if it's already finished.
…
It’s so disappointing to see these types of responses and dare I say elitism/ego in a community that typically prides itself on information seeking.
We've seen Gali, Rob, and Dave all take a drastically different tone on FSD since they entered the beta. That's for a reason.
Hey, if you just don't like the dude's tone or find yourselves too disinterested to get into a back-and-forth with someone of a different opinion, then (shrugs). At that point you do actually start to sound like a cult.
For the record, I do believe Tesla will nail mass deployment of robotaxis first.
I believe they will too, but by the time they do, it won't be a business like the current opportunity. The problems with Tesla and real L4/L5 range from the actual business to the software to the hardware. All of those are moving parts, and nothing will be finished very quickly: until the software is really done, we're waiting to finalize the hardware needed; they may require more cameras, etc. It's going to be a long while, and when the folks visiting the factory say six years minimum, maybe ten, it is just senseless to attack voices of realism. But rest assured, you will get downvoted.
Thanks for the informative post