
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Keep joking by pointing out some of the personal commentary, but when you look into the substantive comments he makes, it's a lot more meaningful.

You're being selective about comments, right out of a political playbook, to character-assassinate, when I'm talking about a conversation rooted in technical arguments, which he would make and has made. Do I have to point out all those comments myself?

Tesla uses a modified version of YOLO, called HydraNet, to identify objects and estimate voxels. Those voxels are only accurate down to about 1 meter. There's a much more advanced algorithm called Complex-YOLO, which takes direct ranging data (e.g. lidar point clouds) to generate voxels, and is accurate to within about 2-3 centimeters.
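
For what it's worth, Complex-YOLO doesn't consume raw points directly; it first rasterizes the point cloud into a birds-eye-view grid and runs a YOLO-style detector on that. Here's a rough sketch of that preprocessing step - the grid ranges, cell size, and channel layout are illustrative assumptions, not the paper's exact values:

Code:
import numpy as np

def pointcloud_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0),
                      z_range=(-2.0, 1.0), cell=0.1):
    """Rasterize (N, 4) lidar points (x, y, z, intensity) into a BEV map."""
    x, y, z, r = points.T
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z, r = x[keep], y[keep], z[keep], r[keep]

    cols = ((x - x_range[0]) / cell).astype(int)  # forward axis -> columns
    rows = ((y - y_range[0]) / cell).astype(int)  # lateral axis -> rows
    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)

    bev = np.zeros((3, h, w), dtype=np.float32)
    np.maximum.at(bev[0], (rows, cols), (z - z_range[0]) / (z_range[1] - z_range[0]))  # max height
    np.maximum.at(bev[1], (rows, cols), r)                      # max intensity
    np.add.at(bev[2], (rows, cols), 1.0)                        # point count
    bev[2] = np.minimum(1.0, np.log1p(bev[2]) / np.log(64.0))   # normalized density
    return bev

The detector then regresses oriented boxes on this grid using a complex-number angle encoding, which is where the name comes from.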




During AI Day, Karpathy mentioned using point-cloud data as part of the labeling process for their training data. The customer cars don't collect point-cloud data, so their footage can't actually be used for the training process they were describing.

You can't just throw data into a neural net and expect it to learn. You need very precise, high-quality, cleaned, and formatted data. Just grabbing video snippets from customers' cars doesn't work.
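
To make that concrete, here's the kind of validation gate that sits in front of any serious training run. Everything here (file layout, label schema) is a made-up placeholder, not Tesla's pipeline:

Code:
import json
from pathlib import Path

REQUIRED_KEYS = {"boxes", "camera_id", "timestamp"}  # hypothetical label schema

def is_usable(clip_dir: Path) -> bool:
    """Reject clips with missing, corrupt, or incomplete labels."""
    label_file = clip_dir / "labels.json"
    if not label_file.exists():
        return False
    try:
        frames = json.loads(label_file.read_text())
    except json.JSONDecodeError:
        return False                      # corrupt label file
    if not frames:
        return False                      # empty annotations
    return all(REQUIRED_KEYS <= set(f) for f in frames)

def build_clip_list(root: Path) -> list[Path]:
    clips = [d for d in root.iterdir() if d.is_dir()]
    usable = [d for d in clips if is_usable(d)]
    print(f"kept {len(usable)}/{len(clips)} clips after validation")
    return usable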


The problem is, 250,000 scenes could mean anything. Were those all included in training? Were they real world or simulated (in which case there's no need to label)? How long was each scene? Did it include all camera data, or a subset?
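
Just to show how wide the range is - plug in different (entirely assumed) values for those unknowns and the implied labeling workload swings by three orders of magnitude:

Code:
def labeled_frames(scenes, seconds_per_scene, cameras, fps):
    return scenes * seconds_per_scene * cameras * fps

# Assumed bounds: 1 s single-camera clips at 10 fps vs. 60 s clips
# from all 8 cameras at 36 fps (both ends are guesses).
lo = labeled_frames(250_000, seconds_per_scene=1, cameras=1, fps=10)
hi = labeled_frames(250_000, seconds_per_scene=60, cameras=8, fps=36)
print(f"{lo:,} to {hi:,} frames")  # 2,500,000 to 4,320,000,000 frames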


Those numbers are wrong. The DGX-H100 gets 8 PFLOPS TF32, 16 PFLOPS FP16/BF16 (TF16 isn't a thing), and 32 PFLOPS FP8/INT8. On top of that, the DGX is a complete system, whose power consumption includes the CPUs, network cards, everything. The training tiles are just the Dojo chips, and don't include all the support hardware. As for network bandwidth, Tesla later admitted they haven't even started on an interconnect, and those numbers were purely aspirational.
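
For anyone sanity-checking comparisons like this: those DGX figures are totals for 8 GPUs (and, if they follow NVIDIA's datasheet convention, assume structured sparsity), so a chip-vs-chip comparison needs to divide them out first:

Code:
# System totals quoted above, divided across the DGX-H100's 8 GPUs.
dgx_pflops = {"TF32": 8, "FP16/BF16": 16, "FP8/INT8": 32}
for precision, total in dgx_pflops.items():
    print(f"{precision}: {total} PFLOPS system -> {total / 8:.0f} PFLOPS per GPU")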

But more importantly, the Dojo specs are what they're aiming for. Not only do they still need to finish the actual chip, they still need to develop the interconnect, compiler, and a functional PyTorch build. That's years of work, at minimum. By which time Nvidia will be another generation ahead. This is just another case of Tesla hyping up theoretical tech as if it's already finished.




It's so disappointing to see these types of responses and, dare I say, elitism/ego in a community that typically prides itself on information seeking.

We've seen Gali, Rob, and Dave all take a drastically different tone on FSD since they entered the beta. That's for a reason.

Hey, if you just don't like the dude's tone, or find yourself too disinterested to get into a back-and-forth with someone of a different opinion, then (shrugs). At that point you do actually start to sound like a cult.

For the record, I do believe Tesla will nail mass deployment of robotaxis first.
I believe they will too, but by the time they do it won't be a business like the current opportunity. The problems with Tesla and real L4/L5 range from the actual business to the software to the hardware. All of those are moving pieces, and nothing will be finished very quickly; until the software is really done, we are waiting on finalizing the hardware needed - they may require more cameras, etc. It's going to be a long while, and when the folks visiting the factory say 6 years minimum, maybe 10, then it is just senseless to attack voices of realism. But rest assured you will get downvoted.

Thanks for the informative post
 

1. If you noticed, Tesla changed the conditions on its FSD package a few years ago to really restrict what the definition of "FSD complete" is. I suspect that robotaxis will only be legal with an additional "upgrade package" that has new hardware for cameras (perhaps even adding new cameras).

2. I suspect that Tesla's robotaxi-specific vehicle will have more cameras, at a different resolution, than current Tesla cars.

3. Tesla can roll out a robotaxi today and just geofence it to residential neighborhoods and disable left turns. Totally possible (see the sketch at the end of this post). I don't think they want to cheat, but in a "worst case" scenario a robotaxi "MVP" pathway is still possible there.

4. Even though I'm a former telecom VC who looked at a lot of hardware infrastructure (and vision-based machine learning) systems for applications like monitoring cable lines, I am STILL out of my depth in this area. I have no clue if that Reddit dude (or anyone else) is blowing smoke up my ass. But I don't know if the Tesla people are either (and honestly Douma seems sorta like a charlatan to me). There's no "Sandy Munro" of the Tesla influencer world with strong real AI credentials who can dive in deep. That's what makes it a black box for me and why I'm desperately trying to information-seek in this area (even though we know it won't be needed for a $10 trillion+ Tesla valuation with cars + energy storage).

Always on the search for more info...
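
On point 3, here's a toy illustration of what "geofence it and disable left turns" could look like as a route-planning constraint. The polygon, route format, and maneuver labels are entirely made up:

Code:
def point_in_polygon(pt, poly):
    """Standard ray-casting test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

SERVICE_AREA = [(0, 0), (10, 0), (10, 10), (0, 10)]  # hypothetical neighborhood

def route_allowed(route):
    """Reject routes that leave the geofence or require a left turn."""
    return (all(point_in_polygon(wp, SERVICE_AREA) for wp in route["waypoints"])
            and "left_turn" not in route["maneuvers"])

candidate = {"waypoints": [(1, 1), (5, 5), (9, 2)],
             "maneuvers": ["straight", "right_turn", "straight"]}
print(route_allowed(candidate))  # True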
 
3. Tesla can roll out a robotaxi today and just geofence it to residential neighborhoods and disable left turns. Totally possible.

Except, it's not.

Because FSDBeta is explicitly missing numerous required functions for anything higher than L2 driving (a >L2-capable OEDR being the biggest), and Tesla has no plans to add them to that code. They explicitly said so in legal responses to the CA DMV that were made public a while back.

They do go on to say they intend to develop FUTURE software that will have those capabilities - but FSDBeta is not that, and is never going to be, nor intended to be, that. FSDBeta, the current software that still isn't finished enough for wide release, is L2 and will remain that way. Whatever robotaxis use will be future software after that.

If you really want to dig into the technical reasons that's true, I'd again encourage taking further discussion to the FSD forum specifically - lots of threads on this already.
 
I suspect that Tesla's robotaxi-specific vehicle will have more cameras, at a different resolution, than current Tesla cars.
The new cameras are higher resolution.

Once the problem has been solved, the software can potentially be optimised to run on older hardware.
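
One common lever for "optimise to run on older hardware" is post-training quantization. A minimal PyTorch sketch of the general technique follows; whether any of this maps onto Tesla's actual stack is pure assumption on my part:

Code:
import torch
import torch.nn as nn

# Toy stand-in for a perception model.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

# Dynamic quantization: weights stored as int8, activations quantized on the fly.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 256)
print(quantized(x).shape)  # torch.Size([1, 10]) - same interface, smaller model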

Not having cameras cover the required field-of-view or cameras being blinded are examples of situations where the old hardware can't adapt.

There is a wide variance in the quality of human vision. Cameras, when not blinded, and even when partially blinded or obscured, need to be compared to human vision, and in particular to the standard of vision required to drive.

If the sensor suite has "the standard of vision required to drive", the remainder of the problem is training the neural nets. IMO getting it working on more capable hardware is a likely path to getting it working on less capable hardware.

The people who can accurately comment on Tesla's FSD progress, and the problems that still need to be solved, are those working on the FSD team, and only those working on the team. Everyone else is guessing on the basis of a ridiculously insufficient set of data.
 
The people who can accurately comment on Tesla's FSD progress, and the problems that still need to be solved, are those working on the FSD team, and only those working on the team. Everyone else is guessing on the basis of a ridiculously insufficient set of data.
Do you have any idea how ridiculous you sound? The people who can most accurately comment on Tesla's FSD progress are those testing it in the real world. There are lots of those people. Many are unqualified to offer any kind of opinion because they understand little about how to accurately observe anything. But that still leaves plenty of people with direct experience and the ability to interpret and describe it.

The Tesla team can most accurately assess the progress of those things that haven't yet made it out to their legion of beta testers.

Personally I have observed only marginal improvements since last year, when I first got the FSD Beta software. They have come slowly and steadily, but with many steps backward. And I'm mostly driving around Bay Area suburbia, which ought to be pretty easy and familiar to Tesla's developers. There have been no "step changes", which I've found very disappointing.
 
This is good for a laugh (or a cry):
YouTuber's Gutted Tesla Model S Plaid Brakes Fail, Hits the Tire Wall at Over 160 MPH

A YouTuber shared the moment his car's freshly installed carbon-ceramic brakes failed at over 160 mph (257 kph). The Model S Plaid stopped abruptly in the track's tire wall. Fortunately, he escaped with minor injuries. The man, however, still blames Tesla for this unfortunate event.
Wait, so this dude replaced the stock brakes with carbon-ceramic ones, but he somehow didn't know he can't use standard street brake fluid on the track? This one's a real winner; someone get him a Darwin Award.
 
It is obviously a warranty issue. Anyone can see this.

The driver needs to be sent back to the OEM for a brain replacement, if the warranty period hasn't lapsed already.

I recently replaced the brake fluid on my bicycle, the manufacturer specifies two grades, one has a higher boiling point than the other. Guess which one I put in? A rocket surgeon isn't needed to work this out.
 
Wait, so this dude replaced the stock brakes with carbon-ceramic ones, but he somehow didn't know he can't use standard street brake fluid on the track? This one's a real winner; someone get him a Darwin Award.
Yep, and from the product page of the brake kit he installed:

A professional brake bleed with high performance fluid is a requirement for the functionality of our brake system (and any brake system).

So he didn't follow instructions.
 
Do you have any idea how ridiculous you sound? The people who can most accurately comment on Tesla's FSD progress are those testing it in the real world. There are lots of those people. Many are unqualified to offer any kind of opinion because they understand little about how to accurately observe anything. But that still leaves plenty of people with direct experience and the ability to interpret and describe it.

The Tesla team can most accurately assess the progress of those things that haven't yet made it out to their legion of beta testers.

Personally I have observed only marginal improvements since last year, when I first got the FSD Beta software. They have come slowly and steadily, but with many steps backward. And I'm mostly driving around Bay Area suburbia, which ought to be pretty easy and familiar to Tesla's developers. There have been no "step changes", which I've found very disappointing.
Could I ask an "uneducated" question, please? The assumption seems to be that if enough data is thrown at a NN, it will eventually learn to do the right thing. Has this ever been proven for a moderate task? Ever since my first day as a programmer (IBM mainframe, punch-card era), AI has promised to make the programmer obsolete, and 40 years later the software guys are as necessary as ever!
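
Not an authoritative answer, but for well-posed tasks with clean labels the "throw data at it" recipe demonstrably works; the argument is over whether driving is such a task. A toy sketch with synthetic data and a made-up task:

Code:
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(10_000, 2) * 2 - 1                      # random points in [-1, 1]^2
y = ((X ** 2).sum(dim=1) < 0.5).float().unsqueeze(1)   # label: inside a circle?

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(500):                  # full-batch gradient descent
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

acc = ((net(X) > 0).float() == y).float().mean().item()
print(f"train accuracy: {acc:.3f}")   # typically > 0.95: the net learned the boundary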
 
Good thing this didn't crash TSLA. Keywords: Tesla fire car crash.


Absolutely nothing to do with Tesla, Inc.
 