Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Yeah, no. The Nvidia do-it-all, throw-sugar-against-the-wall approach will not stack up to Dojo in the end, for a simple reason: purpose-built hardware.

Elon: "We think that it does have a fundamental architectural advantage because it's designed not to be -- the GPU is trying to do many things for many people. It's trying to do graphics, video games. It's doing crypto mining. It's doing a lot of things. Dojo is just doing one thing, and that is training. And we're also optimizing the low-level software."​


Maybe, maybe not. Nvidia's H100 Hopper line has already achieved an order-of-magnitude performance upgrade over the older A100 Ampere-based line that Tesla was still installing for training as of the last AI Day. It's possible that further iterations of Nvidia's solutions (which they are increasingly tailoring to specific use cases) will eventually make any advantage from custom-solution efforts futile. Tesla will have to keep committing development resources to new chip designs to maintain any sort of advantage.
 
Pictures of the Cybertruck sail storage shown in Tesla China magazine:

Judging by the door handles, they appear to have been taken of one of the original prototypes; I had never seen these pics of the sail storage before.

I can’t wait for it to launch, and to get mine. Not advice, but it is possible that our chairs may no longer be underappreciated once it does.
 
Maybe, maybe not. Nvidia's H100 Hopper line has already achieved an order-of-magnitude performance upgrade over the older A100 Ampere-based line that Tesla was still installing for training as of the last AI Day. ...
Tesla are better placed to judge this than we are to speculate on it.

One feature of the Dojo architecture is the interconnect bandwidth, which may be very hard for a generic solution to match, because most general solutions don't have the same data requirements.
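
To put rough, purely illustrative numbers on "data requirements": feeding a vision cluster with multi-camera video clips adds up quickly, and that volume has to move through the system continuously. A minimal back-of-envelope sketch in Python, where every figure is an assumption picked for the example rather than a known Tesla number:

# Back-of-envelope sketch: ingest bandwidth needed to keep a video-training cluster fed.
# All numbers below are illustrative assumptions, not Tesla figures.
clips_per_second = 2_000            # assumed cluster-wide training throughput, clips/s
frames_per_clip = 8 * 36            # assumed: 8 cameras x ~36 frames per clip
bytes_per_frame = 1280 * 960 * 1    # assumed ~1.2 MP frames, 1 byte/pixel after preprocessing

ingest_bytes_per_s = clips_per_second * frames_per_clip * bytes_per_frame
print(f"Required ingest: {ingest_bytes_per_s / 1e9:.0f} GB/s")   # ~708 GB/s with these assumptions

Sustaining data movement on that order across the fabric is not the workload a generic cluster design is necessarily built around.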

And another issue is price: Dojo doesn't necessarily need to be the fastest, just the cheapest, and it only needs to be the cheapest for Tesla's own workload.

Nvidia might have an edge on R&D costs due to scale, but they don't have a cost edge in actually making the chips that is overly significant. In the case of Dojo, the R&D is already mostly done.

At worst Dojo is a form of duplicated supply chain that reduces the chances of hitting scaling bottlenecks due to demand for chips exceeding supply.

With AI potentially being the next "gold rush," demand for chips might be high, supplies might be tight, waiting lists long, and prices high.

I don't think Dojo will be considered a core part of the business, so Tesla will only retain it if it makes sense.

Tesla still makes their own car seats and they could outsource that, but they have chosen to keep it in house.
I'm sure someone could make a case that someone else has a better car seat.
 
The seats are not a strong point except on the S/X; the Y and 3 show a lack of seat experience.
 

I think the essential part is in the first paragraph of the Elon Musk Q4 2022 quote: power efficiency. High-end GPU systems can easily consume 1 kW when training neural networks, and running such a system 24/7 consumes (very roughly) 10,000 kWh per year. Over the lifetime of the system, the electricity cost will be much bigger than the hardware cost. So a 10x reduction in energy is a big cost saver, and that's without counting any cost advantage from needing less (or none) of other kinds of hardware.
The mentioned Nvidia H100 is still technically a GPU, but one optimized far more for AI training than for gaming; even so, it remains a general-purpose device capable of all sorts of training (LLMs, vision, ...), while Dojo just needs to be good at the dominant thing Tesla needs in their AI training.
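
For what it's worth, the energy arithmetic above is easy to reproduce. A minimal sketch, where the power draw, electricity price and service life are assumptions for illustration rather than figures from Tesla or Nvidia:

# Sketch of the energy math in the post above; all inputs are illustrative assumptions.
power_kw = 1.0                              # assumed average draw of a high-end training node
hours_per_year = 24 * 365                   # 8,760 h of 24/7 operation
kwh_per_year = power_kw * hours_per_year    # ~8,760 kWh, i.e. roughly 10,000 kWh per year
price_per_kwh = 0.12                        # assumed electricity price, $/kWh
service_years = 4                           # assumed service life

lifetime_energy_cost = kwh_per_year * price_per_kwh * service_years
print(f"{kwh_per_year:,.0f} kWh/yr -> ${lifetime_energy_cost:,.0f} in electricity over {service_years} years")
# A 10x efficiency gain would cut that bill (and the matching cooling load) by the same factor.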
 
This is a record quarter for Tesla in Sweden. Model Y is also a top-selling model, all powertrains included.
 

Attachment: Screenshot_2023-03-28-10-53-10-42_40deb401b9ffe8e1df2f1cc5ba480b12.jpg
A couple of years ago, I was chatting with someone who was into custom processors. We ran some back of napkin numbers on when it makes sense to design your own chips optimized for a custom computing requirement.

The rough number was about 100 million dollars in microprocessor spend over maybe 3 to 5 years. That may not look like a particularly large number, but a lot of it depends on what efficiency gains you can actually get.

With Tesla, of course, things are very different. They may be spending something in that vicinity or more if they were building an Nvidia solution. But more importantly, they are building more than a GPU: it's a whole new system with a vastly different architecture, which would mean the breakeven is higher. But it would probably let Tesla do some things that are physically impossible with just Nvidia GPUs paired with ARM or x86 processors. What value would you put on that?
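
As a rough illustration of that breakeven logic (every number here is an assumption picked for the example, not a Tesla or Nvidia figure):

# Illustrative breakeven sketch for "design your own accelerator" vs "keep buying GPUs".
# All inputs are assumptions for the example, not real Tesla or Nvidia numbers.
nre_cost = 100e6           # one-time design/R&D spend (the ~$100M figure mentioned above)
custom_unit_cost = 2_000   # assumed manufacturing cost per custom accelerator
gpu_unit_cost = 15_000     # assumed purchase price per off-the-shelf GPU
perf_ratio = 1.0           # assumed: one custom accelerator does the work of one GPU

def breakeven_units(nre, custom_unit, gpu_unit, ratio):
    """GPU-equivalents of compute at which the custom design becomes the cheaper option."""
    saving_per_unit = gpu_unit - custom_unit / ratio
    return float("inf") if saving_per_unit <= 0 else nre / saving_per_unit

print(f"Breakeven at ~{breakeven_units(nre_cost, custom_unit_cost, gpu_unit_cost, perf_ratio):,.0f} units")
# ~7,700 GPU-equivalents with these assumptions; any efficiency edge (perf_ratio > 1) lowers it further.

The point isn't the exact numbers, just that the one-time R&D cost amortises quickly once the fleet of accelerators gets into the thousands.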

Even if Dojo doesn't deliver on its full promise, I think it will deliver enough for Tesla to pivot towards it. I would be extremely surprised if they shut it down and go back to Nvidia, because it makes zero sense for Nvidia to build something for Tesla's super unique needs, and the generic solution is way worse.
Agreed. The relative speed between processors isn't relevant if enough steps are skipped and/or unnecessary logic is eliminated. I recall that a 95 MHz MIPS processor could run circles around the 325 MHz Intel processors of the day for the specific task it was designed for. I suspect Dojo has the potential to be the same.
 
The relative speed between processors isn't relevant
It becomes relevant when one takes into account all the relevant factors.
Which is never just the "synthetic benchmark" or "theoretical throughput" as you've demonstrated by:
I recall that a 95 MHz MIPS processor could run circles around the 325 MHz Intel processors of the day for the specific task it was designed for.
And even that MIPS wasn't exactly designed for a particular task; it was just a bit less burdened with legacy requirements.
One of which being "benchmarks sell", i.e. the better the theoretical max, the more profit.

IIRC Tesla already said they get less than 10% of theoretical max performance out of their Nvidia GPU cluster.
Theoretical max performance is next to meaningless; it is the practical throughput that matters.
By designing your own ASIC, you optimize for that "practical throughput" that (only) you need.
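
That "less than 10% of theoretical max" point is just a utilization calculation. A quick sketch, where the peak is Nvidia's published A100 BF16 tensor figure and the achieved value is an assumption picked for illustration:

# "Theoretical max vs practical throughput" expressed as utilization.
# Peak is the published A100 BF16 tensor figure; the achieved value is an illustrative assumption.
peak_flops_per_gpu = 312e12               # A100 BF16 tensor peak, FLOP/s
achieved_flops_per_gpu = 25e12            # assumed sustained useful FLOP/s during training

utilization = achieved_flops_per_gpu / peak_flops_per_gpu
print(f"Utilization: {utilization:.0%}")  # ~8% with these numbers
# Raising utilization from ~8% to ~40% buys more real throughput than a 2x faster chip stuck at 8%.

Which is exactly the argument for an ASIC: you give up peak-spec bragging rights to maximise the throughput you actually use.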
 
Pictures of the Cybertruck sail storage shown in Tesla China magazine: ...
That image was shown at the Cybertruck reveal. It may just be the graphics team pulling something from the shared drive.