Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Nvidia introduces new FSD computer chip


diplomat33

Average guy who loves autonomous vehicles
Aug 3, 2017
"NVIDIA has unveiled what they call the “world’s most advanced processor” for use in autonomous vehicles and robots. The new NVIDIA DRIVE AGX Orin chip can perform 200 trillion operations per second, which is almost seven times as many as NVIDIA’s previous Xavier chip (30 trillion operations) and more than Tesla’s FSD Computer (144 trillion).

NVIDIA’s Xavier chip was used in a multi-processor configuration and paired with GPUs in their DRIVE PX Pegasus self-driving computer, which NVIDIA claimed at the time would be able to offer level 5 autonomous driving.

The Orin chip will be capable of scaling from level 2 to level 5 autonomous driving. It will be available to automakers for the 2022 model year. "
Spurned by Tesla, NVIDIA's new Orin self-driving processor ups the game by 7x - Electrek
 
HW4 (FSD Computer 2) is supposed to come out in 2021. From TechCrunch:

“Tesla is now about halfway through the design of the next-generation chip.​

Musk wanted to focus the talk on the current chip, but he later added that the next-generation one would be “three times better” than the current system and was about two years away.”
If HW4 has triple the TOPS of HW3, that's 400+ TOPS to Orin's 200 TOPS.
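The arithmetic behind that estimate, assuming "three times better" maps directly onto raw TOPS (a big assumption on my part):

```python
# Speculative HW4 TOPS estimate from Musk's "three times better" remark.
hw3_tops = 144     # Tesla FSD Computer (HW3) total, per Tesla's figure
orin_tops = 200    # single NVIDIA Orin SoC, as announced

hw4_tops = 3 * hw3_tops
print(hw4_tops)              # 432, i.e. "400+ TOPS"
print(hw4_tops > orin_tops)  # True
```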
 
Nvidia might have this in production in 2+ years. Tesla has had HW3 in production cars for almost a year now. Having a chip in production and having it in production cars is not the same thing... Nvidia's customers will be roughly 3–5 years behind Tesla if this is released according to plan. By then HW4 should be ready, and some form of FSD will likely have been out gathering data and getting useful feedback for a few years across a few hundred thousand vehicles. IMO Tesla is still increasing its lead, at least against Nvidia and Mobileye.
 
HW4 (FSD Computer 2) is supposed to come out in 2021. From TechCrunch:

“Tesla is now about halfway through the design of the next-generation chip.​

Musk wanted to focus the talk on the current chip, but he later added that the next-generation one would be “three times better” than the current system and was about two years away.”
If HW4 has triple the TOPS of HW3, that's 400+ TOPS to Orin's 200 TOPS.

Orin is a single SoC delivering 200 TOPS under 70 W, yet you are comparing it to two chips on a board. Even worse, you compared it to three boards (six chips). Seriously? HW4 will be a board with multiple chips, not a single chip. But of course you already knew this and purposely left it out.
 
I could totally be wrong about the facts of the matter. My interpretation from statements by Tesla and Nvidia was that Tesla Hardware 4/Full Self-Driving Computer v2 would have 400+ TOPS total and that Nvidia’s Drive AGX Orin platform would have 200 TOPS total. It may be the case that Nvidia will also offer a hardware platform with multiple Orins, thereby providing 400+ TOPS total. So far, I haven’t been able to find any information to confirm or deny that.

My personal interest in this topic is not around comparing the hardware acumen of Tesla and Nvidia — I have no dog in that fight — but simply in comparing the TOPS of compute that will be available in HW4 Teslas vs. cars with Drive AGX Orin.

What I “Disagree” with is an approach to discussions of this sort that are personally attacking and/or accusatory. You can make a polite factual correction without accusing someone of being knowingly deceptive.

On a related note, you should not be so intolerant of disagreement that you feel a need to personally attack people who have a different opinion than you. That’s not a good way to have these kinds of discussions. People are allowed to disagree and have different opinions. If someone changes their mind, it should be because you showed them good reasons and evidence for thinking differently. The end game should not be to attack and bully someone until they go away because you can’t tolerate them having a different opinion than you. That’s not discourse; that’s trolling.

The hazard of posting under my real name (or a pseudonym like “strangecosmos” tied to my real name) is that anonymous people online sometimes make false, defamatory claims about me that could damage my reputation. I’ve found that many people are far too credulous and will often just automatically believe slanderous claims without fact-checking them. If I just posted anonymously, I wouldn’t care about that because it couldn’t harm me in the real world. I could just block or ignore the people I don’t want to hear from and not think about it. But since I post under my real name and it can harm me in the real world, I feel some need to engage when trolling/harassing/abusive behaviour occurs.
 
@Trent Eady

Stop hiding behind the disagree button. There's nothing to disagree with; this is blatant fact. You compared a single SoC to a board with multiple chips. It's equivalent to disagreeing that 1 + 1 is 2.

This is blatant misinformation.

Who cares what the underlying number of chips the product has? The thing that is important is the final product in production vehicles and its capabilities.

That's like saying it doesn't count that a Tesla does the 1/4 mile in 10.x seconds because it has two motors instead of a single V8 engine. Tesla's Autopilot HW4 is 400+ TOPS, and the Nvidia product they have announced so far is 200 TOPS. It's all on paper anyway until it goes into a production vehicle (*cough* HW3: 144 TOPS *cough*)

Competition is good, man. Let's sit back as consumers and enjoy seeing who can make the best product for us.
 
Who cares what the underlying number of chips the product has? The thing that is important is the final product in production vehicles and its capabilities.

That's like saying it doesn't count that a Tesla does the 1/4 mile in 10.x seconds because it has two motors instead of a single V8 engine. Tesla's Autopilot HW4 is 400+ TOPS, and the Nvidia product they have announced so far is 200 TOPS. It's all on paper anyway until it goes into a production vehicle (*cough* HW3: 144 TOPS *cough*)

Competition is good, man. Let's sit back as consumers and enjoy seeing who can make the best product for us.

Because every chip analysis done since, like, forever has been an SoC vs. an SoC. But I guess I should leave it to Tesla fans to try to rewrite the rules. If a comparison puts Tesla in a negative light, it doesn't matter or it's not the way things should be compared. If it puts Tesla in a very good light, then that's all that matters and that's the way it should be compared.

But since "who cares what the underlying # of chips the product has," if you want to compare boards, then it's Tesla's HW4 at 400 TOPS vs. Orin's 2,000 TOPS.
 
I could totally be wrong about the facts of the matter. My interpretation from statements by Tesla and Nvidia was that Tesla Hardware 4/Full Self-Driving Computer v2 would have 400+ TOPS total and that Nvidia’s Drive AGX Orin platform would have 200 TOPS total. It may be the case that Nvidia will also offer a hardware platform with multiple Orins, thereby providing 400+ TOPS total. So far, I haven’t been able to find any information to confirm or deny that.

My personal interest in this topic is not around comparing the hardware acumen of Tesla and Nvidia — I have no dog in that fight — but simply in comparing the TOPS of compute that will be available in HW4 Teslas vs. cars with Drive AGX Orin.

What I “Disagree” with is an approach to discussions of this sort that are personally attacking and/or accusatory. You can make a polite factual correction without accusing someone of being knowingly deceptive.

On a related note, you should not be so intolerant of disagreement that you feel a need to personally attack people who have a different opinion than you. That’s not a good way to have these kinds of discussions. People are allowed to disagree and have different opinions. If someone changes their mind, it should be because you showed them good reasons and evidence for thinking differently. The end game should not be to attack and bully someone until they go away because you can’t tolerate them having a different opinion than you. That’s not discourse; that’s trolling.

The hazard of posting under my real name (or a pseudonym like “strangecosmos” tied to my real name) is that anonymous people online sometimes make false, defamatory claims about me that could damage my reputation. I’ve found that many people are far too credulous and will often just automatically believe slanderous claims without fact-checking them. If I just posted anonymously, I wouldn’t care about that because it couldn’t harm me in the real world. I could just block or ignore the people I don’t want to hear from and not think about it. But since I post under my real name and it can harm me in the real world, I feel some need to engage when trolling/harassing/abusive behaviour occurs.

Nvidia's Orin goes up to 2,000 TOPS, which took me like one second to find. Second of all, it's obvious that any chip ever made will be put on a board, and that board can contain multiple chips, especially when the chip is coming from a chip maker.

2x Orin = 400 TOPS, 130W
2x Orin + 2 GPU = 2,000 TOPS, 750W


[Image: table of NVIDIA DRIVE Orin SoC variants]
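For what it's worth, the two configurations quoted above imply broadly similar power efficiency; a quick check, using only the TOPS and wattage figures from this thread:

```python
# TOPS-per-watt for the two DRIVE Orin configurations listed above.
configs = {
    "2x Orin": (400, 130),           # (TOPS, watts)
    "2x Orin + 2 GPUs": (2000, 750),
}
for name, (tops, watts) in configs.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")
# 2x Orin: 3.1 TOPS/W
# 2x Orin + 2 GPUs: 2.7 TOPS/W
```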


I never attacked you. You do realize what a personal attack is, right? "An abusive remark on or relating to somebody's person."
Please don't try to play the victim card.
 
It's interesting reading this in light of your recent threatening email to a member.

No one has posted any false or defamatory claims about you. The only thing people have done is consistently debunk your assertions with independent facts, which you then turn around and claim makes you the victim of an attack, as you do in all our interactions, including this very one. You have a history of doing this and then attempting to get the person banned or reporting false statements to their employer.

This is why most AV engineers don't post in these forums or in r/SelfDrivingCars: they end up encountering Tesla fans who will try to endanger their work.

Practice what you preach.
 
Nvidia's Orin goes up to 2,000 TOPS, which took me like one second to find. Second of all, it's obvious that any chip ever made will be put on a board, and that board can contain multiple chips, especially when the chip is coming from a chip maker.

2x Orin = 400 TOPS, 130W
2x Orin + 2 GPU = 2,000 TOPS, 750W

*blinks*

800 TOPS per GPU? With a GPGPU? That number sounds high to me.

But even if that’s right, 750W isn’t viable for self-driving car purposes. That’s burning 3–4 miles of range every hour, which is potentially a double-digit percentage of the entire car’s power consumption. GPUs are entirely the wrong solution to the problem except perhaps as a temporary way of getting compute power during testing/implementation while you’re still figuring out how much horsepower you need.

So really, it’s 400 TOPS. In a few years. I don’t even know why NVIDIA still thinks GPGPUs make sense.
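The range math above, spelled out. The ~250 Wh/mile vehicle efficiency is my own assumption (a typical EV figure, not stated in the thread):

```python
# Range cost of a constant 750 W compute load while driving.
compute_watts = 750
wh_per_mile = 250  # assumed vehicle efficiency

miles_lost_per_hour = compute_watts / wh_per_mile
print(miles_lost_per_hour)  # 3.0 miles of range per hour
```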
 
*blinks*

800 TOPS per GPU? With a GPGPU? That number sounds high to me.

But even if that’s right, 750W isn’t viable for self-driving car purposes. That’s burning 3–4 miles of range every hour, which is potentially a double-digit percentage of the entire car’s power consumption. GPUs are entirely the wrong solution to the problem except perhaps as a temporary way of getting compute power during testing/implementation while you’re still figuring out how much horsepower you need.

So really, it’s 400 TOPS. In a few years. I don’t even know why NVIDIA still thinks GPGPUs make sense.

750 W is nothing for an SDC. Absolutely meaningless. This is a typical Tesla fan's line of thinking and rationalization.
Everything someone else does better is rationalized and dismissed as useless. It's quite embarrassing.
 
*blinks*

800 TOPS per GPU? With a GPGPU? That number sounds high to me.

A single-number metric has only the fidelity of a single number. I don't think it is accurate to compare TOPS between a neural-network chip and a general-purpose GPU, because the GPU supports many kinds of operations, and to run a neural network it will need to perform more operations than a purpose-built neural-network chip.
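The point can be illustrated with a toy calculation. The utilization figures below are invented for illustration only, not measured values:

```python
# Peak TOPS is not effective throughput: a GPGPU typically sustains a
# lower fraction of its peak on NN inference than a purpose-built
# accelerator. Both utilization numbers here are hypothetical.
peak_tops = {"NN accelerator": 144, "GPGPU": 800}
utilization = {"NN accelerator": 0.80, "GPGPU": 0.30}  # made-up assumptions

for chip, tops in peak_tops.items():
    effective = tops * utilization[chip]
    print(f"{chip}: {effective:.0f} effective TOPS")
```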
 
@Trent Eady

Stop hiding behind the disagree button. There's nothing to disagree with; this is blatant fact. You compared a single SoC to a board with multiple chips. It's equivalent to disagreeing that 1 + 1 is 2.

This is blatant misinformation.


Actually... 1 + 1 = 10... Someone's always gonna disagree with anything... ;)
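(The joke is binary arithmetic: in base 2, one plus one is written "10". A one-liner for anyone who wants to check:)

```python
print(bin(1 + 1))  # prints 0b10, i.e. "10" in base 2
```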
 
A single-number metric has only the fidelity of a single number. I don't think it is accurate to compare TOPS between a neural-network chip and a general-purpose GPU, because the GPU supports many kinds of operations, and to run a neural network it will need to perform more operations than a purpose-built neural-network chip.

It is, because comparisons are done using public benchmarks (with the exception of Tesla, of course). This is how GPUs, CPUs, RAM, etc. from different companies are compared. That way it's clear-cut and no one is lying. Obviously Tesla won't participate in something as fair as that.

Didn't Elon reiterate this year that Level 5 can be done with the current Nvidia GPU-only hardware?
Can't have it both ways: either the GPU sucks or it doesn't.

Now mind you, this is hardware with only 10 TOPS and around 200 watts for the early AP2.0 versions that Elon is saying is sufficient.
There was a huge debate here when AP2 was unveiled, and the majority of Tesla fans said the chip was more than enough. When Nvidia came out and said it wasn't, they claimed Nvidia was lying in order to sell more powerful chips. A few years later, all of a sudden Tesla fans believe the AP2 chip is worthless.

Remember, the chip Elon said this year was enough is three generations old, only has 10 TOPS, and doesn't use any of the next-gen improvements that Nvidia has:
  • It doesn't have Tensor Cores (NN accelerator to accelerate large matrix operations and perform mixed-precision matrix multiply and accumulate calculations in a single operation)
  • It doesn't utilize the TensorRT deep learning inference optimizer.
  • It doesn't have Nvidia's Deep Learning Accelerators (NVDLA)
  • It uses unbearably slow PCIe and not the new NVLink 2.0, which was created to eliminate memory bottlenecks (NVLink 2.0 has data transfer rates of up to 300 GB/s).
  • It uses miserably slow RAM (GDDR5) instead of HBM2 (200+ GB/s).
  • I could go on and on, as there is so much new tech.
Years ago, Tesla fans said a 10 TOPS, 200-watt chip was all you need for Level 5 FSD.
After three generations of innovation, Tesla fans now say an 800 TOPS, 300-watt chip is absolutely useless and no good for an SDC.
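Taking the post's own numbers at face value (10 TOPS at 200 W then, 800 TOPS at roughly 300 W per GPU now), the efficiency jump across those three generations works out to about:

```python
# TOPS-per-watt comparison using only the figures cited in the post.
old_tops, old_watts = 10, 200    # AP2-era chip
new_tops, new_watts = 800, 300   # newer-generation GPU, per the post

old_eff = old_tops / old_watts   # 0.05 TOPS/W
new_eff = new_tops / new_watts   # ~2.67 TOPS/W
print(f"{new_eff / old_eff:.0f}x more TOPS per watt")  # 53x
```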
 