AP 2.0 Computer questions

Ah, the Tim Allen "More Power!" approach.

I think there's a good case for developing a simpler, cheaper solution since any sort of Level 5 system will probably be legislated to require/provide redundancy.

Duplicating a cheaper board will obviously be more cost effective.
 
Sad. Not even the dual GPU model from Nvidia? Surprised they chose the less powerful system. And a single CPU/GPU is supposed to process 4 simultaneous video streams to allow FSD? How?
The faithful will be along shortly to explain this away. To tell you how Tesla will "make it right", how short-sighted you are, how brilliant EM is, and how you should feel guilty for feeling ripped off by a company that is saving the world from itself. Stand by.
 
...what would be the limitation on using a computer like this running a normal PC or OS X interface.
The fact it's built into the car seems like a major limiting factor.

And a single CPU/GPU is supposed to process 4 simultaneous video streams to allow FSD? How?
Thought it was 8? 3 front cameras, 4 on the sides, and one on the back?

Musk hinted an upgrade might be needed a few months ago, so we'll see where that goes.
 
And a single CPU/GPU is supposed to process 4 simultaneous video streams to allow FSD? How?

I'm not sure why that would be surprising? I have a single CPU/GPU machine at home that I can easily use to "process" 20 simultaneous HD video streams.

Whether or not it's possible with their current processing setup depends entirely on the type of processing required (e.g. the network architecture) and the particular CPU and GPU in question, not on the number of chips.
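
As a rough illustration, here's the kind of back-of-envelope budget I mean (every number below is a hypothetical placeholder, not Tesla's actual network or hardware):

# Back-of-envelope check: do N camera streams, each needing one network
# forward pass per frame, fit within one GPU's sustained throughput?
# All figures are made up for illustration.
num_cameras = 8              # assumed camera count
fps_per_camera = 30          # assumed frame rate per camera
flops_per_frame = 5e9        # assumed cost of one forward pass (5 GFLOPs)
gpu_sustained_flops = 4e12   # assumed sustained GPU throughput (4 TFLOP/s)

required = num_cameras * fps_per_camera * flops_per_frame
print(f"required: {required / 1e12:.2f} TFLOP/s of {gpu_sustained_flops / 1e12:.2f} available")
print("fits" if required <= gpu_sustained_flops else "does not fit")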
 
Sad. Not even the dual GPU model from Nvidia? Surprised they chose the less powerful system. And a single CPU/GPU is supposed to process 4 simultaneous video streams to allow FSD? How?

I won't defend it, but I tend to think it was done intentionally, with the understanding on Tesla's part that they will need to upgrade the computer on cars that have FSD or that want to upgrade when it's available. The benefit is that they can put cheaper hardware in all cars, which means a lower cash outlay, especially if the take rate on FSD is low. It also means that by the time FSD is ready for a public roll-out, the more powerful computer will have come down substantially in price. So I think the needed upgrade is priced into the cost of the FSD option.

Or... maybe Tesla truly is trying to do it with this board. But given that they're not even using Nvidia's top hardware, and that Nvidia says you need two of its top computers to do FSD, something tells me Tesla isn't going to pull it off with the current hardware suite, and that they know it and have planned for it.

I think once everyone saw Tesla was trying to do this all with cameras, they knew Tesla was taking a big risk. It still remains to be seen if they can pull it off. The problem is, the system can't be good 99.999999% of the time. It has to be good 100% of the time. That's the difference between AP1 and FSD that scares me, because Tesla isn't good at releasing things that aren't half-baked in some form or fashion.
 
The problem is, the system can't be good 99.999999% of the time. It has to be good 100% of the time.

Actually, that's better than Tesla needs to be. Humans are far from perfect, and if we wait for software to be perfect, it won't happen.

99.999999% means one failure every 100 million times.

Americans have a fatality about once every 80 million miles, so we're already worse than 99.999999% per mile. On top of that, only about 0.5% of crashes are fatal, so we crash a car roughly every 450,000 miles. If the Tesla system has the same likelihood of a fatality in a collision as a human, it only needs to be 99.9998% good per mile to be safer than a human.
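
Here's that arithmetic spelled out, using the round figures above (with these inputs the crash interval comes out near 400,000 miles, close to the 450,000 quoted):

# Rough per-mile reliability arithmetic using the figures above.
miles_per_fatality = 80e6            # ~1 US fatality per 80 million miles
fatal_fraction_of_crashes = 0.005    # ~0.5% of crashes are fatal

miles_per_crash = miles_per_fatality * fatal_fraction_of_crashes
per_mile_success = 1 - 1 / miles_per_crash

print(f"miles per crash: {miles_per_crash:,.0f}")          # ~400,000
print(f"per-mile 'success' rate: {per_mile_success:.6%}")  # ~99.99975%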

Not that I have any faith Tesla can get to even 99.9998% anytime soon.
 
I am the only one that has disassembled one. It is a single CPU and a single GPU. The entire TDP is probably below 40 watts, so there is no way in hell it is faster than even one Mac Pro.

Your teardown is awesome, but as someone who knows what TDP is, you should also know that it's not a measure of application-specific compute power at all. Even staying within TDP, a 2016 MacBook Pro had a 6360U processor, which has a TDP of 15W. So why can't a 40W processor easily be 3X the speed of a 15W processor, much less 1X?

I'm no fan of Tesla's marketing on AP2, but the Nvidia board is a very specific, focused processor delivering about 12 trillion operations per second, while many benchmarks put a 6360U in the sub-20 GFLOPS range. I get that all we have to go on is Nvidia's specs in this area, but I really doubt they are lying about how many operations it can do.

As a general-purpose processor, the 6360U would be awful for video, which of course is the whole reason we have GPUs in computers as well. I do think it's pretty fair for Tesla to say the Drive PX2 is orders of magnitude more powerful than a general-purpose Intel processor, since what they inherently mean is performance in their application.
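
To put numbers on that, using only the spec/marketing figures quoted above (not benchmarks):

# Ratio of specialised throughput to general-purpose throughput, per the figures above.
px2_deep_learning_ops = 12e12   # ~12 trillion ops/s claimed for the AP2 board
cpu_general_flops = 20e9        # sub-20 GFLOPS figure for an i5-6360U

print(f"raw ratio:      ~{px2_deep_learning_ops / cpu_general_flops:,.0f}x")  # ~600x
# Even per watt (assuming ~40 W vs 15 W TDP), the specialised part is far ahead:
print(f"per-watt ratio: ~{(px2_deep_learning_ops / 40) / (cpu_general_flops / 15):,.0f}x")  # ~225x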

Finally, didn't Nvidia say the TDP of the Drive PX2 is 250W, so a single processor would be more like 125W?
 
The Tegra X2 CPU is 7.5W to 15W max TDP, but the GP106 GPU part can be anywhere from 80-120W depending on clocks / voltage (80W for mobile GP106, 120W for desktop). They of course may be running the GP106 at a lower clock speed and/or voltage, reducing TDP, but it seems likely the total TDP is around 100W or so (15 + 80 + whatever other random bits of gear are on the board).

The size of the heatsink and the fans makes 100W not too unreasonable, assuming the area it's located in is sufficiently ventilated (otherwise it would eventually cook in its own hot air).
 
The OP said "Secondly, this computer has the power of 10-15 mac pros"

Since Apple makes both a MacBook Pro and a Mac Pro, and he's referencing some opinion that the Tesla/Nvidia computer is as fast as 10-15 of them, I'm assuming he's remembering this quote from the Nvidia CEO (not Tesla):

"The computational capability of the Drive PX 2 is roughly the same as 150 MacBook Pros"

Now, if he meant the actual Mac Pro, then fine, but do you believe the Mac Pro is 150X faster than the MacBook Pro? If so, the Mac Pro is 15X more power efficient than the MacBook Pro even though its processor is 3 years older.
 
Sorry about the confusion, my numbers were off. I was referring to the Nvidia CEO and 150 MacBook Pros (the laptop), not the Mac Pro.
 
NVIDIA's comment about Macs is focused solely on the performance of the Mac's CPU and doesn't account for the GPU included in those machines. It's pure marketing. The flip side is that NVIDIA's GPUs, no matter how fast they are at very specific parallel processing tasks, can't act as general-purpose processors (CPUs): try running Windows or Linux on a GPU (hint: you can't).

My current Mac Pro, which is obviously faster than a MacBook Pro but not by orders of magnitude, gets about a teraflop out of the CPU and another 7 teraflops out of the GPUs, so around 8 total. A top-of-the-line MacBook Pro, including its GPU, is going to be a little under 3 teraflops. The only way you get anywhere close to 150 MacBook Pros is if you compare half-precision flops against just the CPU. Single-precision flops only gets you to 40-50x a single CPU. So their marketing is based on very specific math (half precision, which is fine for vision-system model execution) compared against only an unoptimized portion of an overall computer. Typical, and slightly disingenuous, marketing nonsense...
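
Here's how those figures line up (all of them are the spec-sheet/marketing numbers quoted in this thread, not measurements):

# Reconstructing the comparison with the rough figures above.
px2_half_precision = 24e12    # the 24 TFLOPS "deep learning" figure claimed for Drive PX2
px2_single_precision = 8e12   # the 8 TFLOPS single-precision figure claimed for Drive PX2
mbp_cpu_only = 0.16e12        # MacBook Pro CPU alone, roughly
mbp_with_gpu = 3e12           # MacBook Pro including its GPU, per the estimate above

print(f"half precision vs CPU only:    ~{px2_half_precision / mbp_cpu_only:.0f}x")    # ~150x
print(f"single precision vs CPU only:  ~{px2_single_precision / mbp_cpu_only:.0f}x")  # ~50x
print(f"half precision vs full laptop: ~{px2_half_precision / mbp_with_gpu:.0f}x")    # ~8x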
 
The "The computational capability of the Drive PX 2 is roughly the same as 150 Macbook Pros" was said on Jan 5th, 2016. The deeper statement was that the CPU is 8 TFLOPs and the GPU is 24 TFLOPs.

The 2015 MacBook Pro had Intel Iris Graphics 6100 for video and a 2.7 GHz dual-core i5. It's not fair to take a quote from early 2016 and compare it against the current MacBook Pro. Wikipedia lists the 6100 at 850 GFLOPS, and Intel itself quotes the processor at 40 GFLOPS. Way off from your 1 and 3 teraflop estimates.
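
Using those 2015-era spec figures, the multiplier swings wildly depending on the baseline you pick:

# The same 24 TFLOPS claim measured against the 2015 MacBook Pro figures above.
drive_px2_dl_ops = 24e12   # the 24 TFLOPS deep-learning figure from the Jan 2016 claim
mbp_2015_cpu = 40e9        # Intel's quoted figure for the 2.7 GHz dual-core i5
mbp_2015_gpu = 850e9       # Wikipedia's figure for Iris Graphics 6100

print(f"vs the CPU alone: ~{drive_px2_dl_ops / mbp_2015_cpu:.0f}x")                   # ~600x
print(f"vs CPU + GPU:     ~{drive_px2_dl_ops / (mbp_2015_cpu + mbp_2015_gpu):.0f}x")  # ~27x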

I don't believe Tesla has ever said they are using the Drive PX2, nor that the processor is 150X as fast as a MacBook Pro. These are all associations people have made between the Nvidia CEO's statement and the assumption that Tesla would use the Drive PX2. As frustrated as I am with Tesla's development of AP2, I don't think this is the place to hold something against them.