I listened to the call. He said they evaluated other platforms before deciding to go with NVIDIA. He didn't say that the NVIDIA solution was portable.
His direct quote:
This is all Tesla Vision software so we're not using any third party software or anything for the vision processing. This is a Tesla developed neural net, yeah, and although it's somewhat hardware independent we can actually run this on NVIDIA, AMD or Intel, we did pick the NVIDIA Titan GPU as the main chip for the neural net but it was a pretty tight call between particularly between AMD and NVIDIA, but ultimately we thought NVIDIA had the better hardware.
 
And next week Tesla will have the PX2 instead of the Titan, which supports 16 cameras and is water cooled. Can anyone say AP3, versus the outdated AP2 that hasn't even seen its first birthday?

They were probably doing initial testing and demos on a Titan, and since the new Titan and the Drive PX 2 use the same architecture (Pascal), I could easily see how it could have been a slip of the tongue.
 
The Titan X is a much faster GPU than what's in the Drive PX 2. It's also more power hungry, and it would be overkill. A Titan X or an NVIDIA Tesla P100 is best suited for training neural networks.
Hmm... I thought it was the other way around. Oh wait... you said Titan X. The NVIDIA CEO was speaking about the "Titan" as it compares to the PX2. Maybe the Titan X is an upgrade.

I was impressed with the video, whether they used a Titan or a Titan X, especially by its live object recognition and its ability to learn objects from pictures and live history.
I also watched videos on deep learning and was blown away.
 
Yes, the Drive PX 2 is always compared to an older Titan; the Drive PX 2 came out before the Titan X. The Titan X is capable of 11 TFLOPS and 44 DL TOPS, whereas the Drive PX 2 is 8 TFLOPS and 24 DL TOPS.
DL TOPS = deep learning teraops (trillions of operations per second).
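A quick back-of-the-envelope comparison of those quoted numbers (a sketch in Python; note the DL TOPS figures count reduced-precision deep learning operations, so they aren't directly comparable to FP32 TFLOPS):

```python
# Quoted spec-sheet numbers: FP32 TFLOPS and DL TOPS for each part.
titan_x   = {"tflops_fp32": 11.0, "dl_tops": 44.0}  # Titan X (Pascal)
drive_px2 = {"tflops_fp32": 8.0,  "dl_tops": 24.0}  # Drive PX 2

# Relative advantage of the Titan X over the Drive PX 2.
print(titan_x["tflops_fp32"] / drive_px2["tflops_fp32"])  # 1.375x in FP32
print(titan_x["dl_tops"] / drive_px2["dl_tops"])          # ~1.83x in DL TOPS
```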

In all the January slides they compared the Drive PX 2 to the 2015 Titan X (Maxwell architecture), not the newer 2016 version (Pascal architecture).

The Drive PX 3 is going to be based on the Volta architecture; samples will arrive in late 2017.
 
Fantastic. Thanks for the information. Very interesting.
 
So to be clear: do we definitively know whether the Tesla software runs on top of VisionWorks, or whether it also substitutes for VisionWorks?

VisionWorks is just a toolkit of functions that are optimized for various image-processing tasks.

So the Tesla Vision software isn't necessarily built on top of VisionWorks or any other NVIDIA-supplied toolkit, but a toolkit like that is likely used to speed up various image-processing tasks.

https://www.khronos.org/assets/uplo...ion-summit/V2_VisionWorks_OpenVX_tutorial.pdf
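To give a feel for what such a toolkit does, here is a minimal sketch of a typical image-processing pipeline. It uses OpenCV in Python as a stand-in, since VisionWorks itself is a C/OpenVX API; the file names are made up for illustration:

```python
import cv2  # OpenCV standing in for a VisionWorks-style toolkit

# Load a frame (hypothetical file) and convert to grayscale.
frame = cv2.imread("dashcam_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Typical pre-processing steps such toolkits provide optimized kernels for:
blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)  # noise reduction
edges = cv2.Canny(blurred, 50, 150)            # edge detection

cv2.imwrite("edges.png", edges)
```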

There are also libraries provided by NVIDIA that speed up DNNs. For example, if you wanted to play around with DNNs and you had an NVIDIA graphics card, you'd want to utilize cuDNN.

http://on-demand.gputechconf.com/gtc/2014/webinar/gtc-express-sharan-chetlur-cudnn-webinar.pdf
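In practice most people reach cuDNN through a deep learning framework rather than calling its C API directly. Here's a minimal sketch using PyTorch, which dispatches convolutions to cuDNN when the backend is enabled (assumes a CUDA-capable GPU and the torch package installed):

```python
import torch

# PyTorch routes GPU convolutions through cuDNN when this backend is enabled.
print(torch.backends.cudnn.is_available())  # True if cuDNN is usable
torch.backends.cudnn.enabled = True

# A single convolution layer on the GPU; the convolution kernel itself
# is executed by cuDNN under the hood.
conv = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3).cuda()
frame = torch.randn(1, 3, 224, 224).cuda()  # fake camera frame, NCHW layout
features = conv(frame)
print(features.shape)  # torch.Size([1, 16, 222, 222])
```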
 
Can it? So it is not using any Nvidia hardware-specific code?

I'm sure it's using NVIDIA-specific code in terms of libraries to speed up various things like DNNs and image processing.

But it's still portable in the sense that they are not completely locked in; they would just have to switch to an equivalent toolkit/library from a different vendor. It might really annoy some software developers, but it's doable.

Deep neural networks are inherently machine-agnostic. When I train an image-classification network on an NVIDIA Titan X in an Ubuntu workstation, I get a model that I can then copy to lots of different machines to run image classification on. Sure, the classification might be really slow if I have to run it on an Atom x86 processor, but it works. That's a really simple example, but it extends to more complicated things.
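A minimal sketch of that train-on-GPU, run-anywhere workflow in PyTorch (the toy model and file name are made up for illustration; training itself is omitted):

```python
import torch
import torch.nn as nn

def make_classifier():
    # A toy image classifier standing in for a real network.
    return nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 10),
    )

# --- On the GPU workstation: build/train the model, then save the weights.
model = make_classifier()
if torch.cuda.is_available():
    model = model.cuda()
torch.save(model.state_dict(), "classifier.pt")

# --- On any other machine, even a slow CPU-only one: load and run.
cpu_model = make_classifier()
cpu_model.load_state_dict(torch.load("classifier.pt", map_location="cpu"))
cpu_model.eval()

with torch.no_grad():
    logits = cpu_model(torch.randn(1, 3, 32, 32))  # fake input image
print(logits.argmax(dim=1))  # same model; weak hardware just runs slower
```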

In summary, it's a whole new ball game now that the car is running a much more general-purpose GPU than before.
 
Regarding the camera color: just because the demo video shows black and white doesn't necessarily mean the cameras aren't color. It could simply have been to highlight the fact that they were showing video from the car vs. the video camera.