Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Blog Report: Tesla is Working With AMD on New Chip Designed for Autopilot

Tesla has worked with AMD to build its own chip for its autonomous driving efforts, according to a report from CNBC.

A source familiar with the matter told CNBC that Tesla is currently testing the first implementation of the processor.

The project is reportedly being led by Jim Keller, a longtime chip designer and current head of the Autopilot division. Before joining Tesla last year, Keller worked at AMD and Apple, where he led the design of the A4 and A5 iPhone chips.

The report says more than 50 people are working on the initiative.

Autopilot hardware currently uses Nvidia graphics processing units, after Tesla switched from Mobileye components last year. Mobileye was acquired this year by Intel, which works closely with Alphabet's Waymo self-driving effort.

By designing a chip of its own, Tesla can become less dependent on other companies and achieve greater efficiency with chips designed specifically for the workload of autonomous driving.

 
... so it is turning out that self-driving is pretty tough if custom silicon is required. Is this a big change from the story that it's a 'software only' problem that just depends on pushing code out to GPUs... (general / graphic, your pick)

What, AP 3.0 hardware required next? 2.0 hardware is not even being fully utilized, 2.5 is out, and Model 3s must be made "with something" so they can't wait around for 3.0 hardware.

At least I hope they've made the computers frame-rail mountable in the car, "slide in style"... we're going to need easy change-outs to realize this dream.
 
I agree. I think multiple processor iterations will happen. After all, Elon himself said that each year they can buy double the processing power for the same price.

It's crazy how AP 2.0 still hasn't reached full feature parity with AP1. Let's hope Tesla's progress on AP 2.0 will transfer to future hardware updates.
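
That doubling claim compounds fast. A quick sketch of what a fixed silicon budget buys if compute per dollar really doubles yearly (the starting figure is hypothetical, not from any Tesla spec):

```python
# If compute per dollar doubles every year, a fixed-cost Autopilot board
# gets exponentially faster with each hardware revision.
base_tops = 10  # hypothetical throughput of today's board, in TOPS

for year in range(5):
    # Same price, 2x the processing power per year of waiting.
    print(f"year {year}: ~{base_tops * 2 ** year} TOPS for the same price")
```

Four revisions in, that's 16x the compute for the same spend, which is why swappable compute with fixed sensors is a plausible strategy.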
 
You're a bit worked up dude - of course they are always going to be working on the next thing.
 
Exactly.

While I don't believe they'll EVER reach FSD with AP2.0, the fact that they're working on the next hot thing doesn't imply that they're giving up hope on AP2.

Just like AP1 will eventually get on-ramp to off-ramp capability, since there's been tons of AP1 improvement since AP2 came out. Oh wait....
 
Of course they are working on a custom chip. The huge advantages of doing so are obvious to anyone in the business who isn't already committed to a different solution. GPUs like the ones installed in HW2 and HW2.5 aren't optimized for neural networks; they're optimized for graphics. They outperform CPUs, but that doesn't make them ideal, just the best thing you can buy off the shelf today. It takes a long time to develop and field new computing architectures, and deep-learning neural networks are new and rapidly advancing. And while it's not yet clear what the ideal deep-learning chip will look like, we know enough to say it's not a GPU. Google's work with the giant vector/matrix unit in its first-generation TPU clearly shows that simple (though large) compute units focused on tensor computation greatly outperform the more general architectures used in modern commercial GPUs.
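
To make the GPU-versus-TPU contrast concrete, here is a rough NumPy sketch of the core workload (illustrative only; the shapes and precisions are assumptions, not Tesla's or Google's actual parameters). A graphics-oriented GPU runs the matmul in float32, while a TPU-style unit quantizes inputs to int8 and accumulates in int32, so each multiply-accumulate needs far less silicon and power:

```python
import numpy as np

# A neural-network layer is dominated by one operation: dense matmul.
x = np.random.rand(64, 256).astype(np.float32)   # activations
w = np.random.rand(256, 128).astype(np.float32)  # weights
y_fp32 = x @ w                                   # GPU-style float32 path

def quantize(a):
    """Map a float array onto int8 with a single scale factor."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

# TPU-style path: int8 multiplies, int32 accumulation, rescale at the end.
xq, xs = quantize(x)
wq, ws = quantize(w)
y_int = xq.astype(np.int32) @ wq.astype(np.int32)
y_deq = y_int.astype(np.float32) * (xs * ws)

# Quantization costs a little accuracy, but the result stays close,
# which is why NN inference tolerates the cheaper arithmetic.
rel_err = np.abs(y_deq - y_fp32).max() / np.abs(y_fp32).max()
```

An int8 multiplier takes a small fraction of the area and energy of a float32 unit, so a chip built around this pattern can pack far more multiply-accumulate throughput into the same power envelope.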

That said, NN optimized compute hardware is going to keep moving forward at a rapid pace in parallel with advances in the software itself. If you want to provide the best possible thing to your customers *today* what should you do? If you have guts and vision you provide a system with adequate sensors and easily upgradable compute hardware and software. That is exactly what Tesla is doing. They aren't waiting until the technology is perfected to start selling hardware that can use it, they are selling hardware that will be capable of doing the deed once the software catches up. And the one part of the hardware that might need to change is swappable. It's brilliant.

So while you might not have everything you want right now, at least you haven't spent $100k on something that will turn into a doorstop when the software inevitably arrives. It might take some time, and it *might* require changes to the part of the car that does the thinking, but the framework is there in HW2.

From the stuff you see in lay news coverage it's hard to tell that the software is catching up at a breathtaking pace. The secrecy shrouding all this commercial work makes it seem like not much is happening. But if you read the source research material you'll find that the technology itself is advancing faster than almost anybody thought it would. Every single month there are remarkable new discoveries that advance the state of the art. It's only a matter of time before the underlying tech becomes capable enough to let your HW2 Tesla do the amazing things its creators are working towards.
 
... so it is turning out that self driving is pretty tough if custom silicon is required.

Custom silicon isn't about difficulty, it's about cost.

That's why Apple has its own SoC. It's why miners have ASICs for bitcoin. It isn't hard to make a phone or to solve random math problems for money, but you can do it cheaper if you customize your hardware a bit.

Elon wants to sell millions of cars. When you get quantities high enough the development cost per chip is cheap and the savings per chip are greater than that cost.
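
That break-even logic is just arithmetic. A sketch with entirely made-up numbers (nothing here comes from the article or from Tesla):

```python
# Back-of-envelope custom-silicon economics. All figures are hypothetical.
nre_cost = 100e6           # one-time design cost (engineers, masks, tooling)
custom_unit_cost = 80.0    # per-chip cost of manufacturing the custom part
vendor_unit_price = 400.0  # price of an off-the-shelf accelerator

def cost_per_car(volume):
    """Silicon cost per car once NRE is amortized over the production run."""
    custom = custom_unit_cost + nre_cost / volume
    return custom, vendor_unit_price

# At low volume the one-time cost dominates; at millions of cars it vanishes.
for volume in (10_000, 100_000, 1_000_000):
    custom, vendor = cost_per_car(volume)
    print(f"{volume:>9,} cars: custom ${custom:,.0f}/car vs vendor ${vendor:,.0f}/car")
```

With these made-up figures the custom part only wins at roughly a million units, which is why the move makes sense for a company planning to sell millions of cars and not for a low-volume automaker.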
 
I would never have dreamed that a custom chip would make sense for this application. I would have been absolutely certain that the hundreds-of-millions scale Nvidia operates at would have allowed them to design a far better optimized, better performing chip than Tesla can.

Alas... I've been wrong before. Frequently, in fact, according to my wife.
 
If NVIDIA's part were focused on running neural networks your insight would be spot on. But NVIDIA's part is primarily focused on graphics acceleration, with NNs being an afterthought. Not that their marketing will tell you that, but it's the bare truth. As of right now the only pure-NN silicon to be revealed in any detail is Google's first-gen TPU from a couple of years ago, and they aren't selling it. NN accelerators are being added to all the major compute architectures, but most of them are still a year or more away from shipping silicon, and all of those are owned by companies not named Tesla.

FSD can eat all the NN horsepower you can throw at it right now, but Tesla is limited in terms of how much they can spend on silicon, and how much wattage and volume they can devote to it. The commercial stuff out of NVIDIA will keep getting better and cheaper, but they probably aren't going to make a pure NN part because it doesn't make strategic sense right now. By making their own part Tesla can a) get 10x the performance in the same envelope for about the same pricing if they can make them in 100k+ volumes and b) have a lever to negotiate with AMD/Intel/NVIDIA/etc for future parts. It's all about keeping your options open and not letting a vendor get control of a critical part of your business.
 
All of the automakers are pursuing FSD. That should create an enormous potential market for Nvidia (100s of millions per year). Why wouldn't it make sense for them to build a NN-specific chip? They would seem to be vastly better positioned than Tesla.
 
I agree. And I think much of the business community and market agrees with you as well. NVIDIA is very well positioned for this market. Incidentally, so is Intel when viewed strategically, though they do not currently have an appropriate candidate. Intel has been on a buying spree with the apparent intention of fleshing out their ability to compete in this space which can either be viewed as canny strategy, insurance, or desperation depending on who you listen to.

The challenge NVIDIA faces is leveraging their existing capabilities into something with an enduring competitive advantage, so they can sustain the margins their existing shareholders expect. For example, just about anybody could make a TPU clone and leap ahead of existing GPU parts for pure-NN applications (even Tesla can do this), but if NVIDIA does that they discard a lot of their advantages and end up competing toe-to-toe with a bunch of well-funded, low-overhead startups. That's a fight they could win, but it could come at the cost of no longer being able to leverage their general-purpose GPU capabilities.

So instead they are walking a line: adding NN-specific capabilities to their GPUs to close the performance gap with pure-play NN chips while keeping the very advanced capabilities they already have. This lets them offer what a startup cannot, leverage their existing chip volume, and provide NN capabilities to all their more general-purpose customers, keeping them ahead of AMD in that market. In general it's win-win for them, but it carries the risk that the pure-play market leaps forward: for pure-NN customers, a GPU with NN extensions may add overhead without enough offsetting advantages to pay for it.

This embrace and absorb strategy is also what Intel or any other established player wants to do. Startups want to flip the table but established players want to absorb the new market into their existing business model and technical capabilities. At this point it is very hard to say which way the market will break. Conceivably a larger player could pursue both strategies at once but history suggests they won't. They are always more willing to risk missing out on a big market that may or may not form than they are willing to risk an extended decline in their operating margins that comes from marginalizing their existing product lines.

By not developing/marketing a pure play part they are taking what they see as a more tolerable risk. But not doing that means that customers like Tesla will see upside in doing their own development in the space. Whether they end up using that part or not is a matter of how well each side executes and how the technology comes along. Both NVIDIA and Tesla are taking what they see as the lowest risk path. NVIDIA would like as much of the business as they can get, but will be OK even if they only get the part that their existing customer base requires. Tesla must have the highest possible technical capability if they are going to live up to their potential and they don't want to bet the farm on NVIDIA because they know NVIDIA has other priorities as well. So Tesla will maintain relationships with the other players and also pursue their own development to maximally hedge their options. Over the last 6 months technical developments have been favoring pure play, so that's currently the lead horse. But the race has many lengths yet to run.
 
Report: Tesla snubbing Nvidia, developing its own self-driving chip

Out of the comments to that article:

"While not the same market AMD won the console market specifically because they are willing to make custom designs with really good terms for the licensee. Nvidia seems to want to control the whole widget. For a company like Tesla they want to control the widget not give it to a third party."

Control of the chip design could be a huge thing for Tesla. If Nvidia owns the chip co-designed with Tesla and Tesla is just a customer, then Tesla is at Nvidia's mercy for pricing and production. If Tesla owns the chip co-designed with AMD, they can take *their* design to any chip fabricator and not be at the mercy of a sole-source supplier.