Note that on the conference call they mentioned that they already have functional, tested, drop-in-compatible field-test units, which they successfully tested on all vehicle variants. This is what Pete Bannon said on the CC:
"My team is leading currently the Hardware 3 development. The chips are up and working, and we have drop-in replacements for S, X and 3, all have been driven in the field. They support the current networks running today in the car at full frame rates with a lot of idle cycles to spare. So, I think we're all really excited about what Andrej and his team will be able to do with this hardware in the future."
This suggests they are in a very advanced stage:
- they have already taped out the design (possibly more than once) and brought up the AI chip on a target process node - for example 14nm. They know who is fabricating the chips and have probably negotiated exact volume pricing as well.
- they have developed and tested the glue logic, the firmware, the interfaces and the host CPU software (x86 - see below).
- they have a drop-in computing blade for all relevant vehicle platforms.
Note that they face very narrow compatibility constraints, because they have full control of the entire software environment - which probably sped up the R&D and productization process significantly. Just 2-3 years since late 2015 is super fast for an entirely new chip.
My other guess is that they eliminated the ARM aspect of Nvidia's blade entirely, and are interfacing the x86 host CPU (which I suspect runs the main vehicle control loop and Autopilot logic) to the AI chip directly. But this is very speculative, based only on the probable design of their system, and my guess could be wrong: it would cost Tesla very little to license a generic ARM core and integrate it into their AI chip (Pete Bannon has done that at Apple) - but maybe they avoided even that step.