Firstly, let me defend why I'm posting this here: the AI Chip is a huge deal to Tesla's valuation and I don't think it has been understood by the market yet, let alone priced into $TSLA:
- Waymo has a current valuation of around 175 billion dollars. Tesla is ahead of them both with the AI chip and by having hundreds of thousands of cars on the road providing feedback.
- The 'autonomy market' is worth trillions of dollars.
The troll is actively spreading disinformation about this. His suggestion that Elon somehow does a drop-in replacement with some ARM chip is either fundamentally confused or intentionally misleading: the replacement is for Tesla's current Nvidia GPU module, which is a self-contained computing node in a simple, modular form factor.
I.e. the troll either doesn't know and is just a bullshitting SA author on the wrong side of a $TSLA trade, with free time to spread the chaos in his mind to other minds, or he knows it perfectly well and is lying intentionally about Tesla's AI chip to disrupt the flow of information on this forum.
No-one truly working on self-driving is going to confuse ARM chips with GPU and TPU chips; these are drastically different chips with drastically different sizes, power budgets and thermal envelopes:
- Most ARM chips are general-purpose CPUs, typically using a couple of watts of power and with much lower idle power usage. They are the Swiss army knives of computing.
- GPUs are proprietary special-purpose vector processors with an effective processing capability of thousands of (very simple) CPU cores.
- TPUs are even more special-purpose vector processors, designed for convolutional neural network (CNN) processing.
- Tesla's AI chip is a new, ground-up chip designed by two of the world's leading chip designers: Pete Bannon of Apple A5/A6 fame and Jim Keller of x86-64 fame. It's a new CPU, with functional prototypes probably manufactured as custom ASICs - but if Tesla wanted to, they could build their own chip manufacturing plant as well and make the chips themselves.
- Tesla's AI chip is, according to Elon, even more special-purpose still: it was designed specifically for CNN AI processing.
- Elon mentioned that the AI chip's computation model is neural network centric. I read this as the Tesla AI chip using minifloats (very small floating-point formats that fit into 16 bits), similar to Google's bfloat16 format used in the TPU v3. This is a major advantage over GPUs, which are typically not optimized for minifloats and waste a lot of RAM, computing bandwidth, chip real estate and power on calculating with 32-bit floats. (See the bfloat16 sketch after this list.)
- This is also how I understood Elon's Q2 conference call comments: "So, it's a huge number of very simple computations with the memory needed to store the results of those computations right next to the circuits that are doing the matrix calculations. And the net effect is an order of magnitude improvement in the frames per second." (Note that I fixed the complications/computations mistake Elon made when he said this.) I.e. the Tesla AI chip's memory is merged/embedded with the functional units at the design level, which avoids memory bus traffic entirely while a single matrix multiplication is ongoing (!). It's also possible that all working memory is embedded in the Tesla AI chip, and an external DRAM interface is only used to initialize the networks, to feed input data and to communicate the results of the computation. This is possibly another big source of speedups. (See the bandwidth back-of-envelope after this list.)
- Elon mentioned the following curious bit of information, which I think most people have missed: "whereas the current NVIDIA's hardware can do 200 frames a second, this is able to do over 2,000 frames a second and with full redundancy and fail-over. So, it's an amazing design and we're going to be looking to increase the size of our chip team and our investment in that as quickly as possible." (emphasis mine)
- I.e. the AI chip is fully redundant and supports fail-over: Tesla extended redundancy to the functional units as well. If so, this is groundbreaking too: competitors don't have redundancy of functional GPU or TPU units at all, they only have redundancy of RAM modules (ECC RAM). Neither NVidia's GPUs nor Google's TPUs have functional redundancy (fail-over) features at all. (This might be an interesting scoop for @ZachShahan to look into?) I.e. the Tesla AI chip will be robust against various types of hardware failure - it will in fact be pretty close to space-rated, and I'd not be surprised if SpaceX were interested in this as well. (A toy fail-over sketch follows after this list.)
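To make the minifloat point from the bullets above concrete, here is a minimal Python sketch (my own illustration, not anything Tesla has published) showing that a bfloat16 value is simply the top 16 bits of a float32: the same sign and exponent, so the same dynamic range, but only 7 mantissa bits and half the storage and bus traffic.

import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """bfloat16 keeps the float32 sign bit, all 8 exponent bits and the top
    7 mantissa bits - i.e. it is simply the upper 16 bits of the float32 encoding."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]   # raw float32 bit pattern
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand bfloat16 bits back to a float32 by zero-filling the truncated bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

x = 3.14159265
b = float32_to_bfloat16_bits(x)
print(x, "->", bfloat16_bits_to_float32(b))   # 3.14159265 -> 3.140625
# Same dynamic range as float32, half the storage and bus traffic, fewer
# mantissa bits - a trade-off CNN inference tolerates well.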
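To see why keeping the memory next to the matrix units matters at 2,000 frames per second, here is a back-of-envelope Python calculation. The numbers (network size, weight width) are my own assumptions purely for illustration; only the order of magnitude matters.

# Hypothetical numbers, for illustration only - not Tesla's actual network.
params = 100e6          # assumed network size: 100 million weights
bytes_per_weight = 2    # assumed 16-bit ("minifloat") weights
fps = 2000              # frame rate quoted by Elon for the new chip

# If the weights had to be streamed from external DRAM for every frame:
traffic_per_frame = params * bytes_per_weight   # bytes per frame
bandwidth_needed = traffic_per_frame * fps      # bytes per second

print(f"Weight traffic per frame: {traffic_per_frame / 1e6:.0f} MB")
print(f"DRAM bandwidth needed at {fps} fps: {bandwidth_needed / 1e9:.0f} GB/s")
# -> 200 MB per frame and ~400 GB/s for weights alone, before activations.
# Keeping weights and partial results in on-chip memory next to the MAC units
# removes that bus traffic entirely; DRAM then only has to carry the input
# frames and the results, as speculated in the bullet above.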
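And to illustrate what "full redundancy and fail-over" could mean at the system level, here is a toy Python sketch of the general dual-redundancy technique: run the same frame through two independent compute units, cross-check when both answer, and fail over when one of them dies. This is a generic illustration of the concept, not Tesla's actual design.

def redundant_inference(primary, secondary, frame):
    """Run a frame on two independent compute units with cross-check and fail-over."""
    a = b = None
    try:
        a = primary(frame)
    except Exception:
        pass                # primary unit failed: fall back to the secondary
    try:
        b = secondary(frame)
    except Exception:
        pass                # secondary unit failed: fall back to the primary
    if a is not None and b is not None:
        if a != b:
            raise RuntimeError("redundant units disagree - flag a hardware fault")
        return a            # both units healthy and in agreement
    if a is not None:
        return a            # fail-over: secondary lost
    if b is not None:
        return b            # fail-over: primary lost
    raise RuntimeError("both compute units failed")

# Usage sketch with stand-in callables for the two redundant units:
plan = redundant_inference(lambda f: f.upper(), lambda f: f.upper(), "frame-0001")
print(plan)   # "FRAME-0001" - either unit alone could also have produced it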
Make no mistake: the Tesla AI chip is a ground-breaking design that took a team of top CPU designers years to complete, build and test. No competitor even comes close to this design currently (not Google, not Nvidia, probably not Intel/MobilEye either) - and Tesla has working field units (!).
If Tesla wins the autonomy race, or just gets ahead of the pack and exploits its first-mover advantage, then once this is correctly priced in, $TSLA should be worth above $1,000 today.
IMHO an excellent post, but maybe better suited to the general thread than to Market Action.
Actually, I feel it deserves its own thread, where it will not disappear into the noise once we are 2 weeks and 1,000 posts further on.
You worked hard on that one, and personally I want to be able to find it again.