Opted to post my reply to @Krispykreme here for fear of retribution in the Investor thread...
It may not make as much sense out of context, but here goes:
Despite your condescending salutation to @mongo and your claims of being in the autonomous driving industry, it appears your grasp of some of these things may not be up to snuff.
We are talking a minimum factor of 10 in power consumption and cooling required.
Next-gen hardware almost always consumes less power per FLOP than the previous generation. Combine that with special-purpose compute cores (likely tensor processors) rather than more general-purpose GPU cores, and your order-of-magnitude assertion is likely off target by a significant factor.
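To put the power argument in perspective, here is a back-of-envelope sketch. Every figure in it is an illustrative assumption, not an actual Tesla or Nvidia spec:

```python
# Back-of-envelope efficiency comparison: general-purpose GPU vs.
# purpose-built NN silicon. All numbers are illustrative assumptions.

def perf_per_watt(tops, watts):
    """Throughput efficiency in TOPS per watt."""
    return tops / watts

# Hypothetical general-purpose GPU: 20 TOPS at 200 W.
gpu_eff = perf_per_watt(20, 200)    # 0.1 TOPS/W

# Hypothetical fixed-function tensor ASIC: 72 TOPS at 72 W.
asic_eff = perf_per_watt(72, 72)    # 1.0 TOPS/W

# A ~10x jump in compute need not mean a ~10x jump in power draw
# if efficiency per operation improves by a similar factor.
print(f"GPU:  {gpu_eff:.2f} TOPS/W")
print(f"ASIC: {asic_eff:.2f} TOPS/W")
print(f"Efficiency ratio: {asic_eff / gpu_eff:.0f}x")
```

The point of the sketch: "factor of 10 more compute" only implies "factor of 10 more power and cooling" if efficiency per operation stands still, which it historically has not.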
What are you talking about? This is probably the most absurd thing I have heard.
Do you even realize how much data bandwidth is needed? Embedded memory isn't cheap; it is certainly more expensive than external memory.
Memory may not even need to be substantially larger. The memory only needs to be large enough to hold the neural network. Just because the module has drastically more compute horsepower doesn't necessarily mean the memory size has to increase correspondingly. Heck, it could be the SAME size as in the current design.
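A rough sketch of why network size, not raw compute, drives the memory requirement. The layer shapes below are made up purely for illustration, not any real Autopilot network:

```python
# Estimate on-chip memory needed to hold a neural network's weights.
# Layer shapes are purely illustrative.

def weight_bytes(layers, bytes_per_param=1):
    """Total weight storage, assuming 8-bit quantized parameters."""
    total_params = sum(n_in * n_out for n_in, n_out in layers)
    return total_params * bytes_per_param

# A made-up stack of fully connected layers: (inputs, outputs).
layers = [(4096, 2048), (2048, 2048), (2048, 512)]

mb = weight_bytes(layers) / (1024 ** 2)
print(f"Weights fit in ~{mb:.1f} MB")  # footprint is fixed by the model,
                                       # not by how many TOPS execute it
```

Doubling the chip's throughput changes how fast those weights are read, not how many bytes are needed to store them.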
I've got to laugh at this one too.
Just because Nvidia uses the term GPU doesn't mean it's exclusive.
Perhaps your seemingly bad take on this whole situation comes from assuming this is a scaled-up GPU platform. It's almost assuredly a tensor processing unit (TPU), which is what Google and the other big boys have done. A GPU is better than a CPU at a number of NN chores, but a TPU is in yet another league.
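The reason a TPU pulls ahead is that NN inference is dominated by one operation, which the hardware bakes in. A toy pure-Python version of that operation, for illustration only:

```python
# The core operation a TPU-style accelerator specializes in: matrix multiply.
# A dense NN layer is y = W @ x; a TPU streams this through a hardwired
# multiply-accumulate array instead of scheduling it on general-purpose cores.

def matmul(W, x):
    """Multiply weight matrix W (list of rows) by input vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W = [[1, 2], [3, 4]]
x = [5, 6]
print(matmul(W, x))  # [17, 39]
```

A GPU runs this faster than a CPU by throwing many general cores at it; a TPU goes further by dedicating silicon to exactly this multiply-accumulate pattern and little else.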
Do you even understand what a custom ASIC is? Nothing is built from scratch anymore. Tesla will most likely use a version of an ARM core for the main CPU, and probably some sort of licensed GPU as well.
Custom silicon (an ASIC) can be a variety of designs... and can even incorporate multiple architectures on the same die. Memory such as SRAM can be incorporated as well.
There may very well be a licensed CPU core as part of the design, but it's not what will be doing the heavy lifting. The real horsepower (and cost) is going to be the NN processing, again likely a TPU design. And in case you weren't aware, Jim Keller, who designed this beast for Tesla, is perfectly capable of a "from scratch" design...
It's likely there are NO GPU licensing fees at all.
Even Apple's own processors are ARM-based. Those licenses are not free.
But relatively cheap as compared to the NN processing portion of the system.
Yes. So I understand how ARM licensing works.
Good... that matters for probably about 5% of the overall system.
Son, please go read up on what lidar does and why it is needed. You are embarrassing yourself.
I've yet to see you make a compelling case for why it's needed. Dad.
I work in this field. You have zero clue on AD.
You seem not to have a real grip on things yourself. It reminds me of the old guard who insisted you needed specific hardware for everything, while other folks went ahead and did it in software anyway, leaving them standing in the dust wondering what happened.