
HW4 computer mass production in Q4 2021

"Tesla is working on HW4.0 self-driving chip with semiconductor company TSMC with a timeline for mass production in Q4 2021, according to a new report coming out of China."

"Mass production wouldn’t happen until Q4 2021 – meaning that we aren’t likely to see those chips inside Tesla production vehicles until 2022."

Tesla is working on HW 4.0 self-driving chip with TSMC for mass production in Q4 2021, report says - Electrek

So it looks like we are about 2 years away from the next gen FSD computer (HW4) getting into Tesla cars.

I promise I'll save the date and be back here in 2022 to have a good laugh when Elon tweets "what a massive improvement HW4 will be" and how, in the next 6 months, FSD will for sure work this time.
 
Thanks for this. It makes sense.

From the new AMD leak, the picture seems clearer now:
HW4 will be a CoWoS package that includes a Broadcom CPU, an AMD GPU and a Tesla chip.


Per TSMC, CoWoS is Chip-on-Wafer-on-Substrate. The "chip" there typically consists of multiple dies from different sources. The "wafer" is an interposer, aka an RDL (redistribution layer): essentially a miniature single- or multi-layer PCB built with a semiconductor process that provides only the interconnects for the dies above it, and it is relatively cheap to manufacture. The "substrate" is part of the final package.

The two illustrations below show a wafer (which yields about 30 CoWs) and one CoW before it is put on a package substrate; a rough die-per-wafer sanity check follows the attachments. The 25 chips mentioned in the Electrek article mean 25 CoWs. Hope this helps.

[Attachment 587810: wafer of CoWs]

[Attachment 587813: single CoW before packaging]
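
As a rough sanity check on the "about 30 CoWs per wafer" figure, here's a back-of-envelope die-per-wafer estimate. The 300 mm wafer size and the ~1500 mm² interposer area are my assumptions for illustration, not numbers from the article:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard gross-die-per-wafer approximation:
    usable dies ~= wafer_area / die_area - an edge-loss correction."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Assumed values: 300 mm wafer, ~1500 mm^2 CoWoS interposer (hypothetical).
print(gross_dies_per_wafer(300, 1500))  # ~29, in line with "about 30 CoWs" per wafer
```

A larger or smaller interposer shifts the count quickly, which is why the exact per-wafer yield depends heavily on how big each CoW actually is.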
 
TL;DR -- each chip will have 5-10x the transistor count of the NVIDIA A100, plus be optimised at the transistor level for FP32 operations... and Tesla have ordered thousands. Dojo will outperform the combined compute power of every NVIDIA AI chip in operation across all the compute clouds globally. This is why Tesla needs to build Dojo: the alternative would be to consume essentially the entire public cloud's compute, and Amazon, Google, etc. wouldn't want that.
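
To put that claim in perspective, a quick back-of-envelope on transistor counts. The A100's ~54.2 billion transistors is a published figure; the 5-10x multiplier and "thousands" of chips come straight from the claim above, and the exact order size below is my placeholder:

```python
A100_TRANSISTORS = 54.2e9  # NVIDIA A100: ~54.2 billion transistors (published spec)

# Claimed multiplier range for each Dojo chip, per the post above.
multiplier_low, multiplier_high = 5, 10

per_chip_low = multiplier_low * A100_TRANSISTORS    # ~271 billion
per_chip_high = multiplier_high * A100_TRANSISTORS  # ~542 billion

# "Thousands" of chips -- 5,000 is an illustrative placeholder, not a reported number.
chips_ordered = 5_000

total_low = chips_ordered * per_chip_low
total_high = chips_ordered * per_chip_high
print(f"Total transistors: {total_low:.2e} to {total_high:.2e}")
# ~1.4e15 to 2.7e15 -- on the order of a few quadrillion transistors of training silicon.
```

Whether that actually beats "every NVIDIA AI chip in every cloud combined" is a separate question, but it gives a sense of the scale being claimed.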

If this is true then it shows how completely screwed Tesla are.

Dojo won't even be up and running for another 1.5-2 years, yet they are lying and saying FSD will be available for beta this year, and robotaxi in the next 6 weeks.

And when it does come online, it will be so far beyond anything currently in existence that there are huge questions over whether it will even work, and whether the results of all that training will be useful. Remember that they have to boil it down to something that runs in the car, not on the supercomputer.

Then there is the other huge flaw with relying on NNs and machine learning to this extent: there is no way to understand how the model works. It's a black box, and when it misbehaves you can't understand why. All you can do is add that case to your training set and try to build a better model next time.

Every other manufacturer is using logic and algorithms so that they can control the behaviour.
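
To make the "all you can do is add that case to your training set and retrain" point concrete, here's a minimal sketch of that loop. Pure illustration: the names and labels are hypothetical, and this is nothing like Tesla's actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingSet:
    """Toy stand-in for a labelled dataset of driving scenarios."""
    examples: list = field(default_factory=list)

    def add(self, scenario, correct_label):
        self.examples.append((scenario, correct_label))

def retrain(dataset: TrainingSet):
    """Placeholder for a full (expensive) training run on Dojo-class hardware."""
    print(f"Retraining on {len(dataset.examples)} examples...")
    return "new_model_weights"  # opaque black box: no per-case explanation comes out

# The loop the post describes: a misbehaviour is observed, but the model can't
# explain *why*, so the only recourse is to label the case and retrain.
dataset = TrainingSet()
failure_cases = ["stationary fire truck in lane", "faded lane markings at dusk"]

for case in failure_cases:
    dataset.add(case, correct_label="brake / yield")  # human supplies the answer
    model = retrain(dataset)                          # hope the next model handles it
```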
 
No they aren't. Plenty of people with MCU1 have had their AP computer replaced with the HW3 one at no cost.

I agree, and I paid for FSD on our Sept 2017 90D. I intend to wait until HW4 is available before I have the upgrade from HW2.5 to enable FSD, even if that means waiting a couple more years. Amazing how many folks are so impatient. I don't care about being left out of the initial roll-out phases. I just don't want them to try to say "your HW3 will be fine" and refuse to upgrade it to HW4. And Tesla will do whatever it takes to make it work.

I'm just saying....
 
"If this is true then it shows how completely screwed Tesla are. [...] Every other manufacturer is using logic and algorithms so that they can control the behaviour."

I’ve since changed my mind on these chips. I don’t think the original article was properly translated so I don’t think there’s enough info to speculate. It may very well be their HW4 chip.

As for the rest of your post: you don't really make a point other than that they're late?