Elon: "Feature complete for full self driving this year"

I envy you @diplomat33.

I really do.

There was a time when I also felt that naive excitement about all things Tesla and all things Tesla Autopilot. I just can’t feel it anymore.

Thank you ... I guess. I do feel a bit sad for you. I wish you could get that excitement back. Maybe if/when Tesla succeeds and releases AP3 with more FSD features, and you are enjoying the features in your car, you can get that excitement back? I wish that for you very much.
 

The feeling is probably mutual, at least the feeling-sorry part. I feel sorry for you, as I tend to think you'll be let down. :)

Neither of us wants the bad outcome of course.
 
There was a time when I also felt that naive excitement about all things Tesla and all things Tesla Autopilot. I just can’t feel it anymore.

I have always felt generally pessimistic about all things Tesla Autopilot, but I'm feeling relatively more positive: more progress is being made, better talent is in charge, and improved processing capabilities should yield better results.
 
Hahaha, you honestly don't believe that, do you?

You still can't see that? A company that hadn't figured out the basics would not have chosen cameras over Lidar so it could equip tens or hundreds of thousands of cars for machine learning, recruited top chip designers, and spent serious effort creating its AI chip years ago. It's much easier to do superficial, piecemeal things that get you some publicity but are irrelevant to the final outcome, like most others do.
 
So, four NN chips on the FSD computer, not two. If the earlier estimates of TOPS per NN processor are correct, this isn't that far off from Nvidia's estimate of the computational power required for L5: an estimated 160-240 TOPS for Tesla (not including whatever small boost the Samsung chip provides) versus 320 TOPS total for Nvidia. I'm going to wager that the Tesla chips operate closer to their theoretical throughput due to a more optimized memory system.
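For what it's worth, here is the arithmetic behind that comparison as a quick Python sketch; the per-processor TOPS range is the unconfirmed earlier estimate, not a published spec:

# Rough tally of the throughput estimates above.
# The per-NN-processor TOPS range is an assumed, unconfirmed estimate.
accel_tops_low, accel_tops_high = 40, 60   # assumed per-processor TOPS
n_nn_processors = 4                        # per the "four NN chips" reading

tesla_low = accel_tops_low * n_nn_processors    # 160 TOPS
tesla_high = accel_tops_high * n_nn_processors  # 240 TOPS
nvidia_l5 = 320                                 # Nvidia's quoted L5 figure

print(f"Tesla (est.): {tesla_low}-{tesla_high} TOPS; Nvidia L5: {nvidia_l5} TOPS")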

No, it's two chips, not four. A chip here means an SoC, and an NN chip has multiple accelerators.
 

Careful, there. An SoC is a very specific kind of chip — a CPU with a northbridge and southbridge built in.

Also, I would be really surprised if "multiple accelerators" turned out to be multiple dies inside a single chip. I mean, anything is possible, but for this sort of application, there probably is very little advantage to doing so (unless you absolutely can't avoid sharing VRAM — TRAM? :D — across the two dies for some reason), and from a thermal perspective, there would be a big advantage to putting them in separate physical packages.
 
You still can't see that? A company that hadn't figured out the basics would not have chosen cameras over Lidar so it could equip tens or hundreds of thousands of cars for machine learning, recruited top chip designers, and spent serious effort creating its AI chip years ago. It's much easier to do superficial, piecemeal things that get you some publicity but are irrelevant to the final outcome, like most others do.

The tens of thousands of cars are not doing any machine learning.

Also, Tesla clearly had not figured this out in 2016, given that they updated the hardware suite only 9 months later with more redundancy, even though they had called the 2016 suite ”Level 5 capable hardware”. And we all know how little the Tesla of 2016 actually knew on the software front, and how many new people they had to cycle through to make actual progress.

I do give one thing to Tesla: they deployed an interesting driver’s aid platform to tens of thousands, even hundreds of thousands, of cars that they can update and iterate on in software. Had they marketed it very differently (and not as ”Level 5 capable hardware” and all that jazz), this story would look much better.
 

Tesla just came out with this wonderful AI chip and we all wonder where it came from. Tesla actually recruited Jim Keller and Peter Bannon in 2015, and probably planned for this long before that. There is no question it's the necessary ticket to success, but during the intervening years all those clueless "critics" could only drivel on about irrelevant things. I don't remember any of you once mentioning that computing power is the bottleneck and the area that needs the most improvement. One at least needs to bark up the right tree. Frankly, I have no interest in hearing those amateurish opinions. You certainly have the right to say Tesla is not moving fast enough, but we should put things in the right perspective so as not to look stupid.
 

Elon does engineering work from first principles, not by analogy. Unfortunately, you critics can only criticize by analogy, not from first principles.
 
You still can't see that? A company that hadn't figured out the basics would not have chosen cameras over Lidar so it could equip tens or hundreds of thousands of cars for machine learning, recruited top chip designers, and spent serious effort creating its AI chip years ago. It's much easier to do superficial, piecemeal things that get you some publicity but are irrelevant to the final outcome, like most others do.

Yeah. That's why the Lidar cars are out driving themselves around today while Teslas are still trying to figure out whether the tree limb is in the middle of the road or fifteen feet above it. Put down your Kool-Aid cups, Tesla fans.
 
Careful, there. An SoC is a very specific kind of chip — a CPU with a northbridge and southbridge built in.

Also, I would be really surprised if "multiple accelerators" turned out to be multiple dies inside a single chip. I mean, anything is possible, but for this sort of application, there probably is very little advantage to doing so (unless you absolutely can't avoid sharing VRAM — TRAM? :D — across the two dies for some reason), and from a thermal perspective, there would be a big advantage to putting them in separate physical packages.

Be careful of what? Mobileye's EyeQ5 chip, for example, has 4+ accelerators, and Nvidia's Xavier chip also has multiple NN accelerators.
 
No, it's two chips, not four. A chip here means an SoC, and an NN chip has multiple accelerators.
From earlier information, the SoC is a Samsung Exynos device (the specifics of which are unknown), and the NN accelerators are PCI Express devices connected to it. Now EM has said there are two NN accelerators per SoC. Two SoCs for redundancy, two NN PCIe devices per SoC: four total.

I wonder why they are using two NN accelerators per cluster rather than designing the NN accelerator from inception with twice the compute and memory bandwidth. I have some ideas, but none of them are compelling. PCI Express is a shared resource at the root complex, and has Samsung ever integrated a >16-lane PCIe bus in an SoC?
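For a rough feel for that lane-budget question, a quick sketch; the PCIe generation and link widths here are illustrative assumptions, not known specs of the FSD computer:

# Approximate usable one-direction bandwidth per PCIe lane in GB/s,
# after encoding overhead (Gen2: 8b/10b; Gen3/4: 128b/130b).
GBS_PER_LANE = {2: 0.5, 3: 0.985, 4: 1.969}

def link_bandwidth_gbs(gen, lanes):
    """Approximate one-direction bandwidth of a single PCIe link."""
    return GBS_PER_LANE[gen] * lanes

# Hypothetical: each of the two NN accelerators on its own x8 Gen3 link.
per_device = link_bandwidth_gbs(gen=3, lanes=8)   # ~7.9 GB/s per device
print(f"Both devices: ~{2 * per_device:.1f} GB/s across 16 lanes at the SoC")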
 

But the software that @verygreen did his analysis on could have been made to run on devkits.

Regardless, I can see why you wouldn't want to design an SoC from scratch, since that would be an even more daunting task, and would instead use an existing SoC and add your own hardware NN accelerator to it (which is a microprocessor).

No matter how you look at it, it's still two chips (SoCs), each with two dedicated NN hardware accelerators (microprocessors).
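To pin down the topology being described, a minimal Python sketch; the names are illustrative, not Tesla's:

# Two SoCs for redundancy, each with two dedicated NN accelerators.
# All names here are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NNAccelerator:
    name: str

@dataclass
class SoC:
    name: str
    accelerators: List[NNAccelerator] = field(default_factory=list)

fsd_computer = [
    SoC("soc_a", [NNAccelerator("a.nn0"), NNAccelerator("a.nn1")]),
    SoC("soc_b", [NNAccelerator("b.nn0"), NNAccelerator("b.nn1")]),
]

# Two chips, four NN accelerators in total.
assert len(fsd_computer) == 2
assert sum(len(s.accelerators) for s in fsd_computer) == 4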
 