
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

The original July 7th shareholder meeting letter is below; I believe it was published on May 28th.

July 10th - New shareholder meeting/battery day date announced.
Aug 11th - Split announcement

Personally, I haven't seen evidence of an impending share split, but I have been closing out my covered calls faster than usual lately. lol
Thank you @phantasms @Tim S
It is interesting that this year we haven't even heard the date of the AGM, whereas in the last several years it had already been held well before this time of year.
I did find that even in 2019, the date was announced over 40 days in advance (May-01-2019 for the Jun-11-2019 AGM).
I couldn't find when the 2018 AGM was announced, but it was held in June of that year. Same in 2017 and 2016: June.

I must add, on June-06 Elon tweeted that he expects the annual shareholder meeting in late July or early August.
With AI Day invitations going out, I won't be surprised by an AGM announcement any day now; who knows, maybe on the same day.

Any thoughts, anyone?

@StealthP3D @The Accountant
 
Green found (back in mid-2020), and IIRC James Douma confirmed, that Tesla had run out of compute on node A in HW3 and has had to spill work over to node B (increasingly so since then).

Which is fine for L2 ADAS, since you don't need full node redundancy: you have a human as your backup.

But it's a non-starter for L4+, where a human can't be required to even be in the vehicle.

Thus, barring some future complete rewrite that vastly cuts down what needs to run (which seems the opposite of the direction they've been going, running more, larger, and more complex NNs over time), HW3 is insufficient for L4+ FSD.
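A toy way to see the redundancy math (the workload figure below is a made-up placeholder, not a measured number for Tesla's stack): once the stack needs more than one node's worth of compute, a fully independent copy no longer fits on the other node.

```python
# Toy illustration of the redundancy argument above. The 1.3-node workload is
# an assumed placeholder, not a measured figure for Tesla's software.
stack_load_nodes = 1.3                   # compute the FSD stack needs, in units of one HW3 node
nodes_available = 2.0                    # HW3 has two nodes (A and B)
redundant_need = 2 * stack_load_nodes    # L4+ would want a full independent copy on each node

print("fits without redundancy:", stack_load_nodes <= nodes_available)   # True
print("fits with full redundancy:", redundant_need <= nodes_available)   # False
```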

If the vision neural nets take 100% of one of the two NPUs and the second has to verify them, they can choose to verify once every second, which leaves the second NPU with spare capacity the rest of the time for a lighter verification (sanity check), shadow-mode neural networks, and other things.
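A rough time-budget sketch of that idea; every number here is an illustrative assumption, not a measured HW3 figure:

```python
# Illustrative time budget for the second NPU: one full verification pass per
# second, lighter sanity checks on the remaining frames, and leftover capacity
# for shadow-mode networks. All numbers are assumptions for the sketch.
fps = 36                      # camera frames per second (illustrative)
full_passes_per_sec = 1       # full redundant verification once per second
light_check_cost = 0.2        # cost of a light sanity check, as a fraction of a full pass

capacity = fps * 1.0          # assume the NPU could run one full pass per frame
used = full_passes_per_sec + (fps - full_passes_per_sec) * light_check_cost
print(f"used {used:.1f} of {capacity:.0f} pass-equivalents per second; "
      f"{capacity - used:.1f} left for shadow-mode networks and other work")
```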

Also, if they need to decrease the workload they can always remove some layers of the neural network. Accuracy will go down, but that might be compensated for by training longer on a larger dataset or using a better architecture. Performance per unit of computation is improving very fast; Tesla even acquired a company that specialized in exactly this.
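A minimal sketch of the depth-reduction idea (a generic PyTorch toy backbone, nothing to do with Tesla's actual networks): fewer layers means fewer parameters and FLOPs per frame, and the smaller net is then retrained to recover accuracy.

```python
# Generic sketch (not Tesla's code): shrink a vision backbone by removing
# layers to cut per-frame compute, then retrain the smaller net.
import torch
import torch.nn as nn

def make_backbone(num_blocks: int, channels: int = 64) -> nn.Sequential:
    """Stack of conv blocks; fewer blocks = fewer parameters and FLOPs."""
    layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU()]
    for _ in range(num_blocks):
        layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
    return nn.Sequential(*layers)

full = make_backbone(num_blocks=16)   # baseline depth
slim = make_backbone(num_blocks=10)   # layers removed to fit the compute budget

for name, net in [("full", full), ("slim", slim)]:
    params = sum(p.numel() for p in net.parameters())
    print(f"{name}: {params / 1e6:.2f}M parameters")
# The slim net would then be trained longer, or on more data, to claw back accuracy.
```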
 
I’m out of my depth and I’m hoping a chip expert will chime in here.

It seems to me that the car FSD hardware and the dojo training supercomputer would be very different beasts.

The former is highly specialised and low power. The latter is all about processing speed and should be flexible enough for any type of machine learning, e.g. robotics, voice, written language, image, video.

Does it make any sense to use the same chip?

Here’s a pic of the HW3 chip, which does not resemble (to me) anything in the new leaked exploded view (dojo?) assembly.*


*Edit: which does look v sexy I might add. If there is such a thing as chip porn, that one is centrefold material.
 
Some interesting stuff spotted on Twitter regarding recent Panasonic earnings call:

- new cell production line at Giga Nevada coming online this month
- Japan cell factory aiming to produce 4680 cells

 
Does it make any sense to use the same chip?
Both chips will benefit from optimization for pixel processing and both will benefit from thermal efficiency (for different reasons).

The best part is one part. However, we know very little about Dojo.

I would not be surprised to see HW4 and Dojo use essentially the same chip but in different optimized packaging.

On another topic, AI Day is a recruitment event, and it makes sense to tease a LOT as the event approaches to explode viewership. I would not be surprised to see even more teases and leaks as the event approaches. It is a good thing to get more of the technically curious to watch and want to be part of history, IMO. Exciting stuff!🙂
 
Tesla AI Day August 19th: Updated and New Predictions based upon Tesla's hint published by Rob Maurer

Tesla Begins Sending Out AI Day Invitations

My 1st prediction, that the FSD 4.0 chip architecture and board would be announced, was correct.

Prediction #2: it's being installed in all new S/3/X on 8/19 to avoid the Osborne effect.

Tesla is working on robotics with the "Leonardo da Vinci of Robots" [1]. It should be obvious what the "secret project" is when you check out UCLA Samueli School of Engineering [*] Professor Dennis Hong's lab webpage:

RoMeLa | The Robotics & Mechanisms Laboratory at UCLA

The lab studies: "humanoid robots and novel mobile robot locomotion strategies", including autonomous robots.

Prediction #3: One of the lab's robots is running Tesla's code on FSD 4.0 to navigate the real world!! It is funny that one of the lab's robots is called "Darwin", after the scientist of course, not the award winners.

Prediction #4: Attendees on 8/19 will witness at least one of Prof. Hong's UCLA teams' RoMeLa robots navigate the real world running Tesla code on FSD 4.0.

If my predictions are accurate, this will be huge!! For example, Tesla Robots will be able to do chores around the house to help seniors live independently with dignity in their homes longer without having to hire outside help. Just like our Tesla vehicles, firmware downloads will enable new features over time. A brand new Tesla product category!!

This is why Elon said during the 2021 Q2 conference call:

"long term, people will think of Tesla as much as an AI robotics company as we are a car company, or an energy company. I think we are developing one of the strongest hardware and software AI teams in the world... So a long story but I think, yeah, probably others will want to use it too and we will make it available."

For insight into what Tesla is doing with the DOJO server technology, one only has to look at what Elon tweeted on 2020-09-20:

[Screenshot of Elon's 2020-09-20 tweet about Dojo]


See what Google did with their generations of TPUs (Tensor Processing Units):

[Figure: TPUv2 and TPUv3 supercomputers]

Google has announced their TPUv4 but it isn't generally available to Google Cloud customers yet:

Google claims its new TPUs are 2.7 times faster than the previous generation
"This year’s MLPerf results suggest Google’s fourth-generation TPUs are nothing to scoff at. On an image classification task that involved training an algorithm (ResNet-50 v1.5) to at least 75.90% accuracy with the ImageNet data set, 256 fourth-gen TPUs finished in 1.82 minutes. That’s nearly as fast as 768 Nvidia A100 graphics cards combined with 192 AMD Epyc 7742 CPU cores (1.06 minutes) and 512 of Huawei’s AI-optimized Ascend910 chips paired with 128 Intel Xeon Platinum 8168 cores (1.56 minutes). Third-gen TPUs had the fourth-gen beat at 0.48 minutes of training, but perhaps only because 4,096 third-gen TPUs were used in tandem." [VentureBeat]

Prediction #5: Tesla did in fact develop the current fastest DNN (Deep Neural Network model) training supercomputer in the world as Elon predicted a year ago. A product called something like "Tesla AI Cloud" will be announced on August 19th or some time thereafter.

If true, this is also huge for investors!! For example, $AMZN makes most of its profits from AWS [2].

I believe $TSLA will be the first company to solve real-world, real-time visual machine navigation (auto, robot, whatever) using their DOJO supercomputers to train their DNNs (Deep Neural Network models). The photo shows 25 chiplets per assembly; those will have the fastest interconnection bandwidth. I expect these supercomputers contain 1,024+ of these assemblies interconnected by some kind of super-fast bus topology. This is DNN training silicon, distinct from FSD 4.0, which is used only for inference: inferring what the video feeds are "seeing" in the real world.

Tesla Dojo supercomputers can be used for general-purpose machine learning (ML) training. ML training basically involves billions of matrix multiplications [3] of vectors and matrices, with forward and backward propagation, to calculate what are called "parameters" (weights) inside a multi-layer model. The word "deep" in deep learning comes from the fact that these models often have a hundred or more layers.
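For the curious, here is a minimal sketch of what that looks like in code (a toy two-layer network in NumPy, nothing to do with Tesla's actual models): the forward pass, the loss, and the backward pass are all dominated by matrix multiplications.

```python
# Toy 2-layer network trained with plain NumPy: forward propagation, a loss,
# and backward propagation, all built from matrix multiplications.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 128))          # a batch of 256 inputs, 128 features each
y = rng.normal(size=(256, 10))           # target outputs
W1 = rng.normal(size=(128, 64)) * 0.1    # layer-1 parameters (weights)
W2 = rng.normal(size=(64, 10)) * 0.1     # layer-2 parameters

lr = 1e-3
for step in range(100):
    # Forward propagation: mostly matrix multiplications
    h = np.maximum(X @ W1, 0)            # ReLU hidden layer
    pred = h @ W2
    loss = np.mean((pred - y) ** 2)

    # Backward propagation: more matrix multiplications to get gradients
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred
    d_h = (d_pred @ W2.T) * (h > 0)
    dW1 = X.T @ d_h

    W1 -= lr * dW1                       # update the parameters
    W2 -= lr * dW2

print(f"final loss: {loss:.4f}")
```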

Sophisticated Wall $treet investors have already figured this stuff out. That is what started moving $TSLA higher and broke us back up through $700.

[1] Dr. Dennis Hong on Achieving the Impossible With Robotic Inventions
[2] How Amazon Makes Money
[3] A Complete Beginners Guide to Matrix Multiplication for Data Science with Python Numpy

[*] My son is an MSCS student at UCLA's Samueli School of Engineering but I don't think he's taken any classes with Prof. Dennis Hong:

 
I would not be surprised to see HW4 and Dojo use essentially the same chip but in different optimized packaging.
DOJO is a stationary NN training computer. That requires floating point math, back propagation, metric crap tons of memory, and high bandwidth interconnects.
HW4 is a mobile NN inference engine that (if it follows HW3) requires only fixed-point forward propagation, a much lower power consumption target, lower cost, less memory, and very little interconnect bandwidth (final results only, for operation and cross-checking).
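A bare-bones sketch of that training-vs-inference split (generic NumPy example, not Tesla's implementation): weights come out of training in floating point and get quantized to fixed point (int8 here, an assumed bit width) for a forward-only inference engine.

```python
# Generic sketch: float32 training weights quantized to int8 for fixed-point
# forward propagation. Values and the 8-bit choice are illustrative assumptions.
import numpy as np

def quantize_int8(a: np.ndarray):
    """Map a float array to int8 values plus a scale factor."""
    scale = np.max(np.abs(a)) / 127.0
    return np.round(a / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
w_fp32 = rng.normal(size=(64, 64)).astype(np.float32)   # weights from training
x_fp32 = rng.normal(size=(1, 64)).astype(np.float32)    # one input activation

w_q, w_s = quantize_int8(w_fp32)
x_q, x_s = quantize_int8(x_fp32)

# Fixed-point forward pass: integer matmul, rescaled to real units at the end
y_fixed = (x_q.astype(np.int32) @ w_q.astype(np.int32)) * (x_s * w_s)
y_float = x_fp32 @ w_fp32
print("max quantization error:", np.max(np.abs(y_fixed - y_float)))
```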

To the whole "HW3 is underpowered because they are using node B" line of thought:
Tesla has [near] ZERO motivation right now to optimize the beta NN code until either they fill both nodes or they are ready for level 4 or 5 true-FSD release and are adding redundancy.
Especially since they are replacing the V8 NN with the V9 Tesla Vision stuff, which generates bloat and redundant code (per the James/Rob interview).
Change of venue:
Investor Engineering Discussions
 
The Max Pain update is out for yesterday's options data. As we'd expect on such a low-volume, low-volatility day, the Max Pain value remains at $690.

There was downward movement again in the "Put/Call" ratio (generally bullish), down to 1.25 from 1.30 yesterday, with 43K contracts added net (about +8%).
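For anyone newer to these updates, here is one rough sketch of how a Max Pain level and a put/call ratio are typically computed; the open-interest numbers below are invented purely for illustration.

```python
# Illustrative Max Pain / put-call ratio calculation with made-up open interest.
strikes = [650, 670, 690, 710, 730]
call_oi = {650: 8000, 670: 9000, 690: 12000, 710: 15000, 730: 11000}
put_oi  = {650: 14000, 670: 13000, 690: 16000, 710: 12000, 730: 9000}

def total_payout(expiry_price: float) -> float:
    """Total intrinsic value owed to option holders if the stock expires here."""
    calls = sum(oi * max(expiry_price - k, 0) * 100 for k, oi in call_oi.items())
    puts = sum(oi * max(k - expiry_price, 0) * 100 for k, oi in put_oi.items())
    return calls + puts

# Max Pain = the expiry price that minimizes what option writers must pay out
max_pain = min(strikes, key=total_payout)
put_call_ratio = sum(put_oi.values()) / sum(call_oi.values())
print(f"Max Pain: ${max_pain}, put/call ratio: {put_call_ratio:.2f}")
```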

And so we await the next piece of news to move the SP... ;)

Cheers!
 
I would like to pre-emptively say, can we please not over-hype AI Day with unfounded rumors?

The Model S Plaid delivery day was an incredible event, but it couldn't live up to the imagined expectations... And now the same folks seem to be playing the same game.

Segueing back on topic…

If society collapses, what could be better than a Cybertruck and a solar roof 🤔

I dare anyone to find another CEO who is such a total embodiment of his company as Elon is.
Trevor Milton of course - be careful how you word things....
Tesla AI Day August 19th: Updated and New Predictions based upon Tesla's hint published by Rob Maurer

Related threads:
Tesla AI day
Dojo discussion
Tesla humanoid robot
 
I would not be surprised to see HW4 and Dojo use essentially the same chip but in different optimized packaging.
I would be extremely surprised if FSD 4 uses one of the 25 chips in this Dojo cluster. At 25 chips per wafer, this would be one of the biggest monolithic dies in the world. To put that in perspective, the PS5 chip gets around 137 chips per wafer, and FSD 3 gets about 200 chips per wafer. Each 7nm wafer costs around $5-7k, while the 14nm wafer FSD 3 uses is cheaper. Factoring in defects, these chips are not something Tesla can afford to put in millions of production cars, as the material cost would be roughly 10x that of the FSD 3 hardware.
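A quick back-of-the-envelope version of that cost argument. The wafer prices follow the rough figures quoted above, and the yield rates are pure guesses, so the exact ratio shifts a lot with the assumptions:

```python
# Back-of-the-envelope die cost: wafer cost spread over the dies that work.
# Wafer prices are the rough figures from the post above; yields are guesses.
def cost_per_good_die(wafer_cost: float, dies_per_wafer: int, yield_rate: float) -> float:
    return wafer_cost / (dies_per_wafer * yield_rate)

fsd3 = cost_per_good_die(wafer_cost=3000, dies_per_wafer=200, yield_rate=0.9)   # 14nm, small die
dojo = cost_per_good_die(wafer_cost=6000, dies_per_wafer=25, yield_rate=0.7)    # 7nm, huge die

print(f"FSD3-class die: ~${fsd3:.0f}")
print(f"Dojo-class die: ~${dojo:.0f}  ({dojo / fsd3:.0f}x more per die)")
```

With these guessed yields the ratio comes out even worse than the ~10x figure above, which only reinforces the point that a die this large is training-center silicon, not something to put in every car.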