
TSLA Market Action: 2018 Investor Roundtable

Status
Not open for further replies.
Almost certainly ARM. HW2/2.5 already has ARM SoC(s) to feed the GPU and to do the non-NN processing, communications, etc. If not ARM, then I would bet on RISC-V because it has zero licensing cost (versus cheap for ARM). x86 is extremely unlikely, but not impossible (i.e., if their ASICs were integrated with a custom AMD SoC, much like the PS4 / Xbox One SoCs but with NN cores instead of GPUs). Doing x86 with a non-custom AMD SoC would mean using off-the-shelf components, and there's just no reason for that.

ARM also has better low power characteristics than most x86 SoCs, and is the more mature platform for embedded and automotive applications all around.

There's also a (mild) security advantage: probably the most vulnerable part of a Tesla is the Intel MCU board, since that's where the browser and the apps run. Many attacks are CPU-specific, and once an attacker gains access through the Intel platform, the ARM board would still have to be attacked to reach the vehicle control software. The chance of the same attack working against both boards increases if both run an x86 SoC and decreases if they are heterogeneous.

So my guess is ARM too. In fact, I think they'd be wise to keep the two NVIDIA Parker chips in the HW3 platform as-is: with the AI chip doing the NN work, they are not a limit on Tesla's ability to scale up Autopilot processing power.
 
After more than a century, Wall Street still hasn't figured out how to value companies properly. Let's look at a few recent examples. Nvidia was valued at $13B three years ago, while the company was spending $2B to develop AI chips. The CEO said the value of those AI chips was going to be huge; very few people listened. Three years later, Nvidia is valued at $156B. Nvidia was mispriced by a wide margin three years ago. A similar example is Netflix: lots of people on Wall Street don't know why Netflix's market cap went up 40-fold in 6 years. Anyone who pays attention only to quarterly earnings probably doesn't understand long-term investment. The CC is important, but the most important part is not the earnings numbers.
You have a good point; perhaps a read of Terry Smith's "Accounting for Growth" (how to strip away the camouflage on company accounts) would be beneficial. Dated, but I'm sure it's still as relevant today as when it was written.
 
A lot of nice Teslas in Hong Kong. I honestly never really liked them until I saw the Tesla car scene over there. I know this is a finance thread... any thread deserves a sexy Model S, IMO. There is never a wrong thread for sexy photos.
 
Suffered a heart attack on the highway. Used Autopilot to drive to the nearest hospital.

Tesla's Autopilot takes the wheel as driver suffers pulmonary embolism | ZDNet

Although a pulmonary embolism will increase right heart strain due to resistance in the pulmonary arteries, and can lead to heart failure and hypoxia, it is not a “heart attack” as in myocardial infarction.

I’m learning a lot about investing from you all, so I figured I’d share a pearl from my work.



When is Tesla offering their own insurance? Their cars are much safer. I guess this would scale along with their body shops?
 
Multi-ported RAM is not really a thing these days and shared RAM buses are electrically messy, so it's more likely that there would either be a central chip that all the NN chips access memory through (possibly with some kind of built-in cache, similar to some rumors about AMD's Navi and Zen 3 architectures), or each would have its own memory bus and they would use some kind of inter-chip communication to access data in the other memory banks (similar to AMD's Zen Epyc/Threadripper CPUs out right now). Regardless, the most likely way to scale performance past a single chip is to use many of them, rather than going for larger monolithic dies.
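The second option can be sketched in a toy software model. Everything here (the `Chip` class, the cycle counts) is purely illustrative, not anything Tesla or AMD has disclosed; it just shows the local-versus-remote access pattern of a per-chip memory bank design with an inter-chip link:

```python
# Toy model: each NN chip owns a local memory bank; reads of another
# chip's bank travel over an inter-chip link. Latencies are made up,
# chosen only to show the local/remote gap.
LOCAL_LATENCY = 1    # cycles for a read from the chip's own bank
REMOTE_LATENCY = 4   # cycles for a read routed over the inter-chip link

class Chip:
    def __init__(self, chip_id, banks):
        self.chip_id = chip_id
        self.banks = banks  # shared dict: owner chip_id -> list of words

    def read(self, owner_id, addr):
        """Return (value, latency); remote banks cost extra cycles."""
        latency = LOCAL_LATENCY if owner_id == self.chip_id else REMOTE_LATENCY
        return self.banks[owner_id][addr], latency

banks = {0: [11, 12], 1: [21, 22]}
chip0 = Chip(0, banks)
print(chip0.read(0, 1))  # local read  -> (12, 1)
print(chip0.read(1, 0))  # remote read -> (21, 4)
```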

Yeah.

So the reason I suggested a shared-RAM design is that I think there's a chance the Tesla AI NN chip has a really radical design: DRAM integrated onto the NN chip die itself. This is a relatively modern technique that Intel (Haswell and later, with on-package eDRAM) and IBM (POWER chips, with on-die eDRAM) have used.



(Having all the weights in SRAM doesn't seem feasible currently: the simplest SRAM cell design uses six transistors per bit, which works out to roughly 48 billion transistors for 1 GB of weight data and would result in dies that are too large; and indications are that they are using at least that much weight data.)
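The ~48 billion figure checks out with back-of-the-envelope arithmetic, assuming the classic 6-transistor (6T) SRAM cell and decimal gigabytes:

```python
# 1 GB of weights held entirely in 6T SRAM: six transistors per bit.
TRANSISTORS_PER_6T_CELL = 6
bits_per_gb = 8 * 10**9            # 1 GB (decimal) = 8 billion bits
transistors = TRANSISTORS_PER_6T_CELL * bits_per_gb
print(f"{transistors / 1e9:.0f} billion transistors")  # -> 48 billion
```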

The Tesla NN chip might have gone one step further and basically integrated the NN forward calculation functional units into the DRAM cells themselves. One possible design would be that there's an NN input/output SRAM area in the 10-30 MB size range, and the functional units propagate those values through the neural net almost like real neurons.

Such a design would have numerous advantages:
  • Heat dissipation properties would be very good, as all the functional units would be distributed across the die evenly in a very homogeneous layout.
  • Execution time would be very deterministic as there's effectively no caching required.
  • Lack of caching also frees up a lot of die area to put the eDRAM cells on.
  • This design would also allow very small gate count mini-float functional units and very high inherent parallelism.
  • Scaling it up to higher frequencies would also be easier, due to the lower inherent complexity and the lower critical path length.
  • All of this makes it very power efficient as well, i.e. a very high NN throughput for a given die size, gate count and power envelope.
In such a design external RAM modules have a secondary role: they are basically just for initializing the internal "neurons" (multiplier and saturated-add functional unit) and "axons" (weight value) with the static neural net, and to store the output results.

Other designs are possible too, such as self-contained, all-in-one 'neuron' functional units that are programmable to perform a given loop of weight calculations with no external communication other than the input fetches from other functional units, the eDRAM cell fetches and the output stores (i.e. intermediate state would not be stored anywhere external to the functional unit; it's all within small local registers in the functional unit itself, with no bus access to them whatsoever). But the basic idea is the same: keep the NN weight data on-die.
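A minimal software sketch of such a 'neuron' unit under the assumptions above: a multiplier plus a saturating adder, with the weight held in a local register standing in for an on-die eDRAM cell. The 8-bit accumulator bounds are my own illustrative choice, not anything known about the actual chip:

```python
SAT_MIN, SAT_MAX = -128, 127  # bounds of a hypothetical 8-bit accumulator

def saturated_add(a, b):
    """Add and clamp at the bounds instead of wrapping on overflow."""
    return max(SAT_MIN, min(SAT_MAX, a + b))

class Neuron:
    """Multiplier + saturated-add unit; weight lives in a local register."""
    def __init__(self, weight):
        self.weight = weight  # the 'axon': initialized once from external RAM

    def step(self, inp, acc):
        # Intermediate state never leaves the unit; only the new
        # accumulator value is handed onward.
        return saturated_add(acc, inp * self.weight)

n = Neuron(weight=3)
acc = 0
for x in (10, 20, 30):
    acc = n.step(x, acc)
print(acc)  # saturates at 127 rather than reaching 180
```

Saturation (rather than wraparound) matters here because an overflowing accumulator that wraps to a large negative value would corrupt the activation, whereas clamping merely loses precision.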

If that's the NN chip design Tesla invented then I'd expect the NN chips on multi-chip boards to share any external RAM, as it's not a performance bottleneck anymore.

But maybe I'm missing some complication that makes such a design impractical - for example the latency of eDRAM cell fetches would be a critical property.
 
Just in: Tesla Shanghai factory land secured
"In Chinese. Will retweet Bloomberg or sth when they get to it"

Kelvin Yang on Twitter

Breaking: Oct 17, Tesla (Shanghai) Co., Ltd. successfully acquired 864,885 square meters (a total of 1,297.32 mu) of industrial land in Q01-05, Shanghai Lingang Equipment Industrial Zone, and officially signed with the Shanghai Planning and Land Resources Administration. $TSLA #TeslaChina

vincent on Twitter
 
In such a design external RAM modules have a secondary role: they are basically just for initializing the internal "neurons" (multiplier and saturated-add functional unit) and "axons" (weight value) with the static neural net, and to store the output results. (Other designs are possible too - but the basic idea is to have the NN weights data on-die.)

 
Market action and chip design all in one convenient thread, how nice. Seems it could be its own thread rather than being buried here.

Yeah, but then I wouldn't read it - I spend half my life just tracking this thread and Twitter :confused:

This China news could propel us past $300 today - note that I've yet to predict anything correctly with regard to the SP, so...

 


“Tesla expects the factory to produce its first cars in three years, according to an earnings release in August”

CNBC conveniently forgets about the Oct 2nd Tesla delivery update, which said that Shanghai factory plans have been accelerated.
 

This is awesome news and comes in at the right time. Positive macros and positive Tesla news combined with a high beta are usually positive for the SP.

I am delighted that China is moving ahead quickly. The expansion of production capabilities has been one of my larger concerns. It feels like Elon has been careful on that front to avoid more discussions around debt, CF and profitability.

Investing means more CapEx, which I personally would have liked to happen earlier, although it's food for shorts. Having China locally funded makes this a pretty slick deal.

I wonder when we'll see GF1 expanding too. It's the only location where they can make space for the Semi, the Y and the Roadster, unless they want to do them in China as well, which I doubt. Maybe the Y for the local market, though.

On top of that, they have the battery, battery pack and motor business under one roof as well. That's how Elon said all future GFs will be designed, and it makes total sense, as you avoid the risks and costs of moving the parts around.

We are moving to the model of raw materials in on one side and finished product out the other, all of that for the market nearby.
 
“Tesla expects the factory to produce its first cars in three years, according to an earnings release in August”

CNBC conveniently forgets about the Oct 2nd Tesla delivery update, which said that Shanghai factory plans have been accelerated.

Of course they *forget* it, but they need some kind of negative damper on what is incredibly bullish news.

In any case, we all know that Tesla can just throw up a few tents, stick in some Grohmann machinery, throw together GA from some old bits of junk they found in some alley, and be spitting out cars by Valentine's Day 2019 :D
 