Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

TSLA Market Action: 2018 Investor Roundtable

Status
Not open for further replies.
In my view, that kind of fund manager is unlikely to change their view. Based on his statement, he doesn't understand those companies at all. If he can't understand Tesla, Netflix, or BMW in the first place, he won't understand them later. I think his fund will be wiped out at some point.

To give an extreme example: suppose a company holds 100 tons of gold and also does a little gold mining. Because the mining activity is very costly, it makes only a little money by year end, so its P/E is 200. This manager would view that as very expensive, because he was taught that a P/E of 10~15 is more appropriate. Netflix has a large amount of content that people will continue to pay $10 a month to subscribe to, and they are ready to expand into China. Their subscription revenue could double to $20B a year. They could also monetize these 200 million subscribers in the future, in addition to the monthly fee. I'm not saying NFLX is a good buy now; I'm saying that manager doesn't know what he is doing.

Edit: the manager talked about Netflix's P/E being 200. Does he understand that's because Netflix is using all its revenue to produce more content, which is the right thing to do? By that logic he should have shorted Amazon in the past, because Amazon never had good earnings.
Netflix is an enigma to me. It's too hard to predict where it's headed in 5 or 10 years, and I'm not comfortable investing in it at this valuation. But I agree with your perspective. Essentially it's about scalability, an inherent advantage of tech, and especially of Netflix.
Tesla has the most upside to me. It's still in the early innings of its life, and of the macro paradigm shift.
 
(tldr; A valuation model based on Netflix)

Axioms:
1) Tesla should be able to achieve 40% of Netflix's terminal net profit margin (e.g. 10% vs. 25% would be a good guess).
2) Netflix revenue growth is pretty steady at 30%+ and will likely continue near that level for a long time.
3) Tesla also has a long stretch of revenue growth ahead, and averaged over time it almost certainly exceeds 30%.
4) Tesla should exceed $30B in revenue next year.
5) Netflix should exceed $20B in revenue next year.
6) Netflix is trading at a $175B market cap after hours.
7) Netflix has a price-to-forward-sales ratio of 8.75.

I'd argue that this means Tesla should be valued at:

40% * 8.75 * $30B = $105B, or $620 PPS. Right now.

The only questionable input is whether a 10% net margin is a reasonable expectation, since it defies typical auto-industry margins. I think we are familiar with the arguments, but that's the question that will clear up most rapidly, simply by proving that a 25% automotive gross margin on the Model 3 is possible. Opex then clearly dilutes to 10-11% of revenue eventually, and after taxes you are in the ballpark of 10% net.
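The arithmetic above is easy to check as a back-of-envelope script. The 40% margin ratio is the axiom from the post, and the ~170M share count is my own rough assumption to recover the quoted $620 PPS:

```python
# Back-of-envelope Tesla valuation derived from Netflix's forward-sales
# multiple. All inputs are the rough guesses from the post, not official
# figures; the share count is an illustrative assumption.

nflx_market_cap = 175e9   # Netflix after-hours market cap ($)
nflx_fwd_revenue = 20e9   # Netflix forward revenue estimate ($)
tsla_fwd_revenue = 30e9   # Tesla forward revenue estimate ($)
margin_ratio = 0.40       # assumed Tesla net margin as a fraction of Netflix's

nflx_ps = nflx_market_cap / nflx_fwd_revenue        # price-to-forward-sales
tsla_market_cap = margin_ratio * nflx_ps * tsla_fwd_revenue

shares_outstanding = 170e6   # rough TSLA share count at the time (assumption)
pps = tsla_market_cap / shares_outstanding

print(f"NFLX P/S: {nflx_ps:.2f}")                        # 8.75
print(f"Implied TSLA cap: ${tsla_market_cap / 1e9:.0f}B")  # $105B
print(f"Implied PPS: ${pps:.0f}")                        # ~$618
```

With ~170M shares the formula lands at roughly $618 per share, consistent with the $620 figure quoted above.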

That progression is basically my hope for the June '19 $400 calls I am going to pick up tomorrow. $620 would be more than a 10x return, so while the odds of this playing out may not be all that high, it is a game of mathematical expectation.

Actually, please don't bid against me tomorrow. :p
 
It matters to me.

I have shares, Aug19 calls, a boatload of late Nov18 calls, and some weeklies.

I'm equally concerned about short-term and long-term movements.
All I hold are Feb 19 and Aug 19 calls. No shares. I just see other companies' Q3 earnings reports as distractions at best. WE are the ones cooking with oil right now.
 
I believe the clock starts ticking now on the land auction in Shanghai. It's early morning in China, and I believe that if Tesla is the sole bidder for the plot, the announcement could come as soon as today, China time.

Wonder if this impacts SP.

Securing land in a zone that both has no tariffs and qualifies for Chinese government rebates while remaining solely owned by Tesla is a tremendous competitive advantage in the largest market on the planet.
 

Having followed Tesla for so long, I have to think Bloomberg didn't publish the land news to help Tesla. They were likely tipped off by Tesla shorts to write about it, so that other parties would hopefully join the bid and at least make it hard for Tesla, maybe quadrupling the price. You know there are lots of parties that really want to destroy Tesla; it would be a miracle if there were no other bidders. Another possibility is that the auction organizer tells other potential bidders to just go away, because this land is prepared for Tesla. I don't hold high hopes, but I will be very happy for Elon if Tesla does get the land. In the long run, I think Tesla will get a level playing field.

Even if Tesla gets the land, that's not a "tremendous competitive advantage"; I can only say it potentially removes a "tremendous competitive disadvantage".
 
Is the computer being replaced with ARM or x86-64? If ARM, is it Tesla-grown or an existing solution?

Almost certainly ARM. HW2/2.5 already has ARM SoC(s) to feed the GPU and handle non-NN processing, communications, etc. If not ARM, then I would bet on RISC-V, because it has zero licensing cost (versus cheap for ARM). x86 is extremely unlikely, but not impossible (e.g., if their ASICs were integrated into a custom AMD SoC, much like the PS4/Xbox One SoCs but with NN cores instead of GPUs). Doing x86 with a non-custom AMD SoC would mean using off-the-shelf components, and there's just no reason for that.

HBM and interposer.
I don't know the NN use case specifically, but HBM might be overkill for Tesla's purposes (though I am aware that Google is going with HBM on some of their TPU designs). If it does come with HBM (and thus likely interposer packaging), that would explain Elon's comment about it costing the same as the old Nvidia hardware: HBM is expensive right now, and interposers aren't free.

Or the Tesla AI chip has a monster size and/or low yields. Given that this thing has to run in a harsh automotive environment and requires a combination of high performance and low power, it may be an SOI chip, which is also more expensive.
There's no need for SOI. The TDP shouldn't be a problem, and plenty of automotive-grade ICs get by fine on just about any process that isn't garbage. Especially on the Model 3, since it's liquid cooled (the S/X might be air cooled? I'm less familiar with them), TDP and operating temperature range should be a non-issue. As for being large: this seems unlikely. Being specialized, I doubt that even with its much higher NN performance than the old Pascal GPU architecture, it is larger than Pascal. It likely yields better due to its smaller size (meaning, for a given average defect rate per wafer, more 100%-functional parts).

The Parker chips contain a Pascal GPU: Introducing Parker, NVIDIA’s Newest SOC for Autonomous Vehicles | NVIDIA Blog
Meanwhile, it appears that the GP102 has 3,328 CUDA cores.

BTW, if they are really using a GP102-equivalent Pascal chip with 3,328 CUDA cores, then that's about 85% of the available GPU computing power (the two Parker chips have 256+256 CUDA cores), so I think the discrete Pascal GPU is probably processing most of the cameras. The ~13% of CUDA cores available on the Parker side might only be able to process down-scaled images, with reduced-size neural nets.

I.e. exactly as you originally suggested.

I am pretty sure they're using the GP106, not the GP102. So it's actually 1280 + 256 (+ 256 on HW2.5), or an 80%/20% theoretical split (~71%/~29% on HW2.5). But it's actually more complicated than that, since clock speeds matter: for example, a typical base clock of 1480 MHz for the GP106, versus a GPU clock ranging from 854 MHz to 1465 MHz on the Parker SoC(s). If just one Parker SoC is used, at the lowest clock, you'd have a ~90%/10% split; at the other end, with both Parker SoCs at the fastest clock, it would be a ~71%/29% split. We have no idea what clocks are involved here. Plus, the Parker SoCs are likely less efficient, with more latency and overhead, probably due to having slower memory than the GP106, among other reasons; iGPUs are often second-class citizens when it comes to accessing data. I would guess the realistic performance split is somewhere between 95%/5% and 85%/15%, depending on clocks and whether both Parker SoCs are in use.
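The cores-times-clock arithmetic above can be checked with a quick script. Core counts and clock ranges are the figures quoted in this post; real-world splits would also depend on memory bandwidth and scheduling overhead:

```python
# Theoretical GPU throughput split between the discrete GP106 and the
# integrated Parker GPU(s), using cores * clock as a crude proxy.
# Figures are the ones quoted above; actual performance also depends on
# memory bandwidth, latency, and scheduling overhead.

def split(gp106_cores=1280, gp106_mhz=1480,
          parker_cores=256, parker_mhz=854, parker_count=1):
    discrete = gp106_cores * gp106_mhz
    integrated = parker_cores * parker_mhz * parker_count
    total = discrete + integrated
    return discrete / total, integrated / total

# One Parker SoC at its lowest clock:
d, i = split(parker_mhz=854, parker_count=1)
print(f"{d:.0%} / {i:.0%}")   # 90% / 10%

# Both Parker SoCs at the highest clock:
d, i = split(parker_mhz=1465, parker_count=2)
print(f"{d:.0%} / {i:.0%}")   # 72% / 28%
```

Those two endpoints bracket the ~90%/10% and ~71%/29% splits quoted above.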

Yeah, so visual input processing is the most compute-intensive part of full self-driving. Tesla has 8 cameras, and if you want to process each at 100 fps (one frame every 10 milliseconds) at the cameras' native HD resolution, that's a lot of processing.

Right now they process everything, all frames from all 8 cameras, with a single discrete GPU, I believe, on an Nvidia GP102-based board.
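To put a rough number on "a lot of processing" (the ~1.2 MP camera resolution here is my assumption for illustration; Tesla hasn't published exact specs):

```python
# Rough pixel throughput for processing all 8 cameras at 100 fps.
# The 1280x960 (~1.2 MP) resolution is an illustrative assumption.

cameras = 8
fps = 100
width, height = 1280, 960

pixels_per_second = cameras * fps * width * height
print(f"{pixels_per_second / 1e9:.2f} Gpixel/s")  # 0.98 Gpixel/s
```

So the NN hardware has to ingest on the order of a billion pixels per second before any inference work even begins.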

But now that they have their own discrete NN chip, the Tesla AI chip, future iterations (HW4, HW5) could use the following computer topology on the board, with very little additional cost (the AI chips probably cost only a few dollars each to make; most of the cost is in making the board):

Code:
   [AI Chip #1]           [AI Chip #2]
               \         /
                [GPU RAM]
               /         \
   [AI Chip #3]           [AI Chip #4]

I.e. four chips and, say, 16 GB of shared high-speed GPU RAM with multiple access channels, so that all chips can use the RAM all the time without slowing each other down.

(There's also the question of whether the Tesla AI chip uses separate RAM modules - a possible alternate design would be for the RAM to be integrated into the AI chip itself, as a sort of very fast transistor based SRAM. This would have a number of other advantages as well, such as close proximity of NN 'weight' data with the functional units representing 'neuron' nodes.)

But assuming that RAM is separate from the chip, the above board layout is a possible topology, where Chip 1 would handle cameras 1-2, Chip 2 would handle cameras 3-4, etc. While not all cameras have the same pixel count, the processing overhead is still similar and scales with the complexity of their neural networks.

Note that this way the total computing throughput of the system can be increased by a factor of 2x, 4x and 8x with very little additional cost other than a higher power envelope.

I'm reasonably sure HW3 is going to feature one AI chip (they want to keep it simple initially, and it appears the chip is plenty fast already) - if it features two chips it will be for redundancy and fail-over perhaps, not to increase performance.

All of this is speculation though - I'm sure we'll hear more about the details once the HW3 release gets closer ...

Multi-ported RAM is not really a thing these days, and shared RAM buses are electrically messy, so it's more likely that there would either be a central chip that all the NN chips access memory through (possibly with some kind of built-in cache, similar to some rumors about AMD's Navi and Zen 3 architectures), or each chip would have its own memory bus and they would use some kind of inter-chip communication to access data in the other memory banks (similar to AMD's Epyc/Threadripper Zen CPUs out right now). Regardless, the most likely way to scale performance past a single chip is to use many of them, rather than going for larger monolithic dies.

They could also alternate frames between two chips, at a cost of one frame of delay, if they exceed the capacity of a single-chip solution. I would prefer a new chip with twice the processing power, since it would leave the rest of the system unchanged.
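The alternating scheme described above is essentially round-robin frame dispatch. A toy sketch (the "chips" here are just hypothetical placeholders, not a real driver API):

```python
from itertools import cycle

# Toy round-robin dispatcher: alternate incoming frames between two NN chips.
# Each chip then gets two frame-times to finish its work, at the cost of one
# frame of added latency. Chip names are hypothetical placeholders.

def dispatch(frames, chips):
    assignments = []
    for frame, chip in zip(frames, cycle(chips)):
        assignments.append((frame, chip))
    return assignments

for frame, chip in dispatch([f"frame{i}" for i in range(4)],
                            ["chip0", "chip1"]):
    print(frame, "->", chip)
# frame0 -> chip0
# frame1 -> chip1
# frame2 -> chip0
# frame3 -> chip1
```

The tradeoff is exactly as stated above: double the per-frame compute budget in exchange for one frame of extra latency.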

I think an important but underappreciated advantage of the Tesla chip is power. GPUs may use hundreds of watts, while the Tesla chip may use a lot less. In a robotaxi situation where the chip runs 24/7 at full tilt, the cost advantage in power may be much bigger than the cost of the GPU itself.
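That power argument is easy to quantify. The wattages and electricity price below are illustrative assumptions, not measured figures for either chip:

```python
# Annual electricity cost of running inference hardware 24/7, as in a
# robotaxi. Wattages and the $/kWh rate are illustrative assumptions.

hours_per_year = 24 * 365
price_per_kwh = 0.12  # $/kWh, assumed

def annual_cost(watts):
    return watts / 1000 * hours_per_year * price_per_kwh

print(f"250 W GPU:  ${annual_cost(250):.0f}/yr")  # $263/yr
print(f"50 W ASIC:  ${annual_cost(50):.0f}/yr")   # $53/yr
```

And in a BEV the bigger cost may be range rather than dollars: 250 W of continuous draw is roughly one mile of range lost per hour of driving at a ~250 Wh/mile consumption rate.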
"Chiplet" design is the future for affordable computing power. Just look at the pricing of the big CPUs from Intel and AMD now: AMD uses up to 4 smaller dies versus a single monolithic die for Intel, and this means massively cheaper production, since yields are much higher with smaller dies. There are of course tradeoffs, but few if any of them matter for highly parallel tasks like processing NNs.
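The yield argument can be illustrated with the standard Poisson yield model. The defect density and die areas below are made-up illustrative numbers, not real fab data:

```python
import math

# Poisson yield model: yield = exp(-defect_density * die_area).
# Splitting one big die into four chiplets raises the fraction of good
# silicon per wafer. Defect density and areas are illustrative only.

defects_per_cm2 = 0.2

def die_yield(area_cm2):
    return math.exp(-defects_per_cm2 * area_cm2)

mono_area = 6.0               # one 600 mm^2 monolithic die
chiplet_area = mono_area / 4  # four 150 mm^2 chiplets

print(f"Monolithic yield: {die_yield(mono_area):.1%}")    # 30.1%
print(f"Chiplet yield:    {die_yield(chiplet_area):.1%}")  # 74.1%
```

Even though a package needs four good chiplets, dies are tested before packaging ("known good die"), so the fraction of usable silicon per wafer roughly tracks the per-chiplet yield rather than the monolithic one.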
 

I assume you are aware of the already-changing, more positive narrative we have been seeing in the media lately. I think the increased and pointed attacks on the media's negative bias are actually paying off. I think the scope of what you are proposing is too large and, to some degree, unnecessary. Daily events for 30 journalists would be overkill; maybe once a week would be sufficient. Not to mention that diverting Model 3s away from customers to give journalists in every country a 2-day driving experience seems like a poor idea at this point in time.
 
@beachbum77

Really Donn? So you are saying that if GM would just stop exporting the Bolt to Canada and Asia, they would have steamrolled the Model 3?

 
I would still say media coverage is negative toward Tesla, but yes, I do agree it has been slightly more positive the last couple of weeks, with Model 3 sales growing rapidly and taking sales away from other auto manufacturers. But Tesla is far from out of the woods in regards to winning the media war. I think it's just a small respite right now.
 
But Tesla is far from out of the woods in regards to winning the media war.
Completely agree, but I think that barring any negative surprises the tide will continue to turn. A lot of people seem to be tired of the unfair negativity and are fighting back. I'm not against your idea of Tesla taking greater control of the narrative, just the scope.
 
So Popular Mechanics has a new article titled "In Defense of Elon Musk". I spent around half an hour reading it, and my perspective has changed on the significance of some of the more erratic parts of Elon's last few months. Even though I always supported him and understood he was under immense pressure from many sides, I was somewhat concerned about what was going on with the pedo tweets and the lack of a formalized approach to "funding secured". Honestly, the last few months now feel like a footnote in an incredible story that is about to unfold. Nothing more. Please read it if you have the time.

In Defense of Elon Musk

This is F@cking Fantastic! - everyone should read this.
 