
TSLA Market Action: 2018 Investor Roundtable

Status
Not open for further replies.
New computer may be on par cost-wise (or lower), so better GM (gross margin).

HW3 should be significantly cheaper yet more powerful: the current Nvidia GP102 based MCU should be in the $600-$700 range. At ~12 TFLOPS it should be roughly equivalent to the Nvidia Titan Xp (3,840 CUDA cores at ~1.6 GHz), whose retail price was $1,200 last year.

The Tesla AI chip with 5x-20x the performance should cost only a few dollars to make (!); the marginal cost of the replacement computer with RAM and everything should be less than $300 - possibly less than $200 in direct costs. (Plus a lot of R&D investment already spent, of course.)

I.e. an eventual margin improvement of $400-$500, or +1.1-1.4% on the $35k version - not too bad from a 5x-20x speedup of NN computing capacity.
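The margin arithmetic above can be checked with a quick back-of-envelope script (all figures are the post's own estimates, not confirmed BOM costs):

```python
# Margin impact of the estimated $400-$500 per-car savings from HW3,
# relative to the $35k base Model 3 price. All figures are estimates
# from the post above, not confirmed costs.
base_price = 35_000

for savings in (400, 500):
    gain_pct = savings / base_price * 100
    print(f"${savings} saved per car -> +{gain_pct:.1f}% gross margin")
```

This works out to roughly +1.1% to +1.4% of the base price per car.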

Tesla will also be able to use multiple AI chips in the future: for example, two discrete AI chips on a single board, each handling four cameras.

(Note how the processing workload of 8 cameras can be split into 2, 4 and 8 parts, allowing a lot of future hardware parallelism - I don't think the number of cameras was an accidental design choice...)
 
Should be much cheaper: the current NVidia GP102 based MCU should be in the $600-$700 range ...
I know sleep is for losers... but I have to ask... do you sleep?
 
Popular Mechanics has a new article titled "In Defense of Elon Musk". I spent around half an hour reading it, and my perspective has changed on the significance of some of the more erratic parts of Elon's last few months. Even though I always supported him and understood he was under immense pressure from many sides, I was somewhat concerned about what was going on with the pedo tweets and the lack of a formalized approach to "funding secured". Honestly, the last few months now feel like a footnote in an incredible story that is about to unfold. Nothing more. Please read it if you have the time.

In Defense of Elon Musk
 
From looking at the charts, it seems there is some correlation between the number of shares traded and increases in the stock price. As trading falls, so does the price. Am I imagining things, or is this real? If so, it would suggest to me a fairly consistent level of selling, with fluctuations in buying pressure.
Haha just discovered this page:

tesla bears club

There are some real losers out there.

That isn't the word I'd use:
Here’s a good strategy.
Buy October $50 puts.
As the expiry date gets close (and if RMS TSLA is still afloat) then buy November $50 puts.

Repeat as necessary.

It’s only a matter of time before it’s PAYDAY!

That's a strategy for lining the pockets of whoever is selling the puts. A classic case of being delusional.
 
HW3 should be significantly cheaper yet more powerful

They said on the last conference call,
Elon Musk said:
And it costs the same as our current hardware and we anticipate that this would have to be replaced, this replacement, which is why I made it easy to switch out the computer, and that's all that needs to be done.
 
What? I think having Tesla Semis with Tesla-covered car carriers hauling Tesla vehicles to new owners is a great idea.
(No reason to have 8 cars drive to a state when you can have one Semi do it.)
The SP should benefit if/when Tesla announces (wishful thinking) that, as cars roll off the line, they drive themselves to the brake-check and camera-alignment track, then proceed to the appropriate staging area to await transport. Or, failing the brake check, drive themselves back to be reworked. No human drivers involved. And if they really need to be like legacy automakers, they could even blow their own horns off the line. A perfect use case for FSD.
 
Popular Mechanics has a new article titled "In Defense of Elon Musk" ...

Removed "like" rating.
Added "love" rating.
 
What an awesome day. So glad I was holding through the correction.

Also, I want to say thanks again to everyone, whether you're from Europe or the States or anywhere else. Thank you for believing in Elon and Tesla, and thank you for sharing your thoughts on this beautiful, positive, critical-thinking thread, which always gives me confidence in this company and stock so I can be part of this great mission without freaking out every day.

See you tomorrow when we break through resistance at 280.
 
Fortunately Tesla sees the market for their products a bit less myopically. Have you seen their supercharger map lately? Not just for customers from San Jose.

True. I was just pointing out that even if it only serves customers within a 50-mile radius, that's still a boon. Add to that the fact that the "last mile" has appeared to be the toughest from a logistics standpoint, and you end up with a nice improvement in delivery capability. Ship a car to a hub destination and let it drive itself to the customer's house.
 
HW3 should be significantly cheaper yet more powerful: the current NVidia GP102 based MCU should be in the $600-$700 range ...

Is the computer being replaced with ARM, or X64? If ARM, is it Tesla grown or an existing solution?
 
They said on the last conference call,

Elon Musk said:
"And it costs the same as our current hardware and we anticipate that this would have to be replaced, this replacement, which is why I made it easy to switch out the computer, and that's all that needs to be done."​

I believe the 'cost' there refers to the cost to customers, i.e. no price hike necessary due to HW3.

It is extremely unlikely for an entirely new piece of hardware to have the same cost as the old hardware. The more probable explanation is that the new hardware is cheaper, so there's no cost/price increase to customers.

But that's just speculation; maybe there's some cost I'm overlooking:
  • For example, NVidia might have given Tesla a really sweet deal on the GP102 chips (Pascal micro-architecture based) as a bait-and-switch for the much more expensive Xavier based chips they are currently offering.
  • Also, I estimated the direct marginal manufacturing costs of the new MCU board, while Elon might have included the very significant R&D costs - it will be some time until the AI chip recoups the money invested into developing it.
  • Or the Tesla AI chip might be using some bleeding edge fab process that is significantly more expensive than the couple of dollars I estimated.
 
Sorry but can you explain this part for me? Thanks.

Yeah, so visual input processing is the most compute-intensive part of full self-driving. Tesla has 8 cameras, and if you want to process each at 100 fps (one frame every 10 milliseconds), at the native HD resolution of the cameras, that's a lot of processing.
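The scale of that workload can be sketched with quick arithmetic. The per-camera resolution below is an assumption (~1.2 MP, i.e. 1280x960); the post only says "HD":

```python
# Rough input-rate estimate for the vision workload described above.
# The 1280x960 per-camera resolution is an assumption, not a confirmed
# spec; the post only says the cameras are "HD".
cameras = 8
fps = 100                      # one frame every 10 ms, as above
width, height = 1280, 960

pixels_per_second = cameras * fps * width * height
print(f"{pixels_per_second / 1e9:.2f} gigapixels/s of raw input")
```

That's just under a gigapixel per second of raw input before any neural-network processing - at, say, 3 bytes per pixel, roughly 3 GB/s of camera data.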

Right now they process everything, all frames from all 8 cameras with a single discrete GPU I believe, on an Nvidia GP102 based board.

But now that they have their own discrete NN chip, the Tesla AI chip, in future iterations (HW4, HW5) they could use the following computer topology within the board, with very little additional cost (the AI chips probably cost only a few dollars to make each - most of the cost is in making the board):

Code:
   [AI Chip #1]           [AI Chip #2]
               \         /
                [GPU RAM]
               /         \
   [AI Chip #3]           [AI Chip #4]

I.e. four chips and, say, 16 GB of shared high-speed GPU RAM with multiple access channels, so that all chips can use the RAM all the time without slowing each other down.

(There's also the question of whether the Tesla AI chip uses separate RAM modules - a possible alternate design would be for the RAM to be integrated into the AI chip itself, as a sort of very fast transistor based SRAM. This would have a number of other advantages as well, such as close proximity of NN 'weight' data with the functional units representing 'neuron' nodes.)

But assuming that RAM is separate from the chip, the above board layout is a possible topology, where Chip 1 would handle cameras 1-2, Chip 2 would handle cameras 3-4, etc. While not all cameras have the same pixel count, the processing overhead is still similar and scales with the complexity of their neural networks.
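The camera-to-chip assignment described above can be sketched as a simple partition of the camera list. The camera names and the even round-robin split are illustrative assumptions, not Tesla's actual assignment:

```python
# Sketch of splitting 8 camera feeds evenly across 1, 2, 4, or 8 AI
# chips, as described above. Camera names are illustrative only.
cameras = ["main", "narrow", "wide", "left_repeater",
           "right_repeater", "left_pillar", "right_pillar", "rear"]

def assign(cameras, n_chips):
    """Split the camera list into n_chips equally sized contiguous groups."""
    assert len(cameras) % n_chips == 0, "camera count must divide evenly"
    per_chip = len(cameras) // n_chips
    return [cameras[i * per_chip:(i + 1) * per_chip] for i in range(n_chips)]

for n in (1, 2, 4, 8):
    groups = assign(cameras, n)
    print(f"{n} chip(s): {[len(g) for g in groups]} cameras each")
```

Note that this only works because 8 divides evenly by 2, 4, and 8 - which is the parallelism point made above.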

Note that this way the total computing throughput of the system can be increased by a factor of 2x, 4x and 8x with very little additional cost other than a higher power envelope.

I'm reasonably sure HW3 is going to feature one AI chip (they want to keep it simple initially, and it appears the chip is plenty fast already) - if it features two chips it will be for redundancy and fail-over perhaps, not to increase performance.

All of this is speculation though - I'm sure we'll hear more about the details once the HW3 release gets closer ...
 