
TSLA Market Action: 2018 Investor Roundtable

I can't say too much about their architecture.

At least on what I am working on, the general CPU is an ARM (I won't disclose which). A GPU is still needed to process the raw images from the cameras and the imaging radar, so I don't buy Elon's comment about a drop-in ARM replacing the GPU; they serve different purposes.

The GPU is on a 12/14 nm process and hasn't moved down to 7 nm yet. With GlobalFoundries bailing out just a couple of days ago, GPU volume won't be attractive to the likes of TSMC and Samsung, so that move isn't happening anytime soon.

The AP 2.0 box uses a modified version of the Drive PX2 platform (2 Parker SoCs and 1 Pascal GPU), which runs between 8 and 20 TOPS. The Parkers run around 1.5 watts each; the GPU is what consumes most of the power. Tesla's box only uses one GPU instead of two, so it's probably in the range of 100 W.

The Nvidia Pegasus platform, the first-gen true Level 5 platform (at least that's what Nvidia believes), runs at 320 TOPS with a total thermal power of 500 watts. That's close to 5 times more; 10 times may have been an exaggeration on my part, but the problem is significantly different. :) Since no one really knows what urban driving will require, and to avoid running Pegasus at full bandwidth, a version of PX2 may be retained to perform Level 2 and below functions.

From the proposal I have seen, the power needed is close to 750 watts.

But then again, Elon probably knows better than everyone :). Maybe he can do it with less than 200 watts on AP 3.0. You never know.
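
To put my numbers in one place, here's a quick back-of-the-envelope comparison (every figure is an estimate from this post, not a datasheet value):

```python
# Rough performance-per-watt comparison of the figures quoted above.
# All numbers are my own estimates from this post, not official specs.
platforms = {
    "PX2 (Tesla single-GPU variant)": (20, 100),   # TOPS, watts
    "Pegasus":                        (320, 500),
    "Proposal I have seen":           (None, 750),
    "Elon's AP3 target (guess)":      (None, 200),
}

for name, (tops, watts) in platforms.items():
    eff = f"{tops / watts:.2f} TOPS/W" if tops else "TOPS unknown"
    print(f"{name}: {watts} W, {eff}")
# Pegasus at 500 W is 5x the PX2's ~100 W -- hence "close to 5 times more".
```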

That's pretty different from what I understood.

If I recall correctly, the drop-in replacement for the ARM CPU is an upgrade for the infotainment system. That makes sense, as the CPU handles OS-type operations. The new in-house design was meant as a replacement for the GPU, and I do not expect it to come online soon. I still need to look up when the design started, but the shortest turnaround from design to tapeout should be 1.5 years. I give it 2 years, and they can use a shuttle service to test the interim chips.

If the GPU design is replaced with an ASIC instead, it could be taped out at 7 nm and potentially fit into the thermal envelope.

Now all this tech talk has got me curious; I'm going to reread past Elon quotes to see which chips they are designing replacements for, and to look at the Pegasus platform.

Anyone got a pic of the PCB from teardowns?
 
It really depends on what you are trying to solve. If Elon thinks he can solve L4/L5 without a GPU, all power to him. It's easy to get 10x performance out of an ARM through a lithography change without impacting power consumption that much.

I think this is fundamentally the difference between what Tesla thinks vs. the rest of the industry. So the likes of Waymo, Zoox, Cruise, Aptiv, ... are all wrong and Tesla alone is righteous. You never know.
See my update in the post you responded to... Actually, here it is:


Peter Bannon - Tesla, Inc.


Sure. So, like two years ago when I joined Tesla, we did a survey of all of the solutions that were out there for running neural networks, including GPUs. We went and talked to other people like at ARM that were building embedded solutions for running neural networks. And pretty much everywhere we looked, if somebody had a hammer, whether it was a CPU or a GPU or whatever, they were adding something to accelerate neural networks. But nobody was doing a bottoms-up design from scratch, which is what we elected to do.

We had the benefit of having the insight into seeing what Tesla's neural networks looked like back then and having projections of what they would look like into the future, and we were able to leverage all of that knowledge and our willingness to totally commit to that style of computing to produce a design that's dramatically more efficient and has dramatically more performance than what you can buy today.

Elon Reeve Musk - Tesla, Inc.

Cool. Thanks. Yeah, I mean, essentially the key is to be able to run the neural net at a fundamental, at a bare metal level so that it's especially doing the calculations in the circuits itself and not in some sort of emulation mode which is how a GPU or a CPU would operate. So, you want to do basically a massive amount of localized matrix multiplication with the memory right there.
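
To unpack what "a massive amount of localized matrix multiplication" buys you: a convolution layer can be lowered to one big matrix multiply (the classic im2col trick), so an accelerator that just streams multiply-accumulates with the weights parked in adjacent memory covers the bulk of the inference workload. A toy numpy illustration (my own sketch, not Tesla's actual design):

```python
import numpy as np

# Toy "im2col": lower a 2D convolution to a single matrix multiplication --
# the kind of dense multiply-accumulate work Musk is describing the
# accelerator doing directly in silicon, with weights in nearby memory.
def conv2d_as_matmul(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    # Gather every kh x kw patch into one row of a big matrix.
    patches = np.stack([image[i:i + kh, j:j + kw].ravel()
                        for i in range(oh) for j in range(ow)])
    # One matrix-vector product performs all of the convolution's MACs.
    return (patches @ kernel.ravel()).reshape(oh, ow)

image = np.random.rand(8, 8).astype(np.float32)
kernel = np.random.rand(3, 3).astype(np.float32)
direct = np.array([[(image[i:i + 3, j:j + 3] * kernel).sum()
                    for j in range(6)] for i in range(6)])
assert np.allclose(conv2d_as_matmul(image, kernel), direct)
```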
 
BTW, this is the guy leading the team:

Peter Bannon - Tesla, Inc.

Let's see. I started designing computers at Digital Equipment Corporation in 1984, back when they were refrigerator-sized, and I've been working on smaller and smaller designs ever since. I was an Intel Fellow working on a team for a little while, then I was VP of Architecture and Verification at PA Semi, which was acquired by Apple.

I led the design of the first ARM 32-bit processor that went into the iPhone 5. I built the team that designed the first ARM 64-bit processor in the world, which went into the iPhone 5S. And then I worked on performance modeling and performance improvements at Apple for eight years. And then, two years ago, I came to Tesla and designed the neural network accelerator that's part of Hardware 3 and helped architect the rest of the Hardware 3 solution that will be in the car next year.
 
~from the office of LORD VETINARI

A new poster created quite a splash in today's comments. Most responses to them were negative; some long-term and valued members responded by allowing that they had learned from them.

Regardless:
1. This was a NEW poster.
2. Every single one of his posts had absolutely nothing to do with the action of the stock market and how it affects Tesla; rather, they could arguably have fit either well or extremely well in other, very conspicuous, TMC forums.

This poster, and anyone else who so blatantly sticks twigs into hornets' nests, will be treated as this poster has been: you will not be hearing from him for some time. What he posts OUTSIDE of the Investors' Forums is quite a different matter. All others may learn from this: it is very, very good for you to depart from this sector for a while and learn what is discussed on other TMC sites. It is not that you might learn from this; rather, you will.

~Vetinari
 
We are talking about a minimum factor of 10 in power consumption and cooling required.



What are you talking about? This is probably the most absurd thing I have heard.

Do you even realize the amount of data bandwidth that is needed? Embedded memory isn't cheap; it's certainly more expensive than external memory.



I've got to laugh at this one too.

Just because Nvidia uses the term GPU doesn't mean it's exclusive.

Do you even understand what a custom ASIC is? Nothing is built from scratch anymore. Tesla will most likely use a version of an ARM core for the main CPU, and probably some sort of licensed GPU as well.

Even Apple's own processors are ARM-based. Those are not free.



Yes, I do understand how ARM licensing works :)



Because it is for rear-end collisions. The idea was to detect fast-moving objects approaching from the rear, so the occupant can be actively protected instead of just reacting to the crash.




So is a windshield wiper standard on the wing camera? Why is that BS?



Son, please go read up on what lidar does and why it is needed. You are embarrassing yourself.




Who knows? But I highly doubt bat




HD mapping has its purpose. It's actually not just for new areas. Then again, it's kind of pointless explaining this to you.
Driver attention monitoring is to make sure the driver is not asleep at the wheel and can take over control within seconds. It's a safety net. The intention of AD is to assist the driver and prevent the driver from making mistakes, but it was never intended to take the driver completely out of the equation. Maybe one day, when the entire vehicle population is self-driving. Not right now.



Let me put in perspective the amount of effort that goes into developing such a system. Waymo to date has already spent over $2 billion just to get where it is today. It still hasn't miniaturized all of the components and still uses expensive mechanical lidar.

It still has a massive team working on both HW and SW.



I work in this field. You have zero clue about AD.

Interesting. I also work in this field (specifically, inference-time execution of neural networks on custom hardware). I'm a little confused by your suggestions that:

A: power requirements would increase from moving to this chip
and
B: this chip should be called a GPU

In my experience, neither is true. Typically, these implementations operate in a quantized domain and use hardware convolution blocks (which are actually much slower at, or entirely incapable of, the large fully connected layers GPUs are so good at). When it comes to convolution operations, they're far and away faster than a GPU implementation while also being far less power-hungry (GPUs are typically used at training time, mainly because floating-point operations are required for gradient calculations during backprop).
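
To make "quantized domain" concrete, here's a minimal sketch of what such an int8 pipeline does (illustrative only; a real convolution block does this in fixed-function silicon): weights and activations get mapped to int8, the multiply-accumulates run in integer arithmetic with a wide int32 accumulator, and a single rescale brings the result back to real units.

```python
import numpy as np

def quantize(x, scale):
    """Symmetric int8 quantization: map floats onto [-127, 127]."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# Float weights and activations, as they come out of training.
rng = np.random.default_rng(42)
w = rng.standard_normal(64).astype(np.float32)
a = rng.standard_normal(64).astype(np.float32)
sw, sa = np.abs(w).max() / 127, np.abs(a).max() / 127

# Integer multiply-accumulates with a wide int32 accumulator -- the core of
# a hardware convolution block -- then a single rescale back to float.
acc = np.dot(quantize(w, sw).astype(np.int32), quantize(a, sa).astype(np.int32))
approx = acc * (sw * sa)

print(np.dot(w, a), approx)  # agree closely; int8 MACs cost far less energy
```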

EDIT: I saw the mod note after writing this. Should I remove it? I felt like there were lots of replies from people who didn’t have as much knowledge about such things and that it might be useful in that regard, but if not, I’m happy to delete it.
 
Let Elon be Elon.

It's pretty clear that the pedo guy is in fact a pedo guy. As someone earlier pointed out, said guy has pictures of clearly underage girls plastered on his Facebook, some of them on beds.

To my understanding, this is hearsay. It didn't come out in the first round, and I'm sure millions of Elon fans have searched every piece of public information available about this guy. Only one Instagram picture, with his girlfriend who looks something like 30 years old, was found. So please don't recirculate this without even remotely believable evidence.
 
Ok trying to be tolerant here :oops:

Ever since Google announced their TPU, it seems to me the cool kids in the Valley have all begun to think about specialized AI chips. That includes Microsoft, Apple, Facebook, Amazon... I think most of them are already at 3-4 iterations. Of course, the problem with a custom chip is the non-standard instruction set and the optimizing compilers; you have to develop all of that, or you can get around it by optimizing for one very specialized application and forgetting about an instruction-level interface and optimization.

When Tesla announced their own chip I was not surprised at all. Several of my friends from Apple's hardware team moved to Tesla in the last couple of years.

So when the new poster hinted that the GPU is the gold standard, that everybody uses them and will continue to for the foreseeable future, I was really, really surprised, because I know for a fact that Apple already has its own chip specialized for convolutional-network forward propagation. Why wouldn't they use that in their own self-driving car instead of buying chips from Nvidia? I wonder what I missed.
 

OK, thanks for that Bannon/Musk quote. It clarifies a few things. It is indeed a custom ASIC, so it will definitely fit into the current thermal envelope and also won't need any licensing. The GPU will probably stay to do some general pixel-by-pixel preprocessing of the images (which GPUs are really good at), but it can also be run at lower power. In this case the mod was right to ban KK.

He was bashing Tesla's architecture without actually understanding it, assuming Tesla uses the same architecture as his own autonomous driving team.
 
I have to ask again: what happened to this great forum and superb thread?

Please keep this thread for "market action" discussions. This used to be the most important investment thread on this forum. It now looks like a chat box and fight club.

- General discussions should go in the "general" thread. If posted here, they should be moved there.
- Safety standards and other tech discussions that are about tech rather than investments do not belong in ANY investor thread.
- Shorts derailing the discussion should be warned only once, then removed and blocked (by IP address, preferably).
- Shorts with genuine questions should have a separate thread, not get answers in the two main investor roundtable threads.

Etc., etc.

Note that the moderators cannot do all of that. We all need to post things in the correct thread and not engage in discussions in the wrong thread. If not, we will lose more moderators.

And oh, we are way too nice to the shorts posting here; if that continues, we will see the Seeking Alpha comment-hell here once more commenters from there come over.

P.S. Moderators: people who post things like this: Elon Tweeting about Thailand and Reputation
should be removed from the forum forever, no matter their post history.
 
Why wouldn't they use that in their own self-driving car instead of buying chips from Nvidia? I wonder what I missed.

GPUs are just not designed for the kinds of computations that deep learning/AI commonly uses. You don't need 64-bit precision, or 32-bit, or even 16-bit for this kind of thing. That's why TPUs work with 8-bit precision and everyone is building their own specialized silicon that does exactly the work they want it to.
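
A quick toy demo of why 8 bits is plenty at inference time (my own example, with invented numbers): crush a layer's weights to 256 levels and the outputs, and usually the decision, barely move.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 512)).astype(np.float32)  # toy 10-class output layer
x = rng.standard_normal(512).astype(np.float32)

# Crush the weights to 8-bit precision: 256 levels across their range.
step = (W.max() - W.min()) / 255
W8 = np.round(W / step) * step

full, quant = W @ x, W8 @ x
print(np.argmax(full), np.argmax(quant))                # top class rarely changes
print(np.abs(full - quant).max() / np.abs(full).max())  # relative error ~1%
```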

GPUs have been the standard so far because if you are going to rely on a third-party solution like Nvidia's, just drop hardware into your automobile with a pre-packaged software library, and tell the car to go drive itself, a GPU is certainly easier to program for, and in the case of Nvidia they have developed all the software for you anyway. Most auto manufacturers aren't researching any kind of automation themselves; they are relying on Nvidia to supply their solution.

Tesla and GM/Cruise are the only automakers trying to create in-house solutions fitted to the cars they make themselves. Google/Waymo is approaching from another direction, developing an entire hardware and software suite in-house but not making any cars. It's not clear whether Google/Waymo's strategy will be to buy ordinary cars from manufacturers and then retrofit them with its hardware and software to make them autonomous, but if so, it will differ from both the "outsource to Nvidia" approach and the "develop the car and its hardware in-house" approach of Tesla and GM/Cruise.

So that's where we are, but this is just the beginning and no one knows where we will end up. No one has anything remotely close to a working self-driving car right now, so everything I just said is speculation.
 

Good and interesting post, but IMHO the wrong place; it's not directly related to market action.

I would suggest continuing that discussion in another / new thread.
(A respectful request, and IMHO not your mistake; very many people now use this thread for general discussions.)
 
Yes... maybe.

I don’t recall ever requiring Lidar to drive my car. AI is the key. Vision should be the only sense needed. Anything beyond that and AI is fluff.

Does AI have the reasoning skill of a human?
Will it any time in the near future?
No? Then it needs better senses to compensate for its failings.

That said, I don't think LIDAR is key. I fully agree with Elon - if you're going to have a transmit-and-receive system, why would you double up on the same spectrum as your cameras, and face that spectrum's same limitations? Radar is the way to go. Sure, the radar spectrum has its own weaknesses, but they're different weaknesses from the optical spectrum.

That said, I don't see them using radar to its full potential yet. As far as anyone can tell, they're just using it as an object finder / distance-to-object calculator, like LIDAR. But radar has the potential to be so much more than that. I wrote an article on M3OC a while back about this, but the short version: have you ever seen a satellite radar map (generally made using SAR, though any radar can do it, including small vehicle-borne phased arrays)?

[Image: Magellan radar map of Baltis Vallis, Venus]


That's Baltis Vallis (Venus), the longest riverbed in the solar system. Here you're not focused on the timing of the echoes, but rather on brightness (signal intensity). What determines the brightness? It's partly the material and the angle, but beyond that, you're looking at the roughness of the surface on the scale of the wavelength. Where you see white, those are rough areas; dark areas are flat. By changing the wavelength, you can probe the surface roughness at different scales - anything from texture the size of grains of sand or smaller, to texture the size of potholes or larger.

Think of what this means applied to a road. With some smart software, you should be able to discern potholes, ice, bad shoulders, debris, rocks, pavement changes, etc. before you get to them, and take corrective action in advance (maneuvering, reducing speed, etc.), even when visibility is bad or the changes are subtle. Every type of road defect should have its own characteristic radar reflection. Even if there's a defect in how the road lines are painted that might lead a less capable driver-assist system to drive off the road, the fact that you have a characteristic "pavement vs. not pavement" radar signature should let your software avoid that mistake.
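
If you wanted to prototype that idea, the heart of it is just a mapping from backscatter intensity at different wavelengths to a surface class. A deliberately naive sketch; every threshold and class name here is invented for illustration:

```python
# Toy classifier: map normalized radar backscatter (0..1) at two invented
# wavelengths to a road-surface guess. Thresholds are made up; a real system
# would learn them from labeled returns.
def classify_surface(short_wl, long_wl):
    if short_wl < 0.2 and long_wl < 0.2:
        return "smooth pavement"                 # dark at all scales: flat
    if short_wl > 0.6 and long_wl < 0.3:
        return "fine texture (gravel?)"          # rough only at small scales
    if long_wl > 0.6:
        return "coarse defect (pothole/debris?)"  # rough at large scales
    return "unknown -- slow down"

print(classify_surface(0.1, 0.1))  # smooth pavement
print(classify_surface(0.7, 0.8))  # coarse defect (pothole/debris?)
```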

I imagine the radar they're using currently is locked into a single band. But even that could potentially be useful information. And there's always potential for more capable radars in the future. This sort of "information from senses that we don't have" gives autonomous vehicles the potential in the future to compensate for their lack of human-level reasoning skills.

Another example of how "information beyond our senses" can mitigate problems is the case of flooding. You don't want your car stopping on the freeway due to a puddle, but you also don't want it driving into deep water and drowning your car (and potentially killing you). We humans assess water depth with a lot of complicated reasoning that just isn't going to be easy to teach an AI. But an AI can potentially get a leg up by other means: namely, historic driving data. If you build up a detailed profile of all the roads your fleet has driven on in the past - including heights - then you don't need to assess water depth, only height. Wherever the water reaches, you know how much the road drops down ahead of you (unless nobody's driven it before, in which case extra caution is due!), and thus how deep the water will get.
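
The nice thing about this trick is that it turns a hard perception problem (how deep is that water?) into simple arithmetic against a stored elevation profile. A sketch with invented numbers:

```python
# Estimate water depth ahead from fleet-built elevation data (invented numbers).
# Where the waterline starts, the water surface height equals the road height;
# everywhere past that, depth = waterline height - stored road height.
road_profile = {0: 10.0, 20: 9.8, 40: 9.1, 60: 8.4, 80: 9.0}  # distance (m) -> height (m)

def water_depth_ahead(waterline_at_m):
    surface = road_profile[waterline_at_m]  # water meets the road here
    return {d: max(0.0, surface - h)
            for d, h in road_profile.items() if d > waterline_at_m}

print(water_depth_ahead(20))  # roughly {40: 0.7, 60: 1.4, 80: 0.8} -- too deep, stop
```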

Current systems are a lot more primitive than this, and these sorts of changes take time. So while I'm not much of an AP/FSD optimist in the short term, I do think there's a lot of potential in the longer term.
 