
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

If macros are green in the morning, I'm hoping we open over 690 and hit 700 so I can sell some covered calls for end of September. I don't want anything hanging over into October's release of Q3 numbers.

Macros are green (Futures up substantially) just after 12 am ET:

Nasdaq 100 Sep 21 (NQ=F)
CME - CME Delayed Price. Currency in USD
15,145.00 +58.25 (+0.39%)
As of 12:07 AM EDT. Market open.
 
I plead and beg you all, NO MORE PICTURES of GLJ!
Co-plead.

Initially it was kind of fun for the sheer absurdity and the blatant lying behind a poker face. For many here, that emotion has evolved into repulsion, very negative territory. Bringing up a degenerate troll like him has zero or negative value. Likewise, I'm happier not reading anything about the toilet boy.

@Krugerrand said it so eloquently a while back:
... don't waste a second of your limited time on those whose purpose is to remain stubbornly ignorant, or to propagate misinformation, lie, or other evilness.
 
Optimus already has an official Twitter page (joined June 2021 - and several unofficial pages) and has begun recruiting for Tesla. Optimus and its NN are already working! ;)



This could be a fan account. A Twitter account handle can be changed anytime after setup, as long as the new handle is available. Theoretically, you could change your Twitter handle to @elonmusk if Elon accidentally deleted his account AND there is no hold placed on it by Twitter. We can hope, as Elon likes to delete things? j/k
 
More OT... the headline says it all so the paywall really isn't an issue. Legacy auto media are laundering GM's BEV disaster out of existence - again. Remember the 2017 Motor Trend Car Of The Year? 2017 North American Car Of The Year? The first affordable BEV? The first "to $35,000" ? Neither do they... any more. The competition is *still* coming
GM's major push into electrified vehicles begins this fall
 
Agree with those above - can we ban GLJ from this thread? There is an unhealthy fixation on him from Tesla bulls, which is exactly what he wants. Just ignore him. Don't engage him, and please don't share any of his ramblings on here, and he will quickly go the way of previous TSLAQ media darlings - which is flushed down the toilet upon which they reside.
 
Every time you post about this moron GJ, I guarantee he is getting a nickel deposited into his accounts by his employers.

Our nonstop fixation on him makes him yell "Mission accomplished!" every time he wakes up in the morning and visits this board or its equivalent. His employers are probably celebrating as well, looking at the traction and bandwidth they are occupying with the pittance they pay this guy. They are getting a tremendous return on their investment.

Can we create a GJ board for everyone who wants to post the daily blather from this self-satisfied nincompoop, who is bragging right now about how many people listen and react to him?

Agree with those above - can we ban GLJ from this thread? There is an unhealthy fixation on him from Tesla bulls, which is exactly what he wants. ...
Not exactly the purpose of this thread but good enough:
Help Fight the FUD
 
Re-post. Pre-market opens in 15 mins. If you really don't want to use another thread for discussing Dojo I suggest you at least throw a random $ symbol in every now and then...

Other threads:
Investor Engineering Discussions
Off topic galore
 
I honestly think it was Grimes in the outfit and the Spec is based on her
More OT... the headline says it all so the paywall really isn't an issue. Legacy auto media are laundering GM's BEV disaster out of existence - again. ...
GM's major push into electrified vehicles begins this fall
It seems the press have forgotten the EV1 and Bolt disasters already, and that GM have yet to prove they can make a safe reliable EV.
 
Guess I didn't catch this the first time. Here Karpathy says at 22 min that cars could collaborate with other cars and effectively map parts of the road:

So I guess what he is saying is that maybe you can store the feature vectors of the spatially aware recurrent neural network, and then when you go for a drive you can download these feature vectors over the air or before the drive. Basically a "map", an "HD map", but not a normal map: a map for the neural network, a collection of memories from driving through the road. Whatever is useful to get a lower loss at training. Maybe you can just store snapshots of your own car's "memories" if you drive the same road often.

Not exactly sure how this works; can the neural network read outside of its write area? I guess the vector is not that large compared to the raw input from 8 cameras.

Anyway pretty interesting stuff, if anyone has some relevant papers please share them!

And would be fun to know how many kB/km the feature vectors are.
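
Just to make the guess concrete, here is a toy sketch of what caching those RNN "memories" by location could look like. Everything below (feature size, tile size, the blending rule, all names) is my own assumption for illustration, not anything Tesla has described:

```python
# Hypothetical sketch of a "neural map": caching spatial-RNN features by
# location and reusing them on a later drive. All shapes, names and the
# blending rule are assumptions, purely for illustration.
import numpy as np

FEATURE_DIM = 256          # assumed size of one grid cell's hidden state
TILE_METERS = 10.0         # assumed spatial resolution of the cached map

neural_map = {}            # (tile_x, tile_y) -> np.ndarray[FEATURE_DIM]

def tile_key(x_m: float, y_m: float) -> tuple[int, int]:
    """Quantize a position (in meters, some local frame) to a map tile."""
    return (int(x_m // TILE_METERS), int(y_m // TILE_METERS))

def store_memory(x_m: float, y_m: float, hidden: np.ndarray) -> None:
    """Save (or running-average) the RNN hidden state seen at this tile."""
    key = tile_key(x_m, y_m)
    if key in neural_map:
        neural_map[key] = 0.9 * neural_map[key] + 0.1 * hidden
    else:
        neural_map[key] = hidden.copy()

def recall_memory(x_m: float, y_m: float, hidden: np.ndarray) -> np.ndarray:
    """Blend a previously stored feature into the current hidden state."""
    cached = neural_map.get(tile_key(x_m, y_m))
    if cached is None:
        return hidden
    return 0.5 * hidden + 0.5 * cached   # arbitrary blending rule

# Toy usage: "drive" past the same spot twice.
h = np.random.randn(FEATURE_DIM).astype(np.float32)
store_memory(123.0, 45.0, h)
h2 = recall_memory(123.4, 45.2, np.zeros(FEATURE_DIM, dtype=np.float32))
print(h2[:4])
```

With something like this the "map" is just a lookup of previously seen hidden states; the open question is how aggressively you could compress them.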
 
Maybe you can just store snapshots of your own car’s ”memories” if you drive the same road often.

No, a single car does not and will not learn by itself.
The NN in learning mode (on Dojo) can and will 'combine knowledge' about a certain part of the road if it is seen from different cars driving through it, given it has enough internal variables (memory) to not lose the details.

And would be fun to know how many kB/km the feature vectors are.

I haven't googled it ... are there any approximations of NN memory capacity in general, or of how its memory space grows with the number of internal variables?
I expect it couldn't be described as an exact number of X bytes, only vaguely? And does the memory capacity grow exponentially, i.e. does doubling the variable count quadruple the memory capacity?
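
I don't know of a clean formula for NN memory capacity either, but for the kB/km part of the question a quick back-of-envelope helps; every number below is a guess, purely to show how the estimate works:

```python
# Back-of-envelope estimate of cached-feature size per km of road.
# Every number below is an assumption, purely for illustration.
feature_dim = 256            # floats per stored grid cell (guess)
bytes_per_value = 2          # float16
cell_spacing_m = 10          # one stored cell every 10 m of road (guess)

cells_per_km = 1000 / cell_spacing_m
bytes_per_km = cells_per_km * feature_dim * bytes_per_value
print(f"{bytes_per_km / 1024:.0f} kB/km")   # -> ~50 kB/km with these guesses
```

With these guesses you land around 50 kB/km; different feature sizes or cell spacings move the number up or down linearly.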
 
Elon has just replied to Lex Fridman tweeting about his thoughts on AI Day saying "Summarized well". Here is the text of Lex's AI Day reaction video:

Lex Fridman

Tesla AI Day presented the most amazing real-world AI and engineering effort I have ever seen in my life. I wrote this, and I meant it.

Why was it amazing to me? No, not primarily because of the Tesla Bot.

It was amazing because:

  • I believe the autonomous driving task, and the general, real-world robotics perception and planning task, is a lot harder than people generally think, and
  • I also believed that the scale of effort in algorithms, data, annotation, simulation, inference compute and training compute required to solve these problems was something no one would be able to do in the near-term.
  • Yesterday was the first time I saw in one place just the kind and the scale of effort that has a chance to solve this: the autonomous driving problem, and the general, real-world robotics perception and planning problem.
  • This includes:
  • The neural network architecture and pipeline,
  • The autopilot compute hardware in the car,
  • Dojo compute hardware for training,
  • The data, and the annotation,
  • The simulation for rare edge-cases, and, yes,
  • The generalised application of all of the above, beyond the car robot, to the humanoid form.
Let’s go through the big innovations:

The neural network:

  • Each of these is a difficult, and I would say brilliant design idea, that is either a step- or a leap-forward from the state of the art in machine learning.
  • First is to predict in vector-space, not in image-space. This alone is a big leap beyond what is usually done in computer vision, that usually operates in the image-space, in the 2-dimensional image.
  • The thing about reality is that it happens out there in the 3-dimensional world, and it doesn’t make sense to be doing all the machine learning on the 2-d projections of it on to images. Like many good ideas, this is an obvious one, but a very difficult one.
  • Second is a fusion of camera sensor data before the detections (the detections performed by the different heads of the multi-task neural network). For now the fusion is at the multi-scale feature level.
  • Again, in retrospect, an obvious but a very difficult engineering step, of doing the detection and the machine learning on all of the sensors combined, as opposed to doing them individually and combining all the decisions.
  • Third is using video context to model not just vector-space, but time. At each frame it concatenates positional encodings, multi-cam features, and ego kinematics, using a pretty cool spatial recurrent neural network architecture that forms a 2-d grid around the car, where each cell of the grid is an RNN (recurrent neural network); see the sketch after this list.
  • The other cool aspect of this is that you can then build a map in the space of RNN features, and then do planning in that space, which is a fascinating concept.
  • Andrej Karpathy, I think, also mentioned some future improvements, performing the fusion earlier in the neural network. Currently the fusion of space and time are late in the network. Moving the fusion earlier on takes us further toward full, end-to-end driving with multiple modalities, seamlessly fusing – integrating – the multiple sources of sensory data.
  • Finally, the place where there is currently – from my understanding – the least amount of utilisation of neural networks is planning. Obviously, optimal planning in action space is intractable, so you have to come up with a bunch of heuristics. You can do those manually, or you can do those through learning. So the idea that was presented was to use neural networks as heuristics, in a similar way that neural networks were used as heuristics in the Monte Carlo tree search for MuZero and AlphaZero to play different games, to play Go, to play chess. This allows you to significantly improve on the search through action space, for a plan that doesn’t get stuck in local optima and gets pretty close to the global optimum.
  • I really appreciated that the presentation didn’t dumb anything down, but maybe in all the technical details it was easy to miss just how much brilliant innovation there was here.
  • The move to predicting in vector-space is truly brilliant. Of course you can only do that if you have the data, and you have the annotation for it, but just to take that step is already taking a step outside the box of the way things are currently done in computer vision. Then fusing seamlessly across many camera sensors. Incorporating time into the whole thing in a way that’s differentiable with these spatial RNNs. And then of course using that beautiful mess of features, both on the individual image side, and the RNN side, to make plans, using neural network architecture as a heuristic, I mean all of that is just brilliant.
  • The other critical part of making all of this work is the data and the data annotation.
  • First, is the manual labelling. So to make the neural networks that predict in vector space work, you have to label in vector-space. So you have to create in-house tools, and as it turned out, Tesla hired an in-house team of annotators to use those tools, to then perform the labelling in vector-space, and then project it out into the image-space. First of all, that saves a lot of work, then second of all, that means you’re directly performing the annotation in the space in which you are doing the prediction.
  • Obviously, as was always the case, as is the case with self-supervised learning, auto-labelling is the key to this whole thing. One of the interesting things that was presented was the use of clips of data: that includes video, IMU, GPS, odometry and so on, for multiple vehicles in the same location and time, to generate labels of both the static world and the moving objects and their kinematics. That’s really cool: you have these little clips, these buckets of data from different vehicles, and they’re kind of annotating each other. You’re registering them together to then combine a solid annotation of that particular part of road at a particular time. That’s amazing because the more the fleet grows, the stronger that kind of auto-labelling becomes, and the more edge-cases you are able to catch that way.
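
A minimal sketch (my own illustration, not from the presentation) of that 2-D grid of RNN cells around the car referenced above; the grid size, feature dimensions, choice of a GRU cell, and the masked update rule are all assumptions:

```python
# Minimal sketch of a "spatial RNN": a 2-D grid of hidden states around the
# car, updated by a shared recurrent cell only where the car currently has
# visibility. Sizes and the update rule are assumptions for illustration.
import torch
import torch.nn as nn

GRID = 32            # 32 x 32 cells around the ego vehicle (assumed)
FEAT = 64            # per-cell feature size from the camera net (assumed)
HID = 64             # per-cell hidden state size (assumed)

cell = nn.GRUCell(input_size=FEAT, hidden_size=HID)   # shared across all cells
hidden = torch.zeros(GRID * GRID, HID)                 # the grid "map memory"

def step(camera_features: torch.Tensor, visible: torch.Tensor) -> torch.Tensor:
    """One time step: update only the grid cells the cameras can see.

    camera_features: (GRID*GRID, FEAT) multi-cam features projected to the grid
    visible:         (GRID*GRID,) boolean mask of currently observable cells
    """
    global hidden
    updated = cell(camera_features, hidden)
    hidden = torch.where(visible.unsqueeze(1), updated, hidden)
    return hidden

# Toy usage: two steps with random "camera" features and a random visibility mask.
for _ in range(2):
    feats = torch.randn(GRID * GRID, FEAT)
    mask = torch.rand(GRID * GRID) > 0.5
    out = step(feats, mask)
print(out.shape)   # torch.Size([1024, 64])
```

The per-cell hidden states in `hidden` are exactly the kind of features the earlier posts speculate could be stored and shared as a "neural map".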
Speaking of edge-cases, that’s what Tesla is using simulation for: to simulate rare edge-cases that are not going to appear often in the data, even when that data set grows incredibly large.

And also, they are using it for annotation of ultra-complex scenes where accurate labelling of real-world data is basically impossible, like a scene with a hundred pedestrians, which is I think the example they used. So I honestly think the innovations on the neural network architecture and the data annotation is really just a big leap.

Then there’s the continued innovation on the autopilot computer side.

  • The neural network compiler that optimises latency, and so on.
  • There’s, uh, I think I remember really nice testing and debugging tools for variants of candidate trained neural networks to be deployed in the future, where you can compare different neural networks together. That’s almost like developer tools for to-be-deployed neural networks.
  • And it was mentioned that almost ten thousand GPUs are currently being used to continually retrain the network. I forget what the number was but I think every week or every two weeks the network is fully retrained, end-to-end.
The other really big innovation – though unlike the neural network and the data annotation, this one is in the future, still to be deployed, still under development – is the Dojo computer, which is used for training.

  • So the Autopilot computer is the computer on the car that is doing the inference, and the Dojo computer is the thing that you would have in the data centre, that performs the training of the neural network.
  • There’s a – what they’re calling a single training tile – that is nine petaflops (laughing). It’s made up of D1 chips that are built in-house by Tesla. Each chip with super-fast I/O, each tile also with super-fast I/O, so you can basically connect an arbitrary number of these together, each with a power supply and cooling.
  • And then I think they connected a million nodes, to have a compute centre. I forget what the name is, but it’s 1.1 exaflops (a quick arithmetic check on these figures follows below). So combined with the fact that this can arbitrarily scale, this is basically contending to be the world’s most powerful neural network computer.
  • Again, the entire picture that was presented on AI Day was amazing, because the – what would you call it? – the Tesla AI Machine can improve arbitrarily through the iterative data engine process of auto-labelling plus manual labelling of edge-cases – so the labelling stage, plus data collection, re-training, deploying. And again you go back to the data collection, the labelling, re-training, deploying. And you can go through this loop as many times as you want to arbitrarily improve the performance of the network.
I still think nobody knows how difficult the autonomous driving problem is, but I also think this loop does not have a ceiling. I still think there’s a big place for driver sensing, I still think you have to solve the human-robot interaction problem to make the experience more pleasant, but dammit (laughing) this loop of manual and auto-labelling that leads to re-training, that leads to deployment, that goes back to the data collection and the auto-labelling and the manual labelling is incredible.
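
As promised above, a quick arithmetic check on the Dojo figures. The per-tile and cluster numbers are the ones quoted in the transcript; the chips-per-tile and nodes-per-chip values are from my memory of the AI Day presentation and should be treated as assumptions:

```python
# Arithmetic on the Dojo figures quoted above.
tile_pflops = 9.0          # one training tile, as quoted
cluster_eflops = 1.1       # the full training cluster, as quoted

tiles = cluster_eflops * 1000 / tile_pflops
print(f"~{tiles:.0f} training tiles")                 # ~122 tiles

# The next two values are assumptions from memory of the presentation:
chips_per_tile = 25        # D1 chips per training tile
nodes_per_chip = 354       # training nodes per D1 chip
nodes_millions = tiles * chips_per_tile * nodes_per_chip / 1e6
print(f"~{nodes_millions:.2f} million nodes")         # ~1.08 M, i.e. "a million nodes"
```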

  • Second reason this whole effort is amazing is that Dojo can essentially become an AI training as a service, directly taking on Amazon Web Services and Google Cloud. There’s no reason it needs to be utilised specifically for the Autopilot computer. The simplicity (laughing) of the way they described the deployment of PyTorch across these nodes – you could basically use it for any kind of machine learning problem. Especially one that requires scale.
  • Finally the third reason all of this was amazing is that the neural network architecture and data engine pipeline is applicable to much more than just roads and driving. It can be used in the home, in the factory, and by robots of basically any form, as long as it has cameras and actuators, including, yes, the humanoid form.
As someone who loves robotics, the presentation of a humanoid Tesla Bot was truly exciting. Of course, for me personally, the lifelong dream has been to build the mind, the robot, that becomes a friend and companion to humans, not just a servant that performs boring and dangerous tasks. But to me these two problems should, and I think, will be solved in parallel.

The Tesla Bot, if successful, just might solve the latter problem, of perception and movement and object manipulation. And I hope to play a small part in solving the former problem, of human-robot interaction, and yes, friendship. I’m not going to mention love when talking about robots. Either way, all this to me paints an exciting future. Thanks for watching. Hope to see you next time.
 
Guess I didn't catch this the first time. Here Karpathy says at 22 min that cars could collaborate with other cars and effectively map parts of the road: ...

And would be fun to know how many kB/km the feature vectors are.
The car has normal maps for navigation; I wonder if the vector maps can somehow be embedded in those regular maps, and if map differences or newly mapped areas detected by cars can be uploaded to the mothership.

If map updates or partial map updates can be pushed to the fleet independent of software updates, Tesla can improve FSD just by updating maps.

The car already predicts the future path of the road; if the map agrees with the path predicted by the NN, FSD can be more confident.

The predicted path of the road is likely a vector, and the path on the map is a vector, so comparing them shouldn't be difficult.

Of course I'm merely guessing here.
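
For what it's worth, comparing two path vectors really is cheap. A toy sketch, with both paths as polylines of (x, y) points in the car's frame; the representation and the metric are my guesses, not how Tesla does it:

```python
# Toy sketch of comparing a NN-predicted road path with a map path, both as
# polylines of (x, y) points in the car's frame. Purely illustrative.
import numpy as np

def path_agreement(pred: np.ndarray, map_path: np.ndarray) -> float:
    """Mean distance (m) from each predicted point to its nearest map point."""
    dists = np.linalg.norm(pred[:, None, :] - map_path[None, :, :], axis=-1)
    return float(dists.min(axis=1).mean())

predicted = np.array([[0, 0], [10, 0.2], [20, 0.8], [30, 2.0]], dtype=float)
from_map  = np.array([[0, 0], [10, 0.0], [20, 1.0], [30, 2.5]], dtype=float)

print(f"mean deviation: {path_agreement(predicted, from_map):.2f} m")
# A small deviation could be used to boost confidence in the predicted path.
```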
 
Guess I didn't catch this the first time. Here Karpathy says at 22 min that cars could collaborate with other cars and effectively map parts of the road: ...

And would be fun to know how many kB/km the feature vectors are.

What you describe is pretty similar to the kind of HD map that Waymo and other approaches to autonomy rely on so heavily. When perception (i.e. vision) is good enough to recognize the surroundings in real time, such a map is no longer needed. But just as with human drivers, it may be helpful to know the road layout beyond the visible horizon for a smoother ride.

Unlike the standard lidar + HD-map solution that relies on cm-precision geometry, I think it would mainly be helpful to keep the lane geometry and topology, plus partially occluded features or other hints that would also help a human drive better on subsequent encounters with the same road situation (e.g. avoid the pothole in the right lane at the end of the curve).

This would not need much more storage than a "regular" map as used for route calculation and guidance. Actually, it would be nice if Tesla updated the underlying OSM map for speed limits and other attributes.
And, as I see @MC3OZ also just posted above, the information could be merged with or linked to the standard map for an even more efficient representation of just the FSD attribution on top.
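
To illustrate how little extra storage that kind of "FSD attribution on top" might need, here's a hypothetical sketch of compact annotations keyed to an existing map segment (e.g. an OSM way id); all field names and values are invented:

```python
# Hypothetical sketch of FSD-relevant attributes layered on a standard map
# segment. Field names and contents are invented, purely for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SegmentAnnotation:
    segment_id: int                     # id of the underlying map segment (e.g. OSM way)
    lane_count: int                     # observed lane topology
    lane_widths_m: list[float] = field(default_factory=list)
    speed_limit_kph: Optional[float] = None
    hazards: list[str] = field(default_factory=list)

ann = SegmentAnnotation(
    segment_id=123456789,
    lane_count=2,
    lane_widths_m=[3.5, 3.5],
    speed_limit_kph=80,
    hazards=["pothole, right lane, end of curve"],
)
print(ann)
```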
 

Wowsers! First-Order to First-Delivery in 46 days!

Tesla set to open orders for MIC Model Y in Germany tomorrow, delivery in Q3 (Jul 8, 2021)

Keep in mind, at a cruise speed of 17 knots, 30 of those days would be spent at sea. And that doesn't include loading in Shanghai, transit time in the Suez canal, and unloading in Zeebrugge.

That's not a "logistics nightmare" like one analyst said last week; that's a logistics miracle!

Unmöglich! ("Impossible!") :O
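
Rough sanity check on the 30-days-at-sea figure; the 17 knots is from the post above, while the route distance is my own rough approximation:

```python
# Rough check of the "30 days at sea" figure. The route distance is an
# approximation of my own, not from the post.
cruise_knots = 17                      # from the post
nm_per_day = cruise_knots * 24         # nautical miles covered per day
route_nm = 11_000                      # very rough Shanghai -> Zeebrugge via Suez

print(nm_per_day)                      # 408 nm/day
print(route_nm / nm_per_day)           # ~27 days, in the ballpark of 30
```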
 
Unbelievable: a totally non-negative article on Dojo in Germany at Heise.de (one of the oldest and biggest German IT / tech magazines), which is generally very anti-Tesla when it comes to EVs.
In fact, Tesla's automotive business is not mentioned in the article at all; only the last paragraph makes a quick reference stating that Dojo will, for example, be used to train Tesla's own "driving assistant".

Too good to be true....? Or bullish AF?