
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Gotta love your carebear takes on everything.

How about the bull side? The Lathrop ramp is going better than expected, so Tesla is improving margins more quickly and can fulfill orders faster, hence the slightly quicker ship times.

Lol, since I bought LEAPs on the move down and am more leveraged than owning pure shares (for now), it wouldn't make sense for me to be a "carebear". I'm more of a careful bull now, to make sure, I dunno, I don't end up covering my eyes at information possibly signalling demand weakening, like in 2022 before the 75% stock price drop?

You are right, it might mean the Lathrop ramp is currently going better than expected - I definitely think that is part of it.

However, the earnings potential for 2024 was always assuming Lathrop was fully ramped to 40 GWh. People were assuming $500/kWh in revenue and gross margins of 50%, i.e. COGS of $250/kWh.

If you cut prices 22%, your revenue is now $390/kWh. With the same COGS, your gross margin is now 36%.

These are still excellent margins! They just kill the high end of the probability distribution of earnings outcomes. Tesla could still earn $4-5 billion on Megapacks in 2024, i.e. more than $1 per share as I was estimating - they just won't be earning anything crazy like $2 or $3 from it.

In fact, I am going to be honest: writing this out makes me a little more bullish on it than I first was - while this reduces the chances of very high profits, the signaling of likely increased production rates reduces the risk of very low profits as well.

Increasing confidence of getting around $1 per share from Energy in 2024 is a good thing.
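
For anyone who wants to sanity-check the arithmetic, here is the math above as a minimal Python sketch, using the figures assumed in this post (the share count is my own rough assumption):

```python
# Back-of-envelope Megapack economics using the assumptions in the post above.
GWH_TO_KWH = 1_000_000              # 1 GWh = 1,000,000 kWh

capacity_gwh = 40                   # Lathrop fully ramped (assumed)
old_price = 500                     # $/kWh revenue before the price cut
cogs = 250                          # $/kWh cost of goods sold (assumed)

new_price = old_price * (1 - 0.22)             # 22% cut -> $390/kWh
gross_margin = (new_price - cogs) / new_price  # ~36%

revenue = capacity_gwh * GWH_TO_KWH * new_price  # ~$15.6B
gross_profit = revenue * gross_margin            # ~$5.6B, before opex

# Share count is my assumption; net earnings sit below gross profit,
# consistent with the ~$1/share estimate above.
shares = 3.17e9
print(f"${new_price:.0f}/kWh, {gross_margin:.0%} gross margin, "
      f"${gross_profit / 1e9:.1f}B gross, ${gross_profit / shares:.2f}/share gross")
```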
 
Does V12's pure NN approach imply that it will be easier or harder for others to follow the same path?

Easier, but caveats…
They need the data.
They need the training compute.
There is still much code that’s used to select and curate the training data.
There’s still many iterations to go.

Tesla has a two-year lead?? (Guess) And importantly, the most cars that are ready, the most affordable cars, and the most capacity to make more.
 
Easier, but caveats…
They need the data.
They need the training compute.
There is still much code that’s used to select and curate the training data.
There’s still many iterations to go.

Tesla has a two-year lead?? (Guess) And importantly, the most cars that are ready, the most affordable cars, and the most capacity to make more.
My interpretation is that computation power is going to be the major factor in how quickly it progresses from here.

Tesla already has tons and tons of data, the fleet is growing in size dramatically every quarter, especially as Tesla increases quarterly deliveries, and the number of miles of FSD data has gone into an S-curve over the past 2 quarters.

I actually do not think there's much coding left. From what I gather, it's just instructional direction they need to give the neural nets. Much like an instructor showing good and bad examples to a student until the student makes the correct decision and assessment on their own. That's not really coding per se and wouldn't require nearly the same amount of software engineering time. I mean, Elon emphasized that repeatedly: they didn't code any of its actions, and in the case of properly addressing stop signs, they had to force-feed it the "correct" way to stop at a stop sign. But again, that's not coding. That's simply an instructor telling a student "here's examples of the proper way to do this".

Lastly, I could be way off here, but I don't think there will be "iterations" anymore after V12. Tesla will simply upload more training data into the brain behind the scenes, with FSD continually improving its awareness of situations and becoming "smarter" in your garage while you're sleeping as it downloads/updates with more/better data. Now I could see V12 still being a "beta" release. But I don't think there will be iterations of it. It'll just go from V12 beta to V12 release build and that will be that.
 
Easier, but caveats…
They need the data.
They need the training compute.
There is still much code that’s used to select and curate the training data.
There’s still many iterations to go.

Tesla has a two-year lead?? (Guess) And importantly, the most cars that are ready, the most affordable cars, and the most capacity to make more.
Only if they can convert image space to vector space, and then to a vector space the AI can learn from.

This part was the most difficult. Once it's done, then the world is yours for the taking.
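
To make the image-space-to-vector-space idea concrete, here is a toy PyTorch sketch of my own (not Tesla's actual architecture): a camera frame is encoded into an embedding vector that a downstream planner can learn from.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Toy encoder: maps a camera frame (image space) to an embedding (vector space)."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # collapse spatial dimensions
        )
        self.proj = nn.Linear(64, embed_dim)   # project to the final vector

    def forward(self, frame):                  # frame: (batch, 3, H, W)
        x = self.conv(frame).flatten(1)        # (batch, 64)
        return self.proj(x)                    # (batch, embed_dim)

frame = torch.rand(1, 3, 240, 320)             # one dummy camera frame
embedding = FrameEncoder()(frame)              # a vector the planner can consume
print(embedding.shape)                         # torch.Size([1, 256])
```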
 
Lastly, I could be way off here, but I don't think there will be "iterations" anymore after V12. Tesla will simply upload more training data into the brain behind the scenes, with FSD continually improving its awareness of situations and becoming "smarter" in your garage while you're sleeping as it downloads/updates with more/better data.

Yes, Elon says this explicitly around the 9:30 mark in the video posted by Herbert Ong. They don't change the program binary after retraining, they just upload a new set of 'weights' for the neural network. Results from retraining live in that matrix, not in the binary.
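
A minimal sketch of what that means in practice (illustrative PyTorch with a hypothetical filename, not Tesla's stack): retraining produces a new weight file, while the program that runs the network stays unchanged.

```python
import torch
import torch.nn as nn

# The "program" is a fixed architecture; retraining only changes the weights.
policy = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 3))

# After a retraining run in the datacenter, only a new weight file ships out:
torch.save(policy.state_dict(), "fsd_weights_v12_1.pt")   # hypothetical filename

# The car loads the new matrix of weights into the same unchanged binary:
policy.load_state_dict(torch.load("fsd_weights_v12_1.pt"))
```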

Cheers to the MATRIX!
 
Interesting live stream. Elon is giving lots of info about v12. Zero C++ code, all neural net, trained by just watching video. No labeling. No explicit designation of what a traffic light is, what a lane is, what a roundabout is, etc. It just learns what to do.
I wonder what they'll do with Buffalo. They were meeting the employment requirement with thousands of labelers.
 
With V12 being fully neural nets, computing power now becomes essential for an S-curve of improvement in FSD.

Luckily it sounds like Dojo has gone live and Tesla has placed a big order with Nvidia to help out.

While it's not exactly fun to think that they have to re-train for Hardware 4.0, the reality is that with Tesla's compute power, combined with how neural nets can learn exponentially faster than hard coding, we may see a neural-net re-write do in a matter of 3-6 months what took Tesla's coders/engineers 2 years. If that actually happens, then the collective knowledge of the neural nets is about to reach escape velocity.

Edit: What @Singuy just said.
I'm not sure about pre-v12, but v12 will have to be less than fully generic in its driving rules. What I mean is, we all know driving 'properly' in Rome is different than driving 'properly' in Cincinnati, Delhi, Adelaide, etc. Not sure how they handle that, but maybe there's a 'master/mistress' set of rules and a secondary set of regional rules. I'm not an AI programmer (clearly!), but the NN will have to take this into account. In other words, some input data will be inapplicable in certain regions. They must have prescriptive (programmed) rules for this pre-v12, like "in the UK? Drive on the LEFT side of the road", duh.
 
That was an interesting stream by Elon with FSD v12.

I have to wonder if the learning rate of FSD will increase with V12 being pure AI training? This could be quite the step change for FSD once v12 releases, and it could be quite the boon for TSLA if the training rate does increase drastically. It could mean TSLA feels the impact of a solved FSD sooner than most people believe.

Elon Musk Talks FSD 12 on My Twitter Spaces! | Whole Mars Catalog

 
Easier, but caveats…
They need the data.
They need the training compute.
There is still much code that’s used to select and curate the training data.
There’s still many iterations to go.

Tesla has a two-year lead?? (Guess) And importantly, the most cars that are ready, the most affordable cars, and the most capacity to make more.
Depending on how much data is needed, getting the data may be tricky/time-consuming.

The data collection process needs to be in a large number of cars to be efficient.

It is hard to get all of the data with just a small fleet driving around, and just driving around in one geofenced location only gets data for that location.
Then we need to consider time of day including night driving, different weather conditions, road works, road signs, pedestrian behaviour, etc.

The genius part of the Tesla solution is many of us paid them for the privilege of driving around and gathering data for them. :)

When you don't have enough data to train FSD to drive around unsupervised, humans need to be supervising. Unless you train and test the NNs you might not know exactly what data you need.
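
As a toy illustration of that coverage problem (my own sketch, not Tesla's pipeline), a data selector has to track how well each condition is represented and ask the fleet for more of whatever is missing:

```python
from collections import Counter

# Hypothetical fleet clips tagged with the conditions they were captured in.
clips = [
    {"id": 1, "time": "night", "weather": "rain",  "scene": "roadworks"},
    {"id": 2, "time": "day",   "weather": "clear", "scene": "highway"},
    {"id": 3, "time": "day",   "weather": "clear", "scene": "highway"},
    {"id": 4, "time": "dusk",  "weather": "fog",   "scene": "pedestrians"},
]

# Count coverage per condition combination so rare situations stand out.
coverage = Counter((c["time"], c["weather"], c["scene"]) for c in clips)

# Ask the fleet for more data on under-represented conditions (< 2 clips here).
wanted = [cond for cond, n in coverage.items() if n < 2]
print(wanted)  # [('night', 'rain', 'roadworks'), ('dusk', 'fog', 'pedestrians')]
```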
 
Depending on how much data is needed, getting the data may be tricky/time-consuming.

The data collection process needs to be in a large number of cars to be efficient.

It is hard to get all of the data with just a small fleet driving around, and just driving around in one geofenced location only gets data for that location.
Then we need to consider time of day including night driving, different weather conditions, road works, road signs, pedestrian behaviour, etc.

The genius part of the Tesla solution is many of us paid them for the privilege of driving around and gathering data for them. :)

When you don't have enough data to train FSD to drive around unsupervised, humans need to be supervising. Unless you train and test the NNs you might not know exactly what data you need.
I'm wondering about the driver input. It doesn't seem like the professional drivers Tesla is using would provide enough data, and a car driven by NN trained by the bozos on the road around me seems unsafe. I suppose the bad driver factor is less with Tesla owners, but still.
 
Threads of the day:
Tesla Data Centres Elon answers Omar's (very good) questions - expects more GPU/TPU data centres (i.e. DOJO to the moon) in future than CPU

FSD Beta v11.x Improved camera output quality on 11.4.7

FSD V12 (end to end AI) Elon's livestream

Launch is Imminent CT chat

Wiki - Falcon Super Heavy/Starship - General Development Discussion All 33 lit up? 2 engines shut down early Video

Supercharger Revenue Non Teslas just tap card and start charging on new screens (V3/4)

Tesla Network prerequisites Tesla to use facial recognition

Discussion: Tesla Megapack Battery Storage Price and lead time reductions

1 hr to Crew-7:
 
It's really impressive that they made end2end work so well. Whatever kinks the system has, they will now just fix with more/cleaner data. Mainly what has happened is that a lot of the complexity has been moved from the software in the car to the software in the datacenters. In the car they will just be running one major input->control-output neural network and a few shadow-mode neural networks to find new data and maybe to supervise the system.

Basically what they will do is give the neural network tons of examples of filtered good drivers and tell it to predict how they would drive in a given situation.
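
That is essentially behavioral cloning. Here is a minimal sketch (made-up feature sizes, not Tesla's actual model) of training a policy network to imitate curated good drivers:

```python
import torch
import torch.nn as nn

# Toy end-to-end policy: encoded video features in, controls out
# (steering, acceleration, braking).
policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 3))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Dummy batch standing in for curated "good driver" clips.
video_features = torch.randn(64, 512)   # encoded camera frames
human_controls = torch.randn(64, 3)     # what the filtered good driver did

for step in range(100):
    pred = policy(video_features)
    loss = nn.functional.mse_loss(pred, human_controls)  # imitate the human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```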

They are adding some complexity to the data labeler/data-selector. For example, they have tons of drivers not stopping at stop signs, but NHTSA forces the car to drive in an unhumanlike way, so they will have to remove normal drivers from the dataset for those situations and only keep NHTSA-style drivers. The messy C++ code will move to figuring out which data to include.
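
A toy version of that data-selection logic (my own sketch, hypothetical fields): keep only clips where the driver came to a complete stop, so rolling stops never show up as training examples.

```python
# Hypothetical clip records: minimum speed (mph) while crossing the stop line.
clips = [
    {"id": "a", "min_speed_at_stop": 0.0},   # full stop: NHTSA-style
    {"id": "b", "min_speed_at_stop": 4.2},   # rolling stop: typical human
    {"id": "c", "min_speed_at_stop": 0.0},
]

# The "messy C++" moves here in spirit: filter the data, don't code the behavior.
nhtsa_compliant = [c for c in clips if c["min_speed_at_stop"] == 0.0]
print([c["id"] for c in nhtsa_compliant])    # ['a', 'c']
```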

Then there are many other parts of the world with different typical local driving styles and different unhumanlike government agency rules. This will complicate things a bit, as the AI needs to learn how to drive in different countries etc. Maybe they can just feed in a variable for which jurisdiction they are in, and the car learns which of its modes it should use in each particular situation. Or they can just gather enough data for each country in each situation with different rules. This complexity will be moved offline, but it will still be complexity they need to deal with.
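
The "variable for which jurisdiction" idea might look something like this (illustrative sketch with hypothetical names): the region becomes one more input the network is conditioned on.

```python
import torch
import torch.nn as nn

NUM_REGIONS = 50                       # hypothetical jurisdiction count

class RegionConditionedPolicy(nn.Module):
    """Toy policy whose output is conditioned on a jurisdiction token."""
    def __init__(self):
        super().__init__()
        self.region_embed = nn.Embedding(NUM_REGIONS, 32)
        self.head = nn.Sequential(nn.Linear(512 + 32, 256), nn.ReLU(),
                                  nn.Linear(256, 3))

    def forward(self, video_features, region_id):
        region = self.region_embed(region_id)   # learned regional prior
        return self.head(torch.cat([video_features, region], dim=-1))

policy = RegionConditionedPolicy()
controls = policy(torch.randn(1, 512), torch.tensor([7]))  # region 7: made up
```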

There will be some signs with text that the car will have to learn to read. With enough examples and enough training the car should learn basic sign literacy and basic math. For example:
[Attached image: a conditional school-zone speed limit sign with time-of-day and school-day rules]


If you feed the neural network the day, time, GPS, navigation screen and then the video where it sees the sign above, it might just "learn" to read the gist of it given enough examples of good drivers. ChatGPT could probably solve it, so clearly it is possible. But it's gonna take a massive amount of examples and a massive amount of training to get there... Maybe there is some better way of solving these, but if they actually want to be able to drive in situations where the driver needs to stop and read and think, something will have to be done about these...
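
To get a feel for the logic buried in such a sign, here is a hand-coded version of what the network would have to learn implicitly (the sign's exact wording is my assumption, since only an attachment placeholder survives):

```python
from datetime import datetime

def school_zone_limit(now: datetime, children_present: bool) -> int:
    """Hand-coded reading of an assumed sign: 'SPEED LIMIT 25 ON SCHOOL DAYS
    7AM-4PM OR WHEN CHILDREN ARE PRESENT' (normal limit assumed to be 40)."""
    is_school_day = now.weekday() < 5        # Mon-Fri, ignoring holidays
    in_school_hours = 7 <= now.hour < 16
    if children_present or (is_school_day and in_school_hours):
        return 25
    return 40

print(school_zone_limit(datetime(2023, 8, 28, 8, 30), False))  # 25 (Monday, 8:30 am)
print(school_zone_limit(datetime(2023, 8, 26, 8, 30), False))  # 40 (Saturday)
```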
 
I'm wondering about the driver input. It doesn't seem like the professional drivers Tesla is using would provide enough data, and a car driven by NN trained by the bozos on the road around me seems unsafe. I suppose the bad driver factor is less with Tesla owners, but still.
On the contrary, the NN can learn from mistakes! Slamming on the brakes, veering sharply, and of course crashing, can all be learnt from as actions to avoid. Einstein and others have said we learn best from our mistakes.
 
It's very interesting to see how V12 is approaching FSD.

It sounds, at least from what Elon was saying, like the jump from V11 to V12 is similar to what Alphabet's DeepMind did with AlphaGo.

AlphaGo is the famous NN program that beat Lee Sedol in the game of GO. It went on to beat every other top-ranked GO player in the world, including Ke Jie, who was ranked world No. 1 at the time of Lee Sedol's game.

DM didn't stop there. It went on to develop another version called AlphaZero. The main difference between Zero and the original AlphaGo was that Zero never got any training from the game strategies humans developed over the centuries. Instead, it was given the rules of GO and trained itself by playing against itself.

DM then pitted Zero against the original AlphaGo for 100 games. The result was 100-0, with Zero winning every single game.

Ke Jie, having previously played AlphaGo, famously said after hearing about that result: "In the game of GO, human knowledge was a burden and not an asset."

Tesla basically went from teaching FSD how to do things to just letting FSD work things out on its own based on data input. This is truly something else. The idea that the car, or a robot, could one day roam the world without any previous knowledge... not even maps... is next level.
 
It's really impressive that they made end2end work so well. Whatever kinks the system has, they will now just fix with more/cleaner data. Mainly what has happened is that a lot of the complexity has been moved from the software in the car to the software in the datacenters. In the car they will just be running one major input->control-output neural network and a few shadow-mode neural networks to find new data and maybe to supervise the system.

Basically what they will do is give the neural network tons of examples of filtered good drivers and tell it to predict how they would drive in a given situation.

They are adding some complexity to the data labeler/data-selector. For example, they have tons of drivers not stopping at stop signs, but NHTSA forces the car to drive in an unhumanlike way, so they will have to remove normal drivers from the dataset for those situations and only keep NHTSA-style drivers. The messy C++ code will move to figuring out which data to include.

Then there are many other parts of the world with different typical local driving styles and different unhumanlike government agency rules. This will complicate things a bit, as the AI needs to learn how to drive in different countries etc. Maybe they can just feed in a variable for which jurisdiction they are in, and the car learns which of its modes it should use in each particular situation. Or they can just gather enough data for each country in each situation with different rules. This complexity will be moved offline, but it will still be complexity they need to deal with.

There will be some signs with text that the car will have to learn to read. With enough examples and enough training the car should learn basic sign literacy and basic math. For example:
[Attached image: a conditional school-zone speed limit sign with time-of-day and school-day rules]

If you feed the neural network the day, time, GPS, navigation screen and then the video where it sees the sign above, it might just "learn" to read the gist of it given enough examples of good drivers. ChatGPT could probably solve it, so clearly it is possible. But it's gonna take a massive amount of examples and a massive amount of training to get there... Maybe there is some better way of solving these, but if they actually want to be able to drive in situations where the driver needs to stop and read and think, something will have to be done about these...
The problem is not reading the sign. The problem is the rules, or even the road design. If the area around schools is designed to be safe for pedestrians, you don't need rules and signs like this.
If they can't change the road, go for the safe option and limit the speed to 25 at all times.