Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

All US Cars capable of FSD will be enabled for one month trial this week

I thought the free trial was given to everyone to increase data collection because they are no longer compute-constrained? What was the point of the free trial if they don't need you running FSD to collect data?
Free trials (limited time offers, like a week or a month) are pretty common in the business world to try something with the hope of paid subscription or purchase upon completion of the trial
 
Free trials (limited time offers, like a week or a month) are pretty common in the business world to try something with the hope of paid subscription or purchase upon completion of the trial
Of course they are. But many people in this thread claim the reason for the trial now is that Tesla is no longer compute constrained and needs DATA right now, and it has nothing to do with money or quarterly results.

I'm trying to understand why Tesla needs people to actively use FSD to train.

@BitJam ?
 
Again, love SFSD, amazing. But a weird, perplexing one today. On this build, it has never happened before.

2023 MYP on 12.3.4, SFSD driving from a store to home. Approaching my street, SFSD put the blinker on early, which I thought was strange: 200-250 feet before the turn instead of 100 feet. That early, there is a driveway into a school, and the Tesla started its jiggle-turn move, gesturing a turn, then decided to make the correct turn onto my street.

Why the never-before-seen confusion? Why, with map data, did it get confused?

Between AI and map data, which takes priority?
 
I'm trying to understand why Tesla needs people to actively use FSD to train.
Thanks for asking! The answer is simple: Tesla needs quality data. All the driving data from all the cars is probably far too much for them to process and is actually not very useful. If people are using one of the more recent versions of FSD, then the data from when they disengage is an absolute gold mine. It's an extremely powerful and effective filter. Sure, not every disengagement is valuable, but the odds of it being valuable are vastly higher than with shadow mode data.

Shadow mode data can be useful when they set triggers so the cars in the fleet only upload particular circumstances Tesla wants to train on. But the fact that they can now get a large quantity of disengagement data is what I find exciting.
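The filtering idea above can be sketched in a few lines. This is a purely illustrative toy, not Tesla's actual pipeline: the `Clip` record, its fields, and `select_for_upload` are all hypothetical names I'm inventing to show how disengagements and trigger matches could be prioritized under an upload budget.

```python
# Hypothetical sketch: prefer disengagement clips, then shadow-mode trigger
# matches, when deciding what a car uploads. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    disengaged: bool        # driver took over during the clip
    matches_trigger: bool   # clip matches a fleet-wide "shadow mode" trigger

def select_for_upload(clips, budget):
    """Rank disengagements first, then trigger matches, up to an upload budget."""
    disengagements = [c for c in clips if c.disengaged]
    triggered = [c for c in clips if c.matches_trigger and not c.disengaged]
    return (disengagements + triggered)[:budget]

clips = [
    Clip("a", disengaged=False, matches_trigger=False),  # ordinary driving, skipped
    Clip("b", disengaged=True, matches_trigger=False),
    Clip("c", disengaged=False, matches_trigger=True),
]
print([c.clip_id for c in select_for_upload(clips, budget=2)])  # → ['b', 'c']
```

The point is that the driver's takeover acts as a free, built-in label marking the most interesting moments, while trigger-based shadow-mode collection fills in whatever specific scenarios the training team asks for.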

BTW: as I had predicted/hoped, Tesla fixed their nav data in my remote area of New Mexico. The car now knows how to get me to the grocery store! Streets in a remote area I drive to weekly now have names, and the speed limit data is no longer suicidally bad. In some places I think it's 5 mph too slow, but that is much better than the 15 or 20 mph too fast it was previously. I noticed these changes this past week.

Oddly, the Software tab says I'm still on 2023.* nav data and there was no recent firmware update. I was hoping they would fix the nav data before releasing v12 to the 2024.8 branch. Wish granted.

PS: navigation in the app is still terrible but they fixed it in the car.
 
Of course they are. But many people in this thread claim the reason for the trial now is that Tesla is no longer compute constrained and needs DATA right now, and it has nothing to do with money or quarterly results.

I'm trying to understand why Tesla needs people to actively use FSD to train.

@BitJam ?
The only anecdotal evidence we have is that there are a few here who watch their routers and see large amounts of data being uploaded from their Teslas over WiFi.
 
Of course they are. But many people in this thread claim the reason for the trial now is that Tesla is no longer compute constrained and needs DATA right now, and it has nothing to do with money or quarterly results.

I'm trying to understand why Tesla needs people to actively use FSD to train.

@BitJam ?
It turned out that confirming the Model 2 bumped the stock more than anything.
 
The only anecdotal evidence we have is that there are a few here that watch their routers and see large amounts of data being uploaded from their Tesla on WiFi.
The strong non-anecdotal evidence is the $1B Tesla spent in Q1 bringing their total training investment to $10B.

Tesla spent $1bn on AI infrastructure in Q1, [...]

For a while there, we were training constrained in our progress. We are, at this point, no longer training-constrained, and so we're making rapid progress. We've installed and commissioned - meaning they're actually working - 35,000 H100 computers or GPUs... roughly 35,000 H100s are active, and we expect that to be probably 85,000 or thereabouts by the end of this year just for training.
[Attached image: tesla-compute.png]


Of course, this huge investment in AI had an adverse effect on their cash flow. It makes no sense for it to be part of an elaborate smoke-and-mirrors dance. Usually, the simple and obvious answer is the correct one.

For years I've been hearing that Tesla's huge AI advantage is the data available from their fleet. Now that they've vastly increased their AI compute it makes perfect sense for them to make use of their fleet to feed the AI training behemoth they've created. As I explained above, enabling FSD on the cars allows them to vastly increase the quality of data they gather.

The one month free trial is brilliant because it will allow them to get an idea of what data and how much data they will want in the upcoming months when their compute power more than doubles. They are gathering data about gathering data.

Success is not guaranteed but they are giving it their best shot with end-to-end neural nets, massive compute, and massive amounts of quality data. AFAIK there is no better way to solve a difficult problem like FSD.
 
If people are using one of the more recent versions of FSD then the data from when they disengage is an absolute gold mine. It's an extremely powerful and effective filter. Sure, not every disengagement is valuable but the odds of it being valuable are vastly higher than shadow mode data.
How will this help them understand a hand wave at a 4 way stop?
Seems like this would be useful for getting a slightly better L2 system that doesn't do insane stuff as often, but I fail to see how manual disengagements are going to teach the car to drive like a human in the way people in this thread are imagining from all this amazing data.

For years I've been hearing that Tesla's huge AI advantage is the data available from their fleet.

As anyone in AI will tell you, labeled training data is everything, and nothing here is labeling and training on a hand wave. Massive data isn't that useful unless you know what it means, and random Teslas driving around doesn't give you that much useful data.

The one month free trial is brilliant because it will allow them to get an idea of what data and how much data they will want in the upcoming months when their compute power more than doubles. They are gathering data about gathering data.
Right, it's brilliant if they only learn from disconnects, which will never teach a car to drive like a human. If it learns from actual human driving, then the last thing you want is more FSD usage, because that actually taints your data set. To me this shows Tesla is still on the 99% problem, which you can actually experience enough to train on via disconnects. It's worthless for getting to the long tail of 99.999999%.

AFAIK there is no better way to solve a difficult problem like FSD.
Except for the fact that actual L3 and L4 systems out in the world did not take this approach.
 
I'm trying to understand why Tesla needs people to actively use FSD to train.
Every disengagement helps
How will this help them understand a hand wave at a 4 way stop?
If you look at where the driver is looking the instant before they reached for the disengage, and you catch the wavy action enough times, you can model it and look for it at 4-way stops.

First in, best dressed (so long as you can stop before the second in hits you) is going to get interesting.
 
If you look at where the driver is looking the instant before they reached for the disengage, and you catch the wavy action enough times, you can model it and look for it at 4-way stops.
Why are people disengaging at a stop sign for people waving hands?

Every disengagement helps
Not true. Tons of disengagements are not because the system did something wrong, and it's very hard to understand the context of the disengagement, especially when it's for a few pixels in the windshield of another car. Unlabeled disengagements can teach the system the wrong thing.
 
I've gotten out of my car to yell at people to take their right of way before. Probably not the best approach, but it helped blow off steam in the moment.

Please don't. Road rage is not good for anyone, and people have various reasons to yield the right-of-way, such as not trusting drivers in Paris, or having a better view than a driver backing out of a driveway into the street.

A broader context must include not only an ability to understand the "why" of a present situation, but to interact with humans as well. A long tail, indeed.

Yes, also the local traffic laws and culture around running red lights and such. And culture-specific hand waves.

Whatever data FSDS is trained on, it doesn't appear to be driving-instructor quality. E.g. FSDS doesn't yet move over to get out of the door zone of parked vehicles or else slow down.
 