All US Cars capable of FSD will be enabled for one month trial this week

Free trials (limited-time offers, like a week or a month) are pretty common in the business world: you let people try something in the hope that they'll pay for a subscription or a purchase when the trial ends.

Of course they are. But many people in this thread claim the reason for the trial now is that Tesla is no longer compute constrained and needs DATA right now, and it has nothing to do with money or quarterly results.

Well, I don't recall anyone saying that it's all about the data collection and has nothing to do with getting more sales.

Personally, I think it is both. But that doesn't make for an interesting topic.
 
Well, I don't recall anyone saying that it's all about the data collection and has nothing to do with getting more sales.

If you have been paying attention to this thread, there are lots of people defending the completely failed trial rollout as 4D chess on Tesla's part, claiming it's about data collection, not revenue. Go back to about page #8, and it's full of stuff like this:

Shortly before the free trial announcement, Elon said FSD training was no longer compute limited. They are going to be getting a metric ship-load of data from the free trials. They are going to be getting a lot of edge cases and they will be getting data from regions where data may be lacking (middle America).

Elon had previously (a year ago?) promised/said that as soon as FSD gets good enough (safe enough) for general use then he would be giving everyone a free trial. Until now, the price and the version hoop jumping were designed to limit the number of beta users in order to prevent inattentive testers from creating accidents that would be catastrophic PR.

The timing of the wide release wasn't based on whether people are ready to purchase or subscribe to FSD, the timing was based on v12 being ready for wide release. IMHO this was a necessary step on the way to actually solving FSD.

Claiming this timing is a smoking gun seems like a feeble grasp at straws. In addition, the amount of money they make from FSD will be lower during the free trial. Giving something away for free is a terrible way to raise more money quickly. People who were planning to purchase or subscribe to FSD are now holding off.

Eventually there will be very little new useful information to be garnered from their current testers. Elon specifically said they over-fit FSD in the SF Bay Area to the detriment of it working well in other parts of the country. The solution for this is to widen the pool of testers and widen the areas where they drive. Which is exactly what they are doing now even if it is temporarily detrimental to their FSD income and their bottom line.

Same poster who said stuff like this when it was pointed out that failing to roll out to 25%+ of the cars doesn't seem like what would happen if the trial had been planned, rather than being a fire sale to get revenue:
They may be delaying the v12 rollout to places like Tennessee and New Mexico because it's not ready for wide release in these areas.

They want everyone to try v12. There was a minor technical snag porting v12 to one of the firmware branches. Not a big deal and certainly not the end of the friggin' world as some have portrayed.

I've worked in software for many years. Tesla getting these changes done so quickly in mission-critical software, where lives and the company's reputation are on the line, is absolutely amazing.

We're on week 4 of this "amazing" and "quick" rollout, where 25%+ of cars still don't have it, which is clearly the sign of a carefully planned rollout, based on data collection, of a truly ready v12 FSD. Weird that people in NM and TN have it but not Seattle, if it was a controlled rollout based on where it was ready. Also really weird that nobody can explain why FSD has to be active to learn, and in fact "it's cute that you think they haven't been collecting that data." And that there is no response to why a minor technical snag can't be solved in 4 weeks, despite Tesla's amazing software development process.
 
If you have been paying attention to this thread, there are lots of people defending the completely failed trial rollout as 4D chess on Tesla's part, claiming it's about data collection, not revenue. Go back to about page #8, and it's full of stuff like this:
That ONE user is "lots" of people?

Definitely not defending this individual (as I have opinions about what he said), but they are a vocal minority.

So, you are correct in pointing out that my "...I don't recall anyone saying..." is technically incorrect. I'll change it to "...I don't recall anyone reasonable saying..."
 
That ONE user is "lots" of people?

True. With millions of drivers on the road, all with FSD now, it will probably only take two weeks to get 4 more nines.

I agree, the free month was not a marketing strategy; it's for getting more data input for system development. It is mind-boggling how much data is going into the system, even in just this one month. I am happy to be a part of this.


Anyway, here we are where people are already starting to have their 30 day trials end, with others having not even gotten it yet. So much for "All US cars that are capable of FSD will be enabled for a one month trial this week"
I'm willing to bet at least 90% of Teslas built since 2020 will receive a month of FSD in an "Elon week"
How's that bet coming? ;)
 
For those of you who believe Tesla is on a path with FSD to use machine learning / AI to detect a hand wave and understand the context, can you explain how Tesla's current path learns this, and why using FSD is better than not?

If I wanted to learn how people drive, I would watch videos of humans driving. I would not make a crappy self-driver and then try to learn only from when a human overrides that crappy driver. This is the same reason we don't train AI models on the output of other AI models.

So if what Tesla needs is data (not revenue, or hype), why aren't they better off just putting data collection on the 90% of cars without FSD and uploading how those people drive, instead of doing data collection only on people who have FSD and are actively using it? How do you even propose that the machine learning ever gets a chance to observe a hand gesture in this case?

Training ML/AI models requires data examples so, in the case of FSD, videos are a place to start. After all the videos Tesla has used to train FSD, I think the current state of the FSD product is little more than a high school sophomore with a learner's permit.

can you explain how Tesla's current path learns this, and why using FSD is better than not?

Getting more video, as this free trial might provide, might advance FSD a little farther, but I don't believe it advances it in a comfortable and safe direction. You ask a good question, but I wonder why FSD needs to be installed to collect data. I've given Tesla permission to collect data (that was one of the reasons I bought the car). Can't it collect video from Autopilot just as easily as it can from FSD? Or from just driving around town? It's all the same set of sensors. Regardless, I don't believe billions of miles of video will begin to help FSD reach a competent level of interaction with humans.

My concern is that FSD is the result of a magnificent regression algorithm that aggregates human drivers. In my opinion, there are too many poor human drivers, and I wonder how the poor driving is discerned from better driving within the training. What I would much rather have is a sophomore-with-a-learner's-permit-level FSD that I drive around with for a few months so it can learn how I drive. As for the human interaction, videos might be a start, but interacting with that sophomore might be better.
 
Maybe they fell short on the processing power needed to handle uploads from the whole US getting a free go at once.
So many unfounded theories!

So you're saying that on March 25th they thought they had enough compute power to process every car in the USA (no longer compute constrained!), but 7 days later they realized they didn't? How bad were they at estimating the capacity of their $1B supercomputer, anyway?

So the solution was to just not install FSD on the cars that happened to have v8.x software, which just happens not to have the FSD stack in it, and which Tesla doesn't know how to roll over to FSD-capable versions? So convenient!

I like this theory because no matter which way you take it, Tesla is doing bad things. Either lying about not being compute constrained, or lying about all cars getting FSD 4 weeks ago.
 
Training ML/AI models requires data examples so, in the case of FSD, videos are a place to start.
Training ML models requires LABELED data examples. Nobody has ever trained an ML model just by showing it the world and having it figure everything out. Show a 6-year-old a 10-minute video of a basketball game with no other information and they will tell you that it's a game, that the primary goal is to put the ball in the basket, and that there are two teams. Show an ML model video from every single basketball game ever played, but with no labels or rules, and it will return nothing. However, tell an ML model to learn to play Pong, tell it that making the score go up is "good," and give it the one control it has, and in 10 minutes it will be the best Pong player ever.

This is why Tesla can only learn from FSD being active, because that's the only time there is a "score". They can't afford to have humans review arbitrary videos, so all they learn from is negative reinforcement when a driver overrides. FSD goes to do something, and the driver cancels. So now you know that at least SOMETHING that happened was bad. You at least have some chance of learning from that. But you learn nothing from when humans just drive the car, because you have no idea what the human wanted to do, so you have no way to judge your ML model's theoretical behavior against it.

And now you can see why this process will take forever to learn what a hand wave means. You need a car on FSD to be at that intersection, with multiple other cars, decide not to go, and have a driver override it. Then you need about 100M examples of this, since 99% of those will be for some other reason than a hand wave, and you need 1M of them to even start hinting to the model that the wave allows you to ignore the right of way rules that apply 99% of the time you are at a stop, given there is zero labeling of this data.

Assume this override on FSD happens once every 5 seconds in your fleet, 24/7. Will only take you 16 years to get 100M examples.

And yet we think a 1 month free trial makes a big difference in data collection.
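
To make the back-of-envelope arithmetic above easier to check, here's a minimal sketch; the 5-second override interval, the 100M-example target, and the 1% hand-wave share are the assumptions from this post, not Tesla figures:

```python
# Back-of-envelope check: time to collect N override examples at a fixed rate.
# Assumptions (from the argument above, not Tesla data):
#   - one relevant FSD override somewhere in the fleet every 5 seconds, 24/7
#   - 100M total override examples wanted, ~1% of which involve a hand wave

SECONDS_PER_YEAR = 365.25 * 24 * 3600

override_interval_s = 5             # one qualifying override every 5 seconds
examples_needed = 100_000_000       # total overrides to collect
hand_wave_fraction = 0.01           # share of overrides involving a hand wave

years = examples_needed * override_interval_s / SECONDS_PER_YEAR
hand_wave_examples = int(examples_needed * hand_wave_fraction)

print(f"collection time: ~{years:.0f} years")         # ~16 years
print(f"hand-wave examples: {hand_wave_examples:,}")  # 1,000,000
```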
 
Training ML models requires LABELED data examples. Nobody has ever trained an ML model just by showing it the world and having it figure everything out. Show a 6-year-old a 10-minute video of a basketball game with no other information and they will tell you that it's a game, that the primary goal is to put the ball in the basket, and that there are two teams. Show an ML model video from every single basketball game ever played, but with no labels or rules, and it will return nothing. However, tell an ML model to learn to play Pong, tell it that making the score go up is "good," and give it the one control it has, and in 10 minutes it will be the best Pong player ever.

This is why Tesla can only learn from FSD being active, because that's the only time there is a "score". They can't afford to have humans review arbitrary videos, so all they learn from is negative reinforcement when a driver overrides. FSD goes to do something, and the driver cancels. So now you know that at least SOMETHING that happened was bad. You at least have some chance of learning from that. But you learn nothing from when humans just drive the car, because you have no idea what the human wanted to do, so you have no way to judge your ML model's theoretical behavior against it.

And now you can see why this process will take forever to learn what a hand wave means. You need a car on FSD to be at that intersection, with multiple other cars, decide not to go, and have a driver override it. Then you need about 100M examples of this, since 99% of those will be for some other reason than a hand wave, and you need 1M of them to even start hinting to the model that the wave allows you to ignore the right of way rules that apply 99% of the time you are at a stop, given there is zero labeling of this data.

Assume this override on FSD happens once every 5 seconds in your fleet, 24/7. Will only take you 16 years to get 100M examples.

And yet we think a 1 month free trial makes a big difference in data collection.
You are right about machines requiring feedback to learn.

But then you crunched some numbers to fit a conclusion. You only have to be out by a factor of ten twice, with two guesses strung together, and you are back in the ballpark.

Don't mistake my musings for making a case for FSD being solved this month or this year or this decade!
 
You have crunched some numbers to fit a conclusion. You only have to be out by a factor of ten twice, with two guesses strung together, and you are back in the ballpark.
You believe if you have 1M overrides at a 4 way stop, 100K of those will be because of a hand wave, and machine learning could determine it was a hand wave overriding the right of way logic with only those 100K unlabeled samples? I thought I was being generous in taking only 1M samples to figure it out and 1% of overrides being hand gestures!

Meanwhile, Amazon has been manually reviewing and labeling in detail 70% of all transactions in their Just Walk Out stores, where all that needs to be tracked is which item someone picked off a shelf and left the store with, in a non-safety-critical application. That isn't working well enough after 6 years of operation, and they're massively scaling it back.
 
You believe if you have 1M overrides at a 4 way stop, 100K of those will be because of a hand wave, and machine learning could determine it was a hand wave overriding the right of way logic with only those 100K unlabeled samples? I thought I was being generous in taking only 1M samples to figure it out and 1% of overrides being hand gestures!

Meanwhile, Amazon has been manually reviewing and labeling in detail 70% of all transactions in their Just Walk Out stores, where all that needs to be tracked is which item someone picked off a shelf and left the store with, in a non-safety-critical application. That isn't working well enough after 6 years of operation, and they're massively scaling it back.
I don't know the numbers any better than the next man in the street either.

You forgot that overrides also occur with the waivee being the overrider. Those are very easy to spot, and every so often you hit the jackpot with the other vehicle being the one overridden to proceed.

I only see possibilities. Not making predictions.
 
For those of you who believe Tesla is on a path with FSD to use machine learning / AI to detect a hand wave and understand the context, can you explain how Tesla's current path learns this, and why using FSD is better than not?

If I wanted to learn how people drive, I would watch videos of humans driving. I would not make a crappy self-driver and then try to learn only from when a human overrides that crappy driver. This is the same reason we don't train AI models on the output of other AI models.

So if what Tesla needs is data (not revenue, or hype), why aren't they better off just putting data collection on the 90% of cars without FSD and uploading how those people drive, instead of doing data collection only on people who have FSD and are actively using it? How do you even propose that the machine learning ever gets a chance to observe a hand gesture in this case?
Let me try to answer this. Tesla’s FSD program works as follows: it learns from mistakes by training on similar situations where those mistakes do not occur.

You are correct that a single video clip cannot be simultaneously a bad example (e.g. a necessary disengagement) and a good example (used for training). However, the value of the bad examples (collected from the larger FSD fleet) is that they inform Tesla what they need to focus on. If numerous disengagements happen at exactly the same intersection, that’s a hint to Tesla that they should pay attention to what’s happening there, and collect data from human-driven cars (non-FSD) at that location to gather examples of good driving.

As FSD improves, necessary disengagements become more rare, so more FSD miles are required to spot patterns in them from which to learn which sorts of good driving data to collect. Particularly as Tesla gets into the “long tail” of flaws that may show up only in extremely rare situations (once every 100k miles, say), a HUGE number of FSD miles will be required to statistically surface such flaws. Hence the free FSD trials. Once the rare flaws are spotted and understood, Tesla can then gather good driving data from the rest of its fleet (and/or synthesize it) to train FSD to overcome these flaws.

The hand-waving example is a fairly easy one. Tesla has presumably already observed that a lot of FSD disengagements happen when there is hand-waving going on. So based on this insight, they can instruct the fleet to gather NON-FSD clips that also include hand-waving, and train the next generation of the system based on these clips. If there is a consistent pattern to be found in these clips, the system can find it and learn it. (That’s the essence of machine learning.)

I believe Tesla can probably drop the necessary disengagement rate by another couple orders of magnitude by using this approach. They can probably achieve geofenced highway L3 this way. But I suspect their current approach will not be sufficient to achieve sufficiently safe and human-like city-streets L4 driving at scale, let alone L5. That will require more general-intelligence reasoning over longer time horizons and broader contexts, and for L5 it will probably require the car to be able to have an actual conversation with the driver/passengers, a la KITT. Time will tell if I’m right!
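
To illustrate the loop described above (disengagements point at what to fix, then good driving data is gathered from the non-FSD fleet), here is a rough Python sketch; every name and number in it (Disengagement, hotspots, build_collection_campaign, the ~100 m grid, the 20-event threshold) is hypothetical, not anything Tesla has published:

```python
# Hypothetical sketch of a "disengagements tell you what data to collect" loop.
# None of these names or thresholds come from Tesla; they are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Disengagement:
    lat: float
    lon: float

def location_key(d: Disengagement, grid_deg: float = 0.001) -> tuple[int, int]:
    # Bucket events onto a ~100 m grid so repeats at one intersection cluster together.
    return (round(d.lat / grid_deg), round(d.lon / grid_deg))

def hotspots(events: list[Disengagement], min_count: int = 20) -> list[tuple[int, int]]:
    # Locations where disengagements keep recurring are the ones worth investigating.
    counts = Counter(location_key(e) for e in events)
    return [loc for loc, n in counts.items() if n >= min_count]

def build_collection_campaign(events: list[Disengagement]) -> dict:
    # For each hotspot, ask the wider (non-FSD) fleet for clips of humans driving
    # that spot well; those clips become the positive training examples.
    return {loc: {"source": "non-FSD fleet", "clip_length_s": 30}
            for loc in hotspots(events)}
```

The key point is the division of labor: FSD-on miles only have to surface where the problems are, while the examples of what to do instead can come from ordinary human driving at those same spots.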
 
The hand-waving example is a fairly easy one. Tesla has presumably already observed that a lot of FSD disengagements happen when there is hand-waving going on. So based on this insight, they can instruct the fleet to gather NON-FSD clips that also include hand-waving, and train the next generation of the system based on these clips.
Are you claiming current Teslas can accurately detect hand waving, but the only issue is that FSD doesn't know what it means, even after hand-training the model to detect hand waving? The hard part is done and they are just waiting for ML to do the easy logic part?

This seems ridiculous, and it is not how ML training works, much less in the magical v12 with its 300K lines of code removed.

Particularly as Tesla gets into the “long tail” of flaws that may show up only in extremely rare situations (once every 100k miles, say), a HUGE number of FSD miles will be required to statistically surface such flaws. Hence the free FSD trials.
So you're suggesting that we'll see free trials over and over again as Tesla needs data?
 
You are correct that a single video clip cannot be simultaneously a bad example (e.g. a necessary disengagement) and a good example (used for training)
At the 2024 Q1 earnings call, Ashok Elluswamy seemed to say these disengagements are used in training (presumably both as negative examples of what not to do and as positive examples of the correction):

Any failures like Elon alluded to, we get the data back, add it to the training and that improves the model in the next cycle. So, we have this constant feedback loop of issues, fixes, evaluations and then rinse and repeat. Especially with the new v12 architecture, all of this is automatically improving without requiring much engineering interventions in the sense that engineers don't have to be creative in how they code the algorithms. It's mostly learning on its own based on data.​

Having people use the latest version and disengage finds examples of where the newest version still needs improvements. This is opposed to broadly collecting "easy" examples that the networks would handle anyway or differences in human behavior that aren't necessarily problematic.
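
Taken at face value, Ashok's "issues, fixes, evaluations, rinse and repeat" description is just a data-engine loop. A toy simulation of that loop might look like the sketch below; the failure rate, fleet mileage, and improvement curve are invented numbers, there purely to show the shape of the process:

```python
# Toy simulation of the issues -> fixes -> evaluations -> repeat loop described
# above. All numbers are invented; only the structure of the loop is the point.

def collect_failures(fleet_miles: int, failure_rate: float) -> int:
    # Disengagement clips reported back from cars running the current build.
    return int(fleet_miles * failure_rate)

def retrain(failure_rate: float, examples: int) -> float:
    # Pretend each cycle of targeted data cuts the failure rate, with diminishing
    # returns as the remaining flaws get rarer and yield fewer examples.
    improvement = min(0.3, examples / 1_000_000)
    return failure_rate * (1 - improvement)

failure_rate = 1 / 1_000  # one necessary disengagement per 1,000 miles (made up)
for cycle in range(1, 6):
    examples = collect_failures(fleet_miles=50_000_000, failure_rate=failure_rate)
    failure_rate = retrain(failure_rate, examples)
    print(f"cycle {cycle}: {examples:,} failure clips, "
          f"new rate ~1 per {1 / failure_rate:,.0f} miles")
```

Notice that as the failure rate drops, each cycle yields fewer clips, which is exactly why more FSD-on miles are needed to keep the loop fed.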
 
Are you claiming current Teslas can accurately detect hand waving, but the only issue is that FSD doesn't know what it means, even after hand-training the model to detect hand waving? The hard part is done and they are just waiting for ML to do the easy logic part?

This seems ridiculous, and it is not how ML training works, much less in the magical v12 with its 300K lines of code removed.
Clearly hand-waving has not yet been a focus of their training set. (If there are very few examples of something in the training set, the system won't learn it.) They are currently, and rightly, focused on more critical safety issues (i.e. not crashing), and politeness has been a secondary priority. Once they decide to focus on less critical issues like hand-waving, their current hardware and software approach should be able to solve it using the approach I described. The "hand-training" will not involve hand-coding, but rather the gathering and curation of many training clips that involve that specific aspect of driving. So you're right, the model has not been "hand-trained" for this yet. Thus, FSD currently "sees" the hand signals (the pixels go into the network), but it doesn't yet understand what to do with that information, so it effectively ignores it. Eventually this will change, as Tesla improves the training sets.
So you're suggesting that we'll see free trials over and over again as Tesla needs data?
I would guess we'll see free trials once a year or so. They probably can't do it much more often than that without sabotaging their FSD subscription revenue, so it's a balance. Besides, the problems that currently make up the majority of necessary disengagements (avoiding large potholes, trying to drive straight from left-turn-only lanes) should have more than enough examples in the current FSD fleet. I do think they jumped the gun by at least several months on offering the free trials when they did.
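
For what it's worth, the "gathering and curation" step I described above is conceptually just filtering a clip library by an attribute tag. A hedged sketch, with a completely made-up Clip schema and tag names (nothing here is Tesla's actual pipeline):

```python
# Minimal sketch of curating a focused training set: pull human-driven clips that
# contain one rare behavior (hand-waving). The Clip fields and tag names are
# made up for illustration; this is not Tesla's schema.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    source: str   # "fsd" (ran under FSD) or "human" (manually driven)
    tags: set     # attributes assigned by offline auto-labeling

def curate(clips, attribute, source="human"):
    # Keep only human-driven clips showing the target behavior, so the model
    # sees examples of what good driving looks like when that behavior occurs.
    return [c for c in clips if attribute in c.tags and c.source == source]

clips = [
    Clip("a1", "human", {"hand_wave", "four_way_stop"}),
    Clip("a2", "fsd",   {"hand_wave", "disengagement"}),
    Clip("a3", "human", {"four_way_stop"}),
]
print([c.clip_id for c in curate(clips, "hand_wave")])  # ['a1']
```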
 
At the 2024 Q1 earnings call, Ashok Elluswamy seemed to say these disengagements are used in training (presumably both as negative examples of what not to do and as positive examples of the correction):

Any failures like Elon alluded to, we get the data back, add it to the training and that improves the model in the next cycle. So, we have this constant feedback loop of issues, fixes, evaluations and then rinse and repeat. Especially with the new v12 architecture, all of this is automatically improving without requiring much engineering interventions in the sense that engineers don't have to be creative in how they code the algorithms. It's mostly learning on its own based on data.​

Having people use the latest version and disengage finds examples of where the newest version still needs improvements. This is opposed to broadly collecting "easy" examples that the networks would handle anyway or differences in human behavior that aren't necessarily problematic.
I'm not sure how literally to take Ashok's description, as far as the disengagement clips being used verbatim for training, rather than being used to inform which other clips (with correct behavior) should be used to train the driving task. Most ML training works purely on positive reinforcement; give a model a million clips showing it what to do, and it will get extremely good at emulating that behavior. Negative reinforcement is a lot trickier, since it's not necessarily clear how the model should figure out, from a wrong clip, which parts of it are wrong.

However, I do expect that the disengagement clips will eventually be used directly for training (if they aren't already) in a slightly different way: they can be used to train an L3 driving model to predict when the FSD system is about to make a mistake! An L3 system must be able to predict such potential disengagements with several seconds' warning, so it can alert the driver to take control, and this has to be done EXTREMELY reliably. Some of this functionality is already built into the current L2 system (various double-beeps, and the TAKE CONTROL IMMEDIATELY panic), and it is co-evolving with the driving task itself; this may be what Tesla is primarily using all the disengagement clips for?

I don't know if Tesla has any automated or semi-automated mechanisms for distinguishing "elective" disengagements from "necessary" disengagements; perhaps they are using multiple disengagements at the same location as a proxy for necessity. I do know that some types of clips are heavily curated, such as stop sign behavior to avoid rolling stops. But the fact that the system is still making some very common systematic and obvious mistakes, like trying to drive straight from left-turn-only lanes, suggests that their overall process still has plenty of room for improvement!
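
One way to picture the "predict when FSD is about to make a mistake" idea: a classifier trained on the context preceding past disengagements, producing a take-over probability a few seconds ahead. The sketch below uses synthetic data and made-up features; it only shows the shape of the idea, not anything Tesla has disclosed:

```python
# Toy "disengagement predictor": learn from past disengagement clips to flag,
# a few seconds ahead, that the driving stack is likely to need a human.
# Features and data are synthetic; this is an illustration, not Tesla's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row summarizes the last few seconds of driving context, e.g. planner
# uncertainty, count of nearby agents, map confidence (all invented here).
n = 2000
features = rng.normal(size=(n, 3))
# Label = 1 if a disengagement followed within ~5 seconds (synthetic rule).
labels = (features[:, 0] + 0.5 * features[:, 1]
          + rng.normal(scale=0.5, size=n) > 1.2).astype(int)

clf = LogisticRegression().fit(features, labels)

# At runtime, a high probability would trigger an early "take over" alert,
# well before the system actually gets into trouble.
current_context = np.array([[1.8, 0.9, -0.2]])
print(f"take-over probability: {clf.predict_proba(current_context)[0, 1]:.2f}")
```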
 