All US Cars capable of FSD will be enabled for one month trial this week

I do think they jumped the gun by at least several months on offering the free trials when they did.
Gee, I wonder what could have caused a company that is laying off 10% of the workforce including the whole supercharger team to "jump the gun" on releasing a free trial of some very expensive software?

The irony in all of this is that things like how to handle a 4-way stop are better learned without FSD being in active use.
Roll up to a 4-way stop. The FSD code can easily see this in "shadow" mode. It can even see intent from things like turn signals. It decides it cannot go yet for some reason X, such as another car having arrived first. However, the driver does actually go. There you go: upload that video, and it's even semi-labeled for you and serves as positive reinforcement.

This is much better than only learning when a driver is actively using FSD and decides to disengage. However, it has two downsides: First, it requires much more compute since you'll get orders of magnitude more videos to include. Second, as a transparent background process, it does not make for a sexy marketing message like "free FSD trial" does.
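For anyone curious, here is a minimal sketch of what that shadow-mode comparison could look like, purely for illustration - the event fields, action names, and selection rule are all made up here, not anything from Tesla's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class StopSignEvent:
    clip_id: str
    shadow_plan: str      # what FSD would have done in shadow mode, e.g. "hold" or "proceed"
    driver_action: str    # what the human actually did
    turn_signal_seen: bool

def label_disagreements(events):
    """Keep only clips where the shadow planner and the human driver disagreed.

    The human action becomes the (weak) label: if the driver proceeded while the
    planner would have held, the clip is a candidate example of "it was actually
    fine to go," with no human annotator needed."""
    return [(e.clip_id, e.driver_action) for e in events if e.shadow_plan != e.driver_action]

# Two hypothetical clips; only the disagreement would be uploaded for training.
events = [
    StopSignEvent("clip_001", shadow_plan="hold", driver_action="proceed", turn_signal_seen=True),
    StopSignEvent("clip_002", shadow_plan="proceed", driver_action="proceed", turn_signal_seen=False),
]
print(label_disagreements(events))  # [('clip_001', 'proceed')]
```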
 
Most ML training works purely on positive reinforcement; give a model a million clips showing it what to do, and it will get extremely good at emulating that behavior.
Are you thinking of supervised learning? In something like a "car" vs. "bus" classification task, even a positive example of "car" implicitly means "not a bus," so the training process boosts signals towards "car" and away from "bus." Some might even say these dual training targets (towards and away) are fundamental to the effectiveness of supervised learning and why it's important to have a balanced dataset.
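As a toy illustration of that "towards car, away from bus" effect (generic PyTorch, nothing Tesla-specific, and the logits are made up): cross-entropy on a single positive "car" example produces a gradient that pushes the "car" logit up and the "bus" logit down at the same time.

```python
import torch
import torch.nn.functional as F

# Logits for the two classes ["car", "bus"] on one example labeled "car".
logits = torch.tensor([[0.2, 0.1]], requires_grad=True)
target = torch.tensor([0])  # index 0 = "car"

loss = F.cross_entropy(logits, target)
loss.backward()

# The gradient is negative on the "car" logit (increase it) and positive on the
# "bus" logit (decrease it): a positive example of one class is implicitly a
# negative example of the others.
print(logits.grad)  # roughly tensor([[-0.475, 0.475]])
```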

If you're thinking of reinforcement learning, there's positive and negative rewards, so I was suggesting that video clips leading up to the disengagement are negative while the human corrective action can be positive. Human labelers can annotate which portions should be used as negative or positive examples (if any).

Potentially Tesla was focused on getting a wide diversity of disengagement examples with the free trial, but it could rely on the ongoing fleet and shadow mode for additional fine-tuning?
 
The software is not yet skilled at defensive driving techniques...

Or at following its own route. More than once, it said it was going to take (for example) the left ramp at a split, even signaled and said it was going to do it, only to cancel and head towards the wrong route.

It's a route I never had issues with using NoAP on EAP.
 
tried the FSD for a couple weeks. Turned it off. Would never pay for it. Won’t use it even if it’s free. It’s terrible and dangerous. From a novelty/proof of concept standpoint it’s fun, but from a practical standpoint it’s awful.

Initially I reported when I had to disengage but it became too frequent so I simply turned it off.

And while I’m on my rant, this new Tesla vision park assist is also garbage. “High fidelity 3D representation”? Hahah! It’s just blobs and less useful than the highly inaccurate standard feature that gives you a distance to object of +/- 12”. I’ll stick with standard.

/rant

I still love my MY for its driving capability and experience. Just hate the software “tech”. It’s like the designers never actually tested this stuff live.
 
It’s like the designers never actually tested this stuff live.
Alternate possibility: the CEO and other leadership are forcing the designers to follow certain paths, such as removing ultrasonic sensors and releasing software before it's ready, and the designers know it's not ready, but they don't get to choose when it's released.

Plus, you don't want to make anything at Tesla work too well or they'll lay off your whole team because you're no longer needed.
 
Alternate possibility: the CEO and other leadership are forcing the designers to follow certain paths, such as removing ultrasonic sensors and releasing software before it's ready, and the designers know it's not ready, but they don't get to choose when it's released.

Plus, you don't want to make anything at Tesla work too well or they'll lay off your whole team because you're no longer needed.
Release dates in software are always driven from the top down!
 
tried the FSD for a couple weeks. Turned it off. Would never pay for it. Won’t use it even if it’s free. It’s terrible and dangerous. From a novelty/proof of concept standpoint it’s fun, but from a practical standpoint it’s awful.
Yeah, I totally understand not wanting software that's incomplete. I also won't buy, but I am very happy to have been part of the development this past month. Today is the last day, and it so happens my car got the Vision Only park assist. I tested it out this morning and it worked great.
 
Gee, I wonder what could have caused a company that is laying off 10% of the workforce including the whole supercharger team to "jump the gun" on releasing a free trial of some very expensive software?

The irony in all of this is that things like how to handle a 4-way stop are better learned without FSD being in active use.
Roll up to a 4-way stop. The FSD code can easily see this in "shadow" mode. It can even see intent from things like turn signals. It decides it cannot go yet for some reason X, such as another car having arrived first. However, the driver does actually go. There you go: upload that video, and it's even semi-labeled for you and serves as positive reinforcement.

This is much better than only learning when a driver is actively using FSD and decides to disengage. However, it has two downsides: First, it requires much more compute since you'll get orders of magnitude more videos to include. Second, as a transparent background process, it does not make for a sexy marketing message like "free FSD trial" does.
Agreed that the wider FSD distribution is not going to add value regarding common situations such as 4-way-stop behavior. As mentioned, they're already doing heavy manual curation of the stop-sign training set specifically, to appease NHTSA's request for no rolling stops, yet somehow the behavior still sucks, with double-stops and the like. IMO they really needed to fix more of these common, annoying, low-hanging-fruit issues before jumping to a wider FSD release. (If they can't solve stop signs with a small fleet, there's no way they'll solve the rare edge cases regardless of the size of the fleet.) And I'm as disturbed as you are by the sacking of the Supercharger team; it's really inexplicable and sad. Feels like a case of the bull owning the china shop.

Regarding "much more compute", the training set emphatically does not include _every_ video from the fleet! They just need enough examples of each task to effectively reinforce and train the proper behavior.
 
Are you thinking of supervised learning? In something like a "car" vs. "bus" classification task, even a positive example of "car" implicitly means "not a bus," so the training process boosts signals towards "car" and away from "bus." Some might even say these dual training targets (towards and away) are fundamental to the effectiveness of supervised learning and why it's important to have a balanced dataset.

If you're thinking of reinforcement learning, there's positive and negative rewards, so I was suggesting that video clips leading up to the disengagement are negative while the human corrective action can be positive. Human labelers can annotate which portions should be used as negative or positive examples (if any).

Potentially Tesla was focused on getting a wide diversity of disengagement examples with the free trial, but it could rely on the ongoing fleet and shadow mode for additional fine-tuning?
Reinforcement learning, not supervised. The positive and negative rewards come from whether or not the model correctly emulates the example of correct driving; the entire training set is still "correct behavior". If the training example is "wrong behavior", then the model would be unlikely to get it wrong in exactly the same way, and there wouldn't be much of a useful signal if the model's output doesn't match the training example. There are countless ways to get it wrong, but only one way to get it right. The positive reinforcement should only come if the model finds the one correct way.

The combined FSD + human behavior of the car in takeover scenarios is generally still pretty bad; the model should probably only be trained on examples where the behavior is correct from start to finish, rather than where it starts to go badly wrong and then corrects. I suppose FSD takeovers could give useful information about how to recover from "badly wrong" situations, though. But as mentioned, for an L3 system the car will need to be able to reliably identify when it's starting to go badly wrong, so it can prompt the user to take over. Tesla's L2 system already uses this somewhat, and it will also be needed in L4 to put the car into "safe mode" where it will slow to a stop, though that should happen extremely rarely.
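That "only one way to get it right" framing is basically imitation learning: the training signal is just how far the model's output deviates from the recorded human behavior, in any direction. A minimal sketch, with made-up dimensions and random stand-in data rather than anything resembling Tesla's actual stack:

```python
import torch
import torch.nn as nn

# Toy imitation-learning step: the "expert" is the recorded correct human driving.
model = nn.Linear(16, 2)  # 16 scene features in, [steering, speed] out
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

scene_features = torch.randn(32, 16)  # batch of 32 hypothetical frames
expert_actions = torch.randn(32, 2)   # what the human actually did in those frames

predicted = model(scene_features)
# Any deviation from the expert is penalized; the model is only "rewarded"
# (low loss) when it reproduces the one correct behavior.
loss = nn.functional.mse_loss(predicted, expert_actions)
loss.backward()
optimizer.step()
```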
 
Training ML models requires LABELED data examples. Nobody has ever trained an ML model by just showing it the world and having it figure everything out. Show a 6-year-old a 10-minute video of a basketball game with no other information and they will tell you that it's a game, the primary goal is to put the ball in the basket, and there are two teams. Show an ML model video from every single basketball game ever played but with no labels or rules, and it will return nothing. However, tell an ML model to learn to play Pong, tell it that making the score go up is "good," and give it the one control it has, and in 10 minutes it will be the best Pong player ever.

This is why Tesla can only learn from FSD being active, because it's the only time there is a "score". They can't afford to have humans review arbitrary videos, so all they learn from is negative reinforcement when a driver overrides. FSD goes to do something, and the driver cancels. So now you know at least SOMETHING that happened was bad. You at least have some chance of learning from that. But you learn nothing from when humans just drive the car, because you have no idea what the human wanted to do, so you have no way to judge your ML model's theoretical behavior against it.

And now you can see why this process will take forever to learn what a hand wave means. You need a car on FSD to be at that intersection, with multiple other cars, decide not to go, and have a driver override it. Then you need about 100M examples of this, since 99% of those will be for some other reason than a hand wave, and you need 1M of them to even start hinting to the model that the wave allows you to ignore the right of way rules that apply 99% of the time you are at a stop, given there is zero labeling of this data.

Assume this override on FSD happens once every 5 seconds in your fleet, 24/7. It will only take you about 16 years to get 100M examples.
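The arithmetic checks out, under those assumed numbers (one qualifying override every 5 seconds, around the clock, and 100M examples needed):

```python
# Back-of-the-envelope check of the ~16 year figure above.
overrides_per_year = 365 * 24 * 3600 / 5      # about 6.3 million per year
years_needed = 100_000_000 / overrides_per_year
print(round(years_needed, 1))                 # about 15.9 years
```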

And yet we think a 1 month free trial makes a big difference in data collection.

Totally forgot about the labeling - you are very correct. Now I understand why the car asked me to record the reason I disabled FSD - I was a free labeler. Didn't Tesla let go a bunch of staff doing manual labeling a while back?

I got an email that my free trial has ended. I would have expected the email to include a link to a survey asking for feedback, but Tesla probably has no staff to review it.
 
You are going to make a small claims case about not getting a free demo?

Tesla has stated two things:
All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
All U.S. cars that are capable of FSD will be enabled for a one-month trial this week.

Makes me wonder if my car is actually FSD capable. That car would be worth a whole lot less if it wasn't, and Tesla said ALL FSD capable cars would be getting it. I bet if I subscribe for a month I don't actually get it either, despite them actively taking my money and then refusing to refund and having told me it would be free....

Filing a case is cheap and easy.
 
Makes me wonder if my car is actually FSD capable. That car would be worth a whole lot less if it wasn't, and Tesla said ALL FSD capable cars would be getting it. I bet if I subscribe for a month I don't actually get it either, despite them actively taking my money and then refusing to refund and having told me it would be free....

Filing a case is cheap and easy.
Do you have version 8.9?

I haven't read everything in these long threads and haven't figured out how to find this with a search, but has anyone with version 8.9 tried spending $99 to subscribe to FSD for one month to see if it works on a vision-only car?
 
Makes me wonder if my car is actually FSD capable. That car would be worth a whole lot less if it wasn't, and Tesla said ALL FSD capable cars would be getting it. I bet if I subscribe for a month I don't actually get it either, despite them actively taking my money and then refusing to refund and having told me it would be free....

Filing a case is cheap and easy.
What car do you have? Do you see the option to subscribe in the app?
 
My free trial just expired, and while I think it's amazing how capable FSD 12 is, I don't need self driving so there's no reason for me to pay for it. I will admit after using it almost daily for a month, it's noticeable to suddenly not have it anymore (again, I also had 3 months free V11 when I bought my MYLR). While it wasn't perfect, it never failed to get from point A to point B despite many off highway routes including construction zones, and it never did anything unsafe - it errs on the side of caution, to a fault.

My only hope is that full self driving will be a reality before I’m so old the state won’t give me a license anymore - many years from now.
 
My free trial just expired, and while I think it's amazing how capable FSD 12 is, I don't need self driving so there's no reason for me to pay for it. I will admit after using it almost daily for a month, it's noticeable to suddenly not have it anymore (again, I also had 3 months free V11 when I bought my MYLR). While it wasn't perfect, it never failed to get from point A to point B despite many off highway routes including construction zones, and it never did anything unsafe - it errs on the side of caution, to a fault.

My only hope is that full self driving will be a reality before I’m so old the state won’t give me a license anymore - many years from now.
My own experiments with it over the past weeks have been less positive. While the technology is extremely impressive when it works, it often does not. FSD is constantly being disengaged - either self-disengaging or being manually disengaged by driver interventions - when it cannot handle specific situations. Disengagement is especially frequent in urban driving, making FSD basically unusable there in my own experience. It is better on the highway, but even then it is subject to regular disengagements. Driver supervision is needed at a very high degree of vigilance to allow quick interventions and avoid dangerous situations, and I find that more stressful than simply being in control myself. No doubt future versions will be better, but for the type of driving I do, and in this area, it is just not ready yet.