All US Cars capable of FSD will be enabled for one month trial this week

Sunday 4/21, still no v12 FSD Supervised. Again, my guess is Musk meant to say all new sales, but since he said “everyone,” they had to deploy to whomever they could, leaving a chunk ineligible. I guess the better question is: if we get into May or June, will we still get a free 30 days? I’ve got 14,000 reasons why this “everyone gets v12” won’t happen soon or for free, but I’ll post back and be pleasantly surprised if it does.
I’ve spoken to quite a few friends who have gotten the free month for cars bought in the last couple years; it’s not just new sales (unless “new” is the last two years).
 
I’ve spoken to quite a few friends who have gotten the free month for cars bought in the last couple years; it’s not just new sales (unless “new” is the last two years).
Of course it’s not just new sales; I was only suggesting that “everyone” (“next week”) was perhaps an overstatement, because clearly at least a large chunk of cars on update 2024.8.x hasn’t seen squat.
 
Still waiting on 2024.8.9 for the free month. Wanted to go out for margaritas with my buddy, so I went ahead and just paid the $99 for the FSD (Female Sexy Driver). I was very pleased with the performance. Seemed like a good option. 🤣😝😜
That's gonna cost you way more than $99/month, but totally worth it.

Would be willing to endure the annoying "your seatbelt is off" chime in this case.
 
You cite everyday experiences that human drivers deal with all the time. If FSD were more human, a waiting pedestrian or a driver waving you on would be handled with aplomb. Not knowing how FSD will react to these scenarios is unsafe and a distraction to the driver.

Being trained with billions of miles of video hardly makes FSD safe or more human. I'm not sure I want FSD to drive more like a human, given the humans I've seen ignore stop signs, weave through traffic, or speed through neighborhoods.
The way that you worded it, I am not sure what you are saying. In relation to the examples I gave, are you saying that FSD is unsafe? And are you also saying, regardless, that human driving is as unsafe or even more unsafe?

And my more fundamental question is how should an eventual fully-independent FSD (not requiring or even contemplating driver supervision or intervention) react to the real-life examples I gave. To recap and provide more context, the examples are:

- My Tesla with FSD activated is turning right from a parking lot onto a very busy street. There are no realistic gaps in the traffic for the Tesla to pull out, but eventually a car pauses and leaves a little gap and waves me on. The FSD can't see the wave and can't realistically process that this is an opportunity to pull out without being able to read a signal that the other driver is going to wait. The Tesla does not proceed, and the other driver eventually gets frustrated and goes ahead. Ideally, what should FSD have done? Just be cautious and wait for a sufficient gap in traffic? What if the Tesla is not in a terrifically safe position, with its nose partially on the street, blocking the sidewalk for pedestrians, and at that time of day it could take a long time for a big enough gap to come?

- My Tesla with FSD activated is driving very slowly on a narrow downtown street with heavy car traffic and lots of pedestrians along the sidewalks. Taking advantage of a small gap and the slow, creeping traffic, a pedestrian starts to jaywalk across the street in front of the Tesla, but then pauses just before the path of the car, not knowing if the Tesla will stop. The Tesla then pauses, not knowing whether the pedestrian is going to cross. It is a standoff: the pedestrian will not go ahead without the driver's signal to proceed, but the car will not go ahead without the pedestrian's signal that he is going to wait. Under supervised FSD, I disengaged the self-driving, hit the brake and waved the pedestrian on, but how should FSD handle the situation without human supervision and intervention and without the ability to exchange human signals with the pedestrian?
 
@chinney said:
The way that you worded it, I am not sure what you are saying. In relation to the examples I gave, are you saying that FSD is unsafe? And are you also saying, regardless, that human driving is as unsafe or even more unsafe?

My apologies for not being clear. I am saying that I believe the Tesla FSD product is NOT safe enough for anyone to use even with full driver attention. I'm also saying that there are many human drivers that, in my opinion, drive in an unsafe manner and I would not want their examples used to train an FSD product.


@chinney also said:
And my more fundamental question is how should an eventual fully-independent FSD (not requiring or even contemplating driver supervision or intervention) react to the real-life examples I gave.

I think that is exactly the question. Training videos, rules of the road, and sensors are the beginning of an FSD product. Driving is so much more. Your examples, with the additional context, highlight subtle interactions that may not have been considered for FSD. I believe those interactions begin to define a threshold of interaction competency that an FSD product must exhibit before it could be considered for broad use. Some kind of analog to the Turing test should be established to capture required behavioral interactions in many scenarios. Is that the responsibility of the NTSB? AAA? Some other independent organization? It certainly should not be the responsibility of the manufacturers of FSD products.

If videos of your examples were used to train an FSD product, it should observe not only the road and traffic but also the human behaviors that help make driving safe. I know I'm contradicting myself in suggesting that humans be used to improve safety. But if manufacturers expect to deploy FSD products that work for and with humans, such a product must have an understanding of human behavior as well as the rules of its intended environment.
 
My apologies for not being clear. I am saying that I believe the Tesla FSD product is NOT safe enough for anyone to use even with full driver attention. I'm also saying that there are many human drivers that, in my opinion, drive in an unsafe manner and I would not want their examples used to train an FSD product.




I think that is exactly the question. Training videos, rules of the road, and sensors are the beginning of an FSD product. Driving is so much more. Your examples, with the additional context, highlight subtle interactions that may not have been considered for FSD. I believe those interactions begin to define a threshold of interaction competency that an FSD product must exhibit before it could be considered for broad use. Some kind of analog to the Turing test should be established to capture required behavioral interactions in many scenarios. Is that the responsibility of the NTSB? AAA? Some other independent organization? It certainly should not be the responsibility of the manufacturers of FSD products.

If videos of your examples were used to train an FSD product, it should observe not only the road and traffic but also the human behaviors that help make driving safe. I know I'm contradicting myself in suggesting that humans be used to improve safety. But if manufacturers expect to deploy FSD products that work for and with humans, such a product must have an understanding of human behavior as well as the rules of its intended environment.
Thanks. Good points. I essentially agree with you.
 
- My Tesla with FSD activated is turning right from a parking lot onto a very busy street. There are no realistic gaps in the traffic for the Tesla to pull out, but eventually a car pauses and leaves a little gap and waves me on. The FSD can't see the wave and can't realistically process that this is an opportunity to pull out without being able to read a signal that the other driver is going to wait.
Why wouldn't FSD see the wave? It has eight cameras. The beauty of end-to-end NN training is that it will implicitly train itself on subtle signals like this, just as humans do. (As long as these sorts of examples are in the training set, but why wouldn't they be?) Since the training data pulls from real-life video sourced from the fleet, it will inevitably include real-life cases like this.
- My Tesla with FSD activated is driving very slowly on a narrow downtown street with heavy car traffic and lots of pedestrians along the sidewalks. Taking advantage of a small gap and the slow, creeping traffic, a pedestrian starts to jaywalk across the street in front of the Tesla, but then pauses just before the path of the car, not knowing if the Tesla will stop. The Tesla then pauses, not knowing whether the pedestrian is going to cross. It is a standoff: the pedestrian will not go ahead without the driver's signal to proceed, but the car will not go ahead without the pedestrian's signal that he is going to wait. Under supervised FSD, I disengaged the self-driving, hit the brake and waved the pedestrian on, but how should FSD handle the situation without human supervision and intervention and without the ability to exchange human signals with the pedestrian?
How does this work at night, when it's too dark for the pedestrian to see you wave them on? Eventually the pedestrian will move, one way or the other, and the car will follow their lead, again based on extensive real-world training data that includes such scenarios, and the end-to-end network's ability to recognize and respond to very subtle signals.
 
I got a Tesla Model Y all-wheel drive in August 2023 with hardware 4. I traded in a 2021 Model 3 with hardware 3. I had FSD on the 3 and thought it worked passably well. The difference between hardware 3 and hardware 4 is that ultrasonic sensors are included in hardware 3 but not in hardware 4. I used FSD on the Y with the 90-day free trial; it had v11.1 and it really sucked. It stopped in dangerous spots, took stupid turns, and was terrible on lane changes. The Y cannot judge short distances at all. The 3 I had could judge short distances to within a few inches, but the Y can't tell 1 foot from 2 feet. I relied on that judgment on the 3 but can't rely on it on the Y.

I am ineligible for FSD 12 because I have update 2024.8.9. Tesla says the car is as up to date as it's going to be, at least for now. I think the reason for the lack of eligibility is the lack of ultrasonic sensors. I think they made a major error in deleting them, because vision doesn't cut it when you get into close situations like parking and lane changes. I wonder if it will ever cut it. AI is probably not too hot when it comes to situations that require precise deterministic judgments, since it relies so much on probabilities.

I don't think any of us who have 2024.8.9 will ever get FSD 12.x. The lack of proper hardware is too much of a constraint, and Elon is afraid to admit to such a drastic error.
 
Why wouldn't FSD see the wave? It has eight cameras. The beauty of end-to-end NN training is that it will implicitly train itself on subtle signals like this, just as humans do. (As long as these sorts of examples are in the training set, but why wouldn't they be?) Since the training data pulls from real-life video sourced from the fleet, it will inevitably include real-life cases like this.

How does this work at night, when it's too dark for the pedestrian to see you wave them on? Eventually the pedestrian will move, one way or the other, and the car will follow their lead, again based on extensive real-world training data that includes such scenarios, and the end-to-end network's ability to recognize and respond to very subtle signals.
Well, FSD did not see the wave from the driver of the other car, regardless of the number of cameras that it has. And yes, on a well-lit urban street, a pedestrian can see a driver wave them forward even at night. Will FSD eventually be observant enough and sufficiently advanced in its programming to recognize and respond to very subtle human signals such as those in the examples I gave, and to convey appropriate signals that humans will recognize? Maybe, but it is not there yet, and I have doubts about when it will be. I agree with the post from JDOhio above that the ability of FSD to appropriately engage in such interactions begins to define a threshold of interaction competency that an FSD product must exhibit before it could be considered for broad, unsupervised use.
 
I got a Tesla Model Y all-wheel drive in August 2023 with hardware 4. I traded in a 2021 Model 3 with hardware 3. I had FSD on the 3 and thought it worked passably well. The difference between hardware 3 and hardware 4 is that ultrasonic sensors are included in hardware 3 but not in hardware 4. I used FSD on the Y with the 90-day free trial; it had v11.1 and it really sucked. It stopped in dangerous spots, took stupid turns, and was terrible on lane changes. The Y cannot judge short distances at all. The 3 I had could judge short distances to within a few inches, but the Y can't tell 1 foot from 2 feet. I relied on that judgment on the 3 but can't rely on it on the Y.

I am ineligible for FSD 12 because I have update 2024.8.9. Tesla says the car is as up to date as it's going to be, at least for now. I think the reason for the lack of eligibility is the lack of ultrasonic sensors. I think they made a major error in deleting them, because vision doesn't cut it when you get into close situations like parking and lane changes. I wonder if it will ever cut it. AI is probably not too hot when it comes to situations that require precise deterministic judgments, since it relies so much on probabilities.

I don't think any of us who have 2024.8.9 will ever get FSD 12.x. The lack of proper hardware is too much of a constraint, and Elon is afraid to admit to such a drastic error.
HW3 and USS are not tied together. I have HW3 w/o USS.
 
I noticed today that FSD had a hard time threading the needle between cars and pedestrians while making a left-hand turn against oncoming traffic. It failed to go when it should have, and we sat through a light. Then I had to take control as it was about to fail again (meaning I'd have been there for two cycles of the light). I'm sure the people behind me were peeved.
 
I think the reason for the lack of eligibility is the lack of ultrasonic sensors.
It's not. I have two cars with USS and Radar and they don't have FSD either. It's just Tesla's ineptitude and Elon's lies that are the reason you don't have it now. Plenty of cars with your configuration do have FSD running. As the sycophants will tell you, it's both a good thing you don't have FSD right now because Tesla is spreading out the release and it's also a "minor software snag" that they are fixing remarkably quick.
 
It's not. I have two cars with USS and Radar and they don't have FSD either. It's just Tesla's ineptitude and Elon's lies that are the reason you don't have it now. Plenty of cars with your configuration do have FSD running. As the sycophants will tell you, it's both a good thing you don't have FSD right now because Tesla is spreading out the release and it's also a "minor software snag" that they are fixing remarkably quick.
Tesla's inept, but you have yet to tell me who is actually... uhh... ept?
 
Well, FSD did not see the wave from the driver of the other car, regardless of the number of cameras that it has. And yes, on a well-lit urban street, a pedestrian can see a driver wave them forward even at night. Will FSD eventually be observant enough and sufficiently advanced in its programming to recognize and respond to very subtle human signals such as those in the examples I gave, and to convey appropriate signals that humans will recognize? Maybe, but it is not there yet, and I have doubts about when it will be. I agree with the post from JDOhio above that the ability of FSD to appropriately engage in such interactions begins to define a threshold of interaction competency that an FSD product must exhibit before it could be considered for broad, unsupervised use.
Pedantically, FSD did see it, but didn't have enough relevant examples in its training set to know what the correct thing was (if anything) to do with that information. The whole point (and beauty) of v12-style end-to-end neural networks is that there is no intermediate hand-engineered "programming" layer, where a Tesla engineer may have made an explicit decision about what to do with hand-waving signals from other drivers. If there are enough examples in the training set that involve other drivers waving (or not), and the car responding appropriately, the network will learn the relevance and correct behavior implicitly.
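To make the "no hand-engineered layer" point concrete, here's a toy behavioral-cloning sketch in PyTorch. It's purely illustrative (the architecture, sizes, and data are invented, and it's nothing like Tesla's actual stack), but it shows the structural point: one network maps camera frames straight to controls, and nothing in the code says anything about waves or right-of-way, so any such behavior can only come from the training pairs.

```python
# Toy end-to-end behavioral cloning: frames in, controls out.
# Illustrative only -- not Tesla's architecture or training setup.
import torch
import torch.nn as nn

class TinyDrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny vision backbone: 3-channel frame -> 32-dim feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head: features -> [steering, accel/brake]. No rule-based layer in between.
        self.head = nn.Linear(32, 2)

    def forward(self, frames):  # frames: (batch, 3, H, W)
        return self.head(self.encoder(frames))

def train_step(policy, optimizer, frames, human_controls):
    """One behavioral-cloning step: imitate what the human driver did."""
    loss = nn.functional.mse_loss(policy(frames), human_controls)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in data; a real pipeline would use fleet video clips and logged controls.
policy = TinyDrivingPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
frames = torch.randn(8, 3, 96, 96)   # pretend camera frames
controls = torch.randn(8, 2)         # pretend human steering/accel labels
print(train_step(policy, opt, frames, controls))
```

If clips where a driver responds to a wave are in the training set, the network can pick that up implicitly; if they aren't, it can't, which is the whole argument above.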

I agree with you that v12 doesn't currently seem to be well-enough trained on hand signals to know what to do with them yet. The purpose of Tesla wanting to gather billions of training examples from the fleet is that as the training set grows, it will invariably include such examples, without requiring anyone at Tesla to have to explicitly select or look for them. This is particularly critical for the "long tail" of edge cases, which is far too long and varied to be manually curate-able. But it will eventually allow FSD to respond appropriately to truly oddball situations that only come up once in a million miles.
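As a back-of-the-envelope illustration of why fleet scale matters for that long tail (all numbers below are made up for the example, not Tesla figures):

```python
# Toy arithmetic: how long until a once-in-a-million-miles scenario shows up
# often enough to train on? All numbers are invented for illustration.
rare_rate_per_mile = 1 / 1_000_000     # assumed frequency of a "wave-on" standoff
fleet_miles_per_day = 30_000_000       # assumed daily fleet mileage

clips_per_day = rare_rate_per_mile * fleet_miles_per_day
days_to_10k_clips = 10_000 / clips_per_day
print(f"~{clips_per_day:.0f} candidate clips/day; ~{days_to_10k_clips:.0f} days to gather 10,000")
```

A single test driver would essentially never see enough of these; a large fleet sees them every day, which is the argument for fleet-sourced training data.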

One major limitation, given Tesla's current approach as I understand it, is if the appropriate action is contextually dependent on a broader time horizon. For instance, a few weeks ago I was in stop-and-go traffic on an interstate due to road construction ahead. Two lanes were going my direction; periodically one lane would move a bit, then the other. FSD (with its short time horizon) kept interpreting the stopped car in front of me as an "obstacle", and kept trying to dart into the other lane. "Understanding" the broader context in this situation would be necessary for the car to realize that staying in its own lane is the right thing to do, and that the car in front isn't an "obstacle". Perhaps Tesla can overcome this by finding a way to include much longer clips (10 minutes, say) into its training data. Understanding such broader context, over a wide range of timescales (from seconds to years or even decades), is an essential part of general intelligence, I think.
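One conceptual way the horizon could be widened (my speculation only, not a description of how v12 actually works) is to hand the policy a sequence of per-second features covering minutes of history rather than a short clip, e.g. through a recurrent layer, so "both lanes have been alternately creeping for ten minutes" is available when deciding whether a lane change is worth anything:

```python
# Speculative sketch: a policy that conditions on minutes of history.
# Not Tesla's design; sizes and inputs are invented.
import torch
import torch.nn as nn

class LongHorizonPolicy(nn.Module):
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # [steering, accel/brake]

    def forward(self, feature_seq):        # (batch, T, feat_dim); T can span minutes
        _, last_hidden = self.rnn(feature_seq)
        return self.head(last_hidden[-1])

policy = LongHorizonPolicy()
ten_minutes = torch.randn(1, 600, 32)      # one feature vector per second for 10 min
print(policy(ten_minutes).shape)           # torch.Size([1, 2])
```

The training-data problem is the same either way: the clips themselves would have to be long enough to contain the relevant context.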
 
The whole waving people on shouldn’t be a thing anyways. If you have the right of way then just take it and go.

This is how accidents happen. For example, a car wants to turn left across two lanes of traffic. The car in lane 1 decides to “be nice” and slows down or stops and waves them on, even though it has the right of way. The car in lane 2 keeps going because its driver doesn't see the car waiting to turn. The turning car doesn't see the car in lane 2 because the car in lane 1 is blocking the view.
 
The whole waving people on shouldn’t be a thing anyways. If you have the right of way then just take it and go.
Yup. Maybe FSDS should have a CAPTCHA routine to select all the squares that contain a hand gesture.
 
Pedantically, FSD did see it, but didn't have enough relevant examples in its training set to know what the correct thing was (if anything) to do with that information. The whole point (and beauty) of v12-style end-to-end neural networks is that there is no intermediate hand-engineered "programming" layer, where a Tesla engineer may have made an explicit decision about what to do with hand-waving signals from other drivers. If there are enough examples in the training set that involve other drivers waving (or not), and the car responding appropriately, the network will learn the relevance and correct behavior implicitly.

I agree with you that v12 doesn't currently seem to be well-enough trained on hand signals to know what to do with them yet. The purpose of Tesla wanting to gather billions of training examples from the fleet is that as the training set grows, it will invariably include such examples, without requiring anyone at Tesla to have to explicitly select or look for them. This is particularly critical for the "long tail" of edge cases, which is far too long and varied to be manually curate-able. But it will eventually allow FSD to respond appropriately to truly oddball situations that only come up once in a million miles.

One major limitation, given Tesla's current approach as I understand it, is if the appropriate action is contextually dependent on a broader time horizon. For instance, a few weeks ago I was in stop-and-go traffic on an interstate due to road construction ahead. Two lanes were going my direction; periodically one lane would move a bit, then the other. FSD (with its short time horizon) kept interpreting the stopped car in front of me as an "obstacle", and kept trying to dart into the other lane. "Understanding" the broader context in this situation would be necessary for the car to realize that staying in its own lane is the right thing to do, and that the car in front isn't an "obstacle". Perhaps Tesla can overcome this by finding a way to include much longer clips (10 minutes, say) into its training data. Understanding such broader context, over a wide range of timescales (from seconds to years or even decades), is an essential part of general intelligence, I think.

I agree with you that autonomous driving has a long tail. The stop-and-go is another example in a very long list of examples which must inform the broader context. An FSD product may have to learn to use resources outside of its sensors to discover this context.

In your example, the context is road construction that causes traffic to slow and to back up. As a human, I could use Google maps to broaden my context and see the construction marked on the map and highlighted in red to indicate slower traffic. Could the Tesla FSD determine that context from just the multiple cars around it and the stop-and-go nature of the traffic? Today, I don't believe it can (in my experience, it doesn't even interpret dynamic speed zone signs that are used near construction sites). If it had the right context, would FSD want to change lanes less often if at all? As you said, understanding that context would lead to "...the right thing to do".

My sense is that the Tesla FSD context is limited to understanding where it is and where it needs to go based on what the driver requested. Its goal, I believe, is to determine the fastest method possible to complete the trip. A broader context must include not only an ability to understand the "why" of a present situation, but to interact with humans as well. A long tail, indeed.
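To put the "broader context" idea in the simplest possible terms, here's a deliberately dumbed-down sketch (the inputs, names, and threshold are all invented, and a real system would presumably learn this from data rather than hard-code it): a route-level signal such as "construction bottleneck ahead" overrides the locally tempting lane change.

```python
# Toy illustration of local perception vs. route-level context.
# Invented inputs; a learned system would absorb this from data, not a rule.
def should_consider_lane_change(gap_in_other_lane_s: float,
                                construction_bottleneck_ahead: bool) -> bool:
    # Locally, a stopped car ahead and a moving neighbor lane look attractive...
    locally_attractive = gap_in_other_lane_s > 3.0
    # ...but route-level context says both lanes feed the same bottleneck,
    # so darting back and forth gains nothing.
    if construction_bottleneck_ahead:
        return False
    return locally_attractive

print(should_consider_lane_change(5.0, construction_bottleneck_ahead=True))   # False
print(should_consider_lane_change(5.0, construction_bottleneck_ahead=False))  # True
```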
 
For those of you that believe Tesla is on a path with FSD to use machine learning / AI to detect a hand wave and understand the context, can you explain how Tesla's current path learns this, and why using FSD is better than not?

If I wanted to learn how people drive, I would watch videos of humans driving. I would not build a crappy self-driver and then try to learn only from the moments when a human overrides that crappy driver. This is the same reason we don't train AI models on the output of other AI models.

So if what Tesla needs is data (not revenue or hype), why aren't they better off putting data collection on the 90% of cars without FSD and uploading how those people drive, instead of collecting data only from people who have FSD and are actively using it? How do you even propose that the machine learning ever gets a chance to observe a hand gesture in this case?
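For what it's worth, the distinction being drawn here boils down to which slices of driving you train on. A rough sketch with invented field names:

```python
# Toy contrast between two training-data filters. Field names are invented.
from dataclasses import dataclass

@dataclass
class Clip:
    frames: list          # camera frames (stand-in)
    human_controls: list  # what the human actually did
    fsd_engaged: bool
    human_override: bool

def human_driving_dataset(clips):
    """All ordinary human driving -- available from any camera-equipped car."""
    return [c for c in clips if not c.fsd_engaged]

def intervention_dataset(clips):
    """Only moments where a human corrected FSD -- a much narrower slice."""
    return [c for c in clips if c.fsd_engaged and c.human_override]

clips = [
    Clip([], [], fsd_engaged=False, human_override=False),  # normal manual driving
    Clip([], [], fsd_engaged=True,  human_override=True),   # driver took over from FSD
]
print(len(human_driving_dataset(clips)), len(intervention_dataset(clips)))  # 1 1
```

The first set can include hand waves, standoffs, and everything else humans do; the second only captures human behavior at the moments the automation was overridden, which is the asymmetry the question is pointing at.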