If they're going to be stingy about this rollout, Tesla should just send an opt-in e-mail to testers who have used FSD Beta for at least 2 years, asking if they want to be on the "cutting edge" with FSD V12.

But everything I've seen has shown FSD V12 to already be safer than any version of FSD V11 I've used in Southern California for sure. Just roll it out already.
 
I suspect the few videos of v12 making unprotected left turns right in front of oncoming cars (the biggest danger we've seen) are the main thing contributing to the pause. They may be adding a C++ overriding rule to disallow unprotected left turns below a minimum oncoming-traffic spacing, or perhaps retraining with more videos of this scenario.

Although having said that, my HW4 X on 11.4.9 tries to pull out in front of oncoming traffic all the time so it can’t be much worse than what we have now…
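
For what it's worth, a hand-coded override on top of a learned planner could be as simple as a gate that refuses to commit to the turn below a hard gap threshold. The sketch below is purely illustrative: PlannerOutput, apply_safety_override, and the 4-second gap are assumptions on my part, not anything Tesla has described.

```python
# Hypothetical sketch of a hand-coded safety override layered on top of a
# learned planner. All names and the gap threshold are illustrative, not
# Tesla's actual implementation.

from dataclasses import dataclass

@dataclass
class PlannerOutput:
    maneuver: str   # e.g. "unprotected_left", "go_straight"
    proceed: bool   # does the NN want to commit to the maneuver now?

MIN_ONCOMING_GAP_S = 4.0  # assumed minimum time gap to oncoming traffic, seconds

def apply_safety_override(nn_plan: PlannerOutput, oncoming_gap_s: float) -> PlannerOutput:
    """Veto an unprotected left if the oncoming gap is below the hard threshold."""
    if (nn_plan.maneuver == "unprotected_left"
            and nn_plan.proceed
            and oncoming_gap_s < MIN_ONCOMING_GAP_S):
        # Keep the maneuver but hold position until a larger gap appears.
        return PlannerOutput(maneuver=nn_plan.maneuver, proceed=False)
    return nn_plan

# Example: the NN wants to go, but the gap is only 2.5 s, so the override holds the car.
print(apply_safety_override(PlannerOutput("unprotected_left", True), 2.5))
```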
 
At some point the mechanics of driving safely will need to include a component of human comfort. AI will need to not only view the outside world but also have millions of miles of cabin data to solve the human comfort component.

What acts of driving make humans uncomfortable (the squirm factor)? Identify the offenders and invent a method to reduce human anxiety as much as possible, either by changing the car's actions or by otherwise mitigating the discomfort. This should be interesting to work on and should greatly increase acceptance.
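
One way to make the "squirm factor" concrete would be a discomfort score computed from logged kinematics. The toy sketch below uses longitudinal jerk and lateral acceleration as proxies; the function name and the 2 m/s³ and 3 m/s² thresholds are illustrative assumptions, not established comfort standards.

```python
# Toy "squirm factor" metric from logged kinematics: penalize high jerk and
# lateral acceleration, common proxies for passenger discomfort.

import numpy as np

def squirm_factor(speed_mps: np.ndarray, lat_accel: np.ndarray, dt: float = 0.1) -> float:
    """Return a scalar discomfort score for one drive segment."""
    accel = np.gradient(speed_mps, dt)   # longitudinal acceleration
    jerk = np.gradient(accel, dt)        # rate of change of acceleration
    jerk_penalty = np.mean(np.clip(np.abs(jerk) - 2.0, 0, None))      # > 2 m/s^3 feels abrupt (assumed)
    lat_penalty = np.mean(np.clip(np.abs(lat_accel) - 3.0, 0, None))  # > 3 m/s^2 feels harsh (assumed)
    return float(jerk_penalty + lat_penalty)

# A smooth segment scores near zero; hard braking plus a sharp swerve raises the score.
n = 100
smooth = squirm_factor(np.linspace(10, 15, n), np.zeros(n))
harsh = squirm_factor(np.concatenate([np.full(50, 15.0), np.full(50, 5.0)]), np.full(n, 4.0))
print(smooth, harsh)
```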
 
I've no idea what this can mean. The NNs do not, at present, learn by themselves. I don't see that happening for decades, and even then it will be very risky and tricky, since it would learn BAD things just as readily as GOOD things.
Let's say V11 has an unknown failure mode. Say a penis comes flying in front of the car and V11 hits it when it should have stopped.

For V12 they add 1B miles of extra training data, none of it specific to that failure mode. A few years later it is discovered that V11 had that unknown failure mode but V12 didn't. How did V12 solve it? By having seen enough data to generalize to the unknown failure mode.
 

I don't think it works that way. V12 can only generalize to what it sees in the training data. So the only way V12 would avoid that failure mode is if a penis flying in front of the car or something very similar was in the training data. V12 cannot generalize to things that are not in the training data.
 
The statement you're asking about touches on a nuanced aspect of how neural networks (NNs) work, particularly in the context of generalization and training data. Let's break down the main points:

  1. Generalization to Training Data: It's generally true that neural networks can only make predictions or recognize patterns based on the data they've been trained on. If a neural network is trained on images of cars and pedestrians to navigate streets, it will learn to recognize and respond to those specific types of objects.
  2. Generalization Beyond Exact Examples: However, the assertion that a neural network cannot generalize to anything not explicitly in the training data is a bit oversimplified. Advanced neural networks, especially those employing techniques like transfer learning, data augmentation, and deep learning architectures, can indeed generalize beyond the exact examples they've seen during training. They do this by learning underlying patterns and features that can be abstractly applied to new, unseen data.
  3. Handling Unseen Scenarios: The example of a "penis flying in front of the car" is quite specific and humorous, but it highlights an important point about edge cases and rare events. While it's unlikely that a neural network trained on typical road scenarios would have seen this specific example, it might still handle the situation appropriately if it has learned to react to unexpected obstacles in general. The network's response would depend on its training: if it has learned to safely stop or avoid unexpected objects, it could potentially deal with unusual or novel obstacles. (A toy sketch below illustrates this generalization point.)
  4. Limitations and Failures: Nonetheless, neural networks can and do fail in unexpected or novel situations not covered by their training data. This is a known challenge in AI and autonomous systems, leading to ongoing research on improving robustness, generalization, and safety in AI models.
In conclusion, while neural networks rely heavily on their training data, advanced models have capabilities for abstract generalization that allow them to handle a range of scenarios not explicitly covered in their training. However, their ability to deal with highly unusual or novel situations is not perfect and depends on the breadth of their training data and the sophistication of their architecture.
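
To make the generalization point concrete, here is a toy sketch (nothing like Tesla's actual pipeline; the features, labels, threshold, and jitter-based augmentation are all assumptions for illustration): a classifier trained to decide "brake vs. continue" from generic features like distance and closing speed still makes the right call on an obstacle combination outside its training range, because the underlying pattern, time to collision, carries over.

```python
# Toy illustration of generalization beyond exact training examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: typical obstacles at 5-50 m, closing at 0.1-15 m/s.
dist = rng.uniform(5, 50, 2000)
speed = rng.uniform(0.1, 15, 2000)
X_train = np.column_stack([dist, speed])
y_train = (dist / speed < 2.5).astype(int)   # label 1 = brake (time to collision < 2.5 s)

# Simple augmentation: jittered copies of the same scenarios.
X_aug = X_train + rng.normal(0, 0.5, X_train.shape)
model = LogisticRegression(max_iter=1000).fit(np.vstack([X_train, X_aug]), np.tile(y_train, 2))

# "Unseen" scenario: an unusually close, fast object outside the training range.
novel = np.array([[3.0, 20.0]])              # 3 m away, closing at 20 m/s
print("brake" if model.predict(novel)[0] == 1 else "continue")
```

The point of the toy is only that generalization rides on learned underlying features, not memorized examples; whether a real driving network has learned the right features for a given edge case is exactly the open question being debated here.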
 

All good points. I think we are talking about "out of distribution generalization". In this talk, Prof. Amnon Shashua talks about it and how it is the "holy grail" for AI.

 
I suspect the few videos of v12 making unprotected left turns right in front of oncoming cars (the biggest danger we've seen) are the main thing contributing to the pause.

As much as I want to experience V12 right now this seems like the right move on Tesla's end.

They may be adding a C++ overriding rule to disallow unprotected left turns below a minimum oncoming traffic spacing, or perhaps retraining with more videos of this scenario.

Definitely possible. The idea of hand-coded rules to define the "outer boundary" of allowable decisions by the AI has always seemed intriguing to me. I'd love to know if that is actually something they're pursuing.

Given an undesirable behavior like this, it seems Tesla has a few options to improve the AI:
- Add additional training samples to the dataset to override/correct the behavior, either from real telemetry or synthetic data
- Remove bad training data that somehow made its way in

Are there any other levers they have to control its behavior? Is there a desired-assertiveness setting that's passed into the AI that they can use to make minor tuning adjustments?

I wonder how they actually curate the training data sets at scale. They probably have tools to analyze clips and tag them based on the type of maneuver? I wonder if they can detect things like training data that strongly deviates from what the V11 planner would do.
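
On that last point, a deviation check against a reference planner could be a simple triage step in curation. The sketch below is hypothetical: max_deviation_m, triage_clip, and the 2 m threshold are guesses for illustration, not anything Tesla has confirmed.

```python
# Hypothetical curation lever: flag candidate training clips whose human
# trajectory deviates strongly from a reference planner (e.g. the V11 planner)
# so they can be reviewed before entering the training set.

import numpy as np

def max_deviation_m(human_xy: np.ndarray, planner_xy: np.ndarray) -> float:
    """Largest pointwise gap (meters) between the human path and the reference path."""
    return float(np.max(np.linalg.norm(human_xy - planner_xy, axis=1)))

def triage_clip(human_xy: np.ndarray, planner_xy: np.ndarray,
                review_threshold_m: float = 2.0) -> str:
    """Auto-accept close matches; send large deviations to human review."""
    dev = max_deviation_m(human_xy, planner_xy)
    return "needs_review" if dev > review_threshold_m else "auto_accept"

# Example: a clip where the human swings ~3 m wide of the reference path gets flagged.
t = np.linspace(0, 1, 50)
planner_path = np.column_stack([t * 100, np.zeros_like(t)])
human_path = np.column_stack([t * 100, 3.0 * np.sin(np.pi * t)])
print(triage_clip(human_path, planner_path))   # -> "needs_review"
```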
 
First drive with 12.2.1 was incredible, I was gushing with joy and excitement

I would say it's 5-10x better than 11.4.9 in terms of smoothness and decision-making

My wife was in the car and she felt it was a huge improvement over 11.4.9 as well

It was doing many many things for the first time, smoothly

Multiple times through the short drive, I kept saying that it's so smart and reading my mind about my concerns

I would go over specifics, but they really don't matter; you guys will experience it soon. It will blow your mind, no joke. There were so many situations during the short drive where it made the most delightful decisions at exactly the right times. For example, a kid on a scooter with family nearby: the kid skated into the road, then backed off and stood at the edge with his sister, and V12 turned right with the kid and sister at the edge. Later, an old lady was crossing the road; V12 started turning unprotected left but slowed smoothly to wait for her to cross.

So incredible, I can't tell you how "smart" of a driver V12 seems when you're in it. You get the feeling that it's thinking and considering things very quickly and confidently. It made a wide left turn and pulled into a driveway, something it was never able to do before.

In another situation, the map was showing it turning out of a driveway and crossing 4 lanes into a backed-up left-turn lane, which was IMPOSSIBLE. I kept saying omg this is impossible, don't even try it. V12 didn't try it, in a very smooth way!! It just drove past and then waited for the reroute.

I've never gotten back from a drive and been shaking with excitement, it's the future, this is the way, WOW!

12.2.1 shortcomings:

1) Speed stuttering with no lead car
2) Parking lot hesitations, can't find a spot, wheel turning back and forth
3) Micro-stuttery creep on unprotected left from stop sign with obstructions on sides
 
A fair amount (since at least August 2023), and with a few forced disengagements. Seems like at a minimum they should not have rolled it out to anyone who had ever had a strike! Seems like a pretty easy and very relevant screening for early releases! It's fine for people who got a strike "inadvertently" due to "hardware issues" to be kept out of the first rollout, since they got a strike.
Hey, wait a minute. I've been testing for 2 1/2 years and have never had a strike.
 
This certainly isn't a problem yet since FSD beta can't even estimate distances or speeds as well as a human. See the recent example of it hitting a stationary car and my bet with @AlanSubie4Life that it will get a single "9" of performance on Chuck's ULT.
If this issue does become relevant then Tesla should still operate within the safety envelope of humans until they remove the need for humans to monitor the system.
But it IS a problem today, since we are already seeing people reacting to the car with the expectation that it should drive "like a human" (whatever that is supposed to mean).
 
Are there any other levers they have to control its behavior? Is there a desired-assertiveness setting that's passed into the AI that they can use to make minor tuning adjustments?

I wonder how they actually curate the training data sets at scale. They probably have tools to analyze clips and tag them based on the type of maneuver? I wonder if they can detect things like training data that strongly deviates from what the V11 planner would do.
A lot of their work is tuning the loss function to better punish unwanted actions and better reward wanted actions, i.e. how much to penalize a slight lateral error, a slight velocity error, a slightly late lane change, etc.
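
As a minimal sketch of that idea (with made-up error terms and weights, not Tesla's actual loss), the per-term weights become the tuning knobs:

```python
# Sketch of an imitation-style loss where each kind of error gets its own
# weight, and "tuning" means adjusting those weights. Terms and values are
# assumptions for illustration only.

import torch

def weighted_driving_loss(pred, target,
                          w_lateral=2.0, w_velocity=1.0, w_lane_delay=0.5):
    """pred/target: dicts of tensors for one batch of planned trajectories."""
    lateral_err = torch.mean((pred["lateral_m"] - target["lateral_m"]) ** 2)
    velocity_err = torch.mean((pred["speed_mps"] - target["speed_mps"]) ** 2)
    # Penalize only *late* lane changes: positive delay relative to the human's timing.
    lane_delay = torch.mean(torch.relu(pred["lane_change_t"] - target["lane_change_t"]))
    return w_lateral * lateral_err + w_velocity * velocity_err + w_lane_delay * lane_delay

# Example batch: small lateral/velocity errors plus one lane change 0.5 s late.
pred = {"lateral_m": torch.tensor([0.1, -0.2, 0.0, 0.3]),
        "speed_mps": torch.tensor([12.0, 11.5, 13.0, 12.2]),
        "lane_change_t": torch.tensor([3.5, 0.0, 0.0, 0.0])}
target = {"lateral_m": torch.zeros(4),
          "speed_mps": torch.tensor([12.0, 12.0, 13.0, 12.0]),
          "lane_change_t": torch.tensor([3.0, 0.0, 0.0, 0.0])}
print(weighted_driving_loss(pred, target).item())
```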