Speed stuttering with no lead car
I'm glad everyone is seeing this. Around here where there isn't much traffic, this is so annoying that I long for v11. Perhaps they only tested in higher traffic areas, because I don't see how they would release it like that.

As I reported today, on a 25 mph road near my house it would slow to 13, speed up to 26, slow to 13, continuously.

I'm sure it will be fixed soon.
 
The statement you're asking about touches on a nuanced aspect of how neural networks (NNs) work, particularly in the context of generalization and training data. Let's break down the main points:

  1. Generalization to Training Data: It's generally true that neural networks can only make predictions or recognize patterns based on the data they've been trained on. If a neural network is trained on images of cars and pedestrians to navigate streets, it will learn to recognize and respond to those specific types of objects.
  2. Generalization Beyond Exact Examples: However, the assertion that a neural network cannot generalize to anything not explicitly in the training data is a bit oversimplified. Advanced neural networks, especially those employing techniques like transfer learning, data augmentation, and deep learning architectures, can indeed generalize beyond the exact examples they've seen during training. They do this by learning underlying patterns and features that can be abstractly applied to new, unseen data (see the sketch after this list).
  3. Handling Unseen Scenarios: The example of a "penis flying in front of the car" is quite specific and humorous, but it highlights an important point about edge cases and rare events. While it's unlikely that a neural network trained on typical road scenarios would have seen this specific example, it might still handle the situation appropriately if it has learned to react to unexpected obstacles in general. The network's response would depend on its training: if it has learned to safely stop or avoid unexpected objects, it could potentially deal with unusual or novel obstacles.
  4. Limitations and Failures: Nonetheless, neural networks can and do fail in unexpected or novel situations not covered by their training data. This is a known challenge in AI and autonomous systems, leading to ongoing research on improving robustness, generalization, and safety in AI models.
In conclusion, while neural networks rely heavily on their training data, advanced models have capabilities for abstract generalization that allow them to handle a range of scenarios not explicitly covered in their training. However, their ability to deal with highly unusual or novel situations is not perfect and depends on the breadth of their training data and the sophistication of their architecture.
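
To make point 2 concrete, here is a minimal data-augmentation sketch in Python (torchvision; purely illustrative, nothing to do with Tesla's actual pipeline):

```python
# Illustrative only: random crops, flips, and color jitter expose the
# network to variations it never saw verbatim, which is one mechanism
# behind generalization beyond the exact training examples.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),                     # vary framing and scale
    transforms.RandomHorizontalFlip(),                     # mirror scenes
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # vary lighting
    transforms.ToTensor(),
])
# Each epoch the same training image yields a different tensor, so the
# network learns features that survive these perturbations instead of
# memorizing exact pixels.
```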

Was wondering why this was weirdly written.

What the AI message is trying to get across, simply, is that the NN has been trained on many “things” moving toward the car’s cameras, so it knows what to do when that is happening. If something unknown moves toward the car, it’s going to recognize it as in the family of “things” and take an appropriate braking action.
 
What the AI message is trying to get across, simply, is that the NN has been trained on many “things” moving toward the car’s cameras, so it knows what to do when that is happening. If something unknown moves toward the car, it’s going to recognize it as in the family of “things” and take an appropriate braking action.
Just like ChatGPT generalized to answer questions about cars seeing flying penises that were not in its training set. That’s what neural networks do if you give them a large and diverse dataset…
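
For anyone curious what "recognize it as in the family of 'things'" could look like mechanically, here is a toy sketch (the class names, scores, and threshold are all made up for illustration, not Tesla's code):

```python
# A classifier scores an object against known categories; anything it
# can't confidently name still gets treated as a generic obstacle.
import numpy as np

KNOWN_CLASSES = ["car", "pedestrian", "cyclist", "debris"]

def react(class_scores: np.ndarray, threshold: float = 0.6) -> str:
    """Pick a response; low-confidence detections fall back to braking."""
    probs = np.exp(class_scores) / np.exp(class_scores).sum()  # softmax
    if probs.max() < threshold:
        return "unknown object in path -> brake"  # generic "thing" response
    return f"recognized {KNOWN_CLASSES[int(probs.argmax())]} -> tracked response"

print(react(np.array([0.2, 0.1, 0.0, 0.3])))  # nothing confident -> brake
```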
 
In effect, the ADAS needs to be "de-tuned" so that it simulates a human driver even in circumstances when it could do better. Turning left into oncoming traffic comes to mind. The car can do the math to figure out it has plenty of time (and be right), but scare the passengers (and oncoming traffic) half to death since humans need a wider margin of safety.
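
To put rough numbers on "the car can do the math" (all values hypothetical, just to show the shape of the problem):

```python
# Gap acceptance for an unprotected left: the geometry can check out
# while the margin is still far tighter than passengers will tolerate.
ONCOMING_DISTANCE_M = 80.0   # distance to the oncoming car
ONCOMING_SPEED_MPS = 20.0    # ~45 mph
TIME_TO_CLEAR_S = 3.5        # time the ego car needs to finish the turn
HUMAN_COMFORT_MARGIN_S = 2.0 # extra buffer humans expect

time_to_arrival = ONCOMING_DISTANCE_M / ONCOMING_SPEED_MPS  # 4.0 s

machine_ok = time_to_arrival > TIME_TO_CLEAR_S                           # True, 0.5 s spare
human_ok = time_to_arrival > TIME_TO_CLEAR_S + HUMAN_COMFORT_MARGIN_S    # False
print(f"geometrically safe: {machine_ok}, comfortable for humans: {human_ok}")
```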

But it IS a problem today, since we are already seeing people reacting to the car with the expectation that it should drive "like a human" (whatever that is supposed to mean).

I think (not sure) @Daniel in SD was suggesting that there's no reason to "detune" the ADAS, because it doesn't have the capability yet to require detuning (it's inferior to human distance and velocity estimation)?

If anything, the car needs to keep greater space and drive more defensively than a human, to accommodate its inferior capabilities. Users should definitely have the expectation that it should drive like a very defensive human: lots of margin, so that the human can recover. I see many examples of FSD (v11) not doing so. That's a problem.


Hey wait a minute. Testing for 2 1/2 years and have never had a strike.

Yep, you would have been a good candidate. It just seems like a very easy bar for such a dangerous sort of release. It's really odd that for v12 they suddenly went to completely unvetted individuals after so many years of using the early access program. I would be all for getting rid of that group and going to a group based on strike count, or based on amount of nagging and cabin camera telemetry previously detected for that user (if this data is tracked). Not sure what the hangup is for grading people based on how Tesla wants the system used.

I guess I'm also ok with random...but presumably they expected a higher chance of accidents with that strategy.

Seems odd that the car that was hit didn't move at all in the video, and the driver doesn't seem to react.
I suspect there's more to the story than we're getting.

Huh? Were you watching the driver door? And the driver turns his head. What is odd? This was a small accident, just a few thousand dollars of damage. There's not some massive hit. Things just rock a bit and then all is well. Just good that there was not a human standing there (maybe it would have been more cautious, maybe not, if it couldn't see the human).

Why would there be any more to the story? It seems completely conceivable that FSD can get in minor accidents. It's not like we haven't seen similar before. Could certainly all be a big hoax but I don't think it really matters.
 
What are the current "imagined" safety issues?
How can anyone know? Most of the time, what do we do when the car does something outside our comfort zone? We disengage FSD and take over. It's a natural (and correct) reaction, but it's hard to know whether what we thought was a mistake by the car was indeed that, or the car doing something it (correctly) judged as safe.

My comments anyway were NOT about the current state of the software, but the general challenges facing any ADAS maker as the car reaches a certain level of competence.
 
but it's hard to know whether what we thought was a mistake by the car was indeed that, or the car doing something it (correctly) judged as safe.
Fair enough that you weren't talking about the current state of things. That was hard to tell with the follow-on comments.

But anyway, I would argue that nearly every time a competent human disengages due to something outside the comfort zone, that was a mistake by the car: it was deviating from prudent behavior. I don't think these types of systems should routinely engage in behaviors that reduce margins, since that makes it too difficult for the human to recover.

Maybe in the future we'll see capabilities that allow for different behavior, but I'm still not sure it's going to be ok with humans.
 
(it's inferior to human distance and velocity estimation)?
Any source that it's inferior to human distance and velocity estimation? I for one will admit that I struggle with estimating velocities and distances; I'm not always sure whether the car in front of me is 24 m or 34 m away, or whether its speed is +1 m/s or +3 m/s relative to me.

Do you think you can beat that output?
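
For reference, this is the kind of monocular estimate being argued about; a toy pinhole-camera model with made-up numbers, not Tesla's actual perception stack:

```python
# Pinhole model: distance = focal_length * real_width / pixel_width.
FOCAL_LENGTH_PX = 1000.0   # assumed camera focal length in pixels
CAR_WIDTH_M = 1.8          # assumed real-world width of a typical car

def distance_from_width(pixel_width: float) -> float:
    """Estimate distance to a car from its apparent width in pixels."""
    return FOCAL_LENGTH_PX * CAR_WIDTH_M / pixel_width

# Lead car measured at 75 px wide in one frame, 73 px one second later:
d1 = distance_from_width(75.0)   # ~24.0 m
d2 = distance_from_width(73.0)   # ~24.7 m
relative_speed = d2 - d1         # ~ +0.7 m/s over 1 s (pulling away)
print(f"{d1:.1f} m -> {d2:.1f} m, relative speed ~ {relative_speed:+.1f} m/s")
```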
 
Dumb questions, no snark. If end-to-end is a black-box NN, how does the engineer or PM or whoever has to sign off on it ever really validate behavior?

Re: crossing double yellow and trying to pass a line of cars at a stop sign, almost causing a head on collision (above):

1. Let's say somehow this behavior is "trained out," based on, IDK, continued testing. How can anyone be sure it hasn't just been reduced enough that it's OK in the tests run, but there's still a latent condition in the NN that would trigger the behavior again when, IDK, the angle of the sun is different from all other times and some latent set of nodes gets happy and, boom, it does it again?

2. Why should I not be terrified to think that there could be 1000 latent conditions lurking in the NN, just waiting for that lucky day when the car drives me off the cliff? Really, no one can probe the NN for the exact function of every combination of inputs and outputs? How can anyone validate anything? What fundamental understanding am I missing?

I expect everything is percentages, and you may never reach 100%. So yes, there could be a situation that the car interprets wrongly and causes an accident, but the odds, hopefully, would be dramatically less than the odds of human error causing it. It hasn't reached that level yet, though.
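
A back-of-envelope way to see why "everything is percentages": with zero failures observed in N test scenarios, the classic rule of three bounds the true failure rate. Illustrative math only, not Tesla's validation process:

```python
# If you observe zero failures in n independent trials, then with
# confidence c the true failure rate p satisfies (1-p)^n >= 1-c,
# i.e. p <= 1 - (1-c)^(1/n), which is roughly 3/n for c = 0.95.
def failure_rate_upper_bound(n_trials: int, confidence: float = 0.95) -> float:
    """95% upper confidence bound on failure probability after 0 failures."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} clean trials -> failure rate < {failure_rate_upper_bound(n):.2e}")
# Human drivers have roughly one fatal crash per ~10^8 miles, so
# "proving" that level of safety by testing alone takes astronomically
# many clean trials; testing can only bound a black box, not probe it.
```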
 
Any source that it's inferior to human distance and velocity estimation? I for one will admit that I struggle with estimating velocities and distances; I'm not always sure whether the car in front of me is 24 m or 34 m away, or whether its speed is +1 m/s or +3 m/s relative to me.

Do you think you can beat that output?
It just ran into a parked car!
In terms of longer distances, Andrej is only showing a single video clip; the distance is only 30 m and there are no statistics about how well it actually works. It's also a very simple scenario where the cars are moving in the same direction. Watch Chuck's ULT videos, which have much longer distances. He still hasn't managed to get nine successful turns in ten attempts (one "9" in the "march of nines"). I win a beer if V12 does it, so I'm optimistic.
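
For fun, the binomial math behind the "march of nines" bet (pure probability, no Tesla data):

```python
# Even a system with a true 85% success rate will often fluke 9-in-10,
# and a true-90% system frequently won't, so ten attempts is a noisy test.
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(at least k successes in n Bernoulli(p) trials)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.70, 0.85, 0.90, 0.95):
    print(f"true success rate {p:.0%}: P(>=9 of 10) = {prob_at_least(9, 10, p):.2f}")
# ~0.15, ~0.54, ~0.74, ~0.91 respectively.
```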
 
First drive with 12.2.1 was incredible, I was gushing with joy and excitement

I would say it's 5-10x better than 11.4.9 in terms of smoothness and decision-making

My wife was in the car and she felt it was a huge improvement over 11.4.9 as well

It was doing many many things for the first time, smoothly

Multiple times through the short drive, I kept saying that it's so smart and reading my mind about my concerns

I would go over specifics, but they really don't matter; you guys will experience it soon. It will blow your mind, no joke. There were so many situations during the short drive where it made the most delightful decisions at exactly the right times. For example: a kid on a scooter, family nearby; the kid skated into the road, then backed off and stood at the edge with his sister, and V12 made the right turn with the kid and sister at the edge. Later, an old lady was crossing the road; V12 started an unprotected left turn but slowed smoothly to wait for her to cross.

So incredible, I can't tell you how "smart" of a driver V12 seems when you're in it. You get the feeling that it's thinking and considering things very quickly and confidently. It made a wide left turn and pulled into a driveway that it was never able to do before.

In another situation, the map was routing it to turn out of a driveway and cross 4 lanes into a backed-up left-turn lane, which was IMPOSSIBLE. I kept saying, omg, this is impossible, don't even try it. And V12 didn't try it, in a very smooth way!! It just drove past and then waited for the reroute.

I've never gotten back from a drive and been shaking with excitement, it's the future, this is the way, WOW!

12.2.1 shortcomings:

1) Speed stuttering with no lead car
2) Parking lot hesitations, can't find a spot, wheel turning back and forth
3) Micro-stuttery creep on unprotected left from stop sign with obstructions on sides

V12 is the real deal, period. Got FSD back in 2020 or 2021, don't even remember, but it was October. 11.4.9 or any other earlier version doesn't make any difference to me, and don't believe those YouTubers. Even with 11.4.9 I can only use it for 30 seconds before I have to disengage; it will block the car behind you that wants to turn right when you are in the right lane, and it will make crazy sudden maneuvers. I can only use it on the highway.

V12 feels like you have your own Uber driver. The only issue is it doesn't drive at your set speed: if you set it at 50 mph it will drive at 40 mph, speeding up and slowing down and so on. But it's a very early beta and I think they will fix it soon.

Also, when it arrives at the destination it will try to find a parking spot and park itself. Not perfect in the spot, but it's doing it. I think reverse summon is coming.

Also, when you arrive home it will pull into your driveway. Feature complete is near with V12.
 
Do you think you can beat that output?
Yes, as applied to the driving task. The numbers don’t matter, of course.

As was mentioned, real world demonstrations of capability like the unprotected lefts show that humans have superior performance at these tasks requiring distance and velocity estimation, for now.

And even simple things like following a car in front of us: we see every day that we are better at that than FSD (excluding any human errors). Humans are routinely much faster to respond to changes in lead vehicle speeds. That requires excellent distance estimation, closing speed estimation, reaction time, etc. I assume humans will still be better than v12, though I guess we’ll see.
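
To make "faster to respond" concrete, here is a toy time-to-collision calculation showing how reaction delay eats the margin (numbers made up):

```python
# TTC = gap / closing speed, assuming neither car changes speed.
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact if nothing changes; inf if the gap is opening."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

gap = 30.0            # meters to the lead car
closing = 5.0         # m/s faster than the lead car
reaction_delay = 1.0  # every second of delay is a second less to brake

ttc = time_to_collision(gap, closing)   # 6.0 s
usable = ttc - reaction_delay           # 5.0 s left to actually respond
print(f"TTC {ttc:.1f} s, {usable:.1f} s of margin after a {reaction_delay:.1f} s delay")
```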
 
Humans are routinely much faster to respond to changes in lead vehicle speeds. That requires excellent distance estimation, closing speed estimation, reaction time, etc. I assume humans will still be better than v12, though I guess we’ll see.
My experience has been that Autopilot is very fast at responding to changes in lead vehicle speed while driving. I don't agree with the way it decelerates to a complete stop, though. I think it's not the estimation that's the problem; the problem has been the control signals. And now with V12, control seems to be getting a lot better. I think a lot of the issues we see come from the guardrails put on V12 for the first releases: in many situations V12 seems too conservative, which is likely guardrails activating, and sometimes the guardrails seem to flicker on and off.
 
V12 feels like you have your own Uber driver

Yes, V12 really *feels* different. During the drive, I kept talking to it in my head and out loud, like "uh oh there's an old lady there, I hope you stop dude," "did you see that obstructed little kid over there? You gotta slow down"

There were so many times when, in my head, I was anticipating a failure that never happened, and when you have that feeling and it's resolved, it's so delightful, having been using FSD Beta for over 2 years now

I've only had a short drive so far, so maybe this is all a fluke, but that first drive especially had many many situations where V11 would have hesitated and been completely uncomfortable, especially with a passenger
 
As was mentioned, real world demonstrations of capability like the unprotected lefts show that humans have superior performance at these tasks requiring distance and velocity estimation, for now.
Correct, but let's be careful. That doesn't really prove anything about the car's ability to estimate velocity or distance. For all any of us knows, there are other reasons the car has trouble with some of these maneuvers. I'm not arguing the other side here, just pointing out that I don't think any of us can determine much about the car's specific abilities based on its behavior. Only Tesla really knows what the source of these issues is.
 