Could phantom braking be due to the ML training set?

Our new 2023 Model Y has hit the brakes 3 times with TACC on. Once was incredibly hard (caught by the seatbelt); the other 2 times were merely 'hard' (things sliding off the seats). These were truly random events at different points on a 2-lane road with nothing interesting happening (no curves or ups and downs, no rain, ~60 mph).

I've been thinking about how that could happen, and why Tesla is the only car manufacturer where this happens with high frequency (based on my experience with Subaru EyeSight (dual cameras) and a Kia EV (radar with a single camera)). My hypothesis at this point is that it must be due to the ML, in particular the training set: what if the training set includes events that, either correctly or incorrectly, include braking? These events might then cause braking in the real world when camera inputs match the training events 'closely enough'. If this were true, Tesla would need to manually validate their ML training data for correct vs. incorrect braking events and remove the incorrect ones from the TACC training set.

Any thoughts on this hypothesis? I'm not familiar enough with how Tesla applies ML to say one way or the other, and the details of the ML pipeline and algorithmic implementation will certainly matter. It's just that reading and hearing that Tesla uses AI makes me wonder.
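To make the "manually validate the training data" idea concrete, here is a minimal sketch in Python of what flagging suspect braking labels for human review could look like. Every field name and threshold here is hypothetical and has nothing to do with Tesla's actual pipeline; it only illustrates the kind of audit being suggested.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Clip:
    """One recorded driving event used as a training example (hypothetical schema)."""
    clip_id: str
    decel_mps2: float                          # peak deceleration applied by the human driver
    lead_vehicle_distance_m: Optional[float]   # None if no lead vehicle was detected
    speed_mps: float

def braking_label_is_suspect(clip: Clip) -> bool:
    """Flag clips where hard braking was recorded but nothing in the scene
    obviously justified it -- candidates for human review before training."""
    hard_brake = clip.decel_mps2 > 3.0                        # ~0.3 g, noticeably hard
    no_close_lead = (clip.lead_vehicle_distance_m is None
                     or clip.lead_vehicle_distance_m > 4.0 * clip.speed_mps)  # > ~4 s gap
    return hard_brake and no_close_lead

def audit(clips: List[Clip]) -> List[str]:
    """Return IDs of clips that should be re-labeled or dropped from the set."""
    return [c.clip_id for c in clips if braking_label_is_suspect(c)]

# Example: the second clip brakes hard with no lead vehicle -> flagged for review.
clips = [
    Clip("a1", decel_mps2=2.0, lead_vehicle_distance_m=20.0, speed_mps=27.0),
    Clip("b2", decel_mps2=5.5, lead_vehicle_distance_m=None, speed_mps=27.0),
]
print(audit(clips))  # ['b2']
```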
 
  • Like
Reactions: DrChaos
Our new 2023 Model Y has hit the brakes 3 times with TACC on. Once was incredibly hard (caught by the seatbelt); the other 2 times were merely 'hard' (things sliding off the seats). These were truly random events at different points on a 2-lane road with nothing interesting happening (no curves or ups and downs, no rain, ~60 mph).

I've been thinking about how that could happen, and why Tesla is the only car manufacturer where this happens with high frequency (based on my experience with Subaru EyeSight (dual cameras) and a Kia EV (radar with a single camera)). My hypothesis at this point is that it must be due to the ML, in particular the training set: what if the training set includes events that, either correctly or incorrectly, include braking? These events might then cause braking in the real world when camera inputs match the training events 'closely enough'. If this were true, Tesla would need to manually validate their ML training data for correct vs. incorrect braking events and remove the incorrect ones from the TACC training set.

Any thoughts on this hypothesis? I'm not familiar enough with how Tesla applies ML to say one way or the other, and the details of the ML pipeline and algorithmic implementation will certainly matter. It's just that reading and hearing that Tesla uses AI makes me wonder.
Tesla has had "ML" since the introduction of AP1 in 2014, or 9 years so far. According to your theory, that 9-year "ML training set" should keep getting better and better, not worse.

The theory of AI is: if a driver makes enough corrections, it will learn to do the right thing so that no more corrections are needed. Some swear that is what is happening to them.

I think AP1 has fewer phantom brakes than AP2 and later with radar. AP2 and later without radar have more phantom brakes than previous hardware.

I think the problem is the switch to Tesla Vision before its code and its AI were ready.
 
  • Like
Reactions: texas_star_TM3
I think it's bugs. Bugs decreased last fall on our normal drive across the state. No PB all fall and winter, literally zero, over 3,800 miles of mostly AP out of 10k total during that stretch.

Bugs came back this spring, and PB came back.

(sort of joking, but really just reaching for anything)
 
The theory of AI is: if a driver makes enough corrections, it will learn to do the right thing so that no more corrections are needed. Some swear that is what is happening to them.
I'm responding because I think your assertion implies a level of confidence about something that is not presently true on the whole. I sense that the OP already grasps this, but I didn't want to let it lie; he's a new member and new owner, so a clarification may be helpful.

It is not the case that the "theory of AI" for Tesla FSDb machine learning is for the individual car to learn from or respond to driver corrections, at least not at this stage. As the OP implied, training takes place at the mothership, based on the aggregation of data from a huge number of cars, feeding an ever-evolving NN architecture.

Yes, it's true that there is a long-running subtext discussion regarding many users' impressions that the FSD behavior of each release seems to change over a time scale of days or weeks, often getting better but sometimes regressing or simply feeling different from week to week. Suggested explanations include user adaptation, environment or traffic conditions, and the introduction of detailed mapping info that is downloaded in real time for each drive (as revealed by @verygreen here and elsewhere).

Though some, including myself, had wondered if there could also be real-time downloading, or even a preset schedule, of parameter tweaks that would affect performance within a given release, it seems there is no facility in the software to do that.
 
  • Informative
Reactions: tmartin
Our new 2023 Model Y has hit the brakes 3 times with TACC on. Once was incredibly hard (caught by the seatbelt); the other 2 times were merely 'hard' (things sliding off the seats). These were truly random events at different points on a 2-lane road with nothing interesting happening (no curves or ups and downs, no rain, ~60 mph).

I've been thinking about how that could happen, and why Tesla is the only car manufacturer where this happens with high frequency (based on my experience with Subaru EyeSight (dual cameras) and a Kia EV (radar with a single camera)). My hypothesis at this point is that it must be due to the ML, in particular the training set: what if the training set includes events that, either correctly or incorrectly, include braking? These events might then cause braking in the real world when camera inputs match the training events 'closely enough'. If this were true, Tesla would need to manually validate their ML training data for correct vs. incorrect braking events and remove the incorrect ones from the TACC training set.

Any thoughts on this hypothesis? I'm not familiar enough with how Tesla applies ML to say one way or the other, and the details of the ML pipeline and algorithmic implementation will certainly matter. It's just that reading and hearing that Tesla uses AI makes me wonder.
If I understand what you're saying, I think it's certainly true in the general sense that the massive training effort has almost inevitable side effects, or a degree of mis-training. These include (but are not limited to) data labeling errors, insufficient sophistication in the goal-seeking algorithms, and the coding of the goals themselves. It's not just the firmware NN pipeline that is evolving here, it's everything, which makes it very complicated and non-obvious for the Autopilot engineers to solve, not to mention for us as outside observers.

Regarding the comparison to competitors' adaptive cruise and driving assistance systems: I think a core explanation, though maybe not a very satisfying one, is that Tesla's design intent is to respond to a much larger set of possible threats, extracted from the huge ML data set as discussed. It's true that the present feature, and therefore the obvious and legally stated design intent, is so-called L2, but the real underlying goal is a system that can then grow into complete automation. In Tesla's case, TACC is a byproduct of the much more ambitious self-driving goal. I'm not excusing the relatively higher frequency of phantom braking events, but I think it's not hard to understand that the scope of the task being worked is far larger in Tesla's case. Keep in mind also that PB represents a false-positive response to a possible threat, which admittedly can be dangerous too, but it's arguably worse to tolerate a few false negatives, where the system doesn't respond to a real threat.

Tesla could have chosen to engineer and train TACC as a wholly independent code base with the target of higher accuracy against a much more limited set of goals; perhaps that would have satisfied those owners who wish for a dumber but more reliable cruise control option, but it would also create some unavoidable and very noticeable inconsistencies in behavior among the various Autopilot modes, and that would itself be the subject of complaints and criticism.

James Douma is someone who is occasionally the subject of derision here, but whom I find to be very astute, enjoyable, and thought-provoking in various YouTube guest appearances. He has talked about the need to surround the ML-trained execution network with hard-coded "guardrails" to self-intervene against potentially dangerous or unreasonable spurious behaviors from the immature NN-based driver. These guardrails are a necessary evil, especially in the early stages of each major architectural change, and can create discontinuous and unpleasant behavior themselves, where the system essentially argues with itself. It's like when your SO or any other helpful backseat driver blurts out a string of "look out!", "what are you doing?" and other such helpful tips while you're thinking that you've got the situation under control.

The guardrails are awkward and don't merge cleanly because they themselves are hand-coded "Software 1.0" elements, an approach that has already been deemed unsuccessful and probably impossible for implementing a competent generalized self-driving system. The theory is that these guardrails can be opened up or eliminated as the ML system matures, eventually disappearing or perhaps being invoked so rarely that they have no significant effect.
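As a rough illustration of the guardrail idea, here is a minimal sketch, with invented thresholds and function names rather than anything from Tesla's actual code, of a hand-coded veto layer wrapped around whatever deceleration a learned planner proposes:

```python
def apply_guardrails(proposed_decel_mps2: float,
                     speed_mps: float,
                     confirmed_obstacle_ahead: bool) -> float:
    """Hand-coded sanity checks applied after the learned planner proposes a
    deceleration. All limits here are made up purely for illustration."""
    decel = proposed_decel_mps2

    # Guardrail 1: never exceed a hard deceleration limit under normal driving.
    MAX_DECEL = 6.0   # m/s^2, roughly emergency-braking territory
    decel = min(decel, MAX_DECEL)

    # Guardrail 2: if no sensor-confirmed obstacle, cap how hard the NN may brake.
    # This is where the system can appear to "argue with itself".
    if not confirmed_obstacle_ahead:
        decel = min(decel, 1.5)   # allow only a gentle slowdown

    # Guardrail 3: ignore negligible brake requests at highway speed.
    if speed_mps > 25 and decel < 0.3:
        decel = 0.0

    return decel

# The NN asks for a 5 m/s^2 brake with nothing confirmed ahead; the guardrail
# reduces it to a mild 1.5 m/s^2 slowdown.
print(apply_guardrails(5.0, speed_mps=29.0, confirmed_obstacle_ahead=False))  # 1.5
```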
 
  • Informative
Reactions: QUBO and tmartin
I think it's bugs.
100%. It doesn't matter if it's drifting GPS, side roads, mirages, map errors, sun glare, UFOs, or whatever excuse you want to throw at it. It's a serious bug that has been going on for over a year.

This might be the show-stopper for anything more than what we have, at least from a hardware perspective. To date, not only has Tesla not fixed this, I dare say it's getting worse. If they haven't found a way to fix it via software yet, they may not be able to. With AP3 (and TBD with AP4) we will likely have this fault forever. It could still be L3+, but PB isn't going anywhere, IMHO.

I assume this is the kind of bug you are referring to. ;)
 
100%. It doesn't matter if it's drifting GPS, side roads, mirages, map errors, sun glare, UFOs, or whatever excuse you want to throw at it. It's a serious bug that has been going on for over a year.

This might be the show-stopper for anything more than what we have, at least from a hardware perspective. To date, not only has Tesla not fixed this, I dare say it's getting worse. If they haven't found a way to fix it via software yet, they may not be able to. With AP3 (and TBD with AP4) we will likely have this fault forever. It could still be L3+, but PB isn't going anywhere, IMHO.

I assume this is the kind of bug you are referring to. ;)
lol, either one is just as plausible, as black-box as this all is (but it did correlate with about a 100x increase in actual bugs splattered on the front).
 
  • Like
Reactions: KArnold
All uninformed speculation. The short answer is that we don't know why it's happening, but also, despite these phantom braking events occurring more often than on your other cars, the tech that allows TACC to work on your Y is far superior, even if it does make these mistakes. The good news is that your Y's brain is ever evolving, and hopefully the phantom braking eventually goes away.
 
  • Like
Reactions: JHCCAZ
I don't think that word means what you think it means
I know it is. Even if the outcome isn't better than the primitive radar/image processing found on the OP's other cars, that doesn't mean Tesla's approach isn't far superior. It just means it needs more time to develop and mature. It's the difference between a graphing calculator and ChatGPT: true, ChatGPT often can't calculate simple math, but it understands how to use a graphing calculator, so that fact is kind of irrelevant.
 
  • Funny
Reactions: tmartin
I want to know why the car brakes every time a hill is present... Have Teslas been known to drive off cliffs? Was there a rash of Teslas driving off the edge of the planet? There is simply no point in braking before the crest of every hill, other than the car thinking there is no road on the other side of that hill.
 
  • Like
Reactions: texas_star_TM3
I have had my MYP for just over 2 months and have had 2 instances of Casper interrupting: active cruise at 70, then 40 in a few seconds. Luckily there was minimal traffic on I-95 heading into DC at 5 AM. I sent a note to Tesla and they said to recalibrate the cameras. I haven't tried it, but reading through these posts, that may be a moot point.
 
I want to know why the car brakes every time a hill is present... Have Teslas been known to drive off cliffs? Was there a rash of Teslas driving off the edge of the planet?

My car does the opposite, lol. I semi-consistently get uncomfortably and abnormally close following distances to a lead car when cresting hills. I'm not 100% sure it's hill-crest related, but it's happened enough that I think it might be.
 
My car does the opposite, lol. I semi-consistently get uncomfortably and abnormally close following distances to a lead car when cresting hills. I'm not 100% sure it's hill-crest related, but it's happened enough that I think it might be.
I get the braking going over the crests of hills when there's no lead car and no oncoming traffic to tell the cameras that there's road ahead. But I also get the car accelerating off a green through a controlled intersection at the crest of a hill, and it goes faster than I ever would. I think FSDb has yet to be trained for the combination of intersection and crest of hill. For now, it's just an intersection.
There is simply no point in braking before the crest of every hill, other than the car thinking there is no road on the other side of that hill.
When I brought this up, it was suggested that slowing was safer because there might be a car stopped on the other side of the hill. I think that's rubbish logic, because other drivers aren't expecting the slowdown, which makes the behavior more dangerous in the 99.99% case. I think the slowdown is an artifact of the current software. If it were planned, surely the car would slow gradually as it approached the crest, the way it does before a turn. My car slows abruptly by several mph as it reaches the crest, which is not a good move for anyone involved.
 
I have had my MYP for just over 2 months and have had 2 instances of Casper interrupting: active cruise at 70, then 40 in a few seconds. Luckily there was minimal traffic on I-95 heading into DC at 5 AM. I sent a note to Tesla and they said to recalibrate the cameras. I haven't tried it, but reading through these posts, that may be a moot point.
Maybe it's a thing with naming our cars Casper. I don't know for sure. ;):)
 
  • Funny
Reactions: spacecoin
Our new 2023 Model Y has hit the brakes 3 times with TACC on. Once was incredibly hard (caught by the seatbelt); the other 2 times were merely 'hard' (things sliding off the seats). These were truly random events at different points on a 2-lane road with nothing interesting happening (no curves or ups and downs, no rain, ~60 mph).

Straight roads with few cars in front are also susceptible to mirages, which the vision system handles poorly.

I've been thinking about how that could happen, and why Tesla is the only car manufacturer where this happens with high frequency (based on my experience with Subaru EyeSight (dual cameras) and a Kia EV (radar with a single camera)). My hypothesis at this point is that it must be due to the ML, in particular the training set: what if the training set includes events that, either correctly or incorrectly, include braking? These events might then cause braking in the real world when camera inputs match the training events 'closely enough'. If this were true, Tesla would need to manually validate their ML training data for correct vs. incorrect braking events and remove the incorrect ones from the TACC training set.

The old TACC/AP stack doesn't have a neural-network-based driving policy (controlling the car), as far as I know. And its PB is worse than the FSDb stack, particularly on highways. It hasn't been significantly updated in years; all the work is going into FSDb.

Any thoughts on this hypothesis? I'm not familiar enough with how Tesla applies ML to say one way or the other, and the details of the ML pipeline and algorithmic implementation will certainly matter. It's just that reading and hearing that Tesla uses AI makes me wonder.

I believe some of the problem is because of the ambition of the Tesla system, as well as insufficient sensors and pushing out immature software.

The Tesla system is trying to be a step towards autonomous driving, which means it's doing dynamic prediction of all sorts of cut-in hazards (vehicles moving into the lane) and unexpected events, and responding to them with the control system. Other manufacturers' driver-assist systems are intentionally less ambitious; they test and tune them for a more limited set of cases and explicitly avoid phantom braking unless braking is proven necessary, for example by stereo vision and radar. My guess is that they won't brake until they are confident there is a real physical hazard, confirmed by vision and radar/stereo, directly in the lane ahead and close enough to be a certain danger.

Tesla's systems, OTOH, assume things are dangerous unless they're confident they are safe, so there are more false positives. You were on a 2-lane road (meaning the car knows there are oncoming hazards), and it probably saw something temporarily that it did not understand, and random statistical fluctuations made the estimated speed, distance, and trajectory of the "hazard" appear to intersect the ego vehicle's path.
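To caricature the two philosophies, here is a minimal sketch with invented confidence numbers: a "brake only when a hazard is confirmed" policy versus a "brake unless the scene can be confirmed safe" policy. The second naturally produces more false positives (phantom braking) and fewer false negatives.

```python
def conservative_assist_brakes(p_hazard: float) -> bool:
    """Stereo-vision/radar style: brake only when the fused sensors are highly
    confident a real obstacle is in the lane."""
    return p_hazard > 0.9

def ambitious_stack_brakes(p_hazard: float, p_safe: float) -> bool:
    """'Assume dangerous unless confident it's safe': brake whenever the system
    cannot rule the scene out as a hazard."""
    return p_safe < 0.8 or p_hazard > 0.5

# A fleeting shadow or mirage: weak hazard evidence, but the scene can't be
# confidently declared safe either.
p_hazard, p_safe = 0.3, 0.6
print(conservative_assist_brakes(p_hazard))      # False -> no reaction
print(ambitious_stack_brakes(p_hazard, p_safe))  # True  -> phantom brake
```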

The driving system performs much better when it sees other cars in front of it. Its native environment (most miles driven) is heavily trafficked California freeways, like where I live, and the performance is good there.

The FSDb system is now significantly better than the TACC/AP stack on highway travel, as it uses a substantially more sophisticated vision model (2nd-generation 'occupancy networks' that model the presence/absence of objects in a volume regardless of object classification). Once that tech propagates to standard TACC/AP, reliability should improve.
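For anyone unfamiliar with the term, an occupancy network outputs, for each small cell of space around the car, a probability that the cell is physically occupied, without having to classify what the object is. Here is a toy 2D sketch of the idea, with made-up numbers, not Tesla's implementation:

```python
import numpy as np

# Toy 2D occupancy grid: rows = distance ahead, cols = lateral position.
# Values are P(cell is occupied), regardless of what kind of object it is.
occupancy = np.zeros((5, 3))
occupancy[3, 1] = 0.95   # something solid well ahead, directly in the ego lane
occupancy[1, 0] = 0.40   # uncertain return off to the left, closer in

def path_blocked(grid: np.ndarray, ego_lane_col: int, threshold: float = 0.7) -> bool:
    """Planner-side check: is any cell in the ego lane likely occupied?
    No classification (car? debris? pedestrian?) is needed to decide to slow down."""
    return bool((grid[:, ego_lane_col] > threshold).any())

print(path_blocked(occupancy, ego_lane_col=1))   # True -> slow down or plan around
```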

Tesla's machine learning team is working on a 3rd-generation perception approach, a general "video world model" that will do simultaneous perception, object recognition, and prediction, all from video data.
 
  • Like
Reactions: JB47394