This takes me back to one of my original posts here in discussions with @sleepydoc. I had postulated that FSD will be a reality when all cars are able to communicate with each other. In this scenario, if you were driving ahead of me, your computer would have reported that no issues existed, my computer would have acknowledged that, and the car would have continued over the undulating hill with full confidence.
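To make that concrete, here's a minimal sketch of what such a vehicle-to-vehicle "road clear" handshake could look like. It's purely hypothetical: the RoadSegmentReport fields, the accept_report check, and the freshness threshold are all invented for illustration, not anything Tesla or an actual V2V standard specifies.

from dataclasses import dataclass
import time

# Hypothetical V2V "road clear" report from a lead vehicle.
@dataclass
class RoadSegmentReport:
    segment_id: str      # identifier for the stretch of road (e.g. the blind dip)
    reporter_id: str     # anonymized ID of the car ahead
    clear: bool          # the car ahead saw no obstacle through the dip
    timestamp: float     # when the observation was made

def accept_report(report: RoadSegmentReport, max_age_s: float = 5.0) -> bool:
    """Trust a 'road clear' claim only if it is recent."""
    return report.clear and (time.time() - report.timestamp) < max_age_s

# The following car keeps its set speed over the blind dip only while a
# fresh report says the road ahead is clear.
report = RoadSegmentReport("dip_segment_001", "car_ahead_042", True, time.time())
if accept_report(report):
    print("maintain set speed through the dip")
else:
    print("no trusted report: slow down and re-verify visually")

The next reply's worry about bad actors would land exactly here: accept_report would also need to authenticate reporter_id before trusting the claim.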
I’d be worried about bad actors spamming that system with bad data?
 
Here are some examples I captured a few days ago.
They happen in the same circumstance: the road continues, there is a dip where the pavement is invisible to the camera, and the road becomes visible again at a longer distance past the dip. But if the system looks for vanishing lines (as I suspect it does), it sees a discontinuity / mismatch. I bet that's the origin of the problem: it detects what it thinks is contradictory data, and at high speed it's programmed to slow down because it doesn't know what to do until the view resolves into a continuous lane line it can estimate out to the horizon.
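As a rough illustration of the kind of logic being described here (my guess at the behavior, not Tesla's actual code): if the fitted lane line has a large gap and a heading mismatch where it reappears beyond the dip, the planner cuts speed until the fit becomes continuous again. The function names and thresholds below are invented for the sketch.

# Hypothetical sketch of a lane-continuity check triggering a slowdown.
# None of these thresholds or names come from Tesla; they are illustrative.

def lane_is_continuous(near_heading_deg: float,
                       far_heading_deg: float,
                       gap_m: float) -> bool:
    """Treat the lane fit as continuous if the invisible stretch is short and
    the lane heading before and after the dip roughly agree."""
    return gap_m < 10.0 and abs(near_heading_deg - far_heading_deg) < 3.0

def target_speed(set_speed_mph: float, continuous: bool) -> float:
    # When the lane data looks contradictory, shed speed until it resolves.
    return set_speed_mph if continuous else set_speed_mph * 0.8

# Near segment points slightly left, the far segment beyond the dip points
# slightly right, with 25 m of invisible road in between: read as a mismatch.
ok = lane_is_continuous(near_heading_deg=-1.0, far_heading_deg=2.5, gap_m=25.0)
print(target_speed(75.0, ok))   # 60.0: an abrupt ~15 mph cut from the set speed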

I'm not making an excuse for it, but offering a diagnosis. I saw this same effect in YT videos a year ago, and it was often at undulations.

Since it happens because of the road geometry, crowdsourced data could presumably identify locations of consistent failures, obtain better training data to overcome the problem, or mark those road areas so the logic doesn't slow down at such a discontinuity.
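One hedged sketch of what that crowdsourced flagging could look like (entirely hypothetical, not an existing Tesla feature): cluster phantom-braking events by rounded GPS position and treat locations with repeated events as candidates for extra training data or a map-level exception.

from collections import Counter

# Hypothetical crowdsourced phantom-braking event log: (lat, lon) per event.
events = [
    (46.8772, -96.7898), (46.8771, -96.7899), (46.8772, -96.7897),  # same dip
    (44.9778, -93.2650),                                            # one-off
]

def bucket(lat: float, lon: float, precision: int = 3) -> tuple:
    """Group events into roughly 100 m buckets by rounding coordinates."""
    return (round(lat, precision), round(lon, precision))

counts = Counter(bucket(lat, lon) for lat, lon in events)
hotspots = [loc for loc, n in counts.items() if n >= 3]
print(hotspots)   # locations worth better training clips or a logic exception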

It is claimed that the FSD beta code is less reliant on lane lines than the current highway AP code (which is what runs here), so maybe it will be less susceptible when the long-awaited 'merge' happens. That's still a ways off for non-FSD drivers on standard AP. I bet they also didn't fix it earlier because management (i.e. Elon) thought the FSD beta merge was right around the corner, but it obviously wasn't.

Tesla has a remarkably small Autopilot development team (only a few dozen actual developers/ML scientists) for the size of the task they have to support (many regions, and many models if you include all the sensor & computer variants). Elon chronically understaffs and overworks his teams. With Tesla's revenue and tremendous profit margins, they should not be so stingy in a key technology R&D area.
 
Pretty rough, looks like a 15mph speed drop in under 2 seconds, even with you intervening quickly to stop it from dropping further.
That's roughly what my car was doing for quite some time. It was usually about a 10 mph drop before I would catch it, and it might not have braked quite as hard, but pretty close. When I had the gall to complain and criticize Tesla for it, I was told all cars do it and was criticized for being a hater with unreasonable expectations. All I can say to @pcopeland is that it's not acceptable, and we should all expect better of any car.
 
To be fair, from what I hear, everyone else not driving a Tesla has a better TACC, so their car would not bang into his Tesla 🤷🏽‍♂️🤣
To be fair, using radar would *not* trigger such phantom braking, because mirages, dipping roads, and other optical illusions don't fool a radar return. "Vision only" was the worst cost-cutting on Tesla's side, and software-deleting the radar input in older cars was just the icing on the cake.
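For what it's worth, the fusion idea being argued for here can be sketched in a few lines. This is a toy rule of my own, not how Tesla's old radar stack actually worked: let a radar return showing open road veto a vision-triggered slowdown.

from typing import Optional

# Toy fusion rule: radar can veto a slowdown caused by visual ambiguity.
def should_slow(vision_sees_obstacle: bool,
                radar_range_m: Optional[float],
                clear_range_m: float = 150.0) -> bool:
    if radar_range_m is not None and radar_range_m > clear_range_m:
        # Nearest radar return is beyond 150 m: ignore the visual confusion.
        return False
    return vision_sees_obstacle

# A dip confuses the camera, but radar still tracks open road ahead.
print(should_slow(vision_sees_obstacle=True, radar_range_m=200.0))  # False: radar veto
print(should_slow(vision_sees_obstacle=True, radar_range_m=None))   # True: vision-only, no veto available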
 
I've seen other people post that their car seems not to phantom brake when there's a car behind them. That's a difficult hypothesis to test, but it raises other questions if true.

I can’t make any observations on it one way or another.
 
That surely lends weight to my hypothesis that the AI needs an external reference. If so, it definitely appears to be using the vehicle behind it as a reference/indicator that it's doing the right thing.
 
Or rather, if there is no car behind, phantom braking doesn't come with the possibility of causing a rear-end accident. I have long suggested this may be how it's done. Basically, you can err toward allowing more false positives when there is no one behind, given that slowing down then doesn't put you at any additional risk.
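A toy sketch of that trade-off (my reading of the idea, not anything confirmed about Tesla's planner): the confidence needed before braking on an ambiguous detection is lowered when nobody is following, since an unnecessary slowdown then carries little risk.

# Toy model of rear-traffic-adjusted braking; the thresholds are invented.

def should_brake(obstacle_confidence: float, car_behind: bool) -> bool:
    """Brake on an ambiguous detection more readily when nobody is following,
    because a false-positive slowdown then has little downside."""
    threshold = 0.9 if car_behind else 0.5
    return obstacle_confidence >= threshold

print(should_brake(0.6, car_behind=False))  # True: empty road, err on the side of caution
print(should_brake(0.6, car_behind=True))   # False: don't risk causing a rear-ender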
 
I also find little PB on crowded SoCal freeways. It must be frustrating to get PB on wide-open highways with little traffic, where it should be very 'easy' even for a dumb system.

I have the feeling that the PB issue might be an unexpected emergent property of a complex software system, one they haven't been able to engineer out without hurting something else. They're moving more towards neural networks for the driving policy and control system, which are also opaque and incomprehensible from a logical point of view, but whose behavior can be adjusted by changing the weighting of various examples in the dataset and by giving positive and negative reinforcement. The problem is that you need a huge database of correct, well-curated driving behaviors to train against. That's a harder problem than the datasets they have acquired for visual classification (the autolabeler), because it deals with time series and contextual information; visual classification is now essentially a solved technology, requiring only that you follow the published academic literature and then do a great job of it in practice.

But there is very little written in open publications about training driving policies, as it's more commercially focused. And difficult.
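A compressed sketch of the "change the weighting of examples" idea (generic imitation-learning practice, not a description of Tesla's pipeline; the clip tags and weights below are invented): up-weight clips where the correct behavior was to hold speed through a dip, so the policy stops associating that geometry with braking, while keeping real-hazard clips strongly weighted so genuine braking isn't suppressed.

# Generic example-reweighting sketch for an imitation-learning dataset.
# The clip categories and weights are invented; nothing here is from Tesla.

clips = [
    {"tag": "dip_no_brake",    "count": 500},     # human held speed through a dip
    {"tag": "dip_real_hazard", "count": 50},      # genuine obstacle hidden by a dip
    {"tag": "ordinary_cruise", "count": 100000},  # uneventful highway driving
]

# Rare but important clip types get larger sampling weights during training.
weights = {"dip_no_brake": 20.0, "dip_real_hazard": 40.0, "ordinary_cruise": 1.0}

effective = {c["tag"]: c["count"] * weights[c["tag"]] for c in clips}
print(effective)  # relative influence of each clip type on the trained policy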
 
Application programming languages come and go. The business logic remains the same.
 
Just for context, from my experience: if I follow a car there is no PB. All the PBs I had were on wide-open roads with no car in front of me.
That being said, if "vision only" needs a reference point like a car ahead to "latch onto" as a visual reference, and otherwise can get confused, that's not a great selling point, and probably one of the reasons Tesla is moving back to radar for their next-gen cars.
 
All cars suffer from this, not just the vision-only ones.
 
I think this is my biggest pain point, after the fact that my 2018 M3LR has radar that didn't have this problem and it was disabled anyway. I don't like that Tesla disabled the radar and pushed Tesla Vision while it still has these issues, but if I could at least set the car to a non-traffic-aware, "dumb" cruise control, I'd have a temporary remedy while they work on it. Growing up along I-94 in North Dakota, we often joked the road was so flat, straight, and empty that you could set cruise control and take a nap as you drove from one end of the state to the other. It should be the easiest case for self-driving that there is 😆
 
+1

My biggest issue with radar-deleted TACC is the PB events on empty highways... *exactly* where I usually prefer to use TACC, and where I barely had any PB events before my radar unit was deleted.
 
The original solution to cruise control was:

“Keep going till something comes in your way.”

The new AI approach (Vision, Radar, Lidar) is:

“I don’t see anything. Should I keep going?”


I hope this helps you all understand the real problem these new approaches are facing.
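A toy restatement of that contrast in code (my framing of the point above, not any real system): the legacy controller only reacts to a positive detection, while a perception-driven controller effectively needs positive confidence that the road ahead is clear, which is exactly where ambiguity turns into braking.

# Toy contrast between the two philosophies described above; illustrative only.

def legacy_cruise(obstacle_detected: bool) -> str:
    # Old approach: keep going until something positively appears in the way.
    return "brake" if obstacle_detected else "hold speed"

def perception_cruise(clear_confidence: float, required: float = 0.8) -> str:
    # New approach: keep going only while confident the road ahead is clear.
    return "hold speed" if clear_confidence >= required else "slow down"

print(legacy_cruise(False))    # hold speed: nothing detected
print(perception_cruise(0.6))  # slow down: can't confirm the dip ahead is clear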
 