Phantom braking will get a lot worse before it gets better

Here is a damning article describing Uber's decision to aim for a "smooth ride" by suppressing its system's equivalent of the phantom braking that we all hate.

Inside Uber before its self-driving car killed a pedestrian: Sources describe infighting, 'perverse' incentives, and questionable decisions

It's my impression that as the Autopilot software evolves and tries to do more, the phantom braking and general nervousness seem to intensify before they get better. I'm not the only one to report this either, with people noticing that changing from TACC to Autosteer makes the car slow down more, and enabling Navigate on Autopilot makes it slow down even more. I'm also sure it will get worse again when Unassisted Lane Change is released, and worse each time an FSD feature is released. As the onus falls more and more on the vehicle to make safety decisions, the code will get more and more conservative and swerve and brake more for things that aren't there: invisible pink elephants, truck trays that appear and disappear, and so on. If traffic light and stop sign recognition arrive in the near future as an FSD feature, it will start seeing traffic lights and stop signs everywhere as well.

What worries me is that phantom braking has allegedly been "addressed" by identifying common spots where it strikes (entering an underpass, leaving a tunnel, etc.) and lowering the "sensitivity" threshold in those areas by whitelisting tiled map regions. I've already seen posts (on Reddit) where people complain that their car tried to slam into the back of a (real) truck as they entered an underpass, presumably a false negative on the vehicle being there. This is concerning, as it also sounds a lot like what went wrong on a grand scale with the Uber self-driving vehicle. I'm hopeful that as the smarts improve in the neural network and code base, they can remove this horrible band-aid. For a Level 2 vehicle it's fine, since we're ultimately completely in control, and we all complain bitterly whenever there is a false positive phantom braking event. But once the vehicle is purported to be Level 3+, this would constitute an unacceptable risk.
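To make the concern concrete, here's a minimal sketch of how a geocoded, tile-based whitelist could mask a real obstacle. Everything in it is hypothetical (the tile size, thresholds, and function names are my own invention, not anything from Tesla's software); it just illustrates the failure mode.

```python
# Hypothetical sketch of a tile-based "whitelist" that reduces radar
# sensitivity in known trouble spots (underpasses, tunnel exits, ...).
# None of these names or numbers come from Tesla; illustration only.

TILE_SIZE_DEG = 0.001  # roughly 100 m tiles; arbitrary for the example

# Tiles known to produce false positives, mapped to a raised confidence
# threshold (i.e. reduced sensitivity) before the car is allowed to brake.
WHITELISTED_TILES = {
    (37_421, -122_084): 0.9,   # imaginary underpass
    (37_400, -122_110): 0.85,  # imaginary tunnel exit
}

DEFAULT_THRESHOLD = 0.6  # normal confidence required before braking


def tile_for(lat: float, lon: float) -> tuple[int, int]:
    """Quantize a GPS position onto the tile grid."""
    return (int(lat / TILE_SIZE_DEG), int(lon / TILE_SIZE_DEG))


def braking_threshold(lat: float, lon: float) -> float:
    """Confidence required to brake at this location."""
    return WHITELISTED_TILES.get(tile_for(lat, lon), DEFAULT_THRESHOLD)


def should_brake(obstacle_confidence: float, lat: float, lon: float) -> bool:
    return obstacle_confidence >= braking_threshold(lat, lon)


# The danger: a *real* truck detected at 0.8 confidence inside a whitelisted
# tile (threshold 0.9) gets ignored -- exactly the false negative described
# in the underpass reports above.
```

In a scheme like that, the whitelist trades false positives (phantom braking) for false negatives (ignoring a real truck), which is only tolerable at Level 2 because the human is still watching.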

I can imagine the current FSD development incarnation within Tesla is a horribly jerky, hesitant, and painfully slow experience. If and when it's released to the public it will be a lot less so, but those traits will still show in its drive quality.
 
It’s not just Uber or Tesla. About a month ago I was parking in Mountain View. It was perpendicular street parking, and just as I exited my car and slammed the door shut, I heard screeching tires. I looked behind me and a Waymo Pacifica had abruptly stopped about 5 feet ahead of where I was. The driver looked like he’d been tossed forward quite a bit and was gathering his belongings.

So not only did it stop inappropriately (I was far from the road, with no intention of walking in that direction), but had I stepped into the street it would’ve still stopped too late and run me over.

It’s convenient that other autonomous vehicle prototypes are piloted by employees who are not allowed to speak openly about their experiences. Looking at the DMV crash filings, it’s not at all surprising that most of the crashes involve rear-endings or aborted lane changes.
 
It all depends on how fast the rate of false positives goes down. Maybe phantom braking events will go down because the vision neural network just gets better and better. Even if the system gets more conservative as new functions are added, false positives might decrease faster than conservatism increases. Who knows. I’m just saying it isn’t guaranteed to get worse.

The vision neural network might have a harder time recognizing trucks than cars because there are fewer trucks on the road, and therefore fewer opportunities to capture training images of trucks. As far as I know, geocoded whitelisting only applies to radar, not cameras.
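As a generic illustration of why a rare class is harder to learn (and how class weighting is commonly used to compensate), here's a small sketch. It is not a claim about how Tesla actually trains its network; the label counts and the use of PyTorch are assumptions for the example.

```python
# Generic class-imbalance illustration (not Tesla-specific): with far fewer
# truck examples than car examples, an unweighted loss lets the network stay
# mediocre on trucks. Inverse-frequency class weights are one common fix.
import torch
import torch.nn as nn

# Hypothetical label counts in a training set
counts = torch.tensor([900_000.0,  # car
                       60_000.0,   # truck
                       40_000.0])  # other

weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
loss_fn = nn.CrossEntropyLoss(weight=weights)    # truck errors now cost ~15x car errors

logits = torch.randn(8, 3)          # a batch of 8 detections, 3 classes
labels = torch.randint(0, 3, (8,))  # random ground-truth labels for the demo
loss = loss_fn(logits, labels)
print(weights, loss)
```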

Uber has a toxic culture and it’s hard to fix a toxic culture. I hoped that Uber’s new CEO would fix it. He hasn’t. I think the world might be better off if Uber just disbanded and all the employees went to work somewhere else. It’s sad that Uber’s dysfunction is so out of control that it’s ended up killing a person.

Just a reminder of the kind of corporate culture at Uber, from a former engineer:

“On my first official day rotating on the team, my new manager sent me a string of messages over company chat. He was in an open relationship, he said, and his girlfriend was having an easy time finding new partners but he wasn't. He was trying to stay out of trouble at work, he said, but he couldn't help getting in trouble, because he was looking for women to have sex with. It was clear that he was trying to get me to have sex with him, and it was so clearly out of line that I immediately took screenshots of these chat messages and reported him to HR.

... When I reported the situation, I was told by both HR and upper management that even though this was clearly sexual harassment and he was propositioning me, it was this man's first offense, and that they wouldn't feel comfortable giving him anything other than a warning and a stern talking-to. Upper management told me that he "was a high performer" (i.e. had stellar performance reviews from his superiors) and they wouldn't feel comfortable punishing him for what was probably just an innocent mistake on his part.

... Over the next few months, I began to meet more women engineers in the company. As I got to know them, and heard their stories, I was surprised that some of them had stories similar to my own. Some of the women even had stories about reporting the exact same manager I had reported, and had reported inappropriate interactions with him long before I had even joined the company. It became obvious that both HR and management had been lying about this being "his first offense", and it certainly wasn't his last. Within a few months, he was reported once again for inappropriate behavior, and those who reported him were told it was still his "first offense". The situation was escalated as far up the chain as it could be escalated, and still nothing was done.”

I would feel relieved if Uber just died. Lyft could fill the vacuum. They seem friendly, right?
 
If that were the case, we'd only have TACC and autoparking :( No autosteer, autowipers, lane change, NoA...

TACC is the only one of those features that actually works quite well, and it's available in other makes without the "beta" label. Autowipers could have been solved the same way everybody else does it; no need for "beta". NoA is a joke. Lane change is OK, but the most valuable part of it is blind spot monitoring, which again other manufacturers implement better via radar and without a "beta" label. Autosteer is nice to have on long highway drives, but again other manufacturers have various levels of lane keeping assistance, and they are not labelled "beta".
 
It all depends on how fast the rate of false positives goes down. Maybe phantom braking events will go down because the vision neural network just gets better and better. Even if the system gets more conservative as new functions are added, false positives might decrease faster than conservatism increases. Who knows. I’m just saying it isn’t guaranteed to get worse.

Nothing is guaranteed but the most rational assessment would have to look at past performance to predict future performance -- rather than pointing out "well it might be better you don't know for sure". Tesla has been trying to do "autopilot" for 4 years now I believe, if you count AP1? They have consistently in that time struggled to do anything beyond adaptive cruise control with highway lane keeping -- and these are features that are commonly available in other makes using robust off-the-shelf systems. They have failed in all those years to release anything that really strays from these core features by much -- lane change is probably their best success beyond the features available elsewhere, but honestly this feature still has significant problems and other manufacturers could probably be doing this right now if they really wanted to. (Probably better/safer thanks to radar for blind spot monitoring.)

Now, the core features of adaptive cruise control and lane keeping have greatly improved in AP2 over the past 2 years. So now it's just about as good at those things as AP1 was. V9 however was a very clear regression in these core features, in the interest of (being charitable here) laying a more solid foundation for future expansion beyond these core features. So a sober assessment of the path forward for AP development must lead in the direction of "it's going to get worse before it gets better", since it has already gotten worse -- and still not better, though probably in 3-6 months they'll have brought the core features back to where they were at the end of the V8 series. At that point we'll have V8 with a worse UI and gimmicky NOA, and (again being charitable) a better foundation for future features.

Of course, you don't have to make a sober rational assessment based on evidence. You can just do what partisans do -- look for any excuse to believe what you want to believe because nobody can prove anything about the future.
 
I would feel relieved if Uber just died. Lyft could fill the vacuum. They seem friendly, right?
I think Lyft would have no problem filling the vacuum even without Uber dying. From a friend who works there, I know the two companies have different views of the world: Uber is building an app; Lyft is building an app store. In other words, Lyft's app supposedly supports plugging in different rideshare providers. While they do have their own rideshare service, it is essentially a client to the app, just as other services could be. That means at some point, Lyft could be the only app you need to request a ride. It might be a Lyft car, or it might be an Uber or a Waymo or a Tesla AV if they were plugged into the platform.
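To illustrate the "app store" model described above, here's a toy sketch. The names and interface are entirely made up; this is not Lyft's actual architecture or API, just the plug-in idea.

```python
# Toy sketch of a ride app that dispatches to pluggable providers.
# All names are invented for illustration.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Quote:
    provider: str
    eta_minutes: int
    price_usd: float


class RideProvider(ABC):
    """Anything that can serve a ride request can plug into the app."""

    @abstractmethod
    def quote(self, origin: str, destination: str) -> Quote: ...


class LyftFleet(RideProvider):
    """The app's own rideshare service, itself just another provider."""

    def quote(self, origin: str, destination: str) -> Quote:
        return Quote("Lyft", eta_minutes=4, price_usd=12.50)


class PartnerAV(RideProvider):
    """Stand-in for a third-party AV fleet (Waymo, a Tesla robotaxi, ...)."""

    def __init__(self, name: str):
        self.name = name

    def quote(self, origin: str, destination: str) -> Quote:
        return Quote(self.name, eta_minutes=7, price_usd=9.00)


def best_ride(providers: list[RideProvider], origin: str, destination: str) -> Quote:
    """The app shops every plugged-in provider and picks the cheapest."""
    return min((p.quote(origin, destination) for p in providers),
               key=lambda q: q.price_usd)


if __name__ == "__main__":
    fleet = [LyftFleet(), PartnerAV("Waymo"), PartnerAV("Tesla Network")]
    print(best_ride(fleet, "Castro St", "SFO"))
```

The point of the design is that the first-party fleet has no privileged position: it implements the same interface as any outside provider, which is what would let a Waymo or Tesla vehicle show up in the same request.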
 
This is the proper way to go. No one should let regular people use beta versions.

“Beta” doesn’t have a determinate technical meaning. It’s just a label that serves as a note of caution to the user. With driver assistance systems, a note of caution is always good. That’s Tesla’s stated rationale for using the “beta” label.
 
“Beta” doesn’t have a determinate technical meaning. It’s just a label that serves as a note of caution to the user. With driver assistance systems, a note of caution is always good. That’s Tesla’s stated rationale for using the “beta” label.

Beta means: not fully tested, may contain bugs and accidents may be unavoidable even if the driver pays full attention.
 
Beta means: not fully tested, may contain bugs and accidents may be unavoidable even if the driver pays full attention.

That’s not what it means according to Tesla. “Beta” is not a technical term with a definite meaning; it’s just a loose, subjective term that is subject to reinterpretation and reappropriation. Different companies use it differently.

Tesla could remove the “beta” label tomorrow and that wouldn’t make Autopilot more safe. Similarly, driver assistance systems from other companies that don’t use the “beta” label aren’t necessarily more safe or less buggy than Autopilot.
 
That’s not what it means according to Tesla. “Beta” is not a technical term with a definite meaning; it’s just a loose, subjective term that is subject to reinterpretation and reappropriation. Different companies use it differently.

If you want to know what Tesla means by it, look at their public statements following incidents involving Autopilot. They always emphasize at these times that Autopilot is in "beta". So what "beta" means, according to Tesla, is that you can't blame them for mistakes made by Autopilot.

If I were to go to my service center complaining about how useless NOA is and how it tried to slam me into a concrete barrier, how it slams on the brakes at unexpected times, how it misses exits, or how Autosteer wobbles in the lane and makes my daughter carsick, they would shrug and say "It's beta".

This is what "beta" means to Tesla.
 
Isn’t ‘beta’ supposed to mean something like a ‘pre-release’ version?

Like a test version (something unfinished)? A not-yet-ready-for-the-general-public kind of sample, where the developer expects feedback and reports from users before going wide?
It's up to the software company to decide what to call beta. Once upon a time it had a more fixed meaning, but it has become far more fluid than that. In the free software world, pretty much all software in use is considered beta. Many relatively long-term, stable-release applications carry version numbers below 1, which also implies beta. We could debate endlessly what it should mean, but that would be a futile discussion in the current climate.
 
The first example I remember of the term “beta” being used loosely was Gmail. It was in “beta” for 5 years, from 2004 to 2009. It had over 100 million users while it was still in “beta”.

For Discourse (the forum software that runs Gradient Descent), the default release channel is the beta channel. You have to specifically opt in to the stable channel. So “beta” has essentially been redefined as the normal, standard version, and “stable” has been redefined as an optional, non-standard version that has been used more and is (in theory) more conservative than the main version, which is the “beta” version.

At this point the term “beta” has evolved to mean something different from what it might have meant 15 years ago, although different companies use it differently, so there is still no single universal meaning.

Some people mistakenly think Tesla’s deployment of “beta” software is a sign that Tesla is being cavalier with safety. But in fact, appending the (loose, somewhat arbitrary) “beta” label to Autopilot is Tesla being cautious. The purpose of the “beta” label is to remind users to be careful. Arguably, other automakers are being more cavalier than Tesla by not appending the label to their driver assistance systems.
 
Arguably, other automakers are being more cavalier than Tesla by not appending the label to their driver assistance systems.

I'm at a loss for words to describe how strongly I feel about this statement. Other automakers have released -- and the key word here is released -- driver assistance systems with very clearly defined capabilities and limits. Not one of them has tweeted about their cars being capable of L5 autonomy with a future update -- which has demonstrably misled many consumers into thinking that Teslas are in fact capable of more than just driver assistance -- and not one of them has ever sold and accepted payment for a feature that was not already completed, validated, and fully specified and described as to its exact capabilities and limitations. None of them have sold anything claiming to be anything more than a driver assistance system. Absolutely everything they have sold has been fully tested to be safe when used within its well-defined limits.

The 2011 Prius I recently traded in for a Model 3 -- a vehicle that was released without a beta label before the Model S even existed -- had fully functioning radar-based adaptive cruise control, camera-based lane keep assist, autopark, pre-collision, and AEB systems. All of them functioned exactly as they were advertised to function from the day I bought the car. None of them ever did anything I would call frightening or unsafe, unlike Autopilot, which routinely pushes the limits of safety. The key difference is that they were all firmly scoped to operate within the present day capabilities and never tried to push those limits.

Now, the Tesla way is to not sit comfortably within present-day capabilities. That's fine, and debating the merits of that approach is a different discussion. But to suggest that other automakers should label their features beta is... as I said before, I'm at a loss for polite words to describe how I feel about that.
 
I'm at a loss for words to describe how strongly I feel about this statement. Other automakers have released -- and the key word here is released -- driver assistance systems with very clearly defined capabilities and limits. Not one of them has tweeted about their cars being capable of L5 autonomy with a future update -- which has demonstrably misled many consumers into thinking that Teslas are in fact capable of more than just driver assistance -- and not one of them has ever sold and accepted payment for a feature that was not already completed, validated, and fully specified and described as to its exact capabilities and limitations. None of them have sold anything claiming to be anything more than a driver assistance system. Absolutely everything they have sold has been fully tested to be safe when used within its well-defined limits.

The 2011 Prius I recently traded in for a Model 3 -- a vehicle that was released without a beta label before the Model S even existed -- had fully functioning radar-based adaptive cruise control, camera-based lane keep assist, autopark, pre-collision, and AEB systems. All of them functioned exactly as they were advertised to function from the day I bought the car. None of them ever did anything I would call frightening or unsafe, unlike Autopilot, which routinely pushes the limits of safety. The key difference is that they were all firmly scoped to operate within the present day capabilities and never tried to push those limits.

Now, the Tesla way is to not sit comfortably within present-day capabilities. That's fine, and debating the merits of that approach is a different discussion. But to suggest that other automakers should label their features beta is... as I said before, I'm at a loss for polite words to describe how I feel about that.

Seems that the "beta" label is quite arbitrary, given that Tesla's beta software has been the one to beat for a long time.

Overall, Tesla AP is beta because it is not yet autonomous; other systems are not beta because their functionality is restricted but good enough for their limited scope of operation.
 
Seems that the "beta" label is quite arbitrary, given that Tesla's beta software has been the one to beat for a long time.

Overall, Tesla AP is beta because it is not yet autonomous; other systems are not beta because their functionality is restricted but good enough for their limited scope of operation.

I do not want to debate that point. I am specifically talking about the ridiculous statement @strangecosmos made that other automakers are being "more cavalier" than Tesla is by not labeling their systems "beta". The idea that the very conservative, slow-moving auto companies are somehow "more cavalier" than Tesla is just... just... I'm still at a loss for (polite) words.
 
more cavalier
A bigger supporter of King Charles I in the English Civil War???


Huh, huh, huh
 