Welcome to Tesla Motors Club

Navigate on Autopilot is Useless (2018.42.3)

I think you are right! I see no reason to buy FSD in Europe. Also, they raised the price by $1,000.

Increasing the price in Europe was a foolish move, and their conversion rates for FSD sales are still pretty low and only getting lower.

I usually give it a try after I get an update; how else would we know if it has been fixed?

The easiest way to know whether Tesla has made major progress is that Tesla will be absolutely yelling it from the mountain tops. They'll give press demo drives, they'll hold a special event for investors, they'll blog about it repeatedly, etc. Because if you're actually the leader in autonomous driving and selling it, that's an obvious way to drum up immediate sales.
 
Here’s what I’d like to see in NoA:

When approaching an interchange where traffic is merging, where you can clearly see traffic coming up an on-ramp, NoA should move to the left lane. This is how I “manually” drive, then move back into the right lane once I’m able to. I know it was doing this in prior software releases, but it was doing it waaaaay too early and hanging in the left lane too long. It slows down in the right lane to allow cars to merge now, which is fine and well, but I could see people behind me bitching about slowing down.

Also, on 2020.36, when merging from certain on-ramps to the freeway, it wants to go directly to the far left lane of the freeway. This is neither safe nor necessary; there's no reason it should be doing this in the cases where it does. Whatever. It’s still better than the Pilot Assist my last two Volvos had.
 
MS R 2020.36.10

Something that has been bugging me for a while (and is currently being ignored while we wait for the fabled 'rewrite') is why the visualizations are so twitchy, and why objects often appear, disappear, or turn into a different object - especially when stationary.

I have had truck visualizations repeatedly changing the cab end from the front to rear of the truck as surrounding traffic edges forwards. I have had traffic cones appearing and disappearing all over a wet road surface that were nothing more than reflections of stop lights.

Surely, no self-driving system can hope to work if it is prepared to believe that random (and near impossible) events are taking place within the system's field of vision. No wonder phantom braking is an issue.

The AI might well be trained on a frame-by-frame basis, but surely it's obvious that it needs to learn what happens across many frames, pretty well taking into account everything (relevant) that it 'sees' up to a hundred or more yards ahead.

Hopefully, this will be the night / day change that we see with the rewrite, where we will be able to feel the car change speed and position within a lane based on upcoming road layout / traffic.

Right now, driving at (only) 60 in a 60 zone regardless of whether it's a country lane or a major road, not starting to slow until you pass a speed-reduction sign, slamming on the brakes if a vehicle crosses your path some way ahead, and having no regard for parked / stationary cars other than stopping behind them all seem like big shortcomings.

The (simple) NN models I have looked at - once fully trained - do not lurch suddenly from one conclusion to another. If that was even possible, then it would suggest that the NN wasn't adequately trained as 'confidence' shouldn't be able to suddenly switch back and forth between significantly different outcomes. For that to happen would indicate a dangerous situation. I feel as though the issue in training is not so much the stuff the system 'recognizes' as the stuff it effectively opts / learns to ignore.

Edit: if an AI system absolutely must 100% come up with some opinion on what it is seeing (and given finite processor resources), then presumably in some situations something has to give, and you allow an increased chance of the system being 'tricked' purely to have a view to work with. Within a single frame, this might manifest as jumping objects, and imo there should be far less of that evident today if everything was working correctly. Within multiple consecutive frames, I can't really visualise how it would work. As soon as a frame has low(er) confidence, can you usefully build that into a multi-frame view?

So is Tesla really going to suddenly make a leap from (imo) jumpy twitchy visualizations as at present to a far smoother, steadier, flowing and more stable / confident view?
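To illustrate the flickering being described above: this is a toy sketch (my own assumption, not Tesla's actual pipeline) of how per-frame classification confidences that hover near a decision boundary make the arg-max label flip every frame, while a simple exponential moving average over time stabilises it. The class names and scores are invented for the example.

```python
def argmax_label(scores):
    """Pick the label with the highest confidence."""
    return max(scores, key=scores.get)

# Hypothetical per-frame confidences for one detected object,
# hovering near the 50/50 decision boundary.
frames = [
    {"truck_front": 0.52, "truck_rear": 0.48},
    {"truck_front": 0.47, "truck_rear": 0.53},
    {"truck_front": 0.51, "truck_rear": 0.49},
    {"truck_front": 0.46, "truck_rear": 0.54},
]

# Per-frame decision: the label flips back and forth every frame.
per_frame = [argmax_label(f) for f in frames]

# Temporally smoothed decision: EMA of the scores, then arg-max.
alpha = 0.3  # smoothing factor (assumed value)
ema = dict(frames[0])
smoothed = [argmax_label(ema)]
for f in frames[1:]:
    for k, v in f.items():
        ema[k] = alpha * v + (1 - alpha) * ema[k]
    smoothed.append(argmax_label(ema))

print(per_frame)  # flickers between the two labels every frame
print(smoothed)   # holds one label, switching only once
```

With these numbers, the raw decision alternates on every frame while the smoothed one changes only once, which is the kind of stability the posts above are asking for.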
 
Is it generally known how Tesla's recognition (maybe prediction-less) system works? Does it (at present) really just try to make sense of each frame, one at a time, or is it doing object tracking? Object tracking would suggest some implied relationship between consecutive frames, and (with my little amount of real 1st hand knowledge) makes me feel that it is recognising but not tracking. Or if it is tracking then it's not using that (dynamic) data in a predictive manner.

If the system needs to identify and track objects and their relative vectors all around the car, is that still within the scope of HW3?

What are the numbers like? 1,000 potential objects that are or might soon become trackable? 100 significant objects identified as needing close tracking? Plus some 'emergency' capability to allocate a load of processing for up to 3 or 4 objects deemed to pose an imminent threat?

Is there even such a hierarchy in the way the system works at present or might work in future?

Or is all this really just an intrinsic function of the NN that as long as you are working at a sufficient resolution, it just works itself out?
 
What you are looking for is that 4th dimension - i.e. time.
Right now, there is very little in Autopilot that is correlated in time (cut in detection is an example).

Elon called the current implementation 2.5D -- meaning 2D (one image at a time) and only correlated in time for some specific tasks.

The rewrite is supposed to give full 3D (stitched view of the entire surrounding) plus correlation in time throughout the entire stack, not just explicit tasks.
Obviously, I am waiting to see this play out, but the potential is there.

Hope that helps.
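A small sketch of why that 4th dimension matters (my own illustration, not Tesla's code): a per-frame detector only gives you a position per frame, whereas correlating detections across frames additionally yields a velocity estimate, which is what makes prediction possible at all. The positions and frame rate are invented for the example.

```python
def velocity_from_track(track, dt):
    """Finite-difference velocity from the last two tracked positions."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return ((x1 - x0) / dt, (y1 - y0) / dt)

# Positions (metres, ego frame) of one tracked car over 3 frames at 10 Hz.
track = [(20.0, 0.0), (19.0, 0.1), (18.0, 0.2)]
vx, vy = velocity_from_track(track, dt=0.1)
# vx = -10.0 m/s: the car ahead is closing at 10 m/s, so the planner can
# react before the gap shrinks. A single frame carries none of this.
```

No single frame tells you the car is closing; only the correlation across frames does.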
 

Thanks.

I get the time element, but as a process, is it identifying and tracking objects or does a NN develop some inherent automatic object awareness just by virtue of what it does and how it does it?

As well as trying to build a 3D image from 2D cameras, you then presumably have to hold a moving buffer of, say, 10 - 20 seconds (maybe less) and then follow 'likely' objects through that space.

That sounds like a staggering task, and in very busy environments where little can be taken for granted, it seems like the only way to be sure your image / environment processing keeps up is by slowing the car down OR working with a less certain view of the environment.
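The "buffer and follow likely objects" idea can be sketched very crudely (this is a toy, not anything Tesla-specific): keep a rolling history of recent positions per track and link each new detection to the nearest existing track, spawning a new track if nothing is close enough. The threshold and buffer length are assumed values.

```python
from collections import deque
import math

MATCH_RADIUS = 2.0   # metres; assumed association threshold
BUFFER_FRAMES = 100  # roughly 10 s of history at 10 Hz

class Track:
    """One tracked object with a rolling buffer of recent positions."""
    def __init__(self, track_id, pos):
        self.id = track_id
        self.history = deque([pos], maxlen=BUFFER_FRAMES)

    @property
    def last(self):
        return self.history[-1]

def update_tracks(tracks, detections, next_id):
    """Greedy nearest-neighbour association of detections to tracks."""
    for det in detections:
        best, best_d = None, MATCH_RADIUS
        for t in tracks:
            d = math.dist(det, t.last)
            if d < best_d:
                best, best_d = t, d
        if best:
            best.history.append(det)   # extend the existing track
        else:
            tracks.append(Track(next_id, det))  # spawn a new one
            next_id += 1
    return next_id

tracks = []
next_id = update_tracks(tracks, [(10.0, 0.0), (30.0, 3.5)], 0)
next_id = update_tracks(tracks, [(9.5, 0.0), (29.0, 3.5)], next_id)
# Two stable tracks, each now holding a 2-frame history to predict from.
```

Even this toy shows where the cost goes: every detection in a busy scene must be compared against every live track, each frame, inside the latency budget.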
 
The NN for self-driving at Tesla goes through 70,000+ GPU hours of training on well-curated and meticulously labeled data.

So, it is able to identify the objects only as well as it has been trained.
But prior to the stitched (3D) view, it would run detection on independent feeds (one per camera), spit out predictions for each of those, and stitch them together in code. (I believe this is also where all the UI jumpiness comes from.)
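A rough sketch of what "stitched in code" could look like (assumed geometry, not Tesla's actual calibration): each camera reports detections in its own frame, and a hand-written step rotates them into one ego-centred frame. Small calibration or detection errors between overlapping cameras then show up as slightly disagreeing positions for the same object, i.e. duplicate or jumping objects in the UI. The camera names and yaw offsets are invented for the example.

```python
import math

# Hypothetical camera yaw offsets (degrees) relative to the car's nose.
CAMERA_YAW = {"front": 0.0, "left_repeater": -120.0, "right_repeater": 120.0}

def to_ego_frame(camera, bearing_deg, range_m):
    """Convert a (bearing, range) detection from one camera to ego x/y."""
    theta = math.radians(CAMERA_YAW[camera] + bearing_deg)
    return (range_m * math.cos(theta), range_m * math.sin(theta))

# The same car seen by two overlapping cameras; a small range error
# makes the stitched positions disagree by ~0.3 m.
a = to_ego_frame("front", 45.0, 14.1)
b = to_ego_frame("right_repeater", -75.0, 14.4)
# Naive stitching either draws both (duplicate object) or snaps between
# them frame to frame (jumping object).
```

With a shared 3D representation there is only one estimate to begin with, so this class of disagreement disappears by construction.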
 
It's not doing that. It's assisting you and you're driving. Be sure to know the difference or you'll end up like so many other people, angry that "autopilot crashed". Most of us started out like you, pretty psyched about what seems like an impressive system at first. But over time and experience, you start to see the man behind the curtain more and more.
Still driving me to work, 2 years later, only now it's also driving on the city streets part of my commute too.

@DrDabbles and @wk057, is Navigate on Autopilot still useless to you?
 
Yep. In fact it's worse than it's ever been to be honest.

I actually almost don't even engage AP on the Model 3 anymore because it just does dumb stuff far too often.

I have an interstate highway stretch of ~50 miles each way that I do regularly (every week at least). In the Model 3 I'll get 5+ phantom braking events, various no-reason slowdowns, and several unpredictable lane departures on each leg of that trip. With NoA enabled it's even worse, coming up on slow traffic and waiting multiple seconds before even thinking about passing. It's completely useless to me.

In my AP1 Model S with my never-need-hands-on-the-wheel hack, I can literally get on the highway, engage AP, and all I have to do is hit the blinker to pass people. I can generally do the entire route with zero disengagements, including through a long-term construction area. The Model 3 completely ****s the bed there.

So yeah, still useless.
 

With what firmware version?

During the Safety Score Testing (before the FSD Beta) was the only time in my Model 3 that AP actually worked well. Now it wasn't perfect as it didn't slow down for people using their blinkers to get in, but I felt like it was at least usable. They mostly fixed that ridiculous re-centering that happened at every merge point, and mostly cut down on phantom braking. I still had some degree of phantom braking on occasion when it would start tracking the car ahead, but in the other lane.

I always have NoA off (it just sucks due to bad map data in my area), and traffic light response off (too prone to false detection).

I did have a Model S with AP1 that I liked to the point where I didn't use AP. That might sound funny, but I felt that it was so good it reduced my situational awareness. So I just used it, and it was great.

The other reason why it was so good is that it simply didn't do as much as EAP/FSD. For example, AP1's auto-lane-change doesn't have camera sensors on the sides to see if someone is coming. It also isn't set to brake if it thinks a car ahead might get into the lane. There is probably a laundry list of items it doesn't do, thereby reducing the odds of phantom braking or canceled lane changes. Which is great for a good driver who just wants the thing to go, but less good for some idiot who needs protecting.

Basically HW1/AP1 -> Hit the brakes if it's going to hit something
HW2+/EAP/FSD -> The throttle is now the go-pedal. Hover foot over the go-pedal to quickly apply it when you need to go.

:)

One thing I'll never miss from AP1 is truck lust. I hated that.
 
@DrDabbles and @wk057, is Navigate on Autopilot still useless to you?

Complete crap, yep. Just tried it on a longer drive the other day, and it not only pissed me off immediately but also pissed off several vehicles behind me. My favorite move is pulling into the left lane to pass a vehicle that's barely going slower than me, not accelerating, and now it's going slower than the flow of traffic in the leftmost lane.

FSD seems to be amazing if you want to charge at pedestrians, metal poles, and concrete pillars, though.
 
Mine freaks out at taxi lanes, has started to slow down massively before overpasses.
That diagonal line makes it violently turn left, potentially slamming into vehicles in the left lane. Very dangerous. So routinely I have to hold the steering wheel really firmly, let it disengage when it tries to turn, and then just re-engage it. Also, it brakes hard there, so I have to hold the wheel AND have the pedal pressed.
So then I am like - "why am I doing this"
 

How about slowing down at blinking yellow lights on the highway? Or, and this one is pure genius, slowing down where yellow blinking lights used to be but they don't exist anymore? Obviously temporary signs are edge cases, right? That doesn't exist everywhere.
 

The so-called traffic light detection detects the tunnel lane status signs (open / closed) etc. as traffic lights.
That's the only FSD feature I could legally use here, rendered useless.

Also, it refuses to see some speed limit signs, even when they're clear as day, and has decided that an invisible speed limit which has never existed is the truth. So it slows down from 70 to 50 km/h at a random spot. It's all a hack with poor map data.
 
This all helps to confirm what I already suspected: AP2 quality still depends a fair bit on local map quality and local training. I've never seen a taxi lane before in my life so it wouldn't be a use case here. Instead I get to be ready for a section of highway that just finished construction, because AP will slow down to the old construction speed limit there even though the construction is done and the signs are down. I still find it very useful, but boy howdy is it finicky at times.
 

It's an HOV lane, but the unique marking in Norway is maybe the diagonal dotted line. AP freaks out over that.
 
With what firmware version?

Latest... whatever the sentry mode remote cam version is.

I'm mainly just impressed (and annoyed) that the MobileEye-based AP1 system from 2014 (2015 to be fair) can hold its own and even best Tesla's own system after 7 years of development.

Makes me wonder how good things would actually be by now had Tesla not lost MobileEye.
 
Hard to say.

There is no doubt that it took Tesla years to reach functional parity with AP1.

But with MobileEye they would never have had the freedom to operate independently. How can you take risks if your supplier simply won't let you? Plus, they wouldn't easily be able to do things like the sentry mode remote cam, because that relies on Tesla's custom HW.

Then there is the chip shortage that's preventing GM from shipping any Enhanced (or non-enhanced) Super Cruise vehicles.

A lot of it just comes down to what Tesla wants, and what customers want.

For myself, it doesn't really matter what the ADAS system consists of, because I want a Rivian. I'm tired of how Tesla treats its customers (the Safety Score, for example), and I'm bored of passenger EVs.

I am concerned that Rivian will try to pull a Tesla with their self-driving technology. It's important for them to partner up because they're not Tesla. Tesla can do what Tesla does because the head guy takes risks. I hope Rivian partners with someone.

Some risks pay off, like Gigafactory, and some risks don't seem to pan out (the whole FSD thing).