Navigate on Autopilot is Useless (2018.42.3)

At this stage in the game, anecdotes do not count for anything. We need data. We need people with rigorous testing regimes to put together a comprehensive, repeatable set of tests that can be followed and documented by large numbers of people. Or we need Tesla to actually list out the improvements they have made and when they were sent to the fleet.
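If anyone did want to organize that kind of crowd-sourced testing, the hard part is agreeing on what to record. Purely as a sketch (none of this is an official protocol; every field name below is my own suggestion), a shared log could be as simple as one row per attempted maneuver:

```python
# One possible record format for a crowd-sourced, repeatable NoA test log.
# All field names are suggestions only; this is not any official test protocol.
from dataclasses import dataclass, asdict
import csv

@dataclass
class NoaTestResult:
    firmware: str      # e.g. "2018.42.3"
    hardware: str      # e.g. "AP2.5", "HW3"
    route: str         # a fixed, documented stretch of road
    scenario: str      # "on-ramp merge", "auto lane change", "exit", ...
    outcome: str       # "completed", "aborted", "driver intervention"
    notes: str = ""

def append_result(path: str, result: NoaTestResult) -> None:
    # Append one row per test so many drivers' logs can be merged later.
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(result).keys()))
        if f.tell() == 0:          # new file: write the header first
            writer.writeheader()
        writer.writerow(asdict(result))

append_result("noa_tests.csv", NoaTestResult(
    "2018.42.3", "AP2.5", "I-95 N, exits 27-33", "on-ramp merge",
    "driver intervention", "merged too late, gap already taken"))
```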
 

None of which is very likely. We're lucky, after years of pleading, that Tesla finally started telling us what firmware version is being downloaded before install. While I would LOVE an actual comprehensive release notes/changelog, that just isn't happening. If there were one, every bugfix would spawn articles on CNBC, CNN, and the like with every update: "Tesla had to fix hundreds of software bugs" or "Deadly bug in Tesla software can cause XYZ" (with only a tiny blurb about the fix buried somewhere in the article). So, not happening.

As for people doing testing... in my daily experience, it's kind of obvious that most people can barely operate vehicles in a sane fashion as it is, let alone do comprehensive testing of a feature while doing so. So, this isn't happening either.
 

I would still like a bit more detail in the release notes. Tesla does not need to list every single bug fix, but it could give more general info, like "changed algorithms to make auto lane changes smoother." That would help owners better understand how AP/NoA will behave.
 

Headline for that example: "Tesla had to update software to fix rough lane changes"

Still won't happen. lol.
 
NN is only used for object recognition here.
That is a simplification. It is more than that. The NN interacts with other systems too. It recognizes the actions of other cars (i.e., cut-ins that may affect your goal, whether that's maintaining speed, changing lanes, or taking an off-ramp exit where an on-ramp merges in as well; I see this often). It also includes path prediction (where do I need to go, not only in my current lane but once I move to the adjacent lane or once I take that exit). The NN also helps with depth perception of the objects around you, not just with what an object is. That matters when you're changing lanes and a speeding car in the adjacent lane is overtaking you at a speed that will cause problems when you move over. You have a 3D picture (27:15 in the video on NNs from A.K.).

This is just ONE video that talks about this (and it covers shadow mode too). There are several examples from Andrej Karpathy and other experts out there.
See 18:55 and 22:20.
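To make the "more than object recognition" point concrete, here is a minimal sketch of what a multi-task vision network looks like: one shared backbone whose features feed separate heads for object class, depth, and cut-in likelihood. This is emphatically not Tesla's architecture; every layer size and head name here is an illustrative assumption.

```python
# Hypothetical multi-task perception net: NOT Tesla's architecture, just an
# illustration of one shared backbone feeding several prediction heads.
import torch
import torch.nn as nn

class MultiTaskPerception(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared convolutional backbone (stand-in for the real feature extractor).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Separate heads reuse the same features for different questions.
        self.object_class = nn.Linear(64, num_classes)  # what is it?
        self.depth = nn.Linear(64, 1)                   # how far away is it?
        self.cut_in = nn.Linear(64, 1)                  # is it cutting into my lane?

    def forward(self, image: torch.Tensor) -> dict:
        features = self.backbone(image)
        return {
            "object_logits": self.object_class(features),
            "depth_m": self.depth(features),
            "cut_in_prob": torch.sigmoid(self.cut_in(features)),
        }

# One camera frame in, several kinds of predictions out.
frame = torch.randn(1, 3, 224, 224)
outputs = MultiTaskPerception()(frame)
```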
 


This is a better video (the source)

Yeah, it may be a simplification, but that's what it is: object recognition. No overarching logic or context is in the NN.


Cut-in detection is being trained right now but AFAIK not released. They're trying to make it work; we'll see how well it does. So there is no logic here, it's just a pattern of different variables, like the yaw of the car, as he literally says.
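As a rough illustration of "a pattern of different variables like the yaw of the car": a cut-in predictor can be as simple as a learned weighting of a few per-vehicle features squashed into a probability. The features and weights below are invented for the example; the real thing would learn them from fleet data.

```python
# Toy cut-in predictor: a weighted sum of per-vehicle features through a logistic.
# Feature names and weights are invented for illustration only.
import math

def cut_in_probability(lateral_offset_m: float, yaw_rad: float,
                       lateral_velocity_mps: float) -> float:
    # In practice these weights would be learned from labeled fleet data.
    w_offset, w_yaw, w_lat_vel, bias = -1.5, 4.0, 2.5, -1.0
    score = (w_offset * lateral_offset_m
             + w_yaw * yaw_rad
             + w_lat_vel * lateral_velocity_mps
             + bias)
    return 1.0 / (1.0 + math.exp(-score))  # squash to [0, 1]

# A car close to our lane line, nose angled in, drifting toward us scores high.
print(cut_in_probability(lateral_offset_m=0.4, yaw_rad=0.12, lateral_velocity_mps=0.8))
```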
 
I'm not reading all that, but here's my take. Summon was useless when it first came out. Now it's almost good enough to play with. While Navigate on AP isn't the greatest yet, it will be. Give it time.

Sure are a lot of people mad about beta products in this forum lately. You know you're an early adopter, right? You cannot expect it to be perfect today.
 
No overarching logic or context is in the NN.
Path prediction and the depth of objects ahead, behind, and to the side are not simple object 'recognition'. It's way more than 'yes, that looks like a cat'.

It ain't working :)
I had it happen several times on a long weekend road trip just a few days back. It was more cautious than I would have been. I'm on 2019.32.12.3 in an X with AP 2.0.
 
Path prediction and the depth of objects ahead, behind, and to the side are not simple object 'recognition'. It's way more than 'yes, that looks like a cat'.

I am only thinking out loud here :) :
An image is a matrix of RGB or monochrome values, and you could say that movement/trajectory across space is another pattern in a matrix that can be learned. At least I believe it gathers such data from movement and massages it into some data format that is "learnable". So my guess is that the "magic part" here is the prepping of the data. Karpathy says they can query the fleet for a detected car object that moves horizontally, so that data is logged, combined with what the car looks like.
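To put some (made-up) numbers on that intuition: a camera frame and a logged trajectory really are just numeric arrays, which is exactly what makes them learnable. The shapes and fields below are my own assumptions for illustration, not Tesla's actual logging format.

```python
# Both a camera frame and a logged trajectory reduce to plain numeric arrays.
# Shapes and field layout are illustrative assumptions, not a real log format.
import numpy as np

# One RGB frame: height x width x 3 channels of pixel intensities.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

# One tracked vehicle over 2 seconds at 10 Hz: each row is
# [time_s, x_m_ahead, y_m_lateral, yaw_rad], relative to our car.
timesteps = 20
trajectory = np.zeros((timesteps, 4), dtype=np.float32)
trajectory[:, 0] = np.linspace(0.0, 2.0, timesteps)    # time
trajectory[:, 1] = np.linspace(30.0, 20.0, timesteps)  # closing from 30 m to 20 m ahead
trajectory[:, 2] = np.linspace(3.5, 0.5, timesteps)    # drifting laterally toward our lane
trajectory[:, 3] = 0.1                                 # slight yaw toward us

# A fleet query like "detected cars moving horizontally" could be a simple
# filter over such rows: is the lateral speed above some threshold?
lateral_speed = np.diff(trajectory[:, 2]) / np.diff(trajectory[:, 0])
print(np.any(np.abs(lateral_speed) > 0.5))  # True: this one is moving sideways
```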

I had it happen several times on a long weekend road trip just a few days back. It was more cautious than I would have been. I'm on 2019.32.12.3 in an X with AP 2.0.

Mine never yields when a car is cutting in, but it brakes and stays behind a car in the adjacent right lane that has no intention of changing lanes. This is very annoying, and I end up resting my foot on the accelerator. The whole point is to go past someone in the fast (left) lane, LOL.
 
I'm starting to lean towards the "useless" category. On 2019.36.2.1 HW3 Model 3, I've had a few frustrating instances in the past few days. Follow distance was set to 3 and lane changes to "Average".

NoA is OK with exiting now, but it really can't handle on-ramp merging consistently. Yesterday I came off a ramp where it slowed down to about 40 mph, and then there was a short area in which it needed to merge. Instead of intervening, I let the car attempt it on its own. Traffic was light, but the car wanted to merge at the last minute (probably based on GPS and maps, not on situational awareness). It sped up to 75 mph (GPS speed limit + 5 offset), and by the time it decided it was ready to merge, a car was already there. Since it won't squeeze in between cars by accelerating, it just ran out of time to get in. I ended up going straight and exiting again. So it sped up to about 75 mph, didn't merge, and then had to slow down and exit again.

This morning I tried to let it merge onto another highway, but the lane lines aren't the best, and it gets hesitant between the two lanes. It also wanted to change lanes to the left and was halfway into it when it saw a car coming. A human driver would match speeds or otherwise accelerate, but the car just aborted the lane change and swerved back to the right.

I'm at the point where the only feature of FSD I use regularly is auto lane change and maybe some exiting through NoA (cycling NoA on and off), and only when I determine it's clear. The system relies too much on GPS/maps and not enough on situational awareness.
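The GPS/maps-versus-situational-awareness complaint comes down to what the merge decision is conditioned on. Here is a deliberately simplified sketch of the difference; the thresholds and inputs are all made up, and neither function reflects Tesla's actual logic:

```python
# Two toy merge policies; thresholds and inputs are invented for illustration,
# and neither is Tesla's actual logic.

def map_only_merge(distance_to_lane_end_m: float) -> bool:
    # Waits until the map says the lane is about to end, regardless of traffic.
    return distance_to_lane_end_m < 100.0

def gap_aware_merge(distance_to_lane_end_m: float,
                    gap_ahead_m: float, gap_behind_m: float,
                    closing_speed_mps: float) -> bool:
    # Starts looking well before the lane ends, and only goes when there's a
    # usable gap and the car behind isn't closing fast.
    have_room = gap_ahead_m > 15.0 and gap_behind_m > 10.0
    not_being_overtaken = closing_speed_mps < 2.0
    return distance_to_lane_end_m < 400.0 and have_room and not_being_overtaken

# Same situation, different answers: plenty of runway left, and a gap exists now.
print(map_only_merge(distance_to_lane_end_m=300.0))              # False: keeps waiting
print(gap_aware_merge(300.0, gap_ahead_m=25.0,
                      gap_behind_m=20.0, closing_speed_mps=0.5))  # True: takes the gap
```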
 
I honestly don't see how this could be described as "nearly flawless" by anyone anywhere.

I can't even get it to make sane decisions on any local highways in any of 3 different vehicles. This behavior was mirrored when trying it on California highways as well. Seriously, it generally can't perform well enough for more than a couple of miles in most cases without intervention (not even counting the useless need for torque on the wheel before it will do anything at all). And I'm not just talking about intervention because it's doing something kind of silly, pointless, or just dumb. Those are plentiful for sure. But I also mean intervention as in: if I don't prevent what it's trying to do, there may be an actual accident involved. So far, it's not been anywhere near flawless in any situation for me or anyone I personally know.

@clydeiii Really, though, if you can do a 25-mile commute with NoA with minimal interventions and minimal overrides (like letting it make and carry out its own decisions for lane changes, interchanges, etc.), I'd honestly love to see that in action if you have the time and inclination to make a video... one where the AP/NoA status can be seen the whole time. I will be disappointed if it's on an empty road with no lane changes, or at below traffic-flow speed, though.

I think you may be singing its praises a bit higher than deserved when looking at it objectively, but that's just my guess based on what is now thousands of miles of experience and attempted use of this feature across multiple vehicles in a wide variety of locations.
Sure, I'll try to make a video of my commute sometime in the near future. It is a decently dense stretch of I-495 (DC Beltway) and I-95, but it's also a reverse commute, which does help somewhat. Still nearly flawless. Only a few interventions are needed, typically right near a tricky exit to my work, right at the end. Sometimes it wants to get over too early on 495 to make the 95 exit, so I just cancel those on the screen.
 
So you seem to be assuming that, just because I mentioned only one specific area where it tries to kill me 100% of the time, that is the only place where it screws up, or that it is always predictable where it will screw up. It's not always predictable, and anyway NoA's greatest value would be on unfamiliar roads, where learning its pattern of screwing up is not really an option. On familiar roads I know what lane to be in (much better than it does), and I don't need its help.
I might not "need" its help on my daily commute, but the help it provides greatly reduces stress (for me).
 
Hopefully it comes with vision improvements, because there are definitely blind spots that create uncertainty for auto lane change. Long vehicles that span multiple cameras don't seem to be handled well, and cars that fill the entire repeater camera's frame go missing, or the depth estimation goes nuts. At least if it waits before making a lane change, it increases the chance the vision network will see the car that's right next to you.
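One way to picture that "wait before making a lane change" idea: require the target lane to look clear across every relevant camera for a sustained run of frames before committing, so a truck that momentarily drops out of one camera's view still blocks the maneuver. A minimal sketch with invented camera names and frame counts:

```python
# Toy "confirm the target lane is clear for several consecutive frames" gate.
# Camera names and the frame threshold are invented for illustration.
from collections import deque

REQUIRED_CLEAR_FRAMES = 15  # e.g. roughly half a second at 30 fps

class LaneChangeGate:
    def __init__(self):
        self.recent = deque(maxlen=REQUIRED_CLEAR_FRAMES)

    def update(self, detections_per_camera: dict) -> bool:
        """detections_per_camera maps camera name -> objects seen in the target lane."""
        # The lane counts as clear this frame only if EVERY camera agrees.
        clear_now = all(len(objs) == 0 for objs in detections_per_camera.values())
        self.recent.append(clear_now)
        # Commit only after a sustained, uninterrupted run of clear frames.
        return len(self.recent) == self.recent.maxlen and all(self.recent)

gate = LaneChangeGate()
frame = {"right_repeater": [], "right_pillar": ["long_trailer"], "rear": []}
print(gate.update(frame))  # False: the pillar camera still sees the trailer
```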