Welcome to Tesla Motors Club

Navigation on Autopilot is incredible

I’m impressed. NOA worked very well for my commute in both directions except for three things:

1) It refused to take my exit ramp coming home. It showed a message saying “unsupported maneuver, Navigate on Autopilot ending” a little ways before. It was easy for me to take over in plenty of time, but I was disappointed that my exit wasn’t supported and really wasn’t sure why. I was curious and optimistic an update would resolve this eventually.

2) It often suggests lane changes “to follow the route” unnecessarily. I’m not sure if it’s trying to change way too early, or is confused in some way.

3) When coming home, if I’m in the right lane (usually I’m not, due to traffic), there’s a spot where it swerved toward the middle of a merging on-ramp. AP does this sometimes in general, but at this spot it’s scary and seems dangerous: there’s a curve right ahead and a big guard rail at the start of a bridge. I usually avoid the right lane at this spot anyway because there’s always traffic merging in, but with NOA I also avoid it because of this behavior.

I keep using it both for convenience and to give them more training data.

A few days ago I noticed that suddenly #1 was resolved, and it now happily takes my exit. That’s pretty good progress in a relatively short time. I’m going to keep an eye out on the other issues and see if they get resolved soon too.
 
Is there any evidence that the car is collecting sensor data and sending it back to Tesla to add to the training?

That’s the whole point. Though there is a way to opt out as I recall.

AFAIK, there is no training in the car. That would be near useless, since it would only apply to your car. Plus, I'd want the data being sent from other cars to be properly curated -- we don't want cars to learn bad driving.

Correct.
 
Right??? This basically caused me to write the whole thing off until the next rev. Possibly, NoA is allergic to the right lane?

Oh, in my experience I’ve only seen this when it suggests that I move into the right lane, even though I’m passing slow traffic that is there, and my exit is 15 minutes away. In some places, like on the 520 bridge, it seems to suggest moving to the right lane, but after the bridge it gives up on that suggestion.
 
I'm loving NOA but it's not for every situation. My daily commute consists of 2-3 miles on freeway, sometimes with a transition from one freeway to another depending on which route I take. The other 12 miles are on major surface streets. I'm in Los Angeles, so freeways are everywhere and pretty congested most of the time. NOA is not my preferred choice for my daily commute, but I still use it as much as possible; when I get in a situation where it's not being aggressive enough, I switch off NOA and just use EAP or do the driving myself. That being said, on weekends, about twice a month, I travel to a location that is 12 miles away and involves 2 freeway transitions. Traffic is usually fairly light and NOA works great!!! The first time I tried it, I had to intervene a couple of times, but I've made the trip four times since NOA became available to me and it keeps getting better. Today I did not intervene once from the first freeway on-ramp to the third freeway off-ramp. It was amazing.. Keeps getting better.. like it's learning uh-oh AI is coming!! :eek:
 
>>Is there any evidence that the car is collecting sensor data and sending it back to Tesla to add to the training?
That’s the whole point. Though there is a way to opt out as I recall.

I'm a bit skeptical on this. First, the data (car sensors plus video, GPS, speed, etc.), once sent to Tesla, has to be properly labeled and annotated before it can be used in neural net training. This could be semi-automated but has to be manually checked. Remember that the "automation" of this would be using the same algorithms that are in the car.
Second, most miles driven are boring and not really worth this effort. If they are only taking miles that drivers marked as difficult, interesting or where crashes occurred, then it could have value.
Third, I was at an AI industry conference a few months ago where Andrej Karpathy (head of AI at Tesla) gave a talk. He never mentioned this...and the whole point of the talk was how highly reliable datasets were more important and time consuming than developing the NN models and code.
 

You’re skeptical of what? Did you even read the text you quoted? I never said anything about how the data is used or labeled. I said they collect data that’s used to improve the AutoPilot model(s). That’s it.

I work in applied ML. I don’t know what exactly Tesla’s setup is nor have I claimed to. I can make some pretty good guesses. First, ML requires a lot of examples, including a lot of what you’d call “boring” ones. Second, labeling does not need to be done by hand or entirely by hand. Given the nature of their work I assume there’s more human review, but I also would hope they’ve automated much of the process.

They run EAP in “shadow mode” when you’re not using it, where it will make predictions about what the driver is going to do next. I assume they collect snapshots of data at those decision points and then a signal about whether the prediction was right or wrong. When you are using it, I assume they do the same thing whenever you manipulate the controls or disengage EAP (or are in an accident, etc). They probably also collect a good deal of randomly sampled data.

You’re also thinking way too much about raw data (e.g. video and images, sensor data). While I assume they collect some of that, I expect they’re also collecting a good deal of featurized data.
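The "shadow mode" collection guessed at above might look something like this minimal sketch. To be clear, everything here (`Snapshot`, `ShadowLogger`, the feature names) is my own invention for illustration -- Tesla's actual pipeline is not public. The idea: keep every frame where the model's predicted action disagrees with what the driver actually did, plus a small random sample of the boring agreement frames.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    # Hypothetical featurized frame: e.g. speed, lane offset, gap to lead car
    features: dict
    predicted: str   # what the model would have done
    actual: str      # what the driver actually did

@dataclass
class ShadowLogger:
    sample_rate: float = 0.01          # fraction of agreement frames to keep
    log: list = field(default_factory=list)

    def record(self, features, predicted, actual):
        """Keep every disagreement; sample agreements at sample_rate."""
        disagreed = predicted != actual
        if disagreed or random.random() < self.sample_rate:
            self.log.append(Snapshot(features, predicted, actual))
        return disagreed
```

A disagreement (say, the model predicts `keep_lane` while the driver changes lanes) always gets logged; the random sampling is what would keep a trickle of "boring" examples flowing in without uploading every mile driven.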
 
>>I never said anything about how the data is used or labeled. I said they collect data that’s used to improve the AutoPilot model(s). That’s it.

Then I think we agree more than disagree.

However, without the raw data behind a decision where the human did something different from what the machine predicted, it is very difficult to "fix" the bug in the system, IMO -- other than collecting lots of similar situations as a red flag for developers to look at, in general.

>>First, ML requires a lot of examples, including a lot of what you’d call “boring” ones.
Yes, I completely agree. But at this point in time I would think that the database of boring examples is relatively full.
A car accident with injuries happens about once every million miles, and an accident with a death about once every 100 million miles (the interval is probably even longer, since some accidents have multiple deaths). This is the kind of thing they need multiple new examples of, since the database of these events is very sparse.

>>Second, labeling does not need to be done by hand or entirely by hand.
I agree. However, by definition it can't be completely automated -- otherwise the validation is meaningless, since the algorithm used to label the data is more or less the same one being tested. If you had a better algorithm for identifying a pedestrian in the labeling pipeline than you use in the car, you'd have to ask why the car is using the worse algorithm. I do suppose the auto-labeller could run multiple algorithms and let the human checker choose and modify.
But then I go back to what Karpathy actually said about what Tesla actually does: something like 1.5x or 2x the engineering cost for dataset development/validation compared to everything else.
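That multiple-algorithms-plus-human-checker idea could be sketched roughly like this (purely illustrative -- `auto_label` and its threshold are my invention, not anything Tesla has described): run several independent labelers over a frame, auto-accept only when they agree, and route disagreements to a human.

```python
from collections import Counter

def auto_label(candidates, agreement_threshold=1.0):
    """candidates: labels proposed by independent labeling algorithms.
    Returns (majority_label, needs_human_review)."""
    counts = Counter(candidates)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(candidates)
    # Anything short of the agreement threshold goes to a human checker
    return label, agreement < agreement_threshold
```

So three labelers all saying "car" gets accepted automatically, while two "car" and one "truck" still proposes "car" but flags the frame for review. The human only sees the contentious fraction, which is how the manual checking stays affordable.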
 
I've had the autopilot / lane-changing / missed-exit-ramp / stop-sign blues like the rest of you. I'm also in my 30-day free trial. This software, in beta, is not ready for prime time and should be given away free to all drivers. The standard in the software industry is to give beta software away until the bugs are ironed out. It also allows adoption by lots of drivers who can help developers as they test it out. While it's in its beta phase, it is ludicrous to charge $5500-7000 for the privilege of being guinea pigs for Tesla. I'm glad I've saved my cash.
 
I find Autopilot indispensable. My commute is down and up the 680 east of San Francisco, and NOA just makes it so much less stressful. It's not perfect, but well worth it. Some of the common things I've noticed:

1. It struggles with lane changes in dense traffic. When possible, I mitigate this by timing the signal that approves the lane change, waiting until the lane is sufficiently clear.
2. Notifications to change lanes for upcoming interchanges appear rather early in some situations. Usually the timing is fine, but sometimes it's like 2-3 miles before the interchange.
3. I still wish AP would behave a bit differently in the far left lane. California allows motorcycles to lane-split, so it would be nice if the car hugged the left side of the lane when in the leftmost lane; that would give motorcycles more clearance. I often end up disabling AP when I force the car more to the left.

Regarding the training of AP, I'm pretty sure that overriding AP is a triggering event for Tesla to collect the data around that event for AI training.
 
I figured out how I feel about AP and Nav on AP

I can, for the most part, walk, talk, and chew gum at the same time, and not have to think about the walking part. Ditto driving most places; exceptions are really bad traffic or areas unfamiliar to me. It's not that I'm not paying attention, but that at this point it's second nature.

AP requires that I make sure the car is 'walking' (driving and navigating) properly, which demands much more conscious attention to the car than merely driving myself would.

TACC is pretty good though, except when it gets scared by shadows or super-fast lane-splitting motorcyclists.