Welcome to Tesla Motors Club

“Drive on Nav” feature in Enhanced Autopilot v9.0

I have read the posts, yes. Based on what he’s written, I think jimmy is basing a fair amount of his educated guesses on the names of files. I don’t believe jimmy has access to everything that constitutes Enhanced Autopilot. I don’t see any post where he indicates that.

In the Neural Networks thread, jimmy acknowledges there may be neural networks he doesn’t have access to, and he acknowledges that he’s sometimes speculating.



You’re right: the vision neural networks are not doing path planning or control. They are doing vision. But that doesn’t mean that other neural networks aren’t doing path planning or control.

One of the parts of path planning is prediction. It looks like Waymo is using deep learning for prediction. The same could be true for Tesla.
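To make that concrete, here is a toy sketch (entirely my own invention, not Tesla's or Waymo's actual architecture) of the input/output contract a learned prediction module would have: recent track history in, short-horizon future trajectory out. The "network" here is just one random linear layer, to keep the focus on the shapes.

```python
import numpy as np

rng = np.random.default_rng(0)

HIST_STEPS = 10    # past observations fed in (x, y per step)
PRED_STEPS = 5     # future positions predicted (x, y per step)

# Stand-in for learned weights: maps flattened history -> flattened future.
W = rng.normal(scale=0.1, size=(HIST_STEPS * 2, PRED_STEPS * 2))

def predict_trajectory(history):
    """history: (HIST_STEPS, 2) array of past (x, y) positions."""
    flat = np.asarray(history).reshape(-1)   # (20,)
    future = flat @ W                        # (10,)
    return future.reshape(PRED_STEPS, 2)     # (5, 2) predicted waypoints

# A car that has been moving steadily along the x axis:
history = np.stack([np.arange(HIST_STEPS), np.zeros(HIST_STEPS)], axis=1)
future = predict_trajectory(history)
print(future.shape)  # (5, 2): five predicted (x, y) waypoints
```

A real prediction network would of course be trained on logged driving data and condition on map and neighbor-agent context, but the interface above is the shape of the problem.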


Just to clarify a few things:

While it's true that speculation about how AP works is just that, it is based on quite a bit more than just filenames. The reason that I know about post processing networks is because I walked the decompiled binaries and found the other networks - they do not appear in the filesystem. From that code I can tell what libraries are used, the shape of the networks, the character of their outputs and so forth. That information constrains the possible uses of those networks pretty tightly so there's a lot that can be deduced from it. One of the things we can deduce is that the NNs don't yet control the car directly, rather they provide highly abstracted perceptual information that comparably simple control software can use to make effective driving decisions.
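As a toy illustration of that kind of deduction (the shape conventions and labels below are entirely my own, not anything recovered from Tesla's binaries): an output tensor's shape alone already narrows down what a network can plausibly be doing.

```python
# Guess a plausible role for a network output from its shape alone.
# The heuristics are invented for illustration.

def guess_output_role(shape):
    if len(shape) == 3:
        # Spatial grid with per-cell channels: looks like dense perception
        # output (segmentation masks, lane/edge heatmaps), not control.
        return "dense perception map"
    if len(shape) == 2 and shape[1] in (4, 5, 6):
        # N rows of box-like parameters: object detections.
        return "object detections"
    if len(shape) == 1 and shape[0] <= 3:
        # A couple of bare scalars could be actuator commands (steer,
        # accel); not seeing outputs like this is evidence the NNs
        # aren't driving the car directly.
        return "possible control command"
    return "unknown"

print(guess_output_role((96, 160, 8)))   # dense perception map
print(guess_output_role((100, 4)))       # object detections
print(guess_output_role((2,)))           # possible control command
```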

#323

I've learned a lot about the networks since those posts but haven't bothered to post technical details that aren't going to be interesting to most of the people on these forums. It takes a lot of time to carefully write up that stuff and the audience for it is pretty small. But it's worth saying that I haven't seen anything so far that invalidates my earlier speculation and I've seen plenty of stuff that supports it.

But to weigh in generally, what I see is consistent with the idea that Tesla is going with Karpathy's Software 2.0 thing as @strangecosmos suggests, and I agree that these EAP features are probably going to roll out incrementally with less driver intervention over time as the features mature and as it becomes clear how the driver population is using and reacting to them. The car is already doing a lot of processing beyond what is strictly needed for the features we can use today and it's reasonable to see that as evidence of Tesla laying the groundwork for future capabilities. That future AP growth is dominated by features that will be brought about by an increase in NN capabilities. So just as the Software 2.0 concept implies, the fraction of AP behavior that is learned will continue to grow. Right now that increases the amount of written code in absolute terms, but the fraction of AP functionality that comes from that written code will continue to decline.
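A minimal sketch of that division of labor (entirely hypothetical, not decompiled code): the NNs emit abstracted percepts such as lane offset and road curvature, and comparably simple written code turns them into a steering decision.

```python
# Simple proportional controller over NN-derived lane geometry.
# Gains and units are invented for illustration.

def steering_command(lane_center_offset_m, curvature_1pm,
                     gain_offset=0.5, gain_curve=2.0):
    """lane_center_offset_m: lateral offset from lane center (NN output).
    curvature_1pm: road curvature in 1/m (NN output).
    Returns a steering value in arbitrary units (positive = steer left).
    """
    return gain_offset * lane_center_offset_m + gain_curve * curvature_1pm

# Drifted 0.2 m right of center on a straight road -> steer left a bit.
print(round(steering_command(0.2, 0.0), 3))  # 0.1
```

The point is that once perception is abstracted this far, the control side can stay small and hand-written; growing the learned fraction means pushing more of the stack into the NN side, not making this code fancier.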
 
I find your posts about NN architecture very interesting and informative. I may have never responded to your posts, but would definitely be appreciative if you continue to write up what you learn. I suspect there are many others in the same boat.
 

I agree that these posts are awesome. Honestly, some of the most useful stuff on TMC!

Jimmy, what would be your guess on when FSD will become viable?
 
Thanks for weighing in jimmy. Do you have any sense of whether path planning — as opposed to control (i.e. actuation of the steering wheel, accelerator, and brakes) — is run by a neural network?

By the way, if you publish your technical write-ups on Medium, you might be able to make some money doing it. And you could potentially reach a wider audience. That might make it worth your time. (Everyone can read 3 paywalled articles on Medium for free every month. You can also share a special link that gets past the paywall. So if you shared that link on TMC, everyone could read the article for free.)
 

Thanks for the suggestion - I didn't know that stuff about Medium. The only compensation that I care about is feeling like the effort is useful to people. TMC has been good for that (thanks guys!), but the format is a bit restrictive, so I'll look at Medium next time and take your suggestion of sharing a link to it here.
 
I'd imagine it's a pretty simple feature in practice:

1) Nav says you need to leave the motorway at the next exit (2 miles away)
2) 1 mile away from the exit, Autopilot makes a beep and says "lane change required for exit, proceed?"
3) You confirm it somehow, with a tap of the stalk or the indicator or something
4) AP uses the side repeaters and ultrasonics and some object ID tracking etc. to try and dodge cars whilst changing over however many lanes it needs to.
5) Once you're in the correct lane, it'll signal when you need to take the exit, and then also take the exit
6) It'll slow down and bring you to a gradual stop at the end of the exit

And, I imagine, this will be a US only feature for the foreseeable future (as the current slow down for the 'off ramp' feature is)

Useful? Well, I guess so - but not with the current nag rate...
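The six steps above can be sketched as a small state machine (the state names and trigger events are my own invention for illustration, not anything from the actual software):

```python
# Drive on Nav as a tiny event-driven state machine. Unknown
# (state, event) pairs leave the state unchanged.

def next_state(state, event):
    transitions = {
        ("cruising", "exit_within_1_mile"): "awaiting_confirmation",
        ("awaiting_confirmation", "driver_confirms"): "changing_lanes",
        ("changing_lanes", "in_exit_lane"): "taking_exit",
        ("taking_exit", "on_off_ramp"): "slowing_to_stop",
    }
    return transitions.get((state, event), state)

s = "cruising"
for ev in ["exit_within_1_mile", "driver_confirms",
           "in_exit_lane", "on_off_ramp"]:
    s = next_state(s, ev)
print(s)  # slowing_to_stop
```

Note the "driver_confirms" transition is exactly the step that reportedly goes away in later builds; dropping it would just mean wiring "exit_within_1_mile" straight to "changing_lanes".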

Good outline of the upcoming feature.

Just one thing I think is slightly wrong.

>6) It'll slow down and bring you to a gradual stop at the end of the exit

It will slow down yes, and stop if there is a vehicle there. However, I do not think it will stop before a stop sign or red light, if it is the first vehicle in line.

You’re right: the vision neural networks are not doing path planning or control. They are doing vision. But that doesn’t mean that other neural networks aren’t doing path planning or control.

One of the parts of path planning is prediction. It looks like Waymo is using deep learning for prediction. The same could be true for Tesla.


Yeah, there's no evidence that the NNs are not doing path planning / control. But it is extremely unlikely that they are, except perhaps in some moonshot research projects.

It is possible they have an NN or some other kind of machine learning for prediction, but I really doubt it at this point.

Also, I usually don't consider prediction a subset of path planning, but I see why you might want to lump it in there.

I find your posts about NN architecture very interesting and informative. I may have never responded to your posts, but would definitely be appreciative if you continue to write up what you learn. I suspect there are many others in the same boat.

+1
 
I guess you could categorize it as either perception or path planning. David Silver categorized it as part of path planning, so I went with that.

All based on an anonymous source, but this article from Electrek indicates that the latest dev build doesn't require user input to perform lane changes:

"In the previous version of the update, ‘Drive On Nav’ automatically suggested lane changes based on the destination, but the driver needed to pull on the Autopilot stalk to initiate any lane change.

Now it is not required anymore in the latest version of the feature."

Tesla Version 9 update: getting closer to release, now fixes one of the biggest issues with Model 3 UI, and more
 
EAP includes the two side repeater cameras, which are rear-facing and perfect for watching the blind spot and farther back. At least that's how they originally described it.
Yeah, but I don't think the current incarnation of EAP actually uses these cameras. I suspect it will require the new "10 times faster" custom processor board to be able to handle the data rate of all the cameras on the car.
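A quick back-of-envelope on why feeding all eight cameras is demanding. The resolution, frame rate, and bit depth below are assumptions for illustration, not published Tesla specs:

```python
# Rough raw pixel bandwidth for a full camera suite (assumed figures).

cameras = 8
width   = 1280   # assumed pixels
height  = 960    # assumed pixels
fps     = 36     # assumed frames per second
bits_px = 12     # assumed raw bit depth

bits_per_second = cameras * width * height * fps * bits_px
gbit_per_second = bits_per_second / 1e9
print(f"{gbit_per_second:.1f} Gbit/s of raw pixel data")  # 4.2 Gbit/s
```

Even with generous assumptions, that's several gigabits per second of raw input before any NN inference happens, which is why a much faster processor is a plausible prerequisite.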
 
What I'm really perplexed by is how much they're throwing into V9 right at the starting gate.

To really do lane changes without user input requires knowledge of the cars on the side/behind.

But, the latest version of the SW doesn't use the side rear facing cameras to detect cars. I expected Tesla to first release better side/blindspot monitoring BEFORE releasing the SW that does automatic lane changes.

This V9.0 release seems absolutely massive in terms of features added.
 


We have no idea what will and won't be in there at public launch. Just because it's on an engineer car doesn't mean it'll be in the 9.0.0 release that goes to customers.
 