Autopilot article - Best I've seen, so far.

This is an excellent semi-technical summary of the underlying technology in Tesla Autopilot: Exclusive: The Tesla AutoPilot - An In-Depth Look At The Technology Behind the Engineering Marvel

Very interesting that, as it stands, the Tesla system, using a deep neural network (DNN), will mark up the edges of roads and objects even where there are absolutely NO lane markings. So I think driving on unmarked roads is obviously in the not-too-distant future. Now that we know it's based on DNN computing, we know the system has to learn to recognise all these things (or be taught), and that's what we drivers seem to be doing: teaching it "the shape of things". It's good news to me, anyway; maybe you guys already knew all that :redface:

N.B. The full article is in 6 parts, so follow the links or use the drop-down list at the bottom of each section to get the full story.
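
For anyone wondering what "marking up road edges with a DNN" could look like in practice, here is a toy sketch of the general idea - a tiny fully-convolutional network that maps a camera frame to a per-pixel road-edge probability map. To be clear, this is not Tesla's actual network; the architecture, layer sizes and names are all my own assumptions for illustration.

```python
# Toy sketch (NOT Tesla's network): a tiny fully-convolutional net that
# turns an RGB camera frame into a per-pixel "road edge" probability map.
import torch
import torch.nn as nn

class EdgeSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsample the frame while extracting features...
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # ...then upsample back to full resolution with one edge logit per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))  # raw logits per pixel

net = EdgeSegmenter()
frame = torch.rand(1, 3, 96, 160)        # one dashcam-style RGB frame
edge_prob = torch.sigmoid(net(frame))    # P(pixel lies on a road edge)
print(edge_prob.shape)                   # torch.Size([1, 1, 96, 160])
```

The point is simply that such a net learns edges from pixel appearance (texture, contrast, geometry), so painted lane markings are helpful but not required - which would square with the article's description.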
 
To what extent does the DNN make use of the smarter cars' equipment - i.e., Mobileye's cameras, etc. - versus the input of the entire Tesla fleet of drivers? Our Model S, for example, is pre-auto-just-about-everything, so the data TM can glean from our vehicle is limited to roads driven. But if - as definitely is the case - the roads we drive are Off The Beaten Path, then presumably those data have a value inherently greater than those coming from the hundreds of Model Ss daily traversing the same roads within Silicon Valley.

Much more importantly, however, let's assume there are just two drivers. One drives extremely well - stays in the center of lanes, takes turns appropriately by using the correct egress/ingress lanes, and so forth; the other wanders all over the place - crossing fog lines, center lines, etc. Obviously, TM is going to want to discount that driver's input by a lot. Can the DNN do this?
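
To make the question concrete, here is one hedged sketch of how a fleet-learning pipeline could discount the wanderer: score each driver by how steadily they hold the lane centre, then use that score as a per-sample weight in the training loss. The scoring rule, the threshold, and the names below are all invented for illustration; I have no idea whether TM actually does anything like this.

```python
# Hypothetical sketch: discount erratic drivers via per-driver sample weights.
import numpy as np

def driver_weight(lane_offsets, max_std=0.5):
    """Map the std-dev of a driver's lane-centre offset (metres, assumed
    measurement) to a weight in (0, 1]; wanderers get discounted heavily,
    steady drivers count (almost) fully."""
    return float(np.clip(1.0 - np.std(lane_offsets) / max_std, 0.05, 1.0))

# Two hypothetical drivers: one steady, one crossing fog/centre lines.
steady   = np.random.normal(0.0, 0.05, 1000)   # hugs the lane centre
wanderer = np.random.normal(0.0, 0.60, 1000)   # wanders all over the lane

for name, offsets in [("steady", steady), ("wanderer", wanderer)]:
    print(name, driver_weight(offsets))
# A weighted training loss then becomes:
#   loss = sum(w_i * per_sample_loss_i) / sum(w_i)
# so the wanderer's trajectories barely influence what the net learns.
```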
 
A long watch, but it may answer some questions: The Future of Computer Vision and Automated Driving by Prof. Amnon Shashua - YouTube

The Denali Hwy would be a good example of "off the beaten path".
 
The usual procedure is for the neural net "programmer" to initially set base rules for "good" and "bad" behaviour and occasionally correct aberrant behaviour in the system. In that respect it is a kind of learning, but, unlike a human being, the machine will not then negligently break the base rules except to avoid a collision or other disaster. For instance, it might drive out of a lane if that would avoid a collision and braking hard would not. I don't know that this particular system is that advanced just yet. Good behaviour is weighted so that it will occur most often; bad behaviour is the reverse.
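
Here is a little sketch of that "base rules" idea in code, purely to illustrate: a learned policy proposes an action, and a hand-written rule layer vetoes leaving the lane unless braking hard would not avoid the obstacle. The rules, the perception fields, and the action names are my assumptions, not any real Autopilot logic.

```python
# Illustrative only: a rule layer that enforces "stay in lane" except
# when leaving the lane is the only way to avoid a collision.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead: bool      # something in our lane (assumed input)
    braking_sufficient: bool  # hard braking alone would avoid it (assumed)

def rule_filter(proposed: str, p: Perception) -> str:
    """Apply the base rule to whatever action the learned policy proposed."""
    if proposed == "leave_lane":
        # Allowed only when braking alone would NOT avoid the obstacle.
        if p.obstacle_ahead and not p.braking_sufficient:
            return "leave_lane"
        return "brake_hard" if p.obstacle_ahead else "stay_in_lane"
    return proposed

print(rule_filter("leave_lane", Perception(True, False)))   # leave_lane
print(rule_filter("leave_lane", Perception(True, True)))    # brake_hard
print(rule_filter("leave_lane", Perception(False, True)))   # stay_in_lane
```

The weighting the post describes would then live in training (good behaviour reinforced, bad behaviour penalised), while a hard rule layer like this keeps the worst outputs off the road regardless of what the net has learned.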