
Navigate on Autopilot is Useless (2018.42.3)

Thank you for this post. This aggressive accel/decel is making me sick to my stomach. This is the first time I’ve voluntarily driven my car manually in heavy traffic; I can’t handle the lurching on this firmware.
I haven't noticed this yet on my M3 AWD, but it has been very rainy/snowy in DC since 40.x rolled out here and traffic has been much slower than usual, so it's hard to say anything conclusive yet. I'll try to film my PM commute so we can see what's going on with my car.
 
https://i.imgur.com/2a8WbZC.jpg

The graphic above is from a highway drive during an uncontrolled test ... it seems that even following someone with a little speed variance (vs. good cruise control) made my X feel herky-jerky. Controlled test below.

1) Some great points and observations came out of a TOO forum post I made.
1a) There are good points that 40.2.1 seems more aggressive/tighter on following. I have over 43k miles on my '17 and use AP a *lot* (bought it with 4k miles on it); about 2/3 of those miles are highway AP road trips. I had 38k miles on my '16 X. The behavior is more noticeable to me now.
1b) Following someone on a steady cruise is reasonably smooth and feels close to an open road with my Tesla's cruise control set, assuming their cruise control is well designed (see point 2 below). My first post had two graphs from uncontrolled testing where my speed changed +/- 5 mph (vs. +/- 1 mph in my controlled test, point 2).
1c) I really liked the term "elasticity" for describing a more fluid following behavior. It seems obvious that the 'calm mode' setting should have an impact here, but they don't appear to let that influence AP/TACC, only manual driving.

2) I did some controlled testing following my wife. She was driving her Volt on battery with cruise control. She would tell me when she hit our test speed, then I'd wait a few seconds for things to settle and record in ScanMyTesla for about 30 seconds.
2a) My tests were done at 40, 45, 60, and 65 mph, using 5 units for the following distance. See the graph TITLEs for the mph.
2b) Roughly 100 samples per second for each of the data lines.
2c) I added front and rear torque because the combined HP line seemed to 'smooth' out the data points and I wanted everyone to see more detail.
2d) The 60 and 65 mph front and rear torque lines seem smoother, but I think that's because the larger numbers at 60 and 65 change the scale and 'smooth' them out visually.
2e) I then picked about 17 seconds out of the ~30-second recordings, choosing the smoothest/most consistent (flat) speed window for the graphs below.
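Roughly how such a window can be picked. This is just an illustrative sketch (the function and names are made up, and it assumes the ~100 Hz speed samples have been exported into a plain array): slide a fixed-length window over the recording and keep the start index with the lowest speed variance.

#include <cstddef>
#include <limits>
#include <vector>

// Return the start index of the window (windowLen samples) whose speed trace
// has the lowest variance, i.e. the flattest / most consistent segment.
std::size_t flattestWindowStart(const std::vector<double>& speedMph,
                                std::size_t windowLen)
{
    std::size_t bestStart = 0;
    double bestVariance = std::numeric_limits<double>::infinity();

    for (std::size_t start = 0; start + windowLen <= speedMph.size(); ++start)
    {
        double sum = 0.0, sumSq = 0.0;
        for (std::size_t i = start; i < start + windowLen; ++i)
        {
            sum += speedMph[i];
            sumSq += speedMph[i] * speedMph[i];
        }
        double mean = sum / windowLen;
        double variance = sumSq / windowLen - mean * mean;
        if (variance < bestVariance)
        {
            bestVariance = variance;
            bestStart = start;
        }
    }
    return bestStart;
}

// e.g. ~30 s of data at ~100 samples/s, looking for the flattest ~17 s window:
// std::size_t start = flattestWindowStart(speedSamples, 1700);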

[Graphs: speed and front/rear torque traces at 40, 45, 60, and 65 mph (pgUBDzY.jpg, QULCDEo.jpg, NiNIca8.jpg, zqqh6yD.jpg)]
 
I can feel those little green jaggies in my soul. They need to smooth out these frequent, abrupt adjustments.
 
Right now it reacts on a per-frame basis for most things. The "AI" as it stands is indeed dumb as a brick: it can only classify images. There's no overarching logic built in or trained.
The only thing that could resemble it is cut-in detection, but that isn't working at all, if it's even deployed (I can't tell). And that's just an isolated pattern for something very specific.

I think the point here is that it's a long, long way until you get contextual behavior that it can make intelligent decisions on.
They will still need if-then-else logic for the behavioral things for a good long while.
 
Not sure what you mean by "cut-in detection" but just in 40.x we have the AI that sometimes will slow you down when traffic on both sides of you is going slower than you. That is surely some sort of behavior driven by image recognition. And the thing that moves you over left or right when a big truck is in the lane next to you, surely that is a sign of behaviors to come that aren't simple hardwired "if then else" blocks of coding.
 
Not sure what you mean by "cut-in detection" but just in 40.x we have the AI that sometimes will slow you down when traffic on both sides of you is going slower than you. That is surely some sort of behavior driven by image recognition.

It may use the data gathered from image recognition, but the logic that uses that data is just normal programming.

And the thing that moves you over left or right when a big truck is in the lane next to you, surely that is a sign of behaviors to come that aren't simple hardwired "if then else" blocks of coding.

Yes, it is: state machines and control systems. Just regular, everyday, no-nonsense programming :)
Complex image processing through deep learning produces scores and classifications; you use that data in a classical way on a simplified representation of 3D space, and then simplify it further:

while (hasNewFrame())
{
    // Hand-written rule: if one of the vehicles beside us is a trailer,
    // shift toward the far lane marking to add margin.
    if (!vehiclesNextToMe.empty() && vehiclesNextToMe.contains(VehicleType::Trailer))
    {
        carController.MoveToLeftLaneMarking();
    }

    // if ... (more rules of the same shape)
}
 
Image recognition just builds a 3D map of your environment and attaches some metadata, such as whether the space looks drivable, markers on the road, object classifications, etc.

The driving logic is still "if..then..else", which will actually go quite a long way. All the logic, such as moving within your lane to increase margins and slowing down when passing a much slower lane, can be boiled down to a single common algorithm:

Every classified object gets a calculated bounding box of where it could plausibly end up, given its maximum physically possible movement in 0.5 s, 2 s, 4 s, etc. These bounding boxes can be used to calculate the risk of your vehicle intersecting them, which you cap at a maximum acceptable number and then apply back to your planned trajectory when evaluating every possible path to take.

Example: this should eliminate all plans that try to pass a vehicle going 20 km/h while you're doing the 120 km/h speed limit, because you would be passing the other vehicle with only 2 m of space and there's a small chance it could move into your lane. Hence the maximum applicable passing speed might be, say, 40 km/h, because that's where the risk stays acceptable (e.g. low collision probability based on object vectors and acceptable damage potential).

All of it is calculated algorithmically from quite simple rules. There's obviously more to it, and the implementation is of course more work than this makes it sound, but it should be very doable and doesn't rely on AI for anything other than vision recognition. It is also 100% unit-testable without relying on the driving simulator.
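Roughly what that could look like in code. This is only an illustrative sketch; the types, horizons, and risk scoring below are invented for the example, not taken from any real planner: expand each object's box by its worst-case motion at a few horizons, count how often a candidate path's footprint intersects those boxes, and drop any plan whose score exceeds the cap.

#include <algorithm>
#include <cmath>
#include <vector>

// Axis-aligned box in the car's frame (metres).
struct Box { double xMin, xMax, yMin, yMax; };

static bool overlaps(const Box& a, const Box& b)
{
    return a.xMin < b.xMax && b.xMin < a.xMax &&
           a.yMin < b.yMax && b.yMin < a.yMax;
}

// Classified object: where it is now, plus the fastest it could physically move.
struct TrackedObject {
    double x, y;              // current centre
    double halfLen, halfWid;  // current extent
    double maxSpeed;          // worst case for its class, m/s

    // Everywhere it could plausibly be after horizonSec.
    Box reachableAt(double horizonSec) const {
        double r = maxSpeed * horizonSec;
        return { x - halfLen - r, x + halfLen + r,
                 y - halfWid - r, y + halfWid + r };
    }
};

// One candidate plan for our own car: sampled poses at a few future times.
struct CandidatePath {
    struct Pose { double t, x, y; };
    std::vector<Pose> poses;          // assumed non-empty
    double halfLen = 2.5, halfWid = 1.1;

    Box footprintAt(double horizonSec) const {
        const Pose* best = &poses.front();   // pose closest to the horizon
        for (const auto& p : poses)
            if (std::abs(p.t - horizonSec) < std::abs(best->t - horizonSec))
                best = &p;
        return { best->x - halfLen, best->x + halfLen,
                 best->y - halfWid, best->y + halfWid };
    }
};

// Crude risk score: count (horizon, object) pairs where the plan's footprint
// intersects the object's reachable box. A real system would weight this by
// probability and damage potential rather than just counting.
double riskOf(const CandidatePath& path, const std::vector<TrackedObject>& objects)
{
    const double horizons[] = {0.5, 2.0, 4.0};
    double risk = 0.0;
    for (double h : horizons)
        for (const auto& obj : objects)
            if (overlaps(path.footprintAt(h), obj.reachableAt(h)))
                risk += 1.0;
    return risk;
}

// Drop every plan whose risk exceeds the cap; whatever survives can then be
// ranked by speed, comfort, and so on.
std::vector<CandidatePath> acceptablePlans(std::vector<CandidatePath> plans,
                                           const std::vector<TrackedObject>& objects,
                                           double maxAcceptableRisk)
{
    plans.erase(std::remove_if(plans.begin(), plans.end(),
                    [&](const CandidatePath& p)
                    { return riskOf(p, objects) > maxAcceptableRisk; }),
                plans.end());
    return plans;
}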
 
ML doesn't make sense for high-level control stuff, but I can imagine using an RNN downstream of the vision outputs for motion prediction. It can predict a future position and velocity for an object in the scene. Other downstream models could be ML predictions for different intersection and parking lot topologies, e.g. given all these recognized lane markings from the vision CNN, predict what type of intersection this is, or where the parking aisles will be even if they're not visible.
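To make the RNN idea concrete, here is a toy, hand-rolled single-cell version for one tracked object. Everything here is hypothetical: in practice the weights would be learned, the network would be larger, and it would run batched on the NN hardware rather than as scalar C++.

#include <array>
#include <cmath>

// One vanilla RNN cell per tracked object. It consumes one observation per
// vision frame (x, y, vx, vy) and carries a hidden state; a linear readout
// maps the hidden state to a predicted position/velocity at a fixed horizon.
constexpr int OBS = 4;   // x, y, vx, vy
constexpr int HID = 8;   // hidden units (arbitrary for the sketch)

struct MotionRnn {
    // Learned parameters (placeholders, zero-initialised here).
    std::array<std::array<double, OBS>, HID> wIn{};   // input  -> hidden
    std::array<std::array<double, HID>, HID> wRec{};  // hidden -> hidden
    std::array<double, HID> bias{};
    std::array<std::array<double, HID>, OBS> wOut{};  // hidden -> prediction

    std::array<double, HID> hidden{};                 // per-track state

    // Feed one frame's measurement for this track.
    void step(const std::array<double, OBS>& obs) {
        std::array<double, HID> next{};
        for (int i = 0; i < HID; ++i) {
            double acc = bias[i];
            for (int j = 0; j < OBS; ++j) acc += wIn[i][j] * obs[j];
            for (int j = 0; j < HID; ++j) acc += wRec[i][j] * hidden[j];
            next[i] = std::tanh(acc);
        }
        hidden = next;
    }

    // Predicted (x, y, vx, vy) at whatever horizon the readout was trained for.
    std::array<double, OBS> predict() const {
        std::array<double, OBS> out{};
        for (int i = 0; i < OBS; ++i)
            for (int j = 0; j < HID; ++j)
                out[i] += wOut[i][j] * hidden[j];
        return out;
    }
};

// Per frame: rnn.step({x, y, vx, vy}); then rnn.predict() when a forecast is needed.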
 

A Kalman filter might still do a better job, and they are most likely already in use.
 
A Kalman filter is built the same way: it fuses different sensor inputs into a model of predicted data, with a feedback loop of actual data.

A Kalman filter requires you to explicitly create a motion model that will probably be overly simplistic, i.e. assume constant velocity or acceleration; an RNN will learn the underlying dynamics from the data and possibly pick up on hidden patterns. One will produce better predictions than the other.
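For comparison, this is roughly what "explicitly create a motion model" means: a sketch of just the prediction side of a constant-velocity Kalman tracker (state layout, noise value, and names invented for illustration). The F matrix below is the hand-written assumption that an RNN would instead have to learn from data; the standard Kalman update step (not shown) would fuse in each new vision measurement.

#include <array>

// State: [px, py, vx, vy] for one tracked object.
struct CvKalmanTrack {
    std::array<double, 4> x{};                    // state estimate
    std::array<std::array<double, 4>, 4> P{};     // state covariance
    double q = 0.5;                               // process noise strength

    void predict(double dt) {
        // Hand-written motion model: position advances by velocity,
        // velocity assumed constant over dt.
        const std::array<std::array<double, 4>, 4> F = {{
            {1, 0, dt, 0},
            {0, 1, 0, dt},
            {0, 0, 1,  0},
            {0, 0, 0,  1},
        }};

        // x <- F x
        std::array<double, 4> xp{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                xp[i] += F[i][j] * x[j];
        x = xp;

        // P <- F P F^T + Q (Q simplified to q on the diagonal).
        std::array<std::array<double, 4>, 4> FP{}, Pp{};
        for (int i = 0; i < 4; ++i)
            for (int k = 0; k < 4; ++k)
                for (int j = 0; j < 4; ++j)
                    FP[i][j] += F[i][k] * P[k][j];
        for (int i = 0; i < 4; ++i)
            for (int k = 0; k < 4; ++k)
                for (int j = 0; j < 4; ++j)
                    Pp[i][j] += FP[i][k] * F[j][k];   // times F transposed
        for (int i = 0; i < 4; ++i) Pp[i][i] += q;
        P = Pp;
    }
};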
 
Agreed, and I think this is the way to go. It makes the code more testable. Narrow AI/NN is primarily needed only where the input is chaotic.

Object classifications implement their own Predict method, which depends on the type of object. Some may use additional help from a downstream NN.

An issue with downstream NNs is that they can be rather sensitive to slight, seemingly insignificant changes upstream, and that input can be hard to simulate, likely for the same reason an NN was chosen in the first place. If such a network can only be trained in the context of the entire system, it becomes much harder to test. But probably solvable.
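A tiny sketch of that per-type Predict idea (all names invented; the pedestrian branch just marks where a downstream NN could be called):

// Hypothetical per-type prediction: each classified object type supplies its
// own motion prediction, and some types can delegate to a downstream NN.
struct PredictedState { double x, y, vx, vy; };

struct ObjectTrack {
    virtual ~ObjectTrack() = default;
    virtual PredictedState predict(double horizonSec) const = 0;
protected:
    double x = 0, y = 0, vx = 0, vy = 0;   // current estimate from the tracker
};

// Vehicles: simple kinematic extrapolation is often good enough.
struct VehicleTrack : ObjectTrack {
    PredictedState predict(double horizonSec) const override {
        return { x + vx * horizonSec, y + vy * horizonSec, vx, vy };
    }
};

// Pedestrians: erratic, so this type might defer to a learned predictor.
struct PedestrianTrack : ObjectTrack {
    PredictedState predict(double /*horizonSec*/) const override {
        // Placeholder for a call into a downstream NN (e.g. the RNN above);
        // fall back to "roughly where they are now" here.
        return { x, y, 0.0, 0.0 };
    }
};

// The rest of the stack holds ObjectTrack pointers and calls predict() without
// caring which concrete type, and therefore which prediction method, it gets.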
 
driving me to work every day

It's not doing that. It's assisting you and you're driving. Be sure to know the difference or you'll end up like so many other people, angry that "autopilot crashed". Most of us started out like you, pretty psyched about what seems like an impressive system at first. But over time and experience, you start to see the man behind the curtain more and more.
 
I think he is genuinely aware that it is 'assisting' him. Like him, I find the assistance it gives me pretty significant; it really makes the cognitive 'work' of driving easier and much less mundane. I do some traveling to play sports (or other activities) and go to places that are a little farther away because AP makes it 'easier'. My son has a long commute and the 'assistance' makes a huge difference.

To be honest, though, your point about getting too comfortable and having a potential 'autopilot crash' is valid. I have to snap back sometimes when I realize I'm not paying enough attention and have looked down at the map or music search or whatever for 'too long'. Strange things happen out there, and by paying attention I've avoided situations where 'autopilot' definitely would have crashed!
 