AP tried to crash my car TWICE this week!

Hey Guys,

So about a week after getting the “Autopilot features limited” warning, I had a near accident using AP this week. The car tried to veer into a side barrier without any warning, and I was able to take quick control to avoid crashing. I didn't get any takeover warnings.

I’ve managed to capture this with my dash cam.


Action starts at 00:16.

I've reported this to Tesla and am waiting for an appointment with the local SC.

But today I had another unexpected reaction from AP: the car suddenly tried to veer out of the lane, but luckily I was able to take manual control. Again, no takeover warnings. (No cam footage; the car didn't record at the exact time of the incident.)

I initially thought it could be my fault in some way, even though I was paying full attention to the road and was able to take over in both incidents, but after what happened today I'm starting to think something is seriously wrong with my car's AP suite. I hope it's a hardware failure and not a dangerous fleet-wide software issue.
  • The lane markings were clear in both situations and I didn't see any obstructions.
  • Both incidents happened at night. (Visibility issue?)
  • All of this started after the last firmware update. I'd driven almost 9,000 km (with a ton of AP use) with zero issues before that.
  • I'm also starting to notice the car mistaking the opposite side of the road for adjacent lanes. It's happening a LOT more now than before.
  • After my initial complaint to Tesla via email, the car asked to update its firmware again (same firmware version).

I will no longer use AP until I figure this out, but I thought I'd ask: is anyone else seeing similar issues?
 
Ok, I managed to find the footage of the second incident.

I've edited the clip so it starts straight away. Pay attention to how the car tries to turn out of the lane.


Both incidents happened at similar road layouts.

[Screenshots of the two road layouts]


Any thoughts guys?
 
...I thought I'd ask: is anyone else seeing similar issues?

Would you also ask the dead Tesla drivers who used Autopilot?

Autopilot is a hands-on feature. You need to monitor its steering torque at all times, and it shouldn't take much effort to do so.

I do that by using the weight of my arm to apply a constant counter-torque. That way it takes very little effort to correct the steering when it swerves unexpectedly and quickly.

Until Autopilot passes its beta status and reaches production quality, if you don't know how to monitor its torque, accidents, injuries, and of course deaths will continue.
 
Every Tesla I have had with AP1 and AP2 has always done that. That is why you should only use AP on the highway or freeway.

It even does that if you are in the slow lane and there is an exit on the highway or freeway.

And remember, AP is meant to be used as a driver's aid.
 
...I will no longer use AP until I figure this out...

In the old days, human programmers had to "figure this out" and hardcode or write the program themselves to fix the problem.

Tesla is leaning toward Artificial Intelligence, where human programmers no longer have to "figure this out" because the machine effectively writes the code itself.

You just feed it a whole bunch of data and the AI figures it out eventually.

Each time you correct Autopilot, you contribute a tiny fraction of the data that will eventually shape the code and fix the problem.

Tesla could hardcode a solution for a scenario at a specific location, but that would slow down what Tesla is aiming for: "a general solution for self-driving that works well everywhere."
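
Very roughly, the difference looks something like this. This is just a toy sketch I made up to show the idea; the gain, the data, and the linear fit are all invented, and it's obviously nothing like Tesla's real code:

```python
# Toy contrast between a hand-written rule and a rule learned from driving data.
import numpy as np

def hardcoded_steering(lane_offset_m: float) -> float:
    """Hand-written rule: an engineer picked the 0.5 gain by hand."""
    return -0.5 * lane_offset_m

# "Fleet data": lane offsets the car saw, and the steering a human applied.
offsets = np.array([[-1.0], [-0.5], [0.0], [0.5], [1.0]])
human_steering = np.array([0.48, 0.26, 0.0, -0.24, -0.51])

# Learned rule: recover the gain from the examples instead of choosing it.
gain, *_ = np.linalg.lstsq(offsets, human_steering, rcond=None)

def learned_steering(lane_offset_m: float) -> float:
    """Behaviour recovered from human corrections rather than written by hand."""
    return float(gain[0] * lane_offset_m)

print(hardcoded_steering(0.8), learned_steering(0.8))
```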
 
In the old days, human programmers had to "figure this out" and hardcode or write the program themselves to fix the problem.

Tesla is leaning toward Artificial Intelligence, where human programmers no longer have to "figure this out" because the machine effectively writes the code itself.

You just feed it a whole bunch of data and the AI figures it out eventually.

Each time you correct Autopilot, you contribute a tiny fraction of the data that will eventually shape the code and fix the problem.

Tesla could hardcode a solution for a scenario at a specific location, but that would slow down what Tesla is aiming for: "a general solution for self-driving that works well everywhere."

I don't think that is true. They are using NNs for object recognition, but my understanding is that the actual driving portion is all procedural code.
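
The split I mean would look roughly like this. This is only my own sketch of that kind of architecture (the detections and gains are invented), not anything from Tesla:

```python
# Sketch: a neural net handles perception, ordinary procedural code decides steering.
import numpy as np

def perception_nn(camera_frame: np.ndarray) -> dict:
    """Stand-in for the vision NN: in reality this would be a trained network;
    here it just returns fake lane-line positions in metres from the car centre."""
    return {"left_line": -1.8, "right_line": 1.7, "lead_car_distance": 42.0}

def procedural_planner(detections: dict) -> float:
    """Plain hand-written control logic operating on the NN's outputs."""
    lane_center = (detections["left_line"] + detections["right_line"]) / 2.0
    steering_gain = 0.4
    return -steering_gain * lane_center  # steer toward the perceived lane centre

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # dummy camera frame
print(procedural_planner(perception_nn(frame)))
```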
 
I think it was just a little confused by the dashed lines. Normally the edges of roads are delineated by solid white lines.

It is getting better, but Tesla still wants you to keep your hand on the wheel and be ready to take over at any time.

If it wanted to kill you, it could have done it. This was just a glitch.

Current Autopilot is just a driver's aid, not yet full self-driving.

After some time you will know the areas where it does well, and also where you need to take control.
 
I don't think that is true. They are using NNs for object recognition, but my understanding is that the actual driving portion is all procedural code.

What Makes Tesla’s Autopilot Different


"Behavior Cloning

Tesla’s cars collect so much camera and other sensor data as they drive around, even when Autopilot isn’t turned on, that the Autopilot team can examine what traditional human driving looks like in various driving scenarios and mimic it, said the person familiar with the system. It uses this information as an additional factor to plan how a car will drive in specific situations—for example, how to steer a curve on a road or avoid an object.

Such an approach has its limits, of course: behavior cloning, as the method is sometimes called, cannot teach an automated driving system to handle dangerous scenarios that cannot be easily anticipated. That’s why some autonomous vehicle programs are wary of relying on the technique.

But Tesla’s engineers believe that by putting enough data from good human driving through a neural network, that network can learn how to directly predict the correct steering, braking and acceleration in most situations. “You don’t need anything else” to teach the system how to drive autonomously, said a person who has been involved with the team. They envision a future in which humans won’t need to write code to tell the car what to do when it encounters a particular scenario; it will know what to do on its own.

Many other autonomous car developers abhor such an approach. They worry that relying on networks in this way will make it hard to deduce why errors occurred because it’s often not clear how these networks arrive at their decisions."
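
In other words, behavior cloning is basically supervised learning on logged human driving. A minimal made-up example of the idea (a tiny linear "network", invented features and data, nothing from Tesla or the article):

```python
# Behaviour cloning in miniature: learn to predict a human's steering from scene features.
import numpy as np

rng = np.random.default_rng(0)

# Logged "human driving": invented features [lane offset, curvature, speed/30]
# paired with the steering the human actually applied.
X = rng.normal(size=(500, 3))
human_policy = np.array([-0.6, 1.2, 0.05])              # the human's implicit behaviour
y = X @ human_policy + rng.normal(0.0, 0.02, size=500)  # noisy steering labels

# Tiny linear "network" trained by gradient descent to clone that behaviour.
w = np.zeros(3)
learning_rate = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w -= learning_rate * grad

print("cloned policy:", w)        # ends up close to the human's behaviour
print("human policy: ", human_policy)
```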
 
Honestly, in the first video, I see the swing, but I'm also seeing the driver override before the car has a chance to.
In the second video, I see a roundabout. That's pretty much a guaranteed fail.

I guess that you are proving to yourself that Tesla's requirement to have an attentive driver makes a lot of sense.
 
If only Tesla warned that this was beta, not meant to be used on public roads, and told you to keep alert every time it’s activated
Come on Tesla... :cool:

The warning is in the Owner's Manual.

Also, before a driver can use it, it has to be turned on. That driver gets the warning screen once and has to physically press the button to agree, or else skip using it.

If another driver wants to use the car and creates a new driver profile, that new driver has to do the same as above: to agree or not agree to activate Autopilot.

We are adults. We don't need to be warned more than once.
 
What Makes Tesla’s Autopilot Different


"Behavior Cloning

Tesla’s cars collect so much camera and other sensor data as they drive around, even when Autopilot isn’t turned on, that the Autopilot team can examine what traditional human driving looks like in various driving scenarios and mimic it, said the person familiar with the system. It uses this information as an additional factor to plan how a car will drive in specific situations—for example, how to steer a curve on a road or avoid an object.

Such an approach has its limits, of course: behavior cloning, as the method is sometimes called, cannot teach an automated driving system to handle dangerous scenarios that cannot be easily anticipated. That’s why some autonomous vehicle programs are wary of relying on the technique.

But Tesla’s engineers believe that by putting enough data from good human driving through a neural network, that network can learn how to directly predict the correct steering, braking and acceleration in most situations. “You don’t need anything else” to teach the system how to drive autonomously, said a person who has been involved with the team. They envision a future in which humans won’t need to write code to tell the car what to do when it encounters a particular scenario; it will know what to do on its own.

Many other autonomous car developers abhor such an approach. They worry that relying on networks in this way will make it hard to deduce why errors occurred because it’s often not clear how these networks arrive at their decisions."

There is currently no actual NN training (or learning) going on inside customer cars; they all operate on pre-trained NNs that are uploaded to the cars during software updates. The NNs are trained on data from professionally driven vehicles; customer cars do send some testing and validation trigger data, but they do not train NNs.

It would be a mistake, possibly a dangerous one, to think that driving somewhere many times would improve AP performance there.
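
To picture the distinction: the car only runs a network whose weights arrived with the software update; nothing on board ever changes those weights, no matter how often you drive a road. A made-up sketch (the class, file name, weights, and firmware version are all invented):

```python
# Sketch: inference on pre-trained weights shipped with an update; no on-board training.
import numpy as np

class OnboardNetwork:
    def __init__(self, weights_file: str):
        self.weights = np.load(weights_file)   # trained offline, shipped with the firmware

    def predict(self, features: np.ndarray) -> float:
        return float(features @ self.weights)  # inference only -- nothing is learned here

    def install_update(self, new_weights_file: str):
        # The only way behaviour changes: a new pre-trained model arrives in an update.
        self.weights = np.load(new_weights_file)

# Pretend these weights arrived with firmware "2018.48" (invented for this example).
np.save("fw_2018_48.npy", np.array([-0.5, 1.0, 0.1]))
net = OnboardNetwork("fw_2018_48.npy")
print(net.predict(np.array([0.3, -0.1, 0.8])))
```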
 
Hey Guys,
So about a week after getting the “Autopilot features limited” warning, I had a near accident using AP this week. The car tried to veer into a side barrier without any warning, and I was able to take quick control to avoid crashing. I didn't get any takeover warnings.

I've reported this to Tesla and am waiting for an appointment with the local SC.

But today I had another unexpected reaction from AP: the car suddenly tried to veer out of the lane, but luckily I was able to take manual control. Again, no takeover warnings. (No cam footage; the car didn't record at the exact time of the incident.)

I initially thought it could be my fault in some way, even though I was paying full attention to the road and was able to take over in both incidents, but after what happened today I'm starting to think something is seriously wrong with my car's AP suite. I hope it's a hardware failure and not a dangerous fleet-wide software issue.

I will no longer use AP until I figure this out, but I thought I'd ask: is anyone else seeing similar issues?

Yes. It WAS your fault. Everything that happens while you are in the driver’s seat IS your fault, by definition.

Likely Autosteer was using the guard rail as a guide rather than the lines on the pavement. When the rail disappeared, Autosteer interpreted it as the lane widening out and attempted to reposition the car in the new middle. It corrected that mistake as the guard rail came back. It looks to me like Autosteer was actually working correctly.

My suggestion would be not to use Autosteer where there is a barrier immediately adjacent to the lines. Unfortunately, there is no way to tell what the car is basing its centering on. We assume it is the painted lines on the pavement, but in this example it very well may not have been. Autosteer is just not good enough for this situation.
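
One way to picture that explanation (my own toy numbers, not the real Autopilot logic): if the right-hand reference the car is tracking is the barrier rather than the paint, the "lane" suddenly looks wider when the barrier ends, and the target centre jumps sideways.

```python
def target_center(left_boundary_m: float, right_boundary_m: float) -> float:
    """Lateral offset (metres from the current path) a simple centering rule would aim for."""
    return (left_boundary_m + right_boundary_m) / 2.0

# Guard rail present: left paint at -1.8 m, rail at +1.6 m -> hold roughly centre.
print(target_center(-1.8, 1.6))   # -0.1

# Rail ends: the next edge the car picks up is the far side of the gap at +4.0 m.
# The computed centre jumps to +1.1 m, which feels like a sudden swerve toward the gap.
print(target_center(-1.8, 4.0))   # 1.1
```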
 
Yes. It WAS your fault. Everything that happens while you are in the driver’s seat IS your fault, by definition.

Was it, really, though?

The driver is responsible for taking over when AP fails, and the OP did (failing to do so would indeed be the driver's fault), but the driver is not at fault for AP's failure itself... how can they be? AP made the mistake, and nothing the driver did could have stopped it (other than not using AP).
 