Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

AP and Defensive Driving

Looking for others' thoughts and experiences here.

One of the reasons we chose not to purchase AP/FSD on our Model 3 is because I couldn't shake the feeling that it was not situationally aware. When I drive, I am typically scanning traffic in front and back for potential risks, and taking steps to mitigate or accept the risk. Having AP handle the driving task made me very uncomfortable - I am 99.9% sure that AP is not capable of awareness of a potentially risky situation happening some lanes over, or far ahead, etc. and so I found myself using more mental energy worrying about what it did and didn't see, vs. just driving myself.

I am very curious as to others' experience with AP and its ability to sense and react to "edge cases" or other situations where defensive driving would raise red flags.

For instance, below is a video of a couple of things that happened on my commute this morning. The HOV lane I am in is inherently risky, since it is traveling 50+ mph faster than the adjacent lane, and drivers in the non-HOV lane tend to get antsy and cut in and out of the HOV lane. So I am always extra defensive here.

First, a flatbed tow truck makes some moves as if it's going to enter my lane while I am traveling at speed, so I cover the brake and slow substantially to evaluate the situation (I also do this on track when it looks like someone in front of me is about to spin). When I had the AP trial, my experience was that it would have just kept on trucking at full speed. But I don't know - what are others' experiences here?

Then, 500 ft later, a car two lanes over starts cutting toward the HOV lane hard. This is a typical spot where I've seen accidents occur as people get frustrated with the lines at the toll plaza. The car never actually enters my lane, but I identified what was going on pretty early, then covered the brake and moved left in anticipation of having to deal with it. Again, my experience has been that AP would have just kept going.

In both of these situations, I was never actually in real danger as neither driver actually entered my lane, so driving full speed ahead would have been fine. But defensive driving is about identification and avoidance of risk - the only thing separating a near miss from disaster is luck.

What have others' experiences been with AP's ability to drive defensively?

 
EAP would do you no harm in either situation. Since neither car crossed into your lane, my guess is it would do nothing, but if they got much closer to the line it would start braking rapidly. Think of it more as a copilot. You would be paying just as much attention as you do now, probably taking over in the same way you did in those two situations. If EAP did brake, it would be right about when you did. It's unlikely it would swerve left in that second case.
But 98% of the driving work would be done by EAP. It's a huge benefit when following traffic because it adjusts braking and keeps a safe distance, virtually eliminating typical rear-end collisions on your part. P.S. - I'd be cautious going through that toll booth the first time. Probably fine, but no lines...
 
Love this thread and have thought about this a ton.

Full disclosure, I have Autopilot but not FSD. In my mind, I enjoy the reduced operational fatigue of driving the car (braking/accelerating/lane-keeping), which frees me up for greater human review of the environment. The car has so much power that I fear very little, and I always have the ability to take over. That's not expressly answering the question, but to the point of risk, I feel like I'm in a better risk state because Autopilot performs its own review of traffic while I have more latitude.

For example, HOV lanes are one thing, but generally speaking I don't rush through an empty highway lane when an adjacent lane is stopped. I know in my mind that everyone - unless they're merging right to exit - is looking to go faster or get out of their bottleneck. That's a human response, where a computer is simply going to look at the free and empty lane in front of me. I'm thinking, however, "how great are the Tesla brakes if some jerk doesn't see me and dives into my lane?" - so I slow down or take over.

In that one use-case I stated, I believe the neural nets would take this into consideration over time to minimize risk conditions, as this is an easy one to quantify.
 
For example, HOV lanes are one thing, but generally speaking I don't rush through an empty highway lane when an adjacent lane is stopped. I know in my mind that everyone - unless they're merging right to exit - is looking to go faster or get out of their bottleneck. That's a human response, where a computer is simply going to look at the free and empty lane in front of me. I'm thinking, however, "how great are the Tesla brakes if some jerk doesn't see me and dives into my lane?" - so I slow down or take over.
Yes, this is exactly how I feel as well!

I think the idea of less operational fatigue is an interesting one. That makes sense to me. Thanks for your input.
 
Good topic.

First, I understand exactly where you are coming from. I too consider myself a defensive driver, and I was initially skeptical of AP's ability to handle scenarios like those your video illustrated.

I got my car with EAP last July. I upgraded to FSD afterwards. I have no idea how what I have compares to what you get today if you have AP, but I figure that my car has everything that is currently available.

If you haven't watched the Tesla Autonomy Day video, I would suggest you do... at least the part with Andrej Karpathy. He describes a particular training task they are doing on the neural net called "cut in". The purpose of the "cut in" training is to detect when vehicles are about to cut in to your lane.

Last summer/fall, when I was in the right lane on AP and a car was merging, I found that I would have to disengage AP or adjust the speed to let them merge in (why can't people get those damn ICE vehicles up to speed... ;)). Then suddenly, one day after an update, I noticed that the car was letting cars merge in. This was before I knew about "cut in". I thought they probably just recognized on-ramps from the map and did some special handling. But after seeing the "cut in" talk, I figured that's what actually changed. The neural net can now detect when a car is about to cut in (in any lane, at any time). And sure enough, I've seen situations with cars cutting in and the car letting them in.
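For the curious, the simplest hand-rolled version of a "cut in" check would be a kinematic heuristic: flag a neighboring car if its sideways drift would carry it across the lane boundary within a couple of seconds. Here's a toy sketch - entirely my own illustration, with made-up names and thresholds; Tesla's actual detector is a learned neural-net output, not a rule like this:

```python
# Toy "cut in" heuristic (NOT Tesla's code): flag a neighboring car as
# likely to enter our lane if its lateral motion would carry it across
# the lane boundary within a short time horizon.

def likely_cut_in(lateral_offset_m: float,
                  lateral_speed_mps: float,
                  horizon_s: float = 2.0) -> bool:
    """lateral_offset_m: the car's distance from our lane boundary
    (positive = still outside our lane); lateral_speed_mps: its speed
    toward the boundary (positive = drifting toward us)."""
    if lateral_speed_mps <= 0:
        return False                      # moving away or parallel
    time_to_boundary = lateral_offset_m / lateral_speed_mps
    return time_to_boundary < horizon_s

# A car 1.2 m from the boundary, drifting toward it at 0.9 m/s,
# would cross in ~1.3 s -> treat as a probable cut-in.
print(likely_cut_in(1.2, 0.9))   # True
print(likely_cut_in(1.2, 0.1))   # False (12 s away)
```

As I understood the Autonomy Day talk, the interesting part is that they don't hand-tune a rule like this at all - the net is trained to predict the cut-in outcome directly from fleet video.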

It really is amazing seeing the car improve its driving like this.

Is it perfect now? No, and it probably never will be. But it does continue to improve. And if anything, I think it is OVERLY conservative (which is not really a bad thing). It is certainly too conservative as vehicles are EXITING your lane - it doesn't speed up until the car is completely in the adjacent lane.

In the meantime, I still need to be vigilant for things I still don't trust it for. Things like detecting road debris. So it's not like I can hang up my defensive driving hat yet. But I do feel like I have an extra set of eyes (or 8) helping me watch out for things all around the car at the same time.
 
AP is very weak in any of the "social" aspects of driving. It's just getting started with merging and lane changing. But it's not good at all in figuring out what other drivers are doing.

In general I disengage AP when I'm exiting the freeway. It's just easier to drive it myself when we have to interact with other cars and city streets. My wife uses TACC practically whenever the car is moving, but won't use Autosteer. Our main Autopilot use is for long trips, mostly on interstate freeways. I'm very comfortable with it there, and it's a blessing not to have to fiddle with the wheel and pedals. I can keep my eyes further down the road and check out the other drivers. Much more relaxing for me.
 
99.9% SURE???? You might want to lower that some, since it is CLEARLY wrong. :eek::D

Dashcam Footage Shows Tesla Autopilot Predicting a Shocking Crash | Inverse
That's cool that it can compute trajectories of the two cars it can see ahead. Didn't know it could do that.

So how far does that capability extend? Sometimes on I-80, you can see a dead-stop traffic jam from a quarter mile out, or a car weaving several lanes over. I find it bothersome we don't actually know what the system is capable of.

I feel very uncomfortable anthropomorphizing AP. My null hypothesis is that it is a simplistic dead reckoning system capable of very little, and any capability beyond this needs to be demonstrated. This is the only safe way I can see of interacting with the system.

At Autonomy Day they talked about explicitly programming in responses to individual "edge cases." But I haven't seen a list of which edge cases are taken care of and which are not. It seems to be making a kinematic calculation for the crash of the two cars ahead in that video. What else does it do? I will never trust it until I know.

Would it recognize the truck in my video and decide it wasn't a threat? Same for the car on the right. I have seen no evidence that it can or would, but I also can't rule that out. That's why I made this thread!
 
That's cool that it can compute trajectories of the two cars it can see ahead. Didn't know it could do that.

So how far does that capability extend? Sometimes on I-80, you can see a dead-stop traffic jam from a quarter mile out, or a car weaving several lanes over. I find it bothersome we don't actually know what the system is capable of.

I feel very uncomfortable anthropomorphizing AP....
Radar can bounce off the road and see cars ahead of the car in front of you. It is not anthropomorphizing, since there is NO WAY a human can see 360º simultaneously and see "through" the cars ahead of you. Also, as the neural net learns, it will become FAR better than any human could ever be at seeing what is happening all around you 100% of the time. It NEVER blinks or "wonders" about the bad day at the office.

 
Great thread.

I will probably never rely on FSD as more than a tool to augment my control of the vehicle. I like to think I'm in control by anticipating and thinking ahead. I see a lot of actions on the roads that tell me a lot of drivers are thinking no more than 6 inches ahead of the car. Those are the folks for whom AP/FSD will make the roads safer for the rest of us.

Of course, FSD will never be perfect. There will always be edge cases that no level of artificial intelligence will be able to deal with. But there is a lot of human intelligence on the road that can't even deal with situations well inside the edge.
 
Also, as the neural net learns, it will become FAR better than any human could ever be at seeing what is happening all around you 100% of the time.
I don't think we are arguing the same point. We all agree that a really good autonomous car will be far safer than the average human driver. I'm much more interested in what degree of trust we should afford the system right now, and how that trust should be built.

Currently, the discussion around safety seems to be focused on using the "neural net" to do object detection and create a 3D view of the world around the car. If you can do this task well, you can easily program in some heuristics using kinematics and avoid accidents like the one in the video. This seems to be the path AP has taken so far, and your videos show that in some situations it works. This task (making a 3D model of the world) seems like table stakes to me in making an autonomous car work properly.
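To make that concrete, here's the kind of simple kinematic heuristic you could layer on top of good object detection - a time-to-collision check between two tracked vehicles ahead. This is purely my own illustration; the function, numbers, and threshold are invented, not anything Tesla has published:

```python
# Illustrative sketch (not Tesla's code): once object detection yields a
# gap and relative speed for two vehicles ahead, a constant-speed
# time-to-collision (TTC) estimate can predict a crash in front of you.
import math

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until the trailing car reaches the lead car, assuming both
    hold their current speeds; math.inf if the gap is opening."""
    if closing_speed_mps <= 0:
        return math.inf
    return gap_m / closing_speed_mps

# Two cars ahead: a 15 m gap closing at 10 m/s -> impact in 1.5 s,
# so start braking before the crash in front of you actually happens.
ttc = time_to_collision(15.0, 10.0)
should_brake = ttc < 3.0   # made-up alert threshold
print(ttc, should_brake)   # 1.5 True
```

This is exactly the "kinematic calculation within a specific lane" flavor of forecasting; it says nothing about intent, which is the harder, more human part of the problem.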

When I talk of anthropomorphizing, what I mean is the larger challenge of forecasting, decision-making, and control. This is what humans do - we evaluate risk and choose among many potential reactive options. We do not use rote rules. But AP, for instance, as demonstrated in all the posts about ELDA behavior, doesn't understand what is intentional and what is not. It just sees something and reacts without context.

The approach taken by AP so far seems pretty discrete - tackle individual situations one by one using some rules and heuristics. They didn't talk about any optimization strategy for analyzing and deciding on potential actions. And the ability to forecast new situations appears to be somewhat limited right now, based on kinematic models within a specific lane.

So this is what I'm thinking about when I'm deciding if I want AP in control. It appears to be programmed to identify some limited situations, but any amount of intelligence or learning cannot be attributed to it. How much to trust it?
 
Great thread.

I will probably never rely on FSD as more than a tool to augment my control of the vehicle. I like to think I'm in control by anticipating and thinking ahead. I see a lot of actions on the roads that tell me a lot of drivers are thinking no more than 6 inches ahead of the car. Those are the folks for whom AP/FSD will make the roads safer for the rest of us.

Of course, FSD will never be perfect. There will always be edge cases that no level of artificial intelligence will be able to deal with. But there is a lot of human intelligence on the road that can't even deal with situations well inside the edge.
Definitely true. This reminds me a little bit of the da Vinci robot with regard to prostate surgery. At least as of 2010, the robot brought up the average results of poor surgeons, but was still worse than the best surgeons.

Open Versus Laparoscopic Versus Robot-Assisted Laparoscopic Prostatectomy: The European and US Experience
 
So this is what I'm thinking about when I'm deciding if I want AP in control. It appears to be programmed to identify some limited situations, but any amount of intelligence or learning cannot be attributed to it. How much to trust it?

It sounds like you are thinking that using AP is an all or nothing proposition. It's not. You can let it control your car, but you can also be ready to take control if need be.

Do this often enough, similar to how I taught my teenage sons to drive (although that WAS more of an all or nothing situation when I was in the passenger seat and they were in the driver's seat), and trust will be built...perhaps slowly.
 
I use EAP as an assistant. It maintains lane position, speed, and separation. As mentioned in an earlier post, long trips are far less tiring. Navigate on Autopilot also suggests lane changes.

I scan the road then determine strategy & tactics. Vehicle on entry ramp half-mile ahead? Tap signal, Autopilot confirms my judgement, safely shifts lanes.

Construction zones are far easier with Autopilot threading the Jersey barrier needle, helping me cope with sudden slowdowns.
 
Also as the neural net learns it will become FAR better than any human could ever be at seeing what is happening all around you 100% of all the time. It NEVER blinks or "wonders" about the bad day at the office.

It's ridiculous statements like this that instill overconfidence in people - leading to dangerous/tragic situations.

Where is there any proof of a "neural net"?

I also challenge your assertion of it NEVER blinking.....phantom braking?


[Images: tesla-model-s-autopilot-crash-fire.jpg; CR-Cars-InlineHero-NTSB-Tesla-Crashed-5-19]
 
I don't think we are arguing the same point....

Neither I nor anyone here is saying you turn AutoPilot on and turn yourself OFF. AutoPilot is a supplemental safety system, so you are ADDING safety to your drive, not subtracting, by turning it on. In situations like the ones you have outlined, AutoPilot would allow you to devote MORE of your attention to situations you suspect are potentially dangerous while AutoPilot "watches" and does the mundane driving.

Have you done your AutoPilot trial? It takes about a month to become comfortable and familiar with its strengths and weaknesses. You can't say that you are "...99.9% sure that AP is not capable of awareness of a potentially risky situation happening..." while sitting on the sidelines and not using it.
 
It's ridiculous statements like this that instill overconfidence in people - leading to dangerous/tragic situations.

Where is there any proof of a "neural net"?

Wait a minute...are you saying that you don't believe there is a neural network running the AP system? This is 100% well established. Please watch the Tesla Autonomy Day presentation for a very detailed description of both the hardware and the software:

I also challenge your assertion of it NEVER blinking.....phantom braking?

Phantom braking is not an example of the system blinking. On the contrary, it's a false positive. It thinks it sees something, but it turns out it was nothing. It's absolutely true that the system never blinks (as long as the car itself is on). It is always monitoring the environment.

If you have a complaint about things like phantom braking, running into stationary objects (like the fire truck in one of your pictures) or failing to recognize things like trucks that are perpendicular to the direction of travel (the precursor to another one of your pictures), then fine. Those are all examples of where AP is not FSD at the current time and driver oversight is still required. But it is not because the system "blinked".
 
It sounds like you are thinking that using AP is an all or nothing proposition. It's not. You can let it control your car, but you can also be ready to take control if need be.

Do this often enough, similar to how I taught my teenage sons to drive (although that WAS more of an all or nothing situation when I was in the passenger seat and they were in the driver's seat), and trust will be built...perhaps slowly.
That's fair - I've set up a bit of a straw man here, though I did not intend it. My personal commute does not have a lot of segments where AP can really shine, but if I were driving more open highway, say, it would be a different story.

Neither I nor anyone here is saying you turn AutoPilot on and turn yourself OFF. AutoPilot is a supplemental safety system, so you are ADDING safety to your drive, not subtracting, by turning it on. In situations like the ones you have outlined, AutoPilot would allow you to devote MORE of your attention to situations you suspect are potentially dangerous while AutoPilot "watches" and does the mundane driving.

Have you done your AutoPilot trial? It takes about a month to become comfortable and familiar with its strengths and weaknesses. You can't say that you are "...99.9% sure that AP is not capable of awareness of a potentially risky situation happening..." while sitting on the sidelines and not using it.

My "99.9%" assumption is based on my understanding of the state of the art in ML and my assumptions regarding AP's capabilities (including what was learned from Autonomy Day). My comment was a bit of hyperbole, but I don't think it's a controversial idea that AP is not capable of handling many situations that humans deal with. My point is that believing it isn't capable of handling situations I commonly deal with made me more stressed out. For instance, several times it accelerated into a sketchy car's blind spot, and I was thinking "I can't allow this," so I disengaged AP and put the car into a more comfortable position with "outs" available.

To your point regarding the trial, yes, we had a 60-day trial and I used AP every day for about a week before giving up. My commute passes through residential streets, city, divided highway with heavy merging, a toll booth, and a bridge, then back to city streets, all within the span of 30-45 minutes. And I take a different route every day depending on traffic. So AP really only works well on one segment of that commute, which lasts 5-10 minutes. I never developed trust in it during that first week, for the reasons I list in the original post. There are accidents along my commute almost every day, and I believe I am hyper-vigilant at key points - AP for me added another thing I had to worry about, and I found myself disengaging and intervening very often (as described above). So, since the use for me is so limited, and since I saw no immediate benefit, I decided that I was not willing to pay $3,000 for a capability I did not like and couldn't use very often.

Every day when I drive on my commute, I am reminded at how situationally and socially aware I must be to drive defensively, and why AP felt so eerie to me, hence this post.

I am seeing two trends in this thread, for those of us who are made uneasy by AP:

  • Use it more and you will understand its limits, which will instill trust. My reply to this is - I wish there were training or more documentation regarding what it's actually capable of. As it is, I'm not particularly happy to be surprised by the seemingly strange interventions and false positives of ELDA, for instance, so maybe I'm predisposed against this method as well.
  • Use it in limited situations where it works really well. This seems fair to me. I don't really encounter these situations that often, but if I did, the calculus would be different.