Car ahead making right turn

As someone with an FSD subscription but not in the beta, I am using standard Autopilot and TACC day to day. One thing I had hoped would be better than my wife's Honda CR-V with basic adaptive cruise control is the handling of a car ahead of me making a right turn (with its turn signal on): my car keeps slowing down for way too long. I will slow to near 0 until the turning car is almost completely off the road. A normal driver would only slow until the turn is initiated and then start accelerating once the other car is about halfway through the turn, since there is no need to keep a specific distance from the back of a car that is leaving the driving lane. Same thing for a car clearly moving over into a turn lane.

Does the FSD beta do this better? I can't tell from most videos whether it can read turn signals/turns and react appropriately. I had really hoped that even standard TACC would be better, but no luck.
 
Unfortunately, it occasionally happens that the turning car suddenly stops. It has happened to me.

The Tesla autopilot will not risk a collision. Perhaps it could be improved so that it does not slow down as much, while still leaving an emergency stop or an avoidance maneuver possible in the worst case.
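To make that trade-off concrete, here is a minimal sketch in Python: slow only as much as needed so that a full emergency stop would still fit in the remaining gap. Every name, threshold, and braking figure here is my own assumption for illustration, not anything from Tesla's actual stack.

```python
def required_deceleration(ego_speed_mps: float, gap_m: float) -> float:
    """Deceleration (m/s^2) needed to stop within the current gap: v^2 / (2*d)."""
    if gap_m <= 0:
        return float("inf")
    return ego_speed_mps ** 2 / (2 * gap_m)

MAX_BRAKE = 8.0      # assumed emergency braking capability, m/s^2
COMFORT_BRAKE = 2.5  # assumed comfortable braking level, m/s^2

def follow_action(ego_speed_mps: float, gap_m: float) -> str:
    needed = required_deceleration(ego_speed_mps, gap_m)
    if needed >= MAX_BRAKE:
        return "emergency_brake"     # worst case: lead car stopped dead in lane
    if needed >= COMFORT_BRAKE:
        return "brake_moderately"    # restore the safety margin gently
    return "maintain_or_accelerate"  # a full stop is still easily possible

print(follow_action(15.0, 60.0))  # ~54 km/h with a 60 m gap -> maintain_or_accelerate
```

The point of the sketch is that "do not slow as much" and "keep an emergency stop possible" are compatible: the car only needs to brake hard once the required deceleration approaches its actual braking capability.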
 
It's pretty variable. Sometimes it handles this better than the public build, but sometimes it's the exact same.
Thanks for the reply. I'm hoping that as the software matures it will do more anticipatory driving, like experienced humans do. For now I guess we still have to expect that it's like having a teenager with a learner's permit driving us around, with the best-case scenario being that it is overly cautious until fully confident.
 
> Unfortunately, it occasionally happens that the turning car suddenly stops. [...] The Tesla autopilot will not risk a collision. [...]
Yeah, you definitely don't want it pushing into the tailgate of the turning car. Nuanced decisions will require a lot more training and programming to read what may be on the other side of that driver's right turn. You and I may be able to see that traffic on the road they are turning into is stopped, or that there is a parking lot they need to make a sharp turn into, and give more room, but I'm not sure FSD is at the point of thinking this far ahead yet. So better to prepare for the worst. I'm just hoping that it starts recognizing turn signals and making appropriate decisions based on them, because I worry about being hit from behind as much as about running into the car ahead of me in some of these situations. And on a new car it doesn't matter whose fault it is; you still lose money when that happens.
 
I don't think such intelligent predictions solve the problem. The car in front can still stop suddenly, even if you do not see any possible cause.
This is an interesting topic. I think the reasonable compromise of the AutoPilot behavior is basically the same as a good human driver: use all the visual cues available (turn signal & rate of slowing of the turning car, ID & traffic state of the road or property it is entering) to estimate the probable time interval required for the turning car to clear - this estimate being continuously refreshed as the situation develops - and slow down enough that there is only a very low probability that emergency braking will be required. However, never allow the possibility that emergency braking will be insufficient or that abrupt braking is likely to cause you to be rear-ended.

These points basically mean AP/FSD can be both safe and natural - and the last point is a great example of how it can be even better, because it has rear and side cameras so it never lapses in its knowledge of e.g. the gap between you and the car following you. Constantly monitoring this gap means that it should slow a bit more and a bit earlier when the following car is close. Humans can do this, but only if they are experienced and above-average in constant situational awareness (like all of us here, of course :) ).

Yet another twist - sometimes a car in the adjacent left lane may decide to move in behind you as all of this is developing, and this can create a sudden squeeze in which the danger from abrupt braking is much greater. The relative probability of this is another factor in the calculus of optimized following behavior.
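Translating that description into a toy sketch (Python, with invented names and numbers; nothing here reflects how the FSD network actually computes anything): the clearance-time estimate is refreshed every perception cycle as the turn develops, and a close follower behind us caps how abruptly we brake.

```python
from dataclasses import dataclass

@dataclass
class TurningCar:
    speed_mps: float       # current speed of the car turning off ahead
    lane_overlap_m: float  # how much of it still occupies our lane

def estimate_time_to_clear(car: TurningCar, min_turn_speed_mps: float = 2.0) -> float:
    """Assume the turning car keeps at least a slow rolling speed through the turn."""
    effective_speed = max(car.speed_mps, min_turn_speed_mps)
    return car.lane_overlap_m / effective_speed

def target_decel_mps2(time_to_clear_s: float, rear_gap_m: float) -> float:
    """Brake harder the longer the turner will linger in our lane; soften
    (and start earlier) when a close follower raises the rear-end risk."""
    base = min(4.0, 1.0 * time_to_clear_s)  # assumed linear mapping, capped
    if rear_gap_m < 10.0:                   # tailgater behind us
        base = min(base, 2.0)               # cap abruptness, slow sooner instead
    return base

# Refreshed each cycle as the situation develops:
car = TurningCar(speed_mps=3.0, lane_overlap_m=1.5)
t_clear = estimate_time_to_clear(car)       # -> 0.5 s
print(target_decel_mps2(t_clear, rear_gap_m=8.0))  # -> 0.5 m/s^2, gentle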
 
By the way, yes, I do realize that all these descriptions of how correct driving behavior should be calculated are not the way the neural-network FSD works in real time. What I'm really talking about is how their network should be trained, to accomplish smooth, defensive driving behavior that anticipates and reduces the likelihood of 'edge cases' rather than simply reacting to them after they occur.
 
The autopilot should simply do the best within its capabilities. Priorities should be, in order (see the sketch at the end of this post):
  1. Do not cause an accident.
  2. Try to mitigate other drivers' faults, but only if they pose a serious danger.
  3. Drive in a rule-conformant way.
For example, if the autopilot can prevent a frontal collision, it should do so, even if that creates a risk of damage from a rear-end collision caused by another driver who breaks the rules.

If the autopilot has the capability to foresee and pre-calculate such a secondary accident caused by another car, it might try to mitigate the damage, particularly to the people in its own car.

A more difficult question is what to do in a situation where breaking the rules and causing an accident could save people's lives or health.
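Here is a toy encoding of that priority ordering, purely to make the lexicographic idea concrete; the option fields and names are my own invented placeholders, not any real planner's interface.

```python
def choose_action(options: list[dict]) -> dict:
    """Pick the first option surviving each priority filter, applied in order."""
    # 1. Never cause an accident ourselves.
    safe = [o for o in options if not o["causes_accident"]]
    # 2. Prefer mitigating another driver's fault, but only when any option does.
    mitigating = [o for o in safe if o["mitigates_serious_danger"]] or safe
    # 3. Among what remains, prefer rule-conformant driving.
    legal = [o for o in mitigating if o["rule_conformant"]] or mitigating
    return legal[0]

options = [
    {"name": "hard_brake", "causes_accident": False,
     "mitigates_serious_danger": True, "rule_conformant": True},
    {"name": "swerve_onto_shoulder", "causes_accident": False,
     "mitigates_serious_danger": True, "rule_conformant": False},
]
print(choose_action(options)["name"])  # -> hard_brake
```

The `or safe` / `or mitigating` fallbacks are what make this a strict priority list rather than a weighted trade-off: a lower rule is only consulted among options that already satisfy the higher ones.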
 
> The autopilot should simply do the best within its capabilities. Priorities should be, in order: 1. Do not cause an accident. [...]
Sooooooooo.......how is it going to handle a "Trolley Problem"? :eek: :eek: :oops: :D
 
> Sooooooooo.......how is it going to handle a "Trolley Problem"?
Could be something useful in here:

Lots of links, but you need to pay or something
 
> This is an interesting topic. I think the reasonable compromise of the AutoPilot behavior is basically the same as a good human driver: use all the visual cues available [...] to estimate the probable time interval required for the turning car to clear [...]
You pretty much have it. And the complexity of that description makes it obvious why true Level 5 driving is difficult. You would expect these systems not only to protect the occupants but also to be good citizens on the road. While I know TACC slowing to near zero for a car turning 50 feet ahead will make sure I don't hit it, if I were driving and a driver ahead of me did that, I wouldn't run into them, but I would think they were an idiot who doesn't think more than one step ahead... again, a thing I would associate with a driver in training and not someone fully licensed.

One of the most common reasons I manually disengage Autopilot is when I look around the next turn and see brake lights, but cars in the lane next to me are blocking the car stopped in my lane. I know that there is almost certainly a stopped car in my lane and that I need to slow down, but that requires taking in the scene as a whole, not individual data points that may show the lane as clear for as far as it is visible.
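That kind of whole-scene inference could be caricatured like this (Python; the signal names and thresholds are purely my own assumptions, not anything the car exposes): if brake lights are visible in adjacent lanes and my own lane is occluded closer than my stopping distance, assume a stopped car may be hiding there.

```python
def should_preemptively_slow(adjacent_brake_lights: bool,
                             own_lane_visible_m: float,
                             ego_speed_mps: float) -> bool:
    """Slow early when the occlusion starts inside our comfortable stopping range."""
    comfortable_stop_m = ego_speed_mps ** 2 / (2 * 3.0)  # assumed ~3 m/s^2 braking
    occlusion_too_close = own_lane_visible_m < 1.5 * comfortable_stop_m
    return adjacent_brake_lights and occlusion_too_close

print(should_preemptively_slow(True, own_lane_visible_m=40.0,
                               ego_speed_mps=20.0))  # ~72 km/h -> True, slow down
```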
 
> Sooooooooo.......how is it going to handle a "Trolley Problem"?
Without wanting to derail this thread and turn it into a Trolley Problem thread, I'll just offer a quick answer IMO: Ethical Rules for AI Driver - Trolley Problem

Moved the post to its own thread.
 
> One of the most common reasons I manually disengage Autopilot is when I look around the next turn and see brake lights, but cars in the lane next to me are blocking the car stopped in my lane. [...]
In other words, you would be ready to take over if the FSD beta were installed. Me too.

Unfortunately, there are those who expect perfection (i.e., SAE Level 5 autonomy), when in this case "perfection is the enemy of good enough."