
Autopilot Near-Collision


Ben W

I was driving yesterday on PCH in heavy beach traffic, southbound near Zuma Beach, in the left lane, with Autopilot engaged. A car (white Toyota) tried to abruptly merge in from the right, and I expected Autopilot to recognize the situation and brake or take evasive action, but it didn't. I had to manually slam on the brakes at the last second; if I hadn't, I'm certain there would have been a collision.

I've sent the dashcam video and info to Tesla so they can use it as a data point for testing. Meanwhile, this is a good reminder that Autopilot is no excuse to take your eyes off the road!

 
All these 'little' AP mishaps make me giggle at the idea of FSD and robotaxis coming soon... To think there are ppl who actually believe in Elon's imaginary roadmap.

I doubt the current implementation of AutoPilot is using much, if any, of the upcoming FSD technology (since it requires HW3 and currently there's no real difference between HW2.5 and HW3). So reading too much into the current AutoPilot implementation seems foolish to me.
 
I doubt the current implementation of AutoPilot is using much, if any, of the upcoming FSD technology (since it requires HW3 and currently there's no real difference between HW2.5 and HW3). So reading too much into the current AutoPilot implementation seems foolish to me.

It's not foolish in that HW2.5 Autopilot will be part of the fleet (probably the vast majority) for a long time to come, and the hardware should be eminently capable of handling the pictured type of situation correctly. That scenario is not an FSD-level task; it is an Autopilot-level task, even a Hardware 1-level one. For it not to be robustly solved at this point is really inexplicable.
 
All these 'little' AP mishaps make me giggle at the idea of FSD and robotaxis coming soon... To think there are ppl who actually believe in Elon's imaginary roadmap.

What, eight 9's of reliability isn't good enough for you? How many more 9's do you want? :p

Agreed that solving FSD is MUCH harder than Elon thinks it is. The best they can hope to do in the next few years (I think) is to become really good at recognizing situations that exceed the computer's scope, prompting the human driver to take over when that happens. (Otherwise known as L3 autonomy.) This includes complex parking situations and navigating tight spaces, situations involving lots of human interaction (e.g. cops directing traffic, or humans jockeying with each other trying to leave a sporting-event parking lot), ambiguous or incorrect/missing lane markings in construction zones, and a LOT of situations where technically breaking the law might be required; e.g. a car stalled on a two-lane road, where you have to cross into oncoming traffic to get around it. General human-level intelligence is required to solve these types of problems in the general case, and there are far too many of them to be enumerated and solved one by one.

No doubt Tesla will get there with HW4 or HW5 in a decade or two, with corresponding advances in machine learning and intelligence research (and perhaps some hard lessons learned along the way). I do expect HW3 to achieve L3 autonomy on highways within a year or two, and perhaps L3 on city streets a couple years after that, but I'll be surprised if HW3 ever reaches L4, even just on highways. The hardware might be theoretically capable of it, but we (by which I mean cutting-edge ML researchers) are nowhere near knowing how to architect the software for it.
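
To make concrete what I mean by "recognizing situations that exceed the computer's scope": stripped down, an L3 handoff is basically a confidence gate. A toy sketch in Python (every name and threshold here is mine, purely for illustration, obviously nothing from Tesla's actual software):

```python
# Toy sketch of L3-style handoff logic -- names and thresholds are invented
# for illustration only.

TAKEOVER_CONFIDENCE = 0.85    # below this, the scene exceeds the computer's scope
TAKEOVER_WARNING_S = 10.0     # L3 requires giving the driver time to re-engage

def plan_step(scene_confidence: float, seconds_to_known_hard_zone: float) -> str:
    """Decide, each planning cycle, whether the computer keeps driving."""
    if scene_confidence < TAKEOVER_CONFIDENCE:
        # Cop directing traffic, missing lane lines, ambiguous construction zone...
        return "request_immediate_takeover"
    if seconds_to_known_hard_zone < TAKEOVER_WARNING_S:
        # A known upcoming situation the system can't handle: warn the driver early.
        return "request_takeover_with_warning"
    return "continue_driving"
```

The hard part, of course, is producing that confidence number reliably; the gating logic itself is the easy bit.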
 
It's not foolish in that HW2.5 Autopilot will be part of the fleet (probably the vast majority) for a long time to come, and the hardware should be eminently capable of handling the pictured type of situation correctly. That scenario is not an FSD-level task; it is an Autopilot-level task, even a Hardware 1-level one. For it not to be robustly solved at this point is really inexplicable.

Erm yes, I don't disagree. I was responding to the other person about robotaxis etc.
 
Erm yes, I don't disagree. I was responding to the other person about robotaxis etc.

Understood, but my point is that if they're having this much trouble getting "trivial" Autopilot tasks to work using 2.5 hardware, they can't be anywhere near getting FSD working with 3.0 hardware. Because as they learn how to solve these high-level tasks with the 3.0 hardware, those learnings should absolutely propagate back to the 2.5 hardware. That's because these are "executive function" tasks that sit on top of the basic object recognition stack, and should require relatively little processing power on their own. Either set of hardware already has a very good idea where all the cars are positioned around it, and it should only take a small amount of additional computation to figure out the correct thing to do in such situations (e.g. the scenario in this thread). If they can't get that right, it speaks volumes about their software progress in general.

Put another way: the additional compute capacity in HW3 is mostly used for handling "edge cases" for object recognition, and also for handling more complex FSD-specific tasks. It is not needed (or should not be needed) for Autopilot-level tasks, under easy driving conditions. If HW3 succeeds on the latter, that success should easily translate back to HW2.5. The fact that it hasn't, at least so far, is very concerning to me.
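
To illustrate what I mean by a small "executive function" layer sitting on top of the perception stack, here's a rough Python sketch of yielding to a cut-in like the one in this thread (every name and threshold is invented by me for illustration; this is obviously nothing like Tesla's actual code):

```python
# Rough sketch only -- hypothetical names, not Tesla's actual stack.
# Assumes the perception layer already provides tracked vehicles in ego-lane
# coordinates, which is the part both HW2.5 and HW3 already handle.

from dataclasses import dataclass

@dataclass
class TrackedVehicle:
    lateral_offset_m: float     # distance of the other car from our lane center
    longitudinal_gap_m: float   # distance ahead of our front bumper
    lateral_speed_mps: float    # positive = drifting toward our lane center

LANE_HALF_WIDTH_M = 1.8
CUT_IN_HORIZON_S = 2.0          # react if the car will be in our lane within ~2 s

def is_cutting_in(v: TrackedVehicle) -> bool:
    """A nearby car drifting toward our lane center counts as a cut-in."""
    if v.longitudinal_gap_m > 30 or v.lateral_speed_mps <= 0:
        return False
    time_to_our_lane = (abs(v.lateral_offset_m) - LANE_HALF_WIDTH_M) / v.lateral_speed_mps
    return time_to_our_lane < CUT_IN_HORIZON_S

def desired_speed(current_mps: float, vehicles: list[TrackedVehicle]) -> float:
    """Ease off to open a gap for any cut-in, instead of holding speed."""
    if any(is_cutting_in(v) for v in vehicles):
        return max(0.0, current_mps - 3.0)   # gentle ~10 km/h reduction
    return current_mps
```

The point isn't the exact numbers; it's that once you have tracked positions and lateral velocities (which the car already computes), the "let them in" decision itself is computationally trivial.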
 
Understood, but my point is that if they're having this much trouble getting "trivial" Autopilot tasks to work using 2.5 hardware, they can't be anywhere near getting FSD working with 3.0 hardware. Because as they learn how to solve these high-level tasks with the 3.0 hardware, those learnings should absolutely propagate back to the 2.5 hardware. That's because these are "executive function" tasks that sit on top of the basic object recognition stack, and should require relatively little processing power on their own. Either set of hardware already has a very good idea where all the cars are positioned around it, and it should only take a small amount of additional computation to figure out the correct thing to do in such situations (e.g. the scenario in this thread). If they can't get that right, it speaks volumes about their software progress in general.

Put another way: the additional compute capacity in HW3 is mostly used for handling "edge cases" for object recognition, and also for handling more complex FSD-specific tasks. It is not needed (or should not be needed) for Autopilot-level tasks, under easy driving conditions. If HW3 succeeds on the latter, that success should easily translate back to HW2.5. The fact that it hasn't, at least so far, is very concerning to me.

I hear you. I just don't necessarily believe we can equate or compare the two things: what we have now, and what we will in theory have by the end of the year (Elon Time). The situation in this thread... who knows what went wrong here. Technically speaking, it's Autopilot not being used as intended (highways, etc.). Now, I use Autopilot in traffic often, but I don't actually expect it to work flawlessly there - it's just an aid.
 
I hear you. I just don't necessarily believe we can equate or compare the two things: what we have now, and what we will in theory have by the end of the year (Elon Time). The situation in this thread... who knows what went wrong here. Technically speaking, it's Autopilot not being used as intended (highways, etc.). Now, I use Autopilot in traffic often, but I don't actually expect it to work flawlessly there - it's just an aid.

Well... Highway 1 is a highway, and where the incident occurred it was even a divided highway, so I disagree that it's outside Autopilot's domain. The same situation could easily occur on a proper freeway. So I do think this is a situation where Autopilot could reasonably be expected to handle the situation correctly. I don't expect to hear back personally from Tesla about what went wrong, but I do hope they will at least take a look at it and add it to their incident database.

I also hope to see significant improvement by the end of the year, but the last few updates have seemed like a massive regression. I've been getting tons of phantom braking, and a few weird cases where the car is completely stopped in stop-and-go freeway traffic (on Autopilot) and suddenly freaks out madly with all levels of beeping. Also trying to cross double-yellow lines into the carpool lane (Drive on Nav), and then trying to merge left AGAIN into the concrete median. (This happened more than once; I caught it on dashcam and posted it in a different thread.)

Anyway, here's hoping. Tesla has pulled off the impossible before, and I would love to be pleasantly surprised!
 
Can I assume that those of you who are using Autopilot on "surface streets" know that this system is not designed or approved for that use? It states pretty clearly that these Autopilot functions are to be used only on limited-access roadways.

From the current manual:

Warning: Do not use Traffic-Aware Cruise Control on city streets or on roads where traffic conditions are constantly changing.

and

Warning: Autosteer is intended for use only on highways and limited-access roads with a fully attentive driver. When using Autosteer, hold the steering wheel and be mindful of road conditions and surrounding traffic. Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present. Never depend on Autosteer to determine an appropriate driving path. Always be prepared to take immediate action. Failure to follow these instructions could cause damage, serious injury or death.

I am surprised to see so many posts about shortcomings on surface streets, when Tesla makes no claim that this should work there at all.
 
I was driving yesterday on PCH in heavy beach traffic, southbound near Zuma Beach, in the left lane, with Autopilot engaged. A car (white Toyota) tried to abruptly merge in from the right, and I expected Autopilot to recognize the situation and brake or take evasive action, but it didn't. I had to manually slam on the brakes at the last second; if I hadn't, I'm certain there would have been a collision.

I've sent the dashcam video and info to Tesla so they can use it as a data point for testing. Meanwhile, this is a good reminder that Autopilot is no excuse to take your eyes off the road!


I've had the exact same incident happen to me with NoA, except the car that merged into my lane had already completed its lane change... I'll post a video of it tonight if I remember...
 
Two sides as usual: it doesn't respond quickly enough to intrusions close to the front of the car (not using the B-pillar cams?), yet it phantom-brakes for cars in other lanes that aren't intruding at all.

FSD uses a bigger and faster neural net, so it will be much "smarter" when figuring this stuff out, provided the training is there. AP is not a great indicator of FSD performance capability, but it might be an indicator of Tesla's programming prowess.
 
I had a similar experience on the NJTPK. Traffic was going about 10 mph. I was on NoA and in the far right lane. Cars were merging from the right-hand side after stopping for a stop sign (there was construction around that particular highway entrance). There were about two car lengths in front of me when an SUV tried to merge into my lane. Instead of stopping, NoA tried to close the distance to the car in front of me, almost causing me to sideswipe the merging SUV. Of course I took over before that happened, but I was surprised my car didn't slow down to let the merging car in.
 
First of all - I agree completely that everyone needs to be mindful and fully engaged even when on Autopilot... ESPECIALLY on city streets, which (as many will surely come along to remind us any second now) are currently beyond the use case for the current software version.

But this doesn't surprise me in the least. I've seen instances where AP would have actually pulled me right into another vehicle when it misread lane lines through an intersection. Again - this is outside the intended use of Autopilot, so I know I'm taking a risk, BUT... you'd think that Autopilot wouldn't ram you into a car that's beside you just because it thinks your lane is shifting.
 
I was driving yesterday on PCH in heavy beach traffic, southbound near Zuma Beach, in the left lane, with Autopilot engaged. A car (white Toyota) tried to abruptly merge in from the right, and I expected Autopilot to recognize the situation and brake or take evasive action, but it didn't. I had to manually slam on the brakes at the last second; if I hadn't, I'm certain there would have been a collision.

This wasn't really a situation that required AP to take evasive action, just slowing down a bit to let the guy in (which is what a human driver would have done). Another example of human-"supervised" AP being a far less courteous driver than the human would have been without AP. If everyone on the road behaved like AP, traffic would be a mess.
 
I've had the exact same incident happen to me with NoA, except the car that merged into my lane had already completed its lane change... I'll post a video of it tonight if I remember...
Here's the video:
See around the 0:46 mark; I had to quickly cancel NoA, otherwise it would have rear-ended the minivan (Autopilot was accelerating the car toward it)...