Near freeway divider collision - on autopilot

Today I was at DQ grabbing a burger, and a man came in from a well-respected local body and paint shop.

I asked if the shop had an occasion to repair a Tesla. He said yes, several.
I asked about observed quality; ease of repair; and if there were paint issues.
He said the quality was very good; repairs were similar to other vehicles ... except parts were difficult to acquire, and he had seen no paint issues to speak of.

The biggest problem?
Driver error. Most of the repairs were on cars that didn't stop, with drivers relying too much on the vehicle to "control and avoid."
 
At the Autonomy Day event, Musk said that they won't do that. They want the system to make decisions using computer vision and not rely on high-precision GPS data, for several reasons: potential GPS spoofing, general availability issues (which do happen), and construction popping up and other unforeseen things that humans can deal with easily.
That's great ... but right now you have examples of the cars driving straight into a barricade. Literally, a fatal error. Until they have improved their computer's "vision", which I think will take years and far more hardware, they need to preload "knowledge" into the system. So, right now, you would compare the vision system's idea of the world with that from GPS and preloaded maps, and if they don't closely match, you automatically drop out of AP mode. If the GPS system appears to be unreliable, drop out of AP mode.
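A minimal sketch of that kind of cross-check in Python (the thresholds, data structures, and map source here are hypothetical, purely to illustrate dropping out of AP when vision and GPS/map disagree):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LaneEstimate:
    center_offset_m: float  # lateral offset of the lane center from the car, in meters
    heading_rad: float      # lane heading relative to the car, in radians

# Hypothetical tolerances; real values would need serious tuning and validation.
MAX_OFFSET_DISAGREEMENT_M = 0.5
MAX_HEADING_DISAGREEMENT_RAD = 0.05

def autopilot_allowed(vision: LaneEstimate,
                      map_based: Optional[LaneEstimate],
                      gps_reliable: bool) -> bool:
    """Stay in AP only if the vision estimate agrees with the preloaded map and GPS looks healthy."""
    if not gps_reliable or map_based is None:
        return False  # unreliable GPS or no map coverage: hand control back to the driver
    if abs(vision.center_offset_m - map_based.center_offset_m) > MAX_OFFSET_DISAGREEMENT_M:
        return False  # the two world models disagree about where the lane is
    if abs(vision.heading_rad - map_based.heading_rad) > MAX_HEADING_DISAGREEMENT_RAD:
        return False
    return True
```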

As for the hardware, I think I mentioned in a previous post that it'll probably take multiple binocular, ultra HD cameras with focusing ability, each with hardware as sophisticated as the new FSD processor, to even come close to a reliable real-world vision system. And that's just the first abstraction layer.

I really recommend watching the Autonomy Day event. It's 3 hours long (if not longer), but absolutely worth it, especially if you come from a technical background. They explain how the Autopilot and FSD hardware work, how they constantly train and improve their computer vision network, why feedback like the OP's is important, and how they can utilize the existing fleet to collect more training data.
I watched the presentation and skimmed through it again before replying. Karpathy's talk is the interesting one. You can see from the iguana-and-dog part at the beginning that the NN approach is *not* correct! He even mentions how people don't need to be fed millions of images of a dog in order to recognize it in different orientations. We see the dog once and can instantly map its features onto different orientations. Yes, we've seen examples of dogs before and in different orientations, but we have abstracted that into more of an internal physics representation, which allows us to recognize any dog at any orientation automatically (independent of scale, rotation, distance, and partial occlusion). I don't think an NN works at that level.

None of this even touches on the issue of exactly what the NN is latching onto in the input imagery. In my field, there was a burst of activity in using NNs for object detection, but that has subsided because no one knows how the NN detects the features. So, if it made a false-positive or false-negative prediction, no one knew why or how to fix it. The researchers have basically gone back to more physics-oriented, deterministic approaches. Personally, I think the "place"/"grid" organization model of the brain might be the best.
 
Although there have been a couple of other posts noticing that the car did not ever cross the lane boundary, they didn't seem to get much attention.

It does look to me like the car did NOT cross the lane line and did NOT "be line [sic] straight toward the divider". It does appear to be POSSIBLE, however, that the car MIGHT have side-swiped the divider if the OP had not taken over. It does NOT appear that the car would have impaled itself on the divider point, since it seemed to already be past that point at the time of the intervention. Maybe frame by frame examination of the video would help clarify that.

I have sometimes had the car edge to the outside of a lane in a curve while in Autosteer. If other cars are around I usually have taken over and sometimes I have gritted my teeth a little longer and seen that the car did not, in fact, leave the lane.

I will not make any further assertions about what actually DID happen, or criticize the OP's actions, since I wasn't there and do not have any data other than the dashcam videos. I certainly applaud the OP for being alert and ready. At best, the car should not have behaved in such an unnerving manner. Presumably, someday Model 3's won't do that kind of thing anymore.

While it's in beta, I would prefer to drive conservatively and have lots of time, buffer, and distance to react, rather than push the envelope to see whether there will be an accident or not.

In this case, it might be true that Autopilot would not slam right into the gore-point concrete divider, but I would not recommend the approach of having "gritted my teeth a little longer and seen that the car did not, in fact, leave the lane..."

The Apple engineer in Mountain View, CA already died in this scenario, so I am very much against risking an additional death by playing with such a dangerous scenario.
 
I write software for a living, including doing some signal processing and feature detection on real world data. It's extremely messy data and that causes my algorithms to make laughable mistakes, latch onto phantom "features", miss true features, etc.

What do you think: why is it so hard for Tesla (with its neural net for feature recognition) to code a rule that the car should never drive into a gore-point divider? Shouldn't a gore point like this (from the OP's video) be trivial for neural-net feature recognition to pick out?

[Screenshot from the OP's dashcam video showing the gore point]
 
People are sentient. After driving a vehicle with driver aids, people should, with time, learn its strong and weak points.

No amount of presentation from a salesperson will ever be able to cover every single possible event where a human driver might be better or an autonomous system might be better.

Take some cautious time with new cars. Learn their capabilities.

As an example, while I will often take over from Autopilot in areas where I feel more capable than it is, there was also an incident where I was on a dangerous, curving road. The fog was so thick that I could not see 5 feet ahead of the car. I tried Autopilot and it was much better at navigating through the fog than I was. Point being that there could have been an accident in either case, but I felt Autopilot was superior to my restricted vision.
 
AP will never be FSD at all, and isn't ever intended to be.

They are different products with different features and different operational domains (and even different HW requirements once the additional FSD features come out later this year)
It is if you enable NoA. :p
“Well, we already have Full Self-Driving capability on highways. So from highway on-ramp to highway exiting, including passing cars and going from one highway interchange to another, Full Self-Driving capability is there. In a few weeks, we’ll be pushing an update that will allow the option of removing stalk confirm (for Navigate on Autopilot) in markets where regulators will approve it, which would be the case in the US, for example."
- Elon Musk
 
@calidreamz808 - how did you get the 30-minute call with Tesla set up, and who did you speak with/what team?
I called the general customer service number and told the lady what happened, and stated that I had dashcam video and also used the "bug report" feature. She asked me a lot of questions like what type of road was it, was a car in front of me.. behind me.. next to me, and asked about the exit ramp and what side it was on etc. Very specific questions, and then put me on hold numerous times as she checked with the advanced team on what she should record and the information to ask. She then followed up with an email about 10 mins later requesting the videos and asked more questions. That was the last time I had communication with Tesla, and that was on Friday 7/5. The incident happened on 7/4.
 
Also, today I used AP on my way to a doctor's appointment on the exact same freeway; however, I didn't have to go far enough down the freeway to reach the problem divider. I did, however, pay very careful attention to how Autopilot was managing the center of the lane. I noticed it was weaving a bit while centering and never really stopped the sway. I've noticed this before, but it's very subtle and I felt I was being too critical. The car did hug the left lane border when in the HOV lane, which, again, is creepy because oftentimes on CA freeways the freeway divider is right up against that solid line and there is no shoulder... a small deviation would result in a sideswipe of that wall. Hugging so close to the left border usually didn't bother me TOO much, but now I'm a little creeped out to use AP in the HOV lane, given how many HOV lanes pass those dividers.
 
This was a very scary 4th of July traveling experience on the 55 north / 91 east interchange while on autopilot. I’ve also included the dashcam videos for review.

You are not the only one. I traveled that same path before and experienced a close near-miss at that exact spot with AP on (no FSD). Tried it two more times with the same result, so I never have AP on in that area if I'm in the far left lane.
 
What do you think: why is it so hard for Tesla (with its neural net for feature recognition) to code a rule that the car should never drive into a gore-point divider? Shouldn't a gore point like this (from the OP's video) be trivial for neural-net feature recognition to pick out?
Does the Tesla system detect all objects? Does it create a 3D map of the entire environment in front of the car (a high-resolution depth map), or only the objects it has been trained to detect? I don't know. In the Autonomy Day presentation, Karpathy showed a 3D mapping demo, but I got the impression that it was at the research stage, not deployed.

If you roll back to around 0:12 in the video, the barrier is almost indistinguishable from the background. However, we know that it's much closer to the car than the background because it moves horizontally faster across the screen (depth cue from a mono motion camera).
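To make that depth cue concrete, here is a rough sketch of the geometry (hypothetical numbers, not anything from Tesla's pipeline): for a stationary point that the camera is moving past at lateral speed v, the apparent pixel speed is roughly f*v/Z, so faster horizontal motion on screen implies a smaller depth Z.

```python
def depth_from_parallax(pixel_speed_px_s: float,
                        lateral_speed_m_s: float,
                        focal_length_px: float) -> float:
    """Crude monocular depth estimate from motion parallax.

    Assumes a stationary point and a camera translating past it;
    apparent pixel speed ~ f * v / Z, so Z ~ f * v / pixel_speed.
    """
    return focal_length_px * lateral_speed_m_s / pixel_speed_px_s

# With a made-up 1000 px focal length and 1 m/s of relative lateral motion:
print(depth_from_parallax(10.0, 1.0, 1000.0))   # ~100 m away: barely moves, blends into the background
print(depth_from_parallax(200.0, 1.0, 1000.0))  # ~5 m away: sweeps quickly across the frame
```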

I don't have a Tesla yet so I haven't had a chance to watch it in action. I only did a few test drives and only turned on AP for a short section. Is there a way to turn on a "debug" mode where the computer overlays detections on a front facing camera view?
 
Driving too fast and too much data to process.

Option 1: You're programming a device to scan pictures coming in on a video feed. The FPS might be 60, or it might be as low as 1. You scan the picture from the top left, down, and then move over one column. In Excel spreadsheet terms, that's A1..A255, then B1..B255, then C1..C255. Once you get to Z255, you load the next picture.

But depending on how much changes from A1 to Z255, you may need more time. We're looking at things like color/contrast, common shapes, and potentially grouping.

The faster you drive, the more likely the system is to disregard frames 3, 9, 11, 12, 13, 14, etc.

One frame at a time, one column at a time.
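A toy sketch of that column-by-column scan (nothing here reflects Tesla's actual implementation; the per-pixel work is just a placeholder):

```python
import numpy as np

def scan_frame(frame: np.ndarray) -> None:
    """Walk one frame a column at a time, top to bottom: A1..A255, then B1..B255, and so on."""
    height, width = frame.shape[:2]
    for col in range(width):          # move over one column at a time
        for row in range(height):     # scan down the column
            pixel = frame[row, col]
            _ = pixel                 # placeholder for the color/contrast/shape-grouping work
    # In a real pipeline this work has a per-frame deadline; when it overruns,
    # later frames simply get disregarded, as described above.
```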

Option 2: You're programming a device to take pictures coming in on a video feed. You compare frames 1..2, 2..3, 3..4, etc. Like a heavily compressed video, you only try to figure out what has changed, and you evaluate those changes into objects (e.g. lines, cars, etc.).

But if you're driving too fast, each frame is too different from the prior one. The system can't process the differences fast enough to ascertain what is a lane marker, a vehicle, or a gore point.
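And a similarly hedged sketch of Option 2, frame differencing (again just an illustration): the fraction of pixels that change between consecutive frames grows with speed, and everything that changed still has to be turned back into lane markers, vehicles, or gore points.

```python
import numpy as np

def changed_mask(prev: np.ndarray, curr: np.ndarray, threshold: int = 25) -> np.ndarray:
    """Boolean mask of pixels that changed noticeably between two consecutive grayscale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > threshold

def change_load(prev: np.ndarray, curr: np.ndarray) -> float:
    """Fraction of the frame that changed; a crude proxy for how much the next stage must interpret."""
    return float(changed_mask(prev, curr).mean())
```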

This is why a faster processor is necessary. The more video inputs there are, the more complete a picture can be rendered - and the faster you can overwhelm the system.

Mobileye used one camera and a few ultrasonic sensors (plus the radar sensor). The newer versions use more cameras, and ultrasonic sensors that reach out further. But it's still limited by the processing speed of whatever CPU/GPU is out there, and by the methodology used in programming.

The moral of the story is to slow down and pay attention. Think like a computer programmer would, think like how the car would process data, and you'll be safer.

Edit: I live off a road with 120-180 degree turns, and Autopilot can handle the entire thing at 25 MPH. Speed up to the speed limit of 35 and it'll disengage about halfway through. Speed up to 45 MPH, which I can handle myself, and it'll disengage about 10-20% of the way up the hill (or cross over the double yellow and I'll step in).
 
Does the Tesla system detect all objects? Does it create a 3D map of the entire environment in front of the car (a high-resolution depth map), or only the objects it has been trained to detect? I don't know. In the Autonomy Day presentation, Karpathy showed a 3D mapping demo, but I got the impression that it was at the research stage, not deployed.

If you roll back to around 0:12 in the video, the barrier is almost indistinguishable from the background. However, we know that it's much closer to the car than the background because it moves horizontally faster across the screen (depth cue from a mono motion camera).

I don't have a Tesla yet so I haven't had a chance to watch it in action. I only did a few test drives and only turned on AP for a short section. Is there a way to turn on a "debug" mode where the computer overlays detections on a front facing camera view?


Check out this article for more details (on object detection)

https://electrek.co/2018/06/18/what-tesla-autopilot-see-understand/

Weight is assigned to different "objects" in what the Tesla AP sees, and it's also dependent upon the size of the object (e.g. a truck in front of you is more of a risk than a motorcyclist further away).
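The article does not spell out the actual weighting, but a toy version of the idea might look like this (class weights and numbers are made up purely for illustration, not Tesla's model):

```python
# Made-up class weights, purely illustrative; bigger and closer objects score higher.
CLASS_WEIGHT = {"truck": 3.0, "car": 2.0, "motorcycle": 1.5, "cone": 0.5}

def risk_score(obj_class: str, apparent_size_m2: float, distance_m: float) -> float:
    """Toy risk weighting: class weight scaled by apparent size and discounted by distance."""
    return CLASS_WEIGHT.get(obj_class, 1.0) * apparent_size_m2 / max(distance_m, 1.0)

# A large truck 20 m ahead outweighs a small motorcycle 80 m away:
print(risk_score("truck", apparent_size_m2=8.0, distance_m=20.0))       # 1.2
print(risk_score("motorcycle", apparent_size_m2=1.0, distance_m=80.0))  # ~0.019
```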
 
...The car did hug the left lane border...

It's quite risky to operate Autopilot when there's very little clearance on either side. Thanks to @Daniel in SD who gave me this video link:



Also, another famous AP1 incident in a very similar scenario:



Another very old AP1 video, hitting the traffic cones on the right. There was very little time to react if your hands were not on the steering wheel to get the tactile feedback that the steering was suddenly turning to the right toward the traffic cones:

 
The biggest problem?
Driver error. Most of the repairs were on cars that didn't stop, with drivers relying too much on the vehicle to "control and avoid."

I was wondering about that, as I occasionally look at the Tesla salvage auctions. There are a LOT of brand-new Model 3s passing through those auctions regularly. Looking at Copart right now, there are 52 Model 3s, and 96(!) on IAAI. I don't know if the salvage rates are any higher than for a car with equivalent production numbers, but as a layperson it seems like a lot.

Would be interesting to see the internal insurance actuary numbers on them.
 
I was wondering about that, as I occasionally look at the Tesla salvage auctions. There are a LOT of brand-new Model 3s passing through those auctions regularly. Looking at Copart right now, there are 52 Model 3s, and 96(!) on IAAI. I don't know if the salvage rates are any higher than for a car with equivalent production numbers, but as a layperson it seems like a lot.

Would be interesting to see the internal insurance actuary numbers on them.
It seems like a lot but it's probably not. There are 675 2018 Toyota Camrys on IAAI right now. :eek:
People just crash cars all the time.
 
When there is a serious accident, it’s in fact almost always, maybe always, the case that it is an experienced user and the issue is more one of complacency. They get too used to it. That tends to be more of an issue. It is not a lack of understanding of what Autopilot can do. It’s actually thinking they know more about Autopilot than they do, like quite a significant understanding of it."

I find that very interesting, as the more I learn about (and experience) AP, the more alert (stressed) I am to make sure I catch it when the car makes an error ...

It would match up with this: "Drivers rely too heavily on car safety systems, study finds" - though perhaps more in the sense of confusing the perceived vs. actual abilities of the assistive features.
 
Elon Musk says that it's the people with the most experience, those who think they understand Autopilot, who get in the most trouble. Thinking that you understand its limitations may actually be dangerous. Going 60k miles without an accident is not statistically meaningful (the average driver goes 150k without an accident!).
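As a back-of-envelope check on that 60k-vs-150k point (purely illustrative, assuming accidents arrive at random at one per 150k miles):

```python
import math

# If accidents occurred randomly at 1 per 150,000 miles (Poisson assumption),
# the chance of any given driver covering 60,000 miles with zero accidents is:
rate_per_mile = 1 / 150_000
p_clean_60k = math.exp(-rate_per_mile * 60_000)
print(round(p_clean_60k, 2))  # ~0.67: roughly two-thirds of average drivers would manage it by luck alone
```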

"One of the common misimpressions is that when there is, say, a serious accident on Autopilot, people – or some of the articles – for some reason think that it’s because the driver thought the car was fully autonomous and it wasn’t, and we somehow misled them into thinking it was fully autonomous. It is the opposite.

When there is a serious accident, it’s in fact almost always, maybe always, the case that it is an experienced user and the issue is more one of complacency. They get too used to it. That tends to be more of an issue. It is not a lack of understanding of what Autopilot can do. It’s actually thinking they know more about Autopilot than they do, like quite a significant understanding of it."

Exactly. Well put.
 
...stressed...

Not me!

Knowledge is power.

I am much more relaxed now, because the more I learn about Autopilot accidents, the more I know how to deal with the system.

I used to freak out with every phantom brake, but now I just associate them with certain locations and have my foot ready to floor the accelerator as needed.

As for unexpected steering, there were numerous undesirable steering events when I first got it in early 2017. I've learned to constantly apply slight counter-torque so that I can feel the Autopilot steering, and I've never had any problem correcting it. It would be another story if my hands were not on the steering wheel: I think I would have gotten into very nasty accidents by now.

It's so relaxing to me that I don't even realize I've been making the corrections (my passengers, however, are not so relaxed).
 