Autopilot - What's a Valid Road Line to Follow?

SOULPEDL

I'm noticing this anomaly occasionally (still). In evening glare, the color/reflectivity/contrast of road repair patches (even manhole covers) seems to get misinterpreted as road lines. It's as if the algorithm that determines a valid line isn't looking at shapes or circumstances broadly enough. This photo, taken today, would probably confuse any EAP, and these patches are everywhere in Chandler; it's how they repair roads before repaving, and dusk falls right around rush hour.

[Attached photo: 20181007_173718.jpg]


The lines on the M3's display were jumping all over and the steering was wandering. On another occasion, I had a quick swerve from a manhole cover (which I believe was the cause), as if the car tried to avoid it abruptly, even though it was about 100 ft ahead and sitting right on a proper road line. Did the computer think the line turned on a dime? Is there any common sense applied here, or are we only looking at the probability of a line and going with the best candidate at every moment in time, without regard to past, future, and reality?

Makes me wonder why there isn't some higher-level autonomy that makes assumptions, for example, that roads don't just jump around in real life. Dotted lines have distinct shapes (at first anyway, less so with time, but within the original boundary), and they typically appear in a regular pattern at a regular spacing. Do the cars around it also jump around? Does the car seem centered within the traffic pattern? Sure, these questions are more difficult to process, but they may be necessary.

The higher frame rates of next-gen AI chips are only as smart as the algorithms and the training behind them. The fact that a lane jumps at all concerns me. Shouldn't these systems take a meta-cognitive approach (maybe a second computer that double-checks the output of the first based on higher-level sanity rules), or are we just counting pixels here? Defining the lane is so elementary to FSD that it has to be perfect, and yet this still happens.
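
To make that concrete, here's a rough sketch of the kind of sanity check I'm imagining. This is purely my own illustration, not anything Tesla has published; every name and threshold in it is made up.

```python
# Hypothetical sketch (not Tesla's code): a temporal sanity check that rejects
# lane-line estimates that "jump" farther than a real line could move between
# camera frames, falling back to the previous good estimate instead.

MAX_LATERAL_JUMP_M = 0.3   # assumed plausibility limit per frame (made-up number)

class LaneSanityFilter:
    def __init__(self):
        self.last_good_offset = None   # lateral offset of the line, in meters

    def update(self, detected_offset, confidence):
        """Return the lane-line offset to trust for this frame."""
        if self.last_good_offset is None:
            self.last_good_offset = detected_offset
            return detected_offset

        jump = abs(detected_offset - self.last_good_offset)
        if jump > MAX_LATERAL_JUMP_M and confidence < 0.9:
            # Implausible jump with low confidence: keep the previous estimate
            # ("roads don't just jump around in real life").
            return self.last_good_offset

        self.last_good_offset = detected_offset
        return detected_offset


# Usage: feed per-frame detections; a tar patch misread as a line for one frame
# gets ignored instead of yanking the steering target sideways.
f = LaneSanityFilter()
print(f.update(1.80, 0.95))   # 1.80  (first frame, accepted)
print(f.update(0.40, 0.55))   # 1.80  (1.4 m jump at low confidence, rejected)
print(f.update(1.78, 0.92))   # 1.78  (plausible, accepted)
```

Something this crude obviously isn't the whole answer, but it's the flavor of "roads don't jump around" reasoning I'd expect on top of the raw detections.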
 
It is a seemingly basic task, but training a computer to reliably make decisions like a human is much harder than you might think, especially anything involving "common sense", which is grounded in experience-driven intuition. This is why Autopilot is still a driver-assist technology.
 
First of all, Autosteer isn't meant for the road you have in the picture, since there are traffic lights. You shouldn't be using those features there.
 
This photo, taken today, would probably confuse any EAP, and these patches are everywhere in Chandler; it's how they repair roads before repaving, and dusk falls right around rush hour.
Please do not use Auto Steer on roads with cross traffic. Tesla specifically says not to do that.

That kind of road surface is very challenging for the Tesla software to interpret. It’s also hard for humans.
 

AI is hard (to do cheaply).

At least you have a good test area for each update. ;)
 
Seriously? I think people lost the point.
Am I going to the principles office now?

You should use the example that happens on the 101 travelling west just past I-17 (north side of town) around 5 PM or so. It's a legit freeway where AP completely fails to read the lane lines correctly. Frankly, it was hard for me as a human to see the lines once the road was reflecting the low sunlight.
 
You mean freeways have lines too? OMG! (Sounds like a Steve Martin bit from The Jerk, right?)

I've been told by Service that the car recognizes the color and temperature (or color temperature) of the line's reflection. All I'm saying is that there should be more robust criteria, if there aren't already, and higher AI frame rates don't by themselves solve this. This is straight pattern recognition (easy) plus situational awareness (hard).
 
As autonomous driving becomes more the norm, it will become the responsibility of government to stripe the roadways appropriately.

It actually is already, and it might be even more so in the future, to your point. What I learned about the color/temperature of the lines came from a discussion with Service about how fleets of cars in each geographic region have different SW builds. The example they shared was that the DOT didn't quite use the right line color, so instead of fighting the bureaucratic system, Tesla decided to allow the modified color for that area only.

If you think about it, there have to be exceptions like this all over the place for various things. Otherwise every car would stop for the same billboard shadow on the road at 5 PM (maybe a poor example, but you get the point).
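
Just to illustrate the shape of the thing (my guess only; I have no idea how Tesla actually stores these exceptions), a per-region override could be as simple as:

```python
# Toy illustration, not Tesla's implementation: per-region exceptions layered
# over default lane-marking thresholds, e.g. a wider accepted color range
# where the local DOT paint is off-spec. All keys and values are invented.

DEFAULT_PROFILE = {"hue_range": (45, 60), "min_contrast": 0.35}

REGION_OVERRIDES = {
    "AZ-Chandler": {"hue_range": (40, 65)},   # tolerate the off-spec paint color
}

def marking_profile(region):
    """Merge the default classifier thresholds with any regional override."""
    profile = dict(DEFAULT_PROFILE)
    profile.update(REGION_OVERRIDES.get(region, {}))
    return profile

print(marking_profile("AZ-Chandler"))   # widened hue_range, default contrast
print(marking_profile("CA-SantaCruz"))  # all defaults
```

If the same exception starts repeating in enough regions, it stops being an exception and becomes the new default rule, which is the black-swan point below.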

However, for thinking that the lines in that picture are road lines, I'm giving Tesla a C- there AND I'm telling the principle (not a typo, nobody caught that one?).
 
Furthermore... I'd like to challenge the claim that Tesla is "accumulating millions of miles in real-world driving experience". IMO, that statement describes spot learning, not general learning all the time. When is Tesla going to start gathering data from our cars, such as vision camera data (based on feedback from Tesla that they don't yet use our camera data)? My sense is that it's only limited sensor data based on triggered fail events; that location then becomes a focus of attention for further learning (or an exception patch).

Why am I bashing Tesla here when I'm also long on $TSLA? I think it's dangerous to give people false confidence in the system, because of the value of the word trust. (Waymo is really bad at this, by the way. I saw one stop suddenly twice in a row at Intel just yesterday. IMO it thought the pedestrian sign was a person, as the van had to go around me a bit. To Waymo, it was either a person or a simple object in the way; it had no awareness that that exact sign is always there, every day (and it's been through there over 1,000 times). You won't hear about these events; all passengers are under NDA, and Alphabet controls that narrative.)

It's no secret that I disagree with Tesla's zero user training to explain the system so people know when or where it's safe or less safe. I'm not talking about giving away IP here, as I've pretty much figured much of this out over time just by driving and making it fail often. (And now my defense in court just got weaker if something does go wrong, but at least people will understand more.)

Here's another example. I was in a very tight cloverleaf exit coming off the freeway into Santa Cruz a few weeks back; the outside line faded (or was out of visibility from my perspective) and the car started heading off the pavement before I took over. Now put this together with every example we've seen so far showing that the car can determine the usable road surface (quite well actually, though maybe not on a negative edge, as was the case here). But there's more.

We know the line-following algorithm always tries to place the car evenly between the lines (and we've all seen this on EAP). So when the outside line faded off the horizon, there was no "sanity check" that on an exit ramp, lanes NEVER get wider (there are no lane changes on ramps). The fact that the right line was always visible, and that the usable-road-surface data should have been available, leads me to believe that nothing was really looking at the circumstances surrounding the event. Again, that higher-level system asking "does this make sense?" (according to basic rules that can be defined in code).
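
Here's a rough sketch of the rule I mean. It's my own toy illustration, not how EAP actually works; the 3.7 m nominal width and the function names are assumptions.

```python
# Hypothetical "lanes don't get wider on a ramp" sanity check: if one boundary
# is lost, hold the last known lane width off the surviving line instead of
# re-centering against a phantom outside edge.

NOMINAL_LANE_WIDTH_M = 3.7

def lane_center(left_offset, right_offset, last_width=NOMINAL_LANE_WIDTH_M):
    """Return (center_offset, lane_width); an offset is None when that line is lost."""
    if left_offset is not None and right_offset is not None:
        return (left_offset + right_offset) / 2, right_offset - left_offset
    if right_offset is not None:          # outside (left) line faded on the ramp
        return right_offset - last_width / 2, last_width
    if left_offset is not None:
        return left_offset + last_width / 2, last_width
    raise ValueError("no lane boundaries visible; alert the driver")

# Both lines visible: center is midway between them.
print(lane_center(-1.8, 1.9))        # ~ (0.05, 3.7)
# Left line lost on the ramp: keep following the right line at the old width
# instead of drifting toward the missing edge.
print(lane_center(None, 1.9, 3.7))   # ~ (0.05, 3.7)
```

That's the whole "does this make sense" layer in about fifteen lines: nothing fancy, just a constraint the geometry of a ramp already guarantees.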

If you go back through my posts, you'll see other examples of failures where, had I understood the system better, I would never have put myself in that situation, especially regarding the maximum allowed steering torque in EAP. Just sayin', Tesla: step it up, guys. And I think we deserve to know why the fire truck was hit, and why that can't happen again.
 
We know the line-following algorithm always tries to place the car evenly between the lines (and we've all seen this on EAP). So when the outside line faded off the horizon, there was no "sanity check" that on an exit ramp, lanes NEVER get wider (there are no lane changes on ramps). The fact that the right line was always visible, and that the usable-road-surface data should have been available, leads me to believe that nothing was really looking at the circumstances surrounding the event.
Patience. V9 is much better at handling that situation correctly.
 
Furthermore... I'd like to challenge the claim that Tesla is "accumulating millions of miles in real-world driving experience". IMO, that statement describes spot learning, not general learning all the time. When is Tesla going to start gathering data from our cars, such as vision camera data (based on feedback from Tesla that they don't yet use our camera data)? My sense is that it's only limited sensor data based on triggered fail events; that location then becomes a focus of attention for further learning (or an exception patch).

I always assumed they were collecting training data and labels. I hadn’t thought they were programming in handling of specific locations, as my understanding is that Tesla is focused on building a general purpose self-driving car that more closely approximates the way a human driver works (versus having one that more or less follows virtual rails or drives through a virtual world map).

The trigger/fail events you referred to seem like important labels for ML training. I’d be fascinated to learn more about the details of how they’ve built AP and are building the future iterations, but for now can only make somewhat educated guesses.
 
I always assumed they were collecting training data and labels. I hadn’t thought they were programming in handling of specific locations, as my understanding is that Tesla is focused on building a general purpose self-driving car that more closely approximates the way a human driver works (versus having one that more or less follows virtual rails or drives through a virtual world map).
The trigger/fail events you referred to seem like important labels for ML training. I’d be fascinated to learn more about the details of how they’ve built AP and are building the future iterations, but for now can only make somewhat educated guesses.

I can't speak to any exceptions other than the one they shared. And if you think about it, the exceptions could become the general rule if they repeat at multiple locations. If one black swan were seen, you'd make an exception. If two or three were discovered, you'd make a new rule that not all swans are white, so start checking the color too.

As for the "Exceeded Steering Wheel Angle", they solved that (somewhat) by reducing the speed on a freeway interchange nearby that also had a dip in the road causing my own hand's wheel torque to add to the correction torque by the car and causing it to exceed the rotational limit allowed. It repeatedly popped out of EAP at the same spot (dangerous), but only if I helped it (and I couldn't not in fear of hitting the wall at 65 mph).

This is a perfect example where knowledge would have been useful to avoid that situation entirely. I would actually love to know my probabilities at every moment: a linear warning gauge, not just go/no-go. "Dude, you're pushing your luck" would be an excellent warning, right? Followed by a review of why when I got home (or a warning code I can review and understand). There is clearly a learning curve to understanding EAP limitations; I advocate accelerating that curve. My background is 23 years as a Sr. Technical Training Engineer at Intel, so this training thing is obvious to me. But now I'm writing code for foot gesture recognition (not AI yet, but eventually). Ya, the whole thing is fascinating!
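
Something like this is all I'm asking for. It's a wish-list sketch, not an existing Tesla feature; the thresholds and wording are mine.

```python
# Wish-list sketch: map the system's current confidence to a graded driver
# warning instead of a silent go/no-go. Thresholds and messages are invented.

def warning_level(confidence):
    """Translate a 0..1 confidence estimate into a driver-facing message."""
    if confidence >= 0.90:
        return "OK"
    if confidence >= 0.75:
        return "Caution: conditions degrading"
    if confidence >= 0.60:
        return "Dude, you're pushing your luck"
    return "Take over now"

for c in (0.95, 0.80, 0.65, 0.40):
    print(c, "->", warning_level(c))
```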
 
I would actually love to know my probabilities at every moment: a linear warning gauge, not just go/no-go. Followed by a review of why when I got home (or a warning code I can review and understand).
Some of that is probably proprietary data that Tesla won't want to share. (not that I wouldn't love to see it)
 
The trigger/fail events you referred to seem like important labels for ML training. I’d be fascinated to learn more about the details of how they’ve built AP and are building the future iterations, but for now can only make somewhat educated guesses.
ML? AP? Acronyms DMN
 
Some of that is probably proprietary data that Tesla won't want to share. (not that I wouldn't love to see it)

Or that they don't understand human learning (HL) AND don't consider that as part of the overall safety equation.

If a tire sensor tells me that my air is low, we don't need to know HOW it knows. Machine knowledge is so deep; and we thought reverse engineering assembly language was hard (notice I didn't use AL). We still have no clue how our brains work; we only know that they do. See my point?

If you're in EAP (on the freeway of course) and someone cuts you off, should you:
  1. Hit the brakes?
  2. Let off the accelerator and change lanes?
  3. Let EAP perform the safest maneuver?
What if I told you that hitting the brakes disengaged EAP (but you didn't know that)? Would that change your response?

Then what if only the front cameras actually worked, and EAP would not (yet) know if a car was in your blind spot?

See what I'm saying? Tesla is not considering the total safety equation by keeping the human side ignorant, especially in these risky situations. But then again, "who needs training"; it's just an added cost. Eventually, yes, that will be true, once the human response doesn't matter. But during the transition, while it's shared human-machine control, I completely disagree!
 