I'm still noticing this anomaly occasionally. In evening glare, the color/reflectivity/contrast of road repair patches (even manhole covers) seems to get misinterpreted as road lines. It's as if the algorithm that decides what counts as a valid line isn't looking at shapes or context broadly enough. The photo I took today would probably confuse any EAP, and these patches are everywhere in Chandler. It's how they repair roads before repaving, and dusk falls right around rush hour traffic.
The lines on the M3's display were jumping all over and the steering was wandering. On another occasion I got a quick swerve that I believe was caused by a manhole cover, as if the car tried to avoid it abruptly, even though it was about 100 ft ahead and sitting right on a proper road line. Did the computer think the line turned on a dime? Is there any common sense applied here, or are we just taking the most probable line at every instant without regard to past, future, and reality?
Makes me wonder why there isn't some higher-level autonomy logic that applies assumptions, for example, that roads don't just jump around in real life. Dashed lines have distinct shapes (at first anyway, fading with time but staying within the original boundary) and they typically repeat at a regular spacing. Do the cars around it also jump around? Does the car seem centered within the traffic pattern? Sure, these questions are harder to process, but they may be necessary.
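To make the idea concrete, here's a rough, purely illustrative Python sketch of the kind of temporal sanity filter I'm imagining. This has nothing to do with Tesla's actual code; the class name, thresholds, and offset values are all made up for the example.

```python
from collections import deque

class LaneLineFilter:
    """Toy temporal sanity filter for per-frame lane-offset estimates."""

    def __init__(self, max_jump_m=0.3, history=5):
        self.max_jump_m = max_jump_m         # largest believable lateral shift per frame
        self.recent = deque(maxlen=history)  # recently accepted offsets (meters)

    def update(self, measured_offset_m):
        """Accept a new estimate, or reject it as an outlier and hold the prediction."""
        if not self.recent:
            self.recent.append(measured_offset_m)
            return measured_offset_m

        predicted = sum(self.recent) / len(self.recent)  # crude short-term prediction
        if abs(measured_offset_m - predicted) > self.max_jump_m:
            # Looks like a tar patch / manhole cover misread: ignore the jump
            # and coast on the prediction for this frame.
            return predicted

        self.recent.append(measured_offset_m)
        return measured_offset_m

# A sudden 1.2 m "jump" (e.g. a repair seam read as a line) gets held back:
f = LaneLineFilter()
for z in [1.80, 1.82, 1.79, 3.00, 1.81]:
    print(round(f.update(z), 2))  # 1.8, 1.82, 1.79, 1.8 (held), 1.81
```

Obviously the real perception stack is far more sophisticated than a moving average, but the point is just that a single bad frame shouldn't be allowed to yank the lane sideways.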
The higher frame rates of next-gen AI chips are only as smart as the algorithms and the training behind them. The fact that a lane estimate jumps at all concerns me. Shouldn't these systems take a meta-cognitive approach (maybe a second computer that double-checks the output of the first against higher-level sanity rules), or are we just counting pixels here? Defining the lane is so elementary to FSD that it has to be perfect, and yet this still happens.
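And here's the "second computer" idea taken one step further, as a hypothetical stand-alone second-opinion check: a small rule-based validator that vetoes a frame's lane output if it teleports, implies an absurd lane width, or excludes the car being tracked ahead. Again, every name and threshold here is an assumption for illustration, not how EAP/FSD actually works.

```python
def lane_output_is_plausible(lane_offset_m, prev_offset_m, lane_width_m,
                             lead_vehicle_offset_m=None,
                             max_step_m=0.3, min_width_m=2.5, max_width_m=4.5):
    """Return True if this frame's lane estimate passes basic common-sense rules."""
    # Rule 1: real lanes don't teleport sideways between consecutive frames.
    if abs(lane_offset_m - prev_offset_m) > max_step_m:
        return False
    # Rule 2: the implied lane width has to stay within normal road dimensions.
    if not (min_width_m <= lane_width_m <= max_width_m):
        return False
    # Rule 3: if we're tracking a lead vehicle, it should sit roughly inside
    # the lane we just drew; a lane that excludes the traffic ahead is suspect.
    if lead_vehicle_offset_m is not None:
        if abs(lead_vehicle_offset_m - lane_offset_m) > lane_width_m / 2.0:
            return False
    return True

# A lane center that jumped 1.2 m in one frame gets vetoed; a normal update passes.
print(lane_output_is_plausible(3.00, 1.80, 3.5, lead_vehicle_offset_m=1.8))  # False
print(lane_output_is_plausible(1.82, 1.80, 3.5, lead_vehicle_offset_m=1.8))  # True
```

Even a crude gate like this would have flagged a line that supposedly "turned on a dime" 100 ft ahead.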