
Did V11.3.6 silently add support for No Turn On Red?

I had a surprising (in a good way) experience today. I thought FSD had no support at all for respecting No Turn on Red signs, but today it clearly waited at a red when it had multiple chances to make a right. Here's the dashcam footage (I wasn't recording with anything higher quality, unfortunately).

The dashcam is flaky, so it didn't actually capture when it finally made the turn, but that part is kind of boring anyway. The interesting part is that it waited, without any creeping, even though it had good opportunities to go (opportunities I'm pretty confident it previously would have taken, or at least crept into). No interventions from me for the whole thing. As soon as it got a green, it went, so it clearly seemed to be waiting on that.

It's not obvious to me whether this is map-based or real-time vision-based.

Is this novel FSD behavior? Anyone seen this before?
 
Yep, I assume if one disengages to prevent an illegal RToR and then reports it, map data might get updated.

That’s a really interesting potential mode of data collection. They may also be sourcing it from OpenStreetMap directly or from the fleet. If they have a marginally reliable vision system for No Turn signs, they could be feeding that through human verification before feeding it into map data.
 
That’s a really interesting potential mode of data collection. They may also be sourcing it from OpenStreetMap directly or from the fleet. If they have a marginally reliable vision system for No Turn signs, they could be feeding that through human verification before feeding it into map data.
I just checked a few intersections near my house with NRToR signage and none had the restriction=no_right_turn_on_red tag on OSM, but some others probably do.
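(For anyone who wants to poke at their own area, a query against the public Overpass API along these lines should work. The bounding box below is a placeholder, and I'm assuming restrictions are tagged exactly restriction=no_right_turn_on_red, which may not hold everywhere.)

```python
import requests

# Placeholder bounding box (roughly Seattle): south,west,north,east.
BBOX = "47.50,-122.45,47.70,-122.20"

# Overpass QL: find turn-restriction relations with the tag discussed above.
query = f"""
[out:json][timeout:25];
relation["restriction"="no_right_turn_on_red"]({BBOX});
out tags center;
"""

resp = requests.post(
    "https://overpass-api.de/api/interpreter", data={"data": query}
)
resp.raise_for_status()

for rel in resp.json().get("elements", []):
    center = rel.get("center", {})
    print(rel["id"], center.get("lat"), center.get("lon"), rel.get("tags"))
```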
 
I just checked a few intersections near my house with NRToR signage and none had the restriction=no_right_turn_on_red tag on OSM, but some others probably do.

And the car can handle these correctly? That rules out OSM as the only source of this info, then.

I'm thinking this is probably Tesla fleet-collected. Detecting NRToR signs shouldn't be a particularly difficult vision (and/or OCR) problem, so it's not surprising that they're working on it (even if it's not utilized "live" in real time on the car at the moment).
 
And the car can handle these correctly? That rules out OSM as the only source of this info, then.

I'm thinking this is probably Tesla fleet-collected. Detecting NRToR signs shouldn't be a particularly difficult vision (and/or OCR) problem, so it's not surprising that they're working on it (even if it's not utilized "live" in real time on the car at the moment).
No, sorry, I should have mentioned that the car ignores the NRToR signs at all those intersections.

This presents a difficult problem for vision because signage in the US is wildly inconsistent.
 
This presents a difficult problem for vision because signage in the US is wildly inconsistent.

It's definitely harder than, say, stop signs. But it's a problem where optical character recognition might actually be a better approach. OCR would have no problem extracting the text from signs (with close to 100% accuracy), and then just filtering down to every sign that contains the words "red" and "turn" and "no" probably gets you pretty close to the set you're looking for. Apply some human filtering to find 5 samples of each of the top 50 variants of NRTOR signs in the US and you're probably pretty close to having a sufficient training set to detect 95% of cases. At least a rough V1 version.
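To make that concrete, the keyword filter could be as simple as something like this (a rough sketch; it assumes pytesseract for OCR, and that some upstream detector has already produced the sign crops):

```python
import re

import pytesseract
from PIL import Image

# Words that, together, strongly suggest a no-turn-on-red sign.
KEYWORDS = {"no", "turn", "red"}

def looks_like_nrtor(image_path: str) -> bool:
    """True if the OCR'd sign text contains all of the NRTOR keywords."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    words = set(re.findall(r"[a-z]+", text))  # strip punctuation/digits
    return KEYWORDS.issubset(words)

# Hypothetical crops produced by some upstream sign detector.
sign_crops = ["sign_0001.png", "sign_0002.png"]
candidates = [p for p in sign_crops if looks_like_nrtor(p)]
print(candidates)  # the subset worth sending to human review
```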

I'm speculating somewhat here, but despite the variety of signs this doesn't seem like an overwhelmingly hard problem for a company with Tesla's resources to build a rough solution to, and I wouldn't be at all surprised if our cars today are looking for NRTOR signs in the background all the time.

Much harder would be a general vision system or OCR-based system that can understand any right turn on red sign, with any wording.
 
But it's a problem where optical character recognition might actually be a better approach
I disagree. Anything that is static on a route should be stored somewhere and communicated to the car, which is how I understand Tesla does it. From there, the car has to figure out the rest: cars, bikes, pedestrians, debris, animals crossing, and so forth. Then there are the hybrids: things that are static but temporary. Potholes, construction, etc.
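To illustrate, a toy version of that static-map lookup (the junction IDs and flag names here are invented):

```python
# Toy illustration: static facts live in a map layer the car consults,
# rather than being recognized live. Junction IDs and flags are made up.
STATIC_MAP_LAYER = {
    "junction_1842": {"no_right_turn_on_red"},
    "junction_1843": set(),
}

def may_turn_right_on_red(junction_id: str) -> bool:
    restrictions = STATIC_MAP_LAYER.get(junction_id, set())
    return "no_right_turn_on_red" not in restrictions

print(may_turn_right_on_red("junction_1842"))  # False -> wait for the green
```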

Perhaps our cars could be turned into mapping devices when we're driving manually.
 
I disagree. Anything that is static on a route should be stored somewhere and communicated to the car, which is how I understand Tesla does it. From there, the car has to figure out the rest: cars, bikes, pedestrians, debris, animals crossing, and so forth. Then there are the hybrids: things that are static but temporary. Potholes, construction, etc.

Perhaps our cars could be turned into mapping devices when we're driving manually.

I think you misunderstood my point a little bit. I was comparing OCR vs neural net machine learning. The latter doesn't "read" signs; it's purely doing visual pattern matching. OCR actually detects each letter and/or word.

I was just pointing out that NRTOR signs are not visually the same across the US, but they do have common words that make them amenable to using OCR and looking for keywords.
 
I think you misunderstood my point a little bit.
Not that I ever do that.
I was comparing OCR vs neural net machine learning.
Yes, but you did go on to speculate that Tesla might already be doing the OCR stuff. I just can't see value in the car running any form of recognition software on its own. If I confused things by quoting the wrong bit of your post, I apologize.
 
Yes, but you did go on to speculate that Tesla might already be doing the OCR stuff. I just can't see value in the car running any form of recognition software on its own
I think OCR for speed limit signs seems reasonable. (I'm pretty sure that Mobileye does that.)

That way you train the NN to recognize a speed limit sign, then pass it to OCR to determine the actual limit on the sign. (Otherwise, I think you would have to train it on every possible version of every possible speed limit in both MPH and km/h.)
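As a rough sketch of that division of labor (everything here is made up for illustration: the crop box, the file name, and pytesseract standing in for whatever OCR engine would actually be used):

```python
import re

import pytesseract
from PIL import Image

def read_speed_limit(
    frame: Image.Image, box: tuple[int, int, int, int]
) -> int | None:
    """Stage 2: OCR the crop that a separate sign-detector NN produced.

    `box` is the detector's bounding box as (left, top, right, bottom).
    Returns the posted limit as an int, or None if no number is found.
    """
    crop = frame.crop(box)
    # --psm 7 tells Tesseract to treat the crop as a single line of text.
    text = pytesseract.image_to_string(crop, config="--psm 7")
    match = re.search(r"\d{2,3}", text)
    return int(match.group()) if match else None

# Hypothetical usage: the NN flagged a sign at this box in a dashcam frame.
frame = Image.open("dashcam_frame.png")
print(read_speed_limit(frame, (410, 120, 470, 200)))  # e.g. 35; MPH vs km/h
                                                      # still needs region context
```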
 
Yes, but you did go on to speculate that Tesla might already be doing the OCR stuff. I just can't see value in the car running any form of recognition software on its own. If I confused things by quoting the wrong bit of your post, I apologize.

All good!

I don't think it's impossible that the car is running OCR in the background (why not? It's not that compute-intensive to run on one image). It seems well within the realm of possibility that when the car sees a sign (other than the usual stop signs, yields, etc.), it might try to run OCR in the background to extract the text from it ("No right turn on red"). Or upload a short video of the sign to Tesla, where they run OCR + human post-processing in the backend.

They could easily be fleet-sourcing where NRTOR signs are, and even collecting training data for NRTOR sign detection. I have no real evidence they're actually doing it, but it seems totally doable and is consistent with how Tesla leverages their fleet for shadow testing and data collection.