
FSD Visualizations!!!

See 6:30 if the link below doesn't preserve the timestamp.


That is awesome. I love the amateur science that Autopilot inspires.

Interesting. @verygreen had earlier discussed how traffic light detection seems to rely on metadata in the maps; Musk had also mentioned that as the fallback option. Maybe they have moved to fully image-based traffic light detection now.

u/brandonlive on Reddit speculated that maps may be used as a signal for data curation. That is, if the car's NN doesn't detect a stop sign or a traffic light where the map says there should be one, the video clip can be uploaded to Tesla's labelling workforce for annotation. Brilliant idea!
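A rough sketch of what that curation trigger could look like; everything here is purely illustrative (the names, types, and matching radius are all made up, not anything from Tesla's actual stack):

```python
# Hypothetical map-based curation trigger: if the map expects a stop sign
# or traffic light nearby but the vision NN didn't detect one (or vice
# versa), flag the clip for human labelling.

from dataclasses import dataclass
from math import hypot

@dataclass
class Feature:
    kind: str   # e.g. "stop_sign", "traffic_light"
    x: float    # position in a local frame, metres
    y: float

def should_upload_clip(map_features: list[Feature],
                       nn_detections: list[Feature],
                       match_radius_m: float = 15.0) -> bool:
    """True when the map and the NN disagree about a feature's presence."""
    def matched(f: Feature, candidates: list[Feature]) -> bool:
        return any(c.kind == f.kind and
                   hypot(c.x - f.x, c.y - f.y) <= match_radius_m
                   for c in candidates)

    missed = any(not matched(f, nn_detections) for f in map_features)   # candidate false negative
    surplus = any(not matched(d, map_features) for d in nn_detections)  # candidate false positive
    return missed or surplus
```

Either direction of disagreement is worth a clip: a miss is a candidate false negative for the labellers, and a surplus detection is a candidate false positive (or an out-of-date map).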
 
Posted this in the patch thread but figured it makes sense to post here too.



Just tried something I hadn’t yet tried on the 40.50.x update: coming up to a stop sign, I was on Autosteer and manually reduced my TACC set speed to 0. It stopped twice at the rendered stop line. That seems promising; it suggests there is already some stopping logic in there.

I’ll try it again later with some more stop signs, or possibly lights, if there are no other cars around.
 
False alarm: I wasn’t able to reproduce the stop-sign behaviour reliably.
 
FSD visualizations now show parking spots for people with disabilities in the latest 2019.40.50.7 update!! Tesla is definitely adding a lot of good stuff in the visualizations. Making progress towards city driving!! Nice!!

[Screenshot: FSD visualization rendering a disabled parking spot]


You can see more of the visualizations in this video:

 
So many false positives and false negatives...

It's clearly a work in progress, and those false positives and false negatives are why Tesla has so far released the visualizations only as a preview, without any features that let the car act on them. Tesla needs to eliminate those errors before the car can act on what it sees, which is why we don't yet have stopping at red lights and stop signs, or turns at intersections.
 
It does seem like perception is something that could be developed entirely in "shadow mode". For example, if someone stops at a place where you know there's a traffic light, and there's nothing in front of them, you can infer the status of the light.

Not sure what shadow mode has to do with anything, but certainly the idea is that Tesla can use labeled maps to help eliminate false positives and false negatives: the car could check the map to see whether there is supposed to be a traffic light or stop sign there, to validate what camera vision thinks it is seeing.
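To make both halves of this concrete (the map cross-check here, and the quoted shadow-mode inference), here's a rough sketch. All names and thresholds are invented for illustration; none of this is Tesla's actual code:

```python
from typing import Optional

def validate_detection(vision_sees_light: bool,
                       map_says_light: bool,
                       vision_confidence: float) -> str:
    """Cross-check a live camera detection against a labeled map."""
    if vision_sees_light and map_says_light:
        return "confirmed"
    if vision_sees_light:
        # Not on the map: could be a new light, could be a false positive.
        return "confirmed" if vision_confidence > 0.95 else "suspect_false_positive"
    if map_says_light:
        return "suspect_false_negative"
    return "no_light_here"

def infer_light_state(speed_mps: float,
                      dist_to_mapped_light_m: float,
                      has_lead_vehicle: bool) -> Optional[str]:
    """Shadow-mode weak label: guess a light's state from driver behaviour."""
    if has_lead_vehicle:
        return None                   # a stop may just be following traffic
    if dist_to_mapped_light_m < 30.0:
        if speed_mps < 0.5:
            return "probably_red"     # stopped with a clear road ahead
        if speed_mps > 8.0:
            return "probably_green"   # drove through unimpeded
    return None                       # creeping, turning, etc. are ambiguous
```

The first function is the validation idea in this post; the second is the quoted shadow-mode idea: each label is noisy on its own, but aggregated across the fleet they could supervise the perception net without the car ever acting on its output.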
 
We don't really have a precedent for this level of accuracy, do we?

Yes we do: humans can be that good! I think I've accidentally run only one red light in my life (I was really tired and it was very late, and I did notice just before entering the intersection; if there had been cars around, I think I would have snapped out of it earlier). To be clear, I've never run one deliberately, unless it was defective (a next-level challenge for FSD, I guess). I've probably had something like 150k tests at this point (wild guess). Pretty sure that computers will have to be quite a bit better than me, though; I would think at least 100x better.
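For scale, the back-of-envelope numbers behind that guess (taking the 1-in-150k estimate at face value; this is just arithmetic, not a real benchmark):

```python
# One accidental red-light run over ~150,000 intersection encounters,
# and a target of being 100x better than that.

human_error_rate = 1 / 150_000            # ~6.7e-6 per encounter
target_rate = human_error_rate / 100      # ~6.7e-8 per encounter

print(f"human:       ~1 in {150_000:,} ({human_error_rate:.1e})")
print(f"100x better: ~1 in {round(1 / target_rate):,} ({target_rate:.1e})")
# -> 100x better means roughly 1 error in 15 million intersection encounters
```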