The trouble is it’s not quite as clear-cut as that. Based on the few hints we’ve had so far, it’s pretty clear that the Vision system uses map data as an input to the NN ... so if the Vision system isn’t sure (to use a too-crude example) whether the road ahead has 2 or 3 lanes, it can use map information to resolve the ambiguity. Of course it’s much more complex than that, but it means that when map data and vision data differ, the car may reach a point where its confidence level is too low to continue. Sure, as the vision system improves Tesla can weight it more heavily than map data, but you are still going to face times when the two are so dissonant that the car cannot progress by itself (which seems pretty sensible for an L2 system).

There is also safety to be considered. If the vision system sees a 50 mph road sign with 60% confidence, but the map system has a 40 mph limit, which should the car choose? If the car saw a 25 mph sign with the same 60% confidence, which should it choose? So it’s not just a matter of correctness, it’s a matter of what is the safest course of action.
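To make that concrete, here's a toy sketch of the kind of arbitration I mean. This is entirely hypothetical (the real system is learned, not hand-written rules), and every name and threshold is invented:

```python
# Hypothetical sketch of vision-vs-map speed arbitration; not Tesla's code.
from dataclasses import dataclass

@dataclass
class SpeedEstimate:
    limit_mph: float
    confidence: float  # 0.0 to 1.0

def resolve_speed(vision: SpeedEstimate, map_limit_mph: float,
                  trust_threshold: float = 0.8) -> float:
    """Pick a speed limit when vision and map disagree.

    If vision is confident, believe the camera (signs can change faster
    than maps are updated).  If vision is unsure, fall back to the
    *lower* of the two values: under uncertainty, slower is the safer
    default.
    """
    if abs(vision.limit_mph - map_limit_mph) < 1.0:
        return map_limit_mph                       # no real conflict
    if vision.confidence >= trust_threshold:
        return vision.limit_mph                    # confident read wins
    return min(vision.limit_mph, map_limit_mph)    # hedge toward slower

# The two 60%-confidence examples above:
print(resolve_speed(SpeedEstimate(50, 0.6), 40))   # -> 40 (map is slower)
print(resolve_speed(SpeedEstimate(25, 0.6), 40))   # -> 25 (vision is slower)
```

Note that the same rule answers both questions differently, which is the point: the deciding factor is safety, not which sensor is "right".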
I don't disagree with anything you wrote. This is a complex problem, so of course the solution (or "solution space" in a universe of possible behaviors) is not "just" a matter of anything. But in order to discuss it in a forum such as this, one has to be able to propose a necessarily simplified description of an approach that seems promising. That's the intent of my subsequently posted prioritized list and I think it's a pretty sensible one, in response to @sleepydoc's prior question about what to do in conflicts between Vision and map.

I don't automatically subscribe to the notion that the AV should respond to everything the way a human would, but it's a reasonable principle when in doubt - first because other road users expect it and this is likely to be true for the foreseeable future, and second because the road engineering, to the extent it is properly executed, is based on providing human drivers with mostly familiar visual cues.

I'd be completely ready to say that the AV should take advantage of its specific and sometimes superhuman capabilities, to make up for its lack of human experience and intuition. And in theory, this could include the ability to utilize an accurate and precise (HD) map. However, even in the best of circumstances, if Tesla devoted significant resources to maintaining such maps for the entire ODD, they still could not keep up with minute-by-minute developments. And we know that the current Tesla maps are very far from this; they seem to have many gaps, errors or insufficiencies - in fact they seem to be generating more than their fair share of map-vs-vision problems. Because of this reality, I conclude that FSD's reliance on maps should be about the same as a human driver's reliance on maps and/or memory of the route: a guide and expectation, but in no way a substitute for real-time sensory perception of the environment.

I certainly agree also that unexpected differences should be more heavily weighted in favor of "safer" behavior. If the car doesn't see a stop sign or a one-way restriction that the map says is there, that's not an excuse to ignore it. On the other hand, driving slower or stopping is not necessarily safe behavior in high speed traffic. That's why I suggest prioritizing proximate observed traffic behavior above the map info, to help resolve such dilemmas.
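To sketch the priority ordering I'm proposing (purely illustrative; in the real system this would live in the softened logic of the NNs, and all the names here are mine):

```python
# Toy illustration of the priority ordering argued for above; hypothetical,
# not any shipping planner.  Each source votes on whether a stop is
# required; higher-priority sources override lower ones, and an unresolved
# conflict falls back to the safer choice.
from typing import Optional

def must_stop(traffic_says: Optional[bool],   # proximate observed behavior
              vision_says: Optional[bool],    # did the cameras see a sign?
              map_says: Optional[bool]) -> bool:
    # 1. What nearby traffic is actually doing outranks everything.
    if traffic_says is not None:
        return traffic_says
    # 2. Real-time perception outranks stored map data, but a map-only
    #    stop sign is still honored: not seeing it is no excuse to
    #    ignore it (union of the two, biased toward stopping).
    if vision_says is not None:
        return vision_says or bool(map_says)
    # 3. Map alone: trust it; stopping unnecessarily beats running a sign.
    if map_says is not None:
        return map_says
    return False  # no source claims a control; proceed
```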

Finally, I really enjoy this kind of discussion, where the participants' goal is to explore possible solutions and approaches. None of us has a fully-formed, unassailable solution, and including too many disclaimers in the proposals doesn't insulate against that; they just clog up the discussion. For some people, the point of the back-and-forth is to Be Right, draw sides and look for soft spots to tear down the wall. I don't perceive that at all in your posts, so I responded, and I'll just reiterate that I still feel good about my prior comments; they're not meant to be the basis of simple codable Self-Driving Laws, but a proposed framework, implemented in the inherently softened logic of NNs, for resolving the well-known data conflicts under discussion.
 
The map system apparently does not include stop signs.

It absolutely does - there are myriad examples of the car stopping (or starting to stop, specifically citing an upcoming stop sign) for non-visible signs that appear in the maps.

That doesn't mean 100% of all signs, everywhere, are in the maps - there's an insane # of miles of road out there; you won't ever have real-time-perfect maps.
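If you want to check whether a particular sign is in public map data at all, OpenStreetMap is one queryable source (I can't say what, if anything, Tesla actually pulls from it). A quick sketch, assuming the third-party overpy package and made-up coordinates:

```python
# Sketch: check whether OpenStreetMap knows about a stop sign near a
# coordinate.  Assumes the third-party `overpy` package
# (pip install overpy); the coordinates are placeholders.
import overpy

api = overpy.Overpass()
lat, lon = 37.7749, -122.4194   # placeholder, not any specific corner
result = api.query(f"""
    node["highway"="stop"](around:100,{lat},{lon});
    out body;
""")
for node in result.nodes:
    # stop=all vs stop=minor records which approaches must stop
    print(node.id, node.lat, node.lon, node.tags.get("stop", "unspecified"))
```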
 
I don't disagree with anything you wrote. This is a complex problem, so of course the solution (or "solution space" in a universe of possible behaviors) is not "just" a matter of anything. But in order to discuss it in a forum such as this, one has to be able to propose a necessarily simplified description of an approach that seems promising. That's the intent of my subsequently posted prioritized list and I think it's a pretty sensible one, in response to @sleepydoc's prior question about what to do in conflicts between Vision and map.
I also agree with @drtimhill. What he and you wrote is part of what I was trying to say without getting overly complicated.
I don't automatically subscribe to the notion that the AV should respond to everything the way a human would, but it's a reasonable principle when in doubt - first because other road users expect it and this is likely to be true for the foreseeable future, and second because the road engineering, to the extent it is properly executed, is based on providing human drivers with mostly familiar visual cues.

I'd be completely ready to say that the AV should take advantage of its specific and sometimes superhuman capabilities, to make up for its lack of human experience and intuition. And in theory, this could include the ability to utilize an accurate and precise (HD) map. However, even in the best of circumstances, if Tesla devoted significant resources to maintaining such maps for the entire ODD, they still could not keep up with minute-by-minute developments. And we know that the current Tesla maps are very far from this; they seem to have many gaps, errors or insufficiencies - in fact they seem to be generating more than their fair share of map-vs-vision problems. Because of this reality, I conclude that FSD's reliance on maps should be about the same as a human driver's reliance on maps and/or memory of the route: a guide and expectation, but in no way a substitute for real-time sensory perception of the environment.
It's very possible that in the end Tesla's (or Waymo's or xxx's) system will end up just being different; better in some respects and worse than others. Human nature being what it is people will still tend to focus on the areas where humans do better while completely ignoring the areas where the computer wins.

The critical flaw with using GPS and map data is precisely what you state. No matter how much energy is put into maintaining it it will always be out of date to some degree.
I certainly agree also that unexpected differences should be more heavily weighted in favor of "safer" behavior. If the car doesn't see a stop sign or a one-way restriction that the map says is there, that's not an excuse to ignore it. On the other hand, driving slower or stopping is not necessarily safe behavior in high speed traffic. That's why I suggest prioritizing proximate observed traffic behavior above the map info, to help resolve such dilemmas.
Yes - I hope and expect that the algorithms will improve as they evolve.
Finally, I really enjoy this kind of discussion, where the participants' goal is to explore possible solutions and approaches. None of us has a fully-formed, unassailable solution, and including too many disclaimers in the proposals doesn't insulate against that; they just clog up the discussion. For some people, the point of the back-and-forth is to Be Right, draw sides and look for soft spots to tear down the wall. I don't perceive that at all in your posts, so I responded, and I'll just reiterate that I still feel good about my prior comments; they're not meant to be the basis of simple codable Self-Driving Laws, but a proposed framework, implemented in the inherently softened logic of NNs, for resolving the well-known data conflicts under discussion.
👍
 
The map system apparently does not include stop signs.

It absolutely does - there are myriad examples of the car stopping (or starting to stop, specifically citing an upcoming stop sign) for non-visible signs that appear in the maps.
It would be great if we could look forward to reliable mapping of stop signs that FSDb can use for decision-making, whether or not signs are visualized.

My observations stem from FSDb running a particular stop sign (with no visualization of the sign). Despite cleared vegetation and a moved sign finally allowing visualization of the sign, FSDb still does not stop, neatly ignoring the sign. If there are maps that pick up the slack, I don't know where they are or how they can be changed.

Any self driving system that does not stop for stop signs is not safe.
 
It would be great if we could look forward to reliable mapping of stop signs that FSDb can use for decision-making, whether or not signs are visualized.

My observations stem from FSDb running a particular stop sign (with no visualization of the sign). Despite cleared vegetation and a moved sign finally allowing visualization of the sign, FSDb still does not stop, neatly ignoring the sign. If there are maps that pick up the slack, I don't know where they are or how they can be changed.

Any self driving system that does not stop for stop signs is not safe.


Do you hit the special beta snapshot button when it does so? You can also follow that report up with an email to the beta folks (assuming they're still giving out the email address when they add new people).
 
Absolutely (hit the snapshot button). I went to great lengths getting this intersection cleaned up and improved (the trigger was watching a neighbor, who should have known better, run through the sign and nearly get rear-ended by an oncoming car - poor visibility was a significant safety issue). The sign crew claimed there are many similar back roads here in rural Virginia where stop signs are not well seen and vegetation is overgrown.
 
The map system apparently does not include stop signs. Signs on small, wooded roads in moderately hilly areas can be hard to see, and the vision system can miss them, or see them late enough that the car doesn't respond and just runs through the stop sign (which is what happened today). This has to change.
I'm not so sure - I've had several instances where the car stopped like it would for a stop sign even though there wasn't one.
 
The map system apparently does not include stop signs.
Here is a case demonstrating a way in which FSD is not yet ready for prime time: a three-way stop. The small yellow sign below the STOP says "Oncoming traffic does not stop." My Tesla clearly does not read this.

One time it got tested (not in this photo, sorry): my FSD β stopped, but then proceeded just as an oncoming car turned in front of us without stopping (which, per the sign, it was entitled to do). Of course I hit the brakes, hard, and then the report button, and later I sent an explanatory email. Of course I did not wait to see if a crash would have ensued, but my car clearly acted wrongly. It saw the oncoming car but, having not read the yellow sign, assumed the oncoming car would stop.

My point is that this odd-ball 3-way stop intersection (Golf Links Rd X Mountain Blvd, just off of I-580 in Oakland, CA) is probably not mapped as such, and the yellow addendum to the stop sign was certainly not read and understood by FSDβ.

This is not exactly an edge case, but I think it makes clear that reading signs is necessary. That, or perfect mapping (which would also need to include potholes, recent roadkill, etc.).
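To spell out why the plaque matters, in toy form (invented names, illustrative only): the safe assumption after stopping depends on whether oncoming traffic also has a stop, and OSM for one does distinguish stop=all from stop=minor on highway=stop nodes.

```python
# Toy sketch: whether it's safe to proceed after stopping depends on
# whether cross/oncoming traffic also has a stop.  Tags follow the OSM
# convention (stop=all vs stop=minor); the function names are invented.

def oncoming_must_stop(stop_tags: dict) -> bool:
    """Only an explicit all-way stop justifies assuming oncoming yields."""
    return stop_tags.get("stop") == "all"

def safe_to_proceed(stop_tags: dict, oncoming_detected: bool) -> bool:
    if not oncoming_detected:
        return True
    # "Oncoming traffic does not stop" => never assume it will.
    return oncoming_must_stop(stop_tags)

# This intersection, mapped correctly as a minor-road stop:
print(safe_to_proceed({"highway": "stop", "stop": "minor"}, True))  # False: wait
# Mis-mapped (or blindly assumed) as an all-way stop:
print(safe_to_proceed({"highway": "stop", "stop": "all"}, True))    # True: the bug
```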

SW

[Attached photo: 3wayStop.jpg, the stop sign with its yellow "Oncoming traffic does not stop" plaque]
 
Here is a case demonstrating a way in which FSD is not yet ready for prime time: a three-way stop. The small yellow sign below the STOP says "Oncoming traffic does not stop." My Tesla clearly does not read this.

One time it got tested (not in this photo, sorry): my FSD β stopped, but then proceeded just as an oncoming car turned in front of us without stopping (which, per the sign, it was entitled to do). Of course I hit the brakes, hard, and then the report button, and later I sent an explanatory email. Of course I did not wait to see if a crash would have ensued, but my car clearly acted wrongly. It saw the oncoming car but, having not read the yellow sign, assumed the oncoming car would stop.

My point is that this odd-ball 3-way stop intersection (Golf Links Rd X Mountain Blvd, just off of I-580 in Oakland, CA) is probably not mapped as such, and the yellow addendum to the stop sign was certainly not read and understood by FSDβ.

This is not exactly an edge case, but I think it makes clear that reading signs is necessary. That, or perfect mapping (which would also need to include potholes, recent roadkill, etc.).

SW



Situations that are already tricky for humans are going to be tricky for FSD. I can easily see humans unfamiliar with that intersection missing the yellow addendum sign completely and having to slam on their brakes, just like you did on FSDb. It would be interesting to see the accident frequency at that intersection.

Now if FSDb does figure out how to read signs and know what to do properly, it will at that moment be way safer than a human for that intersection.
 
Apologies if this has already been discussed but in this video at the 17:17 mark, we see a near collision with a parked FedEx truck that is sticking out a bit into the lane. It appears FSD Beta has not solved this problem of parked vehicles that partially stick out into the lane. It would seem like FSD Beta is not able to judge when objects are only partially in another lane. It was probably treating the FedEx truck as simply a parked vehicle that is not in the lane.
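If I had to guess at the failure, "in lane" is being treated as binary, when the quantity that matters is how far the truck intrudes. A toy sketch (all numbers invented):

```python
# Illustrative sketch of the classification problem described above: a
# parked vehicle shouldn't be a binary "in my lane / not in my lane";
# what matters is the width of the intrusion.

def lane_intrusion_m(lane_left: float, lane_right: float,
                     obj_left: float, obj_right: float) -> float:
    """Lateral extent (meters) of an object's overlap with the ego lane.

    All arguments are lateral offsets in a common frame, left to right.
    """
    return max(0.0, min(lane_right, obj_right) - max(lane_left, obj_left))

# Ego lane 3.5 m wide; a truck at the curb pokes 0.6 m past the line:
lane_l, lane_r = 0.0, 3.5
truck_l, truck_r = -1.9, 0.6
intrusion = lane_intrusion_m(lane_l, lane_r, truck_l, truck_r)
print(intrusion)        # 0.6 -> must slow, nudge over, or stop
print(intrusion > 0.0)  # True; treating this as "not in lane" is the failure
```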

 
Apologies if this has already been discussed but in this video at the 17:17 mark, we see a near collision with a parked FedEx truck that is sticking out a bit into the lane. It appears FSD Beta has not solved this problem of parked vehicles that partially stick out into the lane. It would seem like FSD Beta is not able to judge when objects are only partially in another lane. It was probably treating the FedEx truck as simply a parked vehicle that is not in the lane.

40 to 0 in the middle of a bridge. No explanation.

 
Apologies if this has already been discussed but in this video at the 17:17 mark, we see a near collision with a parked FedEx truck that is sticking out a bit into the lane. It appears FSD Beta has not solved this problem of parked vehicles that partially stick out into the lane. It would seem like FSD Beta is not able to judge when objects are only partially in another lane. It was probably treating the FedEx truck as simply a parked vehicle that is not in the lane.

Hard to know what it would have done, but it looked (from a very quick glance) like it would have changed lanes and perhaps sideswiped the other vehicle. I think the beeping occurred just after the disengagement, though that is just a quick impression.

Seems like it made a poor decision anyway. It did slow down 2 mph, but it should have slowed down a lot more in that case. Or sped up and shot the gap; I would be cool with that too, and in fact it seemed like, if done soon enough, that would have been safer.

40 to 0 in the middle of a bridge. No explanation.
Yeah, I wonder if it would have behaved differently with a destination (I think it didn't have one; I never run in full-screen mode)? Nothing in the visualizations seemed odd.
 
40 to 0 in the middle of a bridge. No explanation.

Yep. It's really weird too, because there does not appear to be anything that should confuse vision. It is a clear road with clear lane markings, unless maybe the slowdown was a delayed reaction to passing under the part of the bridge that goes over the road with the signs, similar to seeing an overpass. Or maybe it is bad map data that is telling the car the wrong speed limit?
 
Or maybe it is bad map data that is telling the car the wrong speed limit?
It still thought it was 25 mph.

There are lights on the bridge for when the bridge opens, which perhaps are not mapped correctly; that could be what it was looking for. But nothing shows in the visualization, of course, because nothing is there.
 
2-3 seconds is an estimate, but my reaction time was quicker. It's not like I was sitting there passively waiting for the event to occur, but it happened very quickly and would have been impossible to anticipate and prevent.

The point I was trying to make is that autopilot on highways has been around for a long time and phantom braking is still unsolved. FSD is a much higher aspiration with more difficult challenges. I'm not an expert, but my gut tells me that you're not going to achieve full self-driving with only software updates and the current hardware in the cars.
it eventually happens to us all at some point with this crazy FSD beta...


 
it eventually happens to us all at some point with this crazy FSD beta...


Dan D. said:
40 to 0 in the middle of a bridge. No explanation.

 
40 to 0 in the middle of a bridge. No explanation.
I've encountered similar behavior on a bridge. It's slowing down to check whether there's traffic control, probably because it's confused by map data that indicates there's an intersection at that same position. The intersection is actually for the road that runs directly underneath the bridge at that location: Way: 25355805 | OpenStreetMap

FSD Beta 10.13 has release notes related to right of way at intersections even without traffic controls, so Tesla might be on a path to removing map dependency as a backup for intersection detection.
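To spell out the failure mode in toy form (illustrative only): a naive 2-D proximity check sees the two roads cross, while the layer/bridge tags OSM already carries are exactly the data needed to rule the intersection out:

```python
# Sketch of the bridge confusion described above: a 2-D check says the
# geometries intersect, but the roads are at different grades.  Tags
# follow OSM conventions; everything else here is invented.

def same_grade(ego_way_tags: dict, other_way_tags: dict) -> bool:
    """Two ways only truly intersect if they're at the same vertical layer."""
    ego_layer = int(ego_way_tags.get("layer", 0))
    other_layer = int(other_way_tags.get("layer", 0))
    return ego_layer == other_layer

bridge_way = {"highway": "secondary", "bridge": "yes", "layer": "1"}
road_below = {"highway": "residential"}    # implicit layer 0

# Naive 2-D check: geometries cross -> "intersection ahead", car slows.
# Grade-aware check:
print(same_grade(bridge_way, road_below))  # False -> no intersection at grade
```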
 
Apologies if this has already been discussed but in this video at the 17:17 mark, we see a near collision with a parked FedEx truck that is sticking out a bit into the lane. It appears FSD Beta has not solved this problem of parked vehicles that partially stick out into the lane. It would seem like FSD Beta is not able to judge when objects are only partially in another lane. It was probably treating the FedEx truck as simply a parked vehicle that is not in the lane.
I had almost the same scenario 2 or 3 weeks ago with a double-parked UPS truck; a van with more cars behind it was in the left lane, and my car did not slow or show any evidence of wanting to change lanes. I also had to hit the brakes pretty hard.
 
Apologies if this has already been discussed but in this video at the 17:17 mark, we see a near collision with a parked FedEx truck that is sticking out a bit into the lane. It appears FSD Beta has not solved this problem of parked vehicles that partially stick out into the lane. It would seem like FSD Beta is not able to judge when objects are only partially in another lane. It was probably treating the FedEx truck as simply a parked vehicle that is not in the lane.


The blue planning line shows that FSD wanted to go around the FedEx vehicle (no perception issue). But at that moment, because going around would cross into the adjacent lane and a car was coming (that car turned red on the visualization), FSD was prevented from moving over.

At that point, FSD has to brake hard, and given how close we already are to the FedEx truck, I wouldn't blame any human for manually intervening instead of waiting to find out what FSD would do.

Seems like the challenge is for FSD to identify a course of action much sooner. This has been an issue with highway lane changes on NoA since forever. The cameras are always on, but the car doesn't appear to check adjacent lanes for traffic until after the intent to switch lanes is initiated. I really hate when NoA turns on the blinkers when there's clearly a car approaching, and then keeps the blinkers on for a long time as the obstructing car moves ahead. From that other car's perspective, it's unclear whether I even see them, given my blinker is on.
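In other words, the gap check should gate the signal rather than follow it. A toy sketch of that ordering (invented names and thresholds, nothing like the real planner):

```python
# Toy sketch of the ordering complaint above: verify the adjacent-lane
# gap *before* signaling, not after.  All numbers and names invented.
from dataclasses import dataclass

@dataclass
class Track:
    gap_m: float        # longitudinal gap to ego vehicle
    closing_mps: float  # positive = approaching us

def gap_is_safe(track: Track, min_gap_m: float = 15.0,
                min_time_s: float = 3.0) -> bool:
    if track.gap_m < min_gap_m:
        return False
    if track.closing_mps <= 0:
        return True                      # pulling away; gap only grows
    return track.gap_m / track.closing_mps >= min_time_s

def begin_lane_change(nearest_in_target_lane: Track) -> str:
    # Check first; only advertise intent once the gap already exists.
    if gap_is_safe(nearest_in_target_lane):
        return "signal, then move"
    return "wait silently"               # no misleading blinker

print(begin_lane_change(Track(gap_m=10.0, closing_mps=4.0)))  # wait silently
print(begin_lane_change(Track(gap_m=40.0, closing_mps=2.0)))  # signal, then move
```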