
FSD fails to detect children in the road

Someone in another thread here mentioned painting road lines that lead off a cliff: would FSD just follow the lines and drive the vehicle off the cliff because it looks like a roadway?
Ah, now you're talking.
Dan O'Dowd & Omar in a simultaneous game of chicken to see if Tesla Vision will drive them off a cliff or save them. For pink slips.
 
Someone in another thread here mentioned painting road lines that lead off a cliff: would FSD just follow the lines and drive the vehicle off the cliff because it looks like a roadway?
In the NNs’ defense, this would be felony manslaughter, and a lot of humans would do the same thing and drive off the cliff. Kind of like removing a stop sign.

But yeah, before we start worrying about all the moral decisions these systems are going to have to make, we could start with robust detection.

Looks like we have years to go to get to an adequate level.
 
In the NNs’ defense, this would be felony manslaughter, and a lot of humans would do the same thing and drive off the cliff. Kind of like removing a stop sign.

But yeah, before we start worrying about all the moral decisions these systems are going to have to make, we could start with robust detection.

Looks like we have years to go to get to an adequate level.
Yeah.
The situation has happened many times without malicious human action. Overpass collapse due to earthquake. Bridge collapse due to deterioration and material storage. Road missing due to landslide or flood. Lane missing due to construction.

Reliable drivable space detection is critical.
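
To make "reliable drivable space detection" concrete, here is a toy sketch of the kind of path gating I have in mind, in Python. Everything in it (the segmentation mask, class ID, and threshold) is invented for illustration; this is nobody's actual stack:

    import numpy as np

    DRIVABLE = 1  # hypothetical class ID from a semantic segmentation net

    def path_is_drivable(seg_mask: np.ndarray,
                         path_pixels: list[tuple[int, int]],
                         min_fraction: float = 0.99) -> bool:
        """Refuse a planned path unless nearly every pixel under it is
        classified as drivable surface. Lane lines alone are never enough:
        paint leading off a cliff fails this check because the surface past
        the edge is not drivable, even though the lines continue."""
        if not path_pixels:
            return False
        hits = sum(int(seg_mask[r, c] == DRIVABLE) for r, c in path_pixels)
        return hits / len(path_pixels) >= min_fraction

The point being that lane geometry should only ever be a hint layered on top of a surface check, never the thing that decides where the car can go.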
 
Yeah.
The situation has happened many times without malicious human action. Overpass collapse due to earthquake. Bridge collapse due to deterioration and material storage. Road missing due to landslide or flood. Lane missing due to construction.

Reliable drivable space detection is critical.
Good point. It doesn't need to be malicious; there are also circumstances like this that would be very rare from an individual perspective but aren't as rare when you apply them across millions of people spread around a country, all using software with the same weaknesses at its core.

Then you think about generalized autonomy for the entire globe and how road infrastructure in many other areas is... well, it's not like it is in North America. And the places that lack the $$$ to keep their infrastructure in good condition tend to be where road accidents/fatalities happen at the highest rates, and where such technology would be the most beneficial.
 
Yeah.
The situation has happened many times without malicious human action. Overpass collapse due to earthquake. Bridge collapse due to deterioration and material storage. Road missing due to landslide or flood. Lane missing due to construction.

Reliable drivable space detection is critical.
I think for that specific scenario, it comes down to this: if it is a situation that would fool a human, is it acceptable for the computer to be fooled? Or will the market hold the computer to a far higher standard?
 
I think for that specific scenario, it comes down to this: if it is a situation that would fool a human, is it acceptable for the computer to be fooled? Or will the market hold the computer to a far higher standard?
The issue I see is that it is theoretically possible to train a computer to recognize any specific situation; so, in retrospect, it could always have avoided whatever the problem was. Thus, any incident is a failing of the SW as it was.

Failures will happen, but they should not be repeated and, via generalization, should become rarer over time, covering more of reality. As opposed to the set of all drivers who, collectively, never improve overall.

In other words: you can fool all of the people some of the time; however, fool computer once, shame on you, fool computer twice, shame on it.
 

I just watched this video. I think it has some insights on the supposed 34" limit to object detection. In the test, the car clearly recognizes the dog (which is around 24" tall) and also recognizes the person on his knees (although it still visualizes him as a full adult) until he basically curled up into a ball (and perhaps the camera lost sight of him). So that pretty much disproves any hard 34" limit to object detection.

There are some things I noticed. For the smaller objects, the car does seem to like to drive around them (so maybe the cones play a big role for this reason), while for full adults, it seems to come to a complete stop and ask the driver to press the accelerator to confirm before proceeding.

The other thing is that if the car approaches right up to the dog, it proceeds after seemingly "forgetting" there is one once the camera loses sight of it. From previous presentations on how persistence works, this kind of makes sense, as there is a limited buffer. FSD also marks objects darker when it is tracking them actively, but it is unknown how long this tracking persists once the camera loses direct sight, or how many objects it can track at a time.

Not sure how to fix this issue of losing track of a low object (without additional hardware like a low parking camera). One way is, when there is a low object, to try to stop in a way that keeps it in view. But if the low object moves closer to the car (or the car overshoots when stopping), it can still lose track, so it's not a perfect solution.

The other way is, when detecting a low object moving in front of the car, to always warn and ask the driver to confirm before proceeding. If done in an automated way (without confirmation), basically assume the object is still in front, and don't move until the object comes back into view of the camera. However, this comes with the challenge of recognizing whether a reappearing object is the same one that was lost, plus other corner cases (like an object getting in front while mostly in the blind spot the whole time). Multiple objects would complicate things further.
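
A rough sketch of that "assume it is still in front" rule, in Python. Every name, field, and threshold below is made up for illustration and has nothing to do with how FSD actually represents tracks:

    from dataclasses import dataclass

    LOW_OBJECT_HEIGHT_M = 0.9  # invented cutoff (~35") for a "low" object

    @dataclass
    class Track:
        track_id: int
        height_m: float
        in_camera_view: bool
        last_seen: float  # timestamp of the last camera confirmation

    def may_proceed(front_tracks: list[Track], now: float,
                    timeout_s: float = 60.0) -> bool:
        """Hold position while any low object that entered the frontal
        blind spot is out of camera view: presume it is still there until
        it reappears, or until a long timeout forces driver confirmation."""
        for t in front_tracks:
            if t.height_m < LOW_OBJECT_HEIGHT_M and not t.in_camera_view:
                if now - t.last_seen < timeout_s:
                    return False  # presumed still in front of the bumper
        return True

Called every planning cycle (e.g. may_proceed(tracks, time.monotonic())), this still leaves the re-identification problem above (is the thing that reappeared the same one that vanished?), but at least it fails safe by default.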

Hardware-wise, though, a parking camera would seem to be a fairly straightforward way to address this and eliminate that front blind spot.
 
So that pretty much disproves any hard 34" limit to object detection.
Hard to say.

1) It’s not clear at what height he disappeared, but he is more than 34” from the knees up.

2) While they show that the object disappears when curled into a ball, this is only the detection portion of the test. Response is a separate portion, which was not tested here for the very short objects (dogs excluded). Remember, in the O’Dowd tests it looked like the object was at least briefly visualized in all three tests, and was hit anyway.

3) As previously mentioned, dogs and cones are in a different category and I would expect they would be detected at lower heights. Smaller humans are potentially just thrown out (or perhaps perceived to be further away though not sure about that?); just because a dog less than 34” is detected (and responded to!) does not mean a human would be.

4) Velocity-dependence of response is another potential issue.

So I don’t think this video disproves any 34” limit on object detection and response. I don’t know whether any such limit exists (evidence suggests there is a gap somewhere though), but this does not rule it out.
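
To spell out why point 3 blocks the inference: if minimum detection heights were per-class (pure speculation on my part; the numbers below are invented), a detected 24" dog would tell you nothing about a 24"-tall human:

    # Invented per-class minimum detection heights, in inches,
    # purely to illustrate the class-dependence argument.
    MIN_DETECT_HEIGHT_IN = {"cone": 18, "dog": 20, "human": 34}

    def is_detected(obj_class: str, height_in: float) -> bool:
        return height_in >= MIN_DETECT_HEIGHT_IN.get(obj_class, 34)

    print(is_detected("dog", 24))    # True: clears the (invented) dog cutoff
    print(is_detected("human", 24))  # False: same height, different class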

Not sure how to fix this issue of losing track of a low object (without additional hardware like a low parking camera). One way is, when there is a low object, to try to stop in a way that keeps it in view. But if the low object moves closer to the car (or the car overshoots when stopping), it can still lose track, so it's not a perfect solution.
This is an issue but seems like a minor one for now relative to the issue at hand. Using parking sensors to track the movement of a known animate object around the car is one option. Stationary items can just be mapped, just like a human does. Anyway, that stuff (and limited memory, etc.) is kind of off-topic here, and there is a lot of discussion on it elsewhere. Obviously there are corner cases (not saying they are rare or should not be handled) with children popping out at the last second, which would be relevant, but we should start with the simpler stuff…
 
Hard to say.

1) It’s not clear at what height he disappeared, but he is more than 34” from the knees up.

2) While they show that the object disappears when curled into a ball, this is only the detection portion of the test. Response is a separate portion, which was not tested here for the very short objects (dogs excluded). Remember, in the O’Dowd tests it looked like the object was at least briefly visualized in all three tests, and was hit anyway.

3) As previously mentioned, dogs and cones are in a different category and I would expect they would be detected at lower heights. Smaller humans are potentially just thrown out (or perhaps perceived to be further away though not sure about that?); just because a dog less than 34” is detected (and responded to!) does not mean a human would be.

4) Velocity-dependence of response is another potential issue.

So I don’t think this video disproves any 34” limit on object detection and response. I don’t know whether any such limit exists (evidence suggests there is a gap somewhere though), but this does not rule it out.
I think it disproves a 34" limit on living-subject detection (the dog is an example) or object detection (as you point out, cones). But you are right that it does not necessarily disprove a 34" limit on human detection. It could be the way the NN is trained (not many samples of children that short). The visualization at least doesn't appear to even have them as a separate category (even though animals are), and just visualizes everyone as an adult.

This may be related, but the NCAP AEB child dummy height is 1154 ± 20 mm (45.4 ± 0.79 inches):
https://cdn.euroncap.com/media/21509/euro-ncap-aeb-vru-test-protocol-v101.pdf
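
Checking that conversion, and comparing it to the supposed limit (the 34" comparison is my own arithmetic):

    MM_PER_IN = 25.4

    dummy_mm, tol_mm = 1154, 20
    print(dummy_mm / MM_PER_IN)  # ≈ 45.43 in
    print(tol_mm / MM_PER_IN)    # ≈ 0.79 in
    print(34 * MM_PER_IN)        # 34 in = 863.6 mm, well below the dummy

So the NCAP child dummy stands roughly 11" above a 34" cutoff, if one exists; in principle a car could pass the NCAP child test and still have a detection gap below the dummy's height.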
This is an issue but seems like a minor one for now relative to the issue at hand. Using parking sensors to track the movement of a known animate object around the car is one option. Stationary items can just be mapped, just like a human does. Anyway, that stuff (and limited memory, etc.) is kind of off-topic here, and there is a lot of discussion on it elsewhere. Obviously there are corner cases (not saying they are rare or should not be handled) with children popping out at the last second, which would be relevant, but we should start with the simpler stuff…
Yes, I guess the first step is to even attempt to come to a stop, but the loss of track still needs to be addressed, because as it is, even when the car comes to a stop, it eventually forgets there is still a low object in front and just proceeds (other than in the examples where it explicitly asks the driver to confirm).
 
Someone in another thread here mentioned painting road lines that lead off a cliff: would FSD just follow the lines and drive the vehicle off the cliff because it looks like a roadway?
I SWEAR the Road Runner did this SAME thing to Wile E. Coyote once. In that episode, I think Wile E. had on rocket-powered roller skates.

In another episode, the Road Runner painted a mural of the highway lanes continuing for miles ahead, but it was an illusion, as the mural was on the side of a flat mountain.
 
Not sure how to fix this issue of losing track of a low object (without additional hardware like a low parking camera). One way is, when there is a low object, to try to stop in a way that keeps it in view. But if the low object moves closer to the car (or the car overshoots when stopping), it can still lose track, so it's not a perfect solution.
It does have ultrasonics, but I wonder if the low-pressure inflated doggo is a difficult object for them to detect.
 
I SWEAR the Road Runner did this SAME thing to Wile E. Coyote once. In that episode, I think Wile E. had on rocket-powered roller skates.

In another episode, the Road Runner painted a mural of the highway lanes continuing for miles ahead, but it was an illusion, as the mural was on the side of a flat mountain.
A recurring theme on that classic cartoon show.
Here's just one example:
 
Whole Mars is still testing FSD Beta on other people's children. Either he set this up, saw the kid circling in the street, or this kid has the worst mom ever.

In any case, he didn't manually disengage, so he was testing FSD Beta on this child.

According to his Twitter thread: "the car is always watching and knows not to ever hit anything".


Now hold on CNBC before you get too excited and call YouTube again, I did NOT tell that little girl to ride her bike in front of the car or plan that in any way. It just happened... happens all the time. Wasn't even the only kid in the street that day

Before the car even pulled out of its parking spot, it noticed that there were pedestrians on the sidewalk. It tempered its acceleration profile just in case. As it started pulling out, it noticed that there were two separately moving VRUs to track. Whether the driver has noticed or not, it is already watching the little girl to see what she will do, and limiting acceleration just in case.

As the car pulled out, it recognized that the child was on a bicycle. This allowed it to more accurately predict the possible future movements of the child and factor that into its route planning. Tracking the movement of the bike eventually ends with the car making a full stop and waiting for the vulnerable road user to get out of the way before continuing.

Just imagine the power of this technology in every car. No matter what the driver is doing — checking their phone, spilling their drink — the car is always watching and knows not to ever hit anything. Think about how many senseless tragedies could be prevented.
 
Whole Mars is still testing FSD Beta on other people's children. Either he set this up, saw the kid circling in the street, or this kid has the worst mom ever.

In any case, he didn't manually disengage, so he was testing FSD Beta on this child.

According to his Twitter thread: "the car is always watching and knows not to ever hit anything".


Now hold on CNBC before you get too excited and call YouTube again, I did NOT tell that little girl to ride her bike in front of the car or plan that in any way. It just happened... happens all the time. Wasn't even the only kid in the street that day

Before the car even pulled out of its parking spot, it noticed that there were pedestrians on the sidewalk. It tempered its acceleration profile just in case. As it started pulling out, it noticed that there were two separately moving VRUs to track. Whether the driver has noticed or not, it is already watching the little girl to see what she will do, and limiting acceleration just in case.

As the car pulled out, it recognized that the child was on a bicycle. This allowed it to more accurately predict the possible future movements of the child and factor that into its route planning. Tracking the movement of the bike eventually ends with the car making a full stop and waiting for the vulnerable road user to get out of the way before continuing.

Just imagine the power of this technology in every car. No matter what the driver is doing — checking their phone, spilling their drink — the car is always watching and knows not to ever hit anything. Think about how many senseless tragedies could be prevented.
I guess the question for Whole Mars is why isn’t Tesla running this awesome system in the car in the background all the time in all vehicles? (Yes it does sound like they are starting to, which is great!)

Anyway, the argument that FSD is good because it will prevent you from hitting children or improves safety is silly - that feature will apply to ALL Teslas, according to Elon - they will never charge for safety features.

So he’ll need to come up with a new argument. Everyone is getting all the safety features to prevent collisions (within the hardware capabilities). The only thing people who get FSD get is a tool which will end up increasing or decreasing the rate at which these dangerous situations are actually encountered (it’s not clear how this will go and it may depend on the driver).

At the current time, it should be assumed that FSD Beta decreases safety, both because it cannot prevent collisions that a human can, and because it puts itself in more risky situations (and that is what the safety driver is for, to diligently avoid tricky situations, through heightened attention whenever using FSD Beta).
 
I guess the question for Whole Mars is why isn’t Tesla running this awesome system in the car in the background all the time in all vehicles? (Yes it does sound like they are starting to, which is great!)

Anyway, the argument that FSD is good because it will prevent you from hitting children or improves safety is silly - that feature will apply to ALL Teslas, according to Elon - they will never charge for safety features.

So he’ll need to come up with a new argument. Everyone is getting all the safety features to prevent collisions (within the hardware capabilities). The only thing people who get FSD get is a tool which will end up increasing or decreasing the rate at which these dangerous situations are actually encountered (it’s not clear how this will go and it may depend on the driver).

At the current time, it should be assumed that FSD Beta decreases safety, both because it cannot prevent collisions that a human can, and because it puts itself in more risky situations (and that is what the safety driver is for, to diligently avoid tricky situations, through heightened attention whenever using FSD Beta).

The thing with Whole Mars is that he has a popular following, is influential in his own way, and says things like this:
"the car is always watching and knows not to ever hit anything"

That makes HIM the thing that is decreasing safety, because people seem to believe him and this kind of silly statement.
 
"the car is always watching and knows not to ever hit anything"
He would claim this is a reference to a hypothetical future, not the current state, when taken in context. I read it that way and it did not jump out at me when read within the text.

But yeah, this sort of talk is way more dangerous than FUD, for Tesla. And of course dangerous to road users.
 
He would claim this is a reference to a hypothetical future, not the current state, when taken in context.

But yeah, this sort of talk is way more dangerous than FUD, for Tesla. And of course dangerous to road users.
I didn't continue his Twitter quotes. This was how it went. I think he's talking about what it can already do:


Just imagine the power of this technology in every car. No matter what the driver is doing — checking their phone, spilling their drink — the car is always watching and knows not to ever hit anything. Think about how many senseless tragedies could be prevented.

It's not a hypothetical. It happens hundreds of times a day through Tesla's active safety features and Autopilot. What kind of a sick person sees this and only thinks about their money? We need this tech yesterday. Idiots...
 