FSD fails to detect children in the road

But in other tests, carried out both informally by other owners and also by government agencies, the car DOES stop. So is it purely a coincidence that tests carried out by a competitor who has a vested interest in Tesla failing are the only ones that DO show the car failing badly?
Well that's not true, there are many videos of the car failing badly that were taken by Tesla owners.

Dirty Tesla videos show the car running over fake dogs and not seeing people who are "ducked down".
TechGeek Tesla runs through mannequins on the street.
Mother Frunker runs over child-sized objects.

All these people are Tesla fans. These informal tests aren't very scientific but show the car running over or failing to detect objects that in theory could be live humans, large dogs, or objects large enough to seriously damage the car if they were, for example, concrete street furniture.

There are many videos of FSD Beta about to hit walls, fences, trains, etc., where it only didn't because the human stopped the car just in time. That is still a fail of an informal "test" while driving: the car didn't stop, and those videos are all from Tesla customers. The Dan O'Dowd video being a "competitor" showing it failing seems actually to be a rare occurrence; I'm not even sure I can name other video tests carried out by a competitor.
 
Great point. Another TMC poster - @Discoducky - conducted a series of tests and found that the Tesla had NO problems detecting and stopping for anything taller than 32". Lower than that is where it fails.
Yep, and that was a static mannequin. I'm going to try moving objects next. I've heard the system is seeing and controlling for objects as small as cats or even squirrels. My kids have lots of stuffies 😉
 
Certainly it was breaking the rules, but obviously we’d like FSD Beta to stop for children whether or not we have hands on the wheel, since it will only detect these things periodically, and the interwebs are inundated with people whom Elon interacts with breaking these cardinal rules CONSTANTLY and extolling the virtues of FSD Beta.
So true. I'd even say the majority of FSD Beta videos show people who don't have their hands on the wheel most of the time, some of the official Tesla and Elon videos too. Plus the drivers with the yoke, who physically can't KEEP their hands on the yoke but have to hover their hands near it in turns. Touching a random bit of the yoke is not "hands on the wheel".

Mostly in fact you see people only touching the wheel in response to the computer's warning to touch the wheel. These are the same people who will be instantly shunned when a crash occurs because they were using it wrong. That's most of the testers - not all of course.

Some do hold the wheel, good for them. For the rest of the testers who don't hold the wheel, "there but for the grace of god [fate] go I". A recognition that others' misfortune could be one's own, if it weren't for the blessing of the Divine, or for fortune or fate. Can it be done by everyone, hands on the wheel at all times?

After all, Tesla knows from the telemetry when people aren't holding the wheel at all times. If they are so quick to blame the driver and know the system needs more monitoring, they should actually decrease the delay time to 2-3 seconds.
 
"If FSD disengaged X seconds before the impact Dan O'Dowd should say that. "

I don’t think it did based on the videos.
1. The released vids are only the failures, thus Dan O'D is already not reporting when FSD stopped/disengaged.
2. First curated vid: FSD kicks out 2 seconds from impact. Drops from 35 to 19 MPH, engages hazards.
[screenshots attached]
 
Other observations: it looks like the mannequin's shape, contrast, and size, combined with its placement on the vehicle centerline along with the lines of cones, cause the system to misjudge the distance.
The shape also does not always map to a stock UI element, possibly due to its small size.

Test 2:
Sees object in lane, would go around if it could. Note luminance of jacket vs background, contrast of jacket to pants, placement on painted line, and turn signal activation.
[screenshot attached]


Test 3:
Sees object in lane and would route around it, if possible. Note the turn signal, which gets canceled. Also the lack of contrast between face/hand and background, plus the distended abdomen.
[screenshots attached]
 
Dirty Tesla videos show the car running over fake dogs and not seeing people who are "ducked down".
TechGeek Tesla runs through mannequins on the street.
Mother Frunker runs over child-sized objects.

All these people are Tesla fans. These informal tests aren't very scientific but show the car running over or failing to detect objects that in theory could be live humans, large dogs, or objects large enough to seriously damage the car if they were, for example, concrete street furniture.
I can't speak to the TechGeek/MF videos as I've not seen those (and you don't provide links). But Dirty Tesla pretty much had the car stopping every time, except when he crouched down so close to the car as to be hidden from the cameras, which would fool a human driver also. As for the other "in theory" objects, I'm not sure how theoretical concrete street furniture is relevant here.
 
1. The released vids are only the failures, thus Dan O'D is already not reporting when FSD stopped/disengaged.
Yes. I pointed this out first in the related threads on this topic. It’s very misleading!
First curated vid: FSD kicks out 2 seconds from impact. Drops from 35 to 19 MPH, engages hazards.
I stand corrected on this, partially. However, note that it is NOT two seconds. That is perhaps 1.5 or 2 seconds of video time, but the video is slowed down. If you look at the distances and the indicated speed, it is closer to 0.5 to 1 second before impact, concurrent with engagement of the hazards (if max braking was applied, a 10 mph speed change at 0.9g (~27 mph to ~17 mph) takes about 0.5 seconds). What functionality exists in this hazard case I do not know (whether it still stops and steers - I believe it does, but I don't know). I have never had it happen.
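
A quick back-of-the-envelope check of that ~0.5 second figure (just a sketch; the 0.9g deceleration and the ~27 to ~17 mph speeds are read off the video, not measured):

Code:
# Rough check: how long does a 10 mph speed change take at 0.9 g?
# Speeds and deceleration are estimates read off the video, not measured values.
MPH_TO_MS = 0.44704          # miles per hour -> metres per second
G = 9.81                     # m/s^2

v_start = 27 * MPH_TO_MS     # ~27 mph when the hazards come on
v_end = 17 * MPH_TO_MS       # ~17 mph at impact
decel = 0.9 * G              # assumed max braking of ~0.9 g

t = (v_start - v_end) / decel
print(f"time for the 10 mph drop: {t:.2f} s")   # ~0.51 s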

Anyway it is true that FSD disengaged, but it disengaged after the collision was unavoidable (the red hands warning probably appeared while the collision was still avoidable).

Anyway it doesn’t really matter (except what I thought to be incorrect was not technically incorrect - I was wrong). It detected the object in the road (objects appear in all three tests) and it hit it anyway. I think we can all agree this is undesirable even if the situation is contrived.



Note the turn signal, which gets canceled
FSD will put on signals and cancel them, yes. Clearly this test was designed to make FSD fail. However, a human would not fail this test (if attentive).

Reiterating above, and correcting record:
So the mannequin was struck when autopilot was disengaged,
Yes though it was not due to inattention. It was apparently due to detected unavoidable collision.
Incorrect
I stand corrected. FSD was disabled, though the car was still likely controlling steering (?) and braking.
 
Mongo, excellent collection of pics to illustrate a point.
Other observations: it looks like the mannequin's shape, contrast, and size, combined with its placement on the vehicle centerline along with the lines of cones, cause the system to misjudge the distance.

Since this is a head-on approach, cues like parallax may be missing. An FSD Rorschach test (pic 3) could interpret this as an irregularity in the road or a tarred patch.
 
Well… today I had some phantom braking for someone's 8' campaign sign on a corner fence where there are crosswalks. I suppose one way to get politicians to act is to see if FSD Beta would detect a candidate's sign and run it over or not…
Oh, Oh, Oh!!! What an opportunity! Let's use politicians for live testing. If a few get run over, meh. I nominate McConnell as the first. I'm pretty sure he is of an age where he signed an agreement that he was willing to be drafted if America needed him.

Edit: If we run out of politicians, there are plenty of lawyers available. Surely FSD will be out of Beta before we get to the used car salesmen?
 
I stand corrected on this, partially. However, note that it is NOT two seconds. That is perhaps 1.5 or 2 seconds of video time, but the video is slowed down. If you look at the distances and the indicated speed, it is closer to 0.5 to 1 second before impact, concurrent with engagement of the hazards (if max braking was applied, a 10 mph speed change at 0.9g (~27 mph to ~17 mph) takes about 0.5 seconds). What functionality exists in this hazard case I do not know (whether it still stops and steers - I believe it does, but I don't know). I have never had it happen.
Good point on video speed. I was watching at 0.25x to get the stills and didn't catch that the original was also slowed. It looked like it went from FSD to FCW (red wheel, no hazards) to AEB (hazards).
 
Good point on video speed. I was watching at 0.25x to get the stills and didn't catch that the original was also slowed. It looked like it went from FSD to FCW (red wheel, no hazards) to AEB (hazards).
How can it be slowed down? Aren't you looking at the "Raw, unedited video footage from four different cameras of the Tesla striking the three (3) mannequins can be viewed here."
 
I'd like to point out that Ashok Elluswamy, the head of Autopilot, said at CVPR '22 that the goal of FSD is not to hit anything. He points out that it is impossible to train the perception NN on every type of object.
"this is obviously an impossible fight to fight and we do not want to fight this fight we just want to avoid the obstacles and hit neither moving nor stationary obstacles or anything for that matter"
He even shows a picture of someone crawling on the road as an example:

Not sure why we can't just stick to the "it's beta" explanation for what happened.
[screenshot attached]
 
I'd like to point out that Ashok Elluswamy, the head of Autopilot, said at CVPR '22 that the goal of FSD is not to hit anything.
"this is obviously an impossible fight to fight and we do not want to fight this fight we just want to avoid the obstacles and hit neither moving nor stationary obstacles or anything for that matter"
He even shows a picture of someone crawling on the road as an example:

Not sure why we can't just stick to the "it's beta" explanation for what happened.
[screenshot attached]
I think we agree it shouldn't hit anything, and that it is not perfect.
However, how imperfect is it? Is it "will hit every child crossing a road" imperfect, which is DoD's message? Or is it "will sometimes hit an object smaller than a typical mobile child when penned in by cones, driving into the sun, at 40 MPH, when the object is placed on a line (with cones), along the centerline of the vehicle, posed sideways, with specific clothing"?

Edge cases need to be dealt with, and this shows an area for improvement, but there is a huge gulf between what DoD is saying and the reality of the test.
Further, FSD Beta is designed with the understanding that the driver is supervising. So if there is ambiguity, it can defer action rather than, say, phantom brake. The confidence in the cones was higher than the confidence in the mannequin, and it didn't reach the threshold of a hard stop. Were it configured as unsupervised, it may well have stopped.
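
To illustrate the kind of trade-off I mean (a purely hypothetical sketch of supervised vs. unsupervised braking thresholds; the function, numbers, and names are made up for illustration and are not Tesla's actual logic):

Code:
# Hypothetical illustration only: a supervised system can demand more confidence
# before hard-braking, deferring ambiguous cases to the driver, while an
# unsupervised one would have to act on lower confidence itself.
def should_hard_stop(object_confidence: float, supervised: bool) -> bool:
    # Made-up thresholds, purely for illustration.
    threshold = 0.9 if supervised else 0.5
    return object_confidence >= threshold

mannequin_confidence = 0.6   # assumed: detected, but ambiguous

print(should_hard_stop(mannequin_confidence, supervised=True))    # False -> defer to the driver
print(should_hard_stop(mannequin_confidence, supervised=False))   # True  -> brake on its own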
 
So what's the difference between a speed bump and someone dressed in black sweats lying passed out on the road? The car will at least slow down before running them over.

Obviously Tesla will train the cars to spot little kids. The company is not a monster desperate to murder people. There are a lot of objects out there to train for, and some that cannot be trained for and must just be avoided.

A few 10.69.0 videos are showing the new grey matter on the visualization. Untrained 3D objects that the car knows to move around. It'll get there.
 
So what's the difference between a speed bump and someone dressed in black sweats lying passed out on the road? The car will at least slow down before running them over.

Obviously Tesla will train the cars to spot little kids. The company is not a monster desperate to murder people. There are a lot of objects out there to train for, and some that cannot be trained for and must just be avoided.

A few 10.69.0 videos are showing the new grey matter on the visualization. Untrained 3D objects that the car knows to move around. It'll get there.

Also needs to avoid leaf piles large enough to hide children.

Puddles vs flooded roads will be an interesting use case; how far down is the bottom?
 
Do we think children do not use residential streets? Is FSD Beta so bad that we don’t expect it to be used there so it doesn’t matter? Do we not expect toddlers to be zooming around cul-de-sacs unattended at times? (This happens all the time…are we going to blame the inattentive parents rather than the FSD Beta car being used by an inattentive driver?) I am so confused.

I’m going to go out on a limb and say FSD Beta isn’t going to detect all children and objects at all times, and that’s why people need to be super careful and not make up things about superhuman abilities it does not have.
I have children; they could NOT use any streets at the age of 1. Trikes, balls, and other fun without the need of constant help happen closer to the age of 2, or rather 3. Between ages 1 and 2, children must be walked holding your hand 95% of the time, and they are never more than, say, 20 feet away from an adult.

My point is that it is misleading and totally irrelevant from the perspective of safety to test PAEB (or AP or FSD) on a 1-year-old child. There is a much more important child-safety scenario when the car starts moving and a 1-year-old is sitting right in front of the car or right behind the car. But that is not what has been presented.

For the safety of FSD, we only need it to detect all RELEVANT objects. Sorry, but 1-year-olds walking in the construction zone of a 40 mph street or going alone to school are not relevant.
 
Other observations: it looks like the mannequin's shape, contrast, and size, combined with its placement on the vehicle centerline along with the lines of cones, cause the system to misjudge the distance.
The shape also does not always map to a stock UI element, possibly due to its small size.
I remember a recent research article on small-object detection by machine vision; there is a perspective problem in determining the distance to objects that are smaller than a typical object of the same class. If Tesla is trained to detect children that exist in the wild, it may have a hard time finding the distance to a 1-year-old child whose size is not much taller than a safety cone. It is likely that Tesla misjudges the distance to that impossible child when it compares its height to the cone height.
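
For what it's worth, here is a rough sketch of why a size prior can throw the distance off (pinhole-camera math with made-up numbers; the focal length and heights are assumptions, not anything from Tesla):

Code:
# Pinhole model: estimated distance = focal_length * assumed_real_height / pixel_height.
# If the detector assumes a "typical small child" height for a much shorter mannequin,
# the estimated distance comes out proportionally too far. All numbers are illustrative.
focal_px = 1000            # assumed focal length, in pixels
true_height_m = 0.7        # assumed mannequin height (roughly cone-sized)
assumed_height_m = 1.1     # assumed "typical small child" prior

true_distance_m = 10.0
pixel_height = focal_px * true_height_m / true_distance_m      # what the camera actually sees

estimated_distance = focal_px * assumed_height_m / pixel_height
print(f"actual: {true_distance_m} m, estimated: {estimated_distance:.1f} m")   # ~15.7 m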
 
The NHTSA says 181 child pedestrians were killed in 2019. That is 1 per 17.7 billion miles driven. The goal of FSD is to achieve greater-than-human safety, and it's not as low a bar as many here seem to think. I don't think a strategy of running over random toddler-size objects in the road is going to achieve greater-than-human safety (which is why Tesla is not pursuing that strategy!).
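
That 17.7 billion figure works out if you divide total US vehicle-miles traveled by the fatality count (the ~3.2 trillion VMT number for 2019 is an approximation I'm assuming here):

Code:
# Sanity check on "1 per 17.7 billion miles driven".
child_pedestrian_deaths_2019 = 181
us_vmt_2019 = 3.2e12        # approximate US vehicle-miles traveled in 2019 (assumed)

miles_per_fatality = us_vmt_2019 / child_pedestrian_deaths_2019
print(f"{miles_per_fatality / 1e9:.1f} billion miles per fatality")   # ~17.7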
 
I remember a recent research article on small-object detection by machine vision; there is a perspective problem in determining the distance to objects that are smaller than a typical object of the same class. If Tesla is trained to detect children that exist in the wild, it may have a hard time finding the distance to a 1-year-old child whose size is not much taller than a safety cone. It is likely that Tesla misjudges the distance to that impossible child when it compares its height to the cone height.
Exactly. In some of the screenshots I posted you can see the object being located just past the cones.

So a small, child-looking object resolves to a child at the distance its size corresponds to, until it gets closer and no longer matches what a child is expected to look like. At that point, it knows something is there, but can't label it for the UI.