
FSD fails to detect children in the road

but purely from a safety standpoint the bar is average humans. What else could it be?
I also agree that if FSD Beta is significantly safer than average humans it should be widely released.
wanting the car to be super-human at driving,
It literally has superhuman capabilities when it comes to reaction time (and arguably its vision system), so not asking that much here, I feel. It should be better at some things than humans at least.


In light of recent posts, it's interesting to think about common ground. It seems that some people are retreating to their familiar dogma here, and I don't think that is really necessary. Mostly we agree, I think?

As has hopefully been mostly clear from the beginning, from my comments in this and a related thread:

1) I believe this was a contrived test.
2) I don't think DOD (Dan O'Dowd) is very interested in improving safety.
3) FSD has limitations, which are likely being reduced over time.
4) I don’t think the results of this test were surprising to Tesla’s FSD team.
5) If FSD is significantly safer than an average human driver it should certainly be released and used by many people.
6) I believe there is still quite a lot of work to get to that safety level.
7) I complain a lot about smoothness elsewhere but in reality I think that is probably easy to fix relative to driving properly & safely.
8) I think it is dangerous to ascribe abilities to FSD it currently does not have, and that it is dangerous to view any abilities it has as black and white “it can or it can’t.”

Not sure if there is any common ground. Hopefully there is!

Looking forward to seeing this scenario being tested on 10.69.2. I fear Dan O’Dowd will not come through for us.
 
It literally has superhuman capabilities when it comes to reaction time (and arguably its vision system), so not asking that much here, I feel. It should be better at some things than humans at least.
Clearly it won't have the vices of humans (drunkenness, lack of attention, distraction), and this alone will help it get better. It will probably never have the full skills of a human driver, but it will operate consistently at all times. This in itself seems to me a huge benefit ... most accidents seem to be caused by inattention or stupid mistakes.
 
I guess we need a brave volunteer to never disengage and find out. Otherwise we have no idea. :confused:
Not it!
lol yes ... me neither ... but that IS part of the problem. We all grab the wheel back when we think the car is about to do something bad, and no doubt we are right many times ... but how many times would the faster reaction times of the car have got us out of trouble? Probably Tesla can look at the car's "plan" at the moment of disengagement and judge the "what if", but I've no idea what percentage of disengagements fall into the "car would have handled it" bucket.
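Purely as an illustration of that "bucket" tally, here is a minimal sketch of what such a classification could look like, with entirely invented disengagement labels (nothing here reflects Tesla's actual data or tooling):

```python
# Hypothetical sketch: given disengagement events labeled with a judgment of
# whether the car's plan at that moment would have handled the situation,
# tally what fraction fall into the "car would have handled it" bucket.
# The events and labels below are invented for illustration only.

disengagements = [
    {"reason": "unprotected left turn", "car_would_have_handled_it": True},
    {"reason": "late lane change",      "car_would_have_handled_it": True},
    {"reason": "drifted toward curb",   "car_would_have_handled_it": False},
    {"reason": "phantom braking",       "car_would_have_handled_it": True},
    {"reason": "missed stop sign",      "car_would_have_handled_it": False},
]

handled = sum(d["car_would_have_handled_it"] for d in disengagements)
fraction = handled / len(disengagements)
print(f"{handled}/{len(disengagements)} disengagements "
      f"({fraction:.0%}) judged as 'car would have handled it'")
```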
 
lol yes ... me neither ... but that IS part of the problem. We all grab the wheel back when we think the car is about to do something bad, and no doubt we are right many times ... but how many times would the faster reaction times of the car have got us out of trouble? Probably Tesla can look at the car's "plan" at the moment of disengagement and judge the "what if", but I've no idea what percentage of disengagements fall into the "car would have handled it" bucket.
Well if the average beta user is right even once then that proves that we’re so far from human performance that it’s premature to start talking about superhuman performance. Except with regards to smoothness and precision, that should be relatively easy for a machine to achieve.
 
Well if the average beta user is right even once then that proves that we’re so far from human performance that it’s premature to start talking about superhuman performance. Except with regards to smoothness and precision, that should be relatively easy for a machine to achieve.
Depends, how many times did beta being engaged prevent the user from doing something bad?
 
most accidents seem to be caused by inattention or stupid mistakes.
I think the statistics would probably agree with this, but the more interesting statistic to me is how many accidents and deaths are avoided by people driving normally, not drunk and high.

It’s quite possible that FSD Beta is better than the drunkest, highest human, who is having a bad day, and is asleep at the wheel. But that’s not the bar.
 
I think the statistics would probably agree with this, but the more interesting statistic to me is how many accidents and deaths are avoided by people driving normally, not drunk and high.

It’s quite possible that FSD Beta is better than the drunkest, highest human, who is having a bad day, and is asleep at the wheel. But that’s not the bar.
I didn't say it was, and most humans are sensible, cautious drivers. But not all. And that's the point: once ONE Tesla car is a decent driver, they all are. Not most of the cars, all of them.
 
Well if the average beta user is right even once then that proves that we’re so far from human performance that it’s premature to start talking about superhuman performance. Except with regards to smoothness and precision, that should be relatively easy for a machine to achieve.
No, it doesn't. It proves the car isn't perfect ... but perfect isn't the bar, better is the bar. Technically, that's all superhuman means.
 
In terms of overall public safety impact, we can all be aligned on the proposition that FSD can (even must) be released once its accident rate is lower than that of humans (here setting aside endless possible argument about the conditions on that statistical metric).

However, and I think this is a likely development, if those few accidents that do happen on FSD are not familiar human-like accidents, then it will be easy to sensationalize them and attack the concept of FSD deployment, whatever the statistics. This is an inevitable barrier to new-technology acceptance, and traditionally not an easy argument to win with a strategy of citing statistics.

When people see or hear about car accidents, the human tendency is to consider whether "that could have happened to me" (or my family). Further, average drivers consider their skills to be above average, and as humans we trust the situation more if we are in control (here literally "in the driver's seat"). That is why many people fear airplane travel despite overwhelmingly reassuring statistics.

So it will be cold comfort when, on rare occasions, an FSD car does something highly unexpected and crashes (or causes someone else to crash) and an injured party is told how rare this actually is.

For example, a 10e-9 glitch might cause the FSD car to slam on its brakes, maybe because a Wendy's bag with its cartoon of a sweet little girl blows across the highway, and one or more cars in the queue behind it could end up in technically at-fault collisions.

I'm definitely not arguing that properly developed and qualified FSD shouldn't be released, or that the statistically-safer argument is invalid; quite the contrary. But the statistics certainly aren't going to stop the Dan O'Dowds of the world, or the Washington Post staff, or even the local Nine on Your Side news crew, from publishing damning coverage and misrepresentations of FSD safety.
 
No, it doesn't. It proves the car isn't perfect ... but perfect isn't the bar, better is the bar. Technically, that's all superhuman means.
Yep. How often does the average human driver have a severe collision, and how often would FSD have a collision without supervision? That's the question.
So if the average beta user has one disengagement that prevents a severe collision in 5,000 miles driven, you could compare that to the collision rate for a human-driven Tesla (rough numbers sketched below).
Obviously not every severe collision is equally severe, so it's a little more complicated. You keep trying to disagree when we are actually agreeing, I think.
Depends, how many times did beta being engaged prevent the user from doing something bad?
The most it can possibly do is prevent every collision a human driver would have, which is 1 per 2 million miles according to Tesla. Even if we assume it prevents all of those collisions, it still must cause only 1 collision per 2 million miles of its own to be on par with a human.
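To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The 1-per-2-million-miles figure is the Tesla number cited above; the one-prevented-collision-per-5,000-miles figure is just the hypothetical from earlier in this post, not a measurement.

```python
# Rough back-of-the-envelope comparison. Every number here is an assumption
# taken from this thread, not measured data.

# Tesla's quoted figure (per the post above): one collision per 2 million
# miles for a human-driven Tesla.
human_miles_per_collision = 2_000_000

# Hypothetical from earlier in this post: one disengagement that prevents a
# severe collision in 5,000 miles of FSD Beta driving.
beta_miles_per_prevented_collision = 5_000

human_rate = 1 / human_miles_per_collision               # collisions per mile
prevented_rate = 1 / beta_miles_per_prevented_collision  # prevented collisions per mile

print(f"Human collision rate:                  {human_rate:.2e} per mile")
print(f"Hypothetical prevented-collision rate: {prevented_rate:.2e} per mile")

# If those numbers held, driver interventions would be preventing collisions
# far more often than an average human driver actually has them:
print(f"Ratio: {prevented_rate / human_rate:.0f}x")  # -> 400x

# Separately: even if unsupervised FSD prevented every human-caused collision,
# it must also cause no more than one collision of its own per 2 million miles
# to be on par with an average human driver.
fsd_break_even_rate = human_rate
print(f"Break-even FSD-caused collision rate:  {fsd_break_even_rate:.2e} per mile")
```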
 
You keep trying to disagree when we are actually agreeing, I think.

Yep, we are all in agreement here.

It will be very exciting to get to these safety levels. Since Tesla says "wrong thing, worst time," my guess is that we aren't there yet, even without measuring. I can also guess this from my personal experience, though anecdotes are worthless - I am VERY unlucky, after all, so I could be an aberration.
 
Yep, we are all in agreement here.

It will be very exciting to get to these safety levels. Since Tesla says "wrong thing, worst time," my guess is that we aren't there yet, even without measuring. I can also guess this from my personal experience, though anecdotes are worthless - I am VERY unlucky, after all, so I could be an aberration.
I disagree that we are agreeing, though I agree we are disagreeing to agree. I think.