
AI experts: true full self-driving cars could be decades away because AI is not good enough yet

Another interesting article, this time on state-of-the-art ML image recognition problems.



[Attachment 685859: image from the article]
All this demonstrates is that when 90% of an object's visible surface is obstructed, you expect something more. What, exactly?

Serious question.
 
I think people expect it to be as good as a human. Recognizing partially obstructed objects is critical to driving.

A human, when paying attention, can basically be a perfect driver.

Your logic implies that a computer, which will always be paying attention, should be a perfect driver.

I would argue that computer-driven cars are useful once they pass human-level accident rates by some margin... 2X, 5X, 10X? I don't know. Even then, the failure modes will be different than human failures.
 
A human, when paying attention, can basically be a perfect driver.

Your logic implies that a computer, which will always be paying attention, should be a perfect driver.

I would argue that computer-driven cars are useful once they pass human-level accident rates by some margin... 2X, 5X, 10X? I don't know. Even then, the failure modes will be different than human failures.
I didn't mean to imply that; I'm hoping 2X human safety is enough for cultural acceptance.
No one knows what it will take to achieve that, though. Current computer vision may be "good enough" if you can improve other components of the system.
 
I didn't mean to imply that; I'm hoping 2X human safety is enough for cultural acceptance.
No one knows what it will take to achieve that, though. Current computer vision may be "good enough" if you can improve other components of the system.

Amnon says that 50,000 AVs driving at average human safety would mean about 1 accident every hour, which would be an unacceptable business model. So he is arguing for 1000x safer than humans.


From Amnon's calculation, we can extrapolate that 2x safer than humans would mean 50,000 AVs getting into an accident every 2 hours. I don't think that would be acceptable either.
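As a quick sanity check, here's a minimal back-of-the-envelope sketch of the fleet math, assuming Amnon's figures (roughly 1 accident per 50,000 driving hours for an average human, and a 50,000-car fleet):

```python
# Back-of-the-envelope fleet accident frequency, using Amnon's figures:
# ~1 accident per 50,000 driving hours for an average human,
# and a fleet of 50,000 AVs driving continuously.
HUMAN_HOURS_PER_ACCIDENT = 50_000
FLEET_SIZE = 50_000

for safety_multiplier in [1, 2, 5, 100, 1000]:
    av_hours_per_accident = HUMAN_HOURS_PER_ACCIDENT * safety_multiplier
    # With FLEET_SIZE cars driving in parallel, the fleet-wide
    # interval between accidents shrinks by a factor of FLEET_SIZE.
    fleet_hours = av_hours_per_accident / FLEET_SIZE
    print(f"{safety_multiplier:>5}x safer: one fleet accident every {fleet_hours:g} hours")
```

Even at 100x safer, a 50,000-car fleet would still see an accident roughly every four days.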

This study surveyed 499 people and found that cultural acceptance is around 100x safer than humans. Tolerable risk would be 4-5x safer than humans.

Two risk-acceptance criteria emerged: the tolerable risk criterion, which indicates that SDVs should be four to five times as safe as HDVs, and the broadly acceptable risk criterion, which suggests that half of the respondents hoped that the traffic risk of SDVs would be two orders of magnitude lower than the current estimated traffic risk.


So I think that 2x safer is not good enough.
 
1000x safer than humans.
This is obviously impossible; even the best defensive driver on earth would not be able to achieve that.
What about a 4-5x reduction in at-fault accidents and only a little better than a human at avoiding not-at-fault accidents? That seems like a realistic scenario for achieving 2x human safety while being culturally acceptable.
 
This is obviously impossible; even the best defensive driver on earth would not be able to achieve that.

I think Amnon means 1000x safer than the average human, not 1000x safer than the best human. But yeah, even 1000x safer than the average human might not be achievable. And the survey found that 100x safer than the average human was acceptable. So 1000x safer might be going too far.

What about a 4-5x reduction in at-fault accidents and only a little better than a human at avoiding not-at-fault accidents? That seems like a realistic scenario for achieving 2x human safety while being culturally acceptable.

Yeah, I think that sounds reasonable to me.

It should be noted that "x times safer than humans" is too vague. It really depends on the operational design domain (ODD). Human safety in LA is different from human safety in a small midwest town. Human safety on the highway on a nice day is different from human safety on the highway in a storm, or human safety in downtown rush hour traffic. So we really need to specify the ODD that we are using to compare AV safety.
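As a rough check on that 4-5x scenario, here's a minimal sketch assuming, purely for illustration, that half of human accidents are at-fault and half are not (a made-up split, not a real statistic):

```python
# Hypothetical split: assume 50% of human accidents are at-fault and
# 50% are not-at-fault. Both shares are illustrative assumptions.
at_fault_share, not_at_fault_share = 0.5, 0.5

at_fault_reduction = 4.5      # 4-5x fewer at-fault accidents
not_at_fault_reduction = 1.2  # "a little better" at avoiding the rest

# Relative accident rate vs. the average human (human = 1.0)
av_rate = (at_fault_share / at_fault_reduction
           + not_at_fault_share / not_at_fault_reduction)
print(f"Overall: {1 / av_rate:.1f}x safer than the average human")  # ~1.9x
```

Under that assumed split, the scenario lands right around the 2x overall figure.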
 
So the average human driver gets in an accident once every 50,000 hours of driving? Does not sound right to me. If I average 3 hours of driving per day (which is waaaay higher than reality) then I would go like 45 years between accidents. Are average humans this safe? (Or am I butchering the statistics?)
 
So the average human driver gets in an accident once every 50,000 hours of driving? Does not sound right to me. If I average 3 hours of driving per day (which is waaaay higher than reality) then I would go like 45 years between accidents. Are average humans this safe? (Or am I butchering the statistics?)
That probably doesn’t count very minor fender benders.
 
I think Amnon means 1000x safer than the average human, not 1000x safer than the best human. But yeah, even 1000x safer than the average human might not be achievable.
I meant 1000x safer than average. There is way too high a percentage of collisions that are simply unavoidable. I was rear ended once while stopped at a light. I suppose if the lane next to me were open (I don't remember) an AV could move over...
 
So the average human driver gets in an accident once every 50,000 hours of driving? Does not sound right to me. If I average 3 hours of driving per day (which is waaaay higher than reality) then I would go like 45 years between accidents. Are average humans this safe? (Or am I butchering the statistics?)

I think the stat is that the average human has 1 accident per 500,000 miles. But to get accidents per hour, you have to convert that to hours of driving, so it depends on your driving speed. At an average speed of 50 mph, it would be 1 accident per 10,000 hours of driving. At an average speed of 10 mph, it would be 1 accident per 50,000 hours of driving.

But like I said, it gets messy because it really depends on ODD and what types of accidents we are counting.

Tech isn’t really the problem; societal acceptance is. Amnon Shashua, chief executive of Intel-owned Mobileye, points out that if a Level 4 system can get to a point of crashing only once every 1m miles — two times better than a human driver — that would risk massive reputational blowback. “If I drive 10 miles per hour, that means I crash once every 100,000 hours of driving,” he explains. “So if I deploy 100,000 cars, I’ll have a crash every hour. From a business perspective that is very, very challenging.”

Source: Financial Times
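To make the speed dependence concrete, a minimal sketch of that conversion (the 500,000-mile figure is the rough average-human stat quoted above):

```python
# Converting a per-mile accident rate into hours between accidents:
# hours_per_accident = miles_per_accident / average_speed_mph
MILES_PER_ACCIDENT = 500_000  # rough average-human figure from above

for avg_speed_mph in [10, 25, 50, 70]:
    hours = MILES_PER_ACCIDENT / avg_speed_mph
    print(f"at {avg_speed_mph} mph: one accident per {hours:,.0f} hours")
```

The slower the average speed of the ODD, the more hours each accident is spread over, which is why Amnon's 10 mph example looks so stark once you multiply by fleet size.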
 
I feel that discussions revolving around human safety while driving don't take inconsistency into consideration. Somebody who's normally a fantastic driver can get into an accident if they get distracted at the exact worst possible time. Or does a human who drives 10 times better than average "only" 99.99% of the time no longer count as a "fantastic" driver?

Computers will, ideally, operate at the same performance all of the time (within the same ODD). So it's difficult to compare that to a human, who can be inconsistent on the same roads, from one day to the next.

When considering traffic accidents, I'd be interested in analysis that classifies accidents as those made by a human driving at lower-than-average attentiveness and those made by a human driving at good attentiveness. It would stand to reason that an AI driver _should_ be able to avoid most, if not all, of the cases in the former category. It's really only accidents in the latter category where we have to decide how good is "good enough" for AI. Eventually you reach unavoidable accidents that you really can't blame on the operator of the car.

I don't really understand what "100x safer than humans" means. I get that it's an average across all domains of driving, but I feel like reality is more nuanced.
 
I meant 1000x safer than average. There is way too high a percentage of collisions that are simply unavoidable. I was rear ended once while stopped at a light. I suppose if the lane next to me were open (I don't remember) an AV could move over...

In their Responsibility-Sensitive Safety (RSS) model, Mobileye does say that AVs should try to avoid accidents if it can be done without causing an accident. Rule #5:

[Image: RSS Rule #5]


But MB recognizes that some accidents are unavoidable. So AVs cannot be expected to avoid ALL accidents. However, AVs should never directly cause an accident:

Mobileye has proposed that AVs should never cause a crash and should significantly reduce the number of crashes caused by other vehicles, but they need not avoid every possible crash. Mobileye reasoned that AVs will not be of any practical use if they are designed to achieve that goal (after all, keeping a vehicle completely safe from crashing "amounts to staying in the parking lot"). Instead, Mobileye proposed that the RSS allow the ADS to make "reasonable assumptions" about the "worst case" actions of other drivers even though the human drivers in other vehicles may sometimes make unreasonable decisions that cause collisions with the AV. For example, Mobileye points out that there may be no way for an AV moving in crowded traffic on a multi-lane highway with vehicles in lanes on both sides as well as in front and back to avoid a collision if one of the surrounding drivers intentionally or negligently steers into the AV.

Page 17. https://2uj256fs8px404p3p2l7nvkd-wp...021/05/Kevin-Vincent-Regulatory-Framework.pdf
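For a flavor of what RSS actually formalizes, here is a minimal sketch of its best-known rule, the minimum safe longitudinal following distance from the RSS paper (Shalev-Shwartz et al., "On a Formal Model of Safe and Scalable Self-driving Cars"). The parameter values below are illustrative, not Mobileye's:

```python
# Minimum safe longitudinal distance per the RSS paper. All default
# parameter values here are illustrative assumptions.

def rss_min_safe_distance(v_rear, v_front, rho=0.5,
                          a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """Minimum gap (m) the rear car must keep so that, even if the front
    car brakes at b_max_brake, the rear car -- after accelerating at
    a_max_accel during the response time rho and then braking at only
    b_min_brake -- never hits it. Speeds in m/s, accelerations in m/s^2."""
    v_rear_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after_rho ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(d, 0.0)

# Example: both cars at 30 m/s (~67 mph) -> roughly 83 m of required gap
print(f"{rss_min_safe_distance(30.0, 30.0):.1f} m")
```

The point of the rule is exactly the "reasonable worst case" idea from the quote: the rear car only has to be safe against bounded assumptions about the front car, not against arbitrary behavior.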
 
That would make more sense. Plus I'd think you could only count accidents where the driver being observed caused the accident. I'm sure there will be many FSD accidents that are caused by the other car. They'll make headlines too, unfortunately.
Humans are actually very good at avoiding accidents that would be caused by others. If you don't count those then you could end up with FSD making the roads less safe.
I feel that discussions revolving around human safety while driving don't take inconsistency into consideration.
That's why you look at average collision rate and average fatality rate.
 
Well, it's not fully obstructed. I can tell exactly what it is with my neural net.
It's just an example of an area where AI still struggles and may not be "good enough" yet.

Well, I think it's a P-P-P-Powerbook.

 
Comment: This was a very dumb example and not applicable in this context.

Nope. This isn't relevant to driving at all. A lot of driving boils down to "don't run into that." Especially if you don't know what it is.

I think people expect it to be as good as a human. Recognizing partially obstructed objects is critical to driving.

Dunno about that. The "AI" isn't struggling here.

1) Humans have persistence built-in.

The first picture is a granny smith apple. Seems like the "AI" didn't keep that in "mind" when trying to classify the second picture. Did they train the "AI" on video, where the first picture ("granny smith") would be factored into the second picture's classification, or was it zero-shot? Doesn't seem like it, so it makes me wonder what they expected.

2) It's not hard-coded, but the AI is giving you exactly what it was trained for. In this case, probably to recognize one thing. I think the "iPod" text, which is the most prominent feature of the second picture, was the right choice.

It's a lot harder to get "iPod written on paper in front of an apple sitting on a board in a fenced-in backyard".
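For context on the zero-shot setup: the apple/"iPod" example came from OpenAI's CLIP, which scores each image independently against a list of text labels. A minimal sketch using the Hugging Face transformers CLIP API (the image file name and label strings here are made up for illustration):

```python
# Zero-shot image classification with CLIP via Hugging Face transformers.
# Each image is scored against the text labels independently; nothing
# from a previous image carries over.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("apple_with_ipod_label.jpg")  # hypothetical file name
labels = ["a photo of a Granny Smith apple", "a photo of an iPod"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs):
    print(f"{label}: {p.item():.2%}")
```

Each call is independent, which is the persistence point above: the model has no memory that the previous frame was an apple.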
