
AI experts: true full self-driving cars could be decades away because AI is not good enough yet

In their RSS model, MB does say that AVs should try to avoid accidents if it can be done without causing an accident. Rule #5:

[Image: MB RSS Rule #5]
It says crash. Accident and crash have different implications.

But MB recognizes that some accidents are unavoidable. So AVs cannot be expected to avoid ALL accidents. However, AVs should never directly cause an accident:



Page 17. https://2uj256fs8px404p3p2l7nvkd-wp...021/05/Kevin-Vincent-Regulatory-Framework.pdf
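To make rule #5 concrete, here's a minimal sketch of that decision in code. Everything in it (the Maneuver type, the field names) is made up for illustration and is not taken from MB's actual RSS spec:

```python
# Hypothetical sketch only: the Maneuver type and field names are invented,
# not taken from MB's RSS spec.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    avoids_original_crash: bool
    causes_new_crash: bool

def choose_evasive_maneuver(candidates: list[Maneuver]) -> Maneuver | None:
    """Pick a maneuver that avoids the crash without causing a new one."""
    for m in candidates:
        if m.avoids_original_crash and not m.causes_new_crash:
            return m
    return None  # no safe evasion exists: some crashes are unavoidable

options = [
    Maneuver("swerve into oncoming lane", avoids_original_crash=True, causes_new_crash=True),
    Maneuver("brake hard in lane", avoids_original_crash=True, causes_new_crash=False),
]
print(choose_evasive_maneuver(options).name)  # -> brake hard in lane
```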
It's all pretty obvious.

Corollary: in AV development the only trolley problem is the people who try to bring up the trolley problem.
 
  • Like
Reactions: diplomat33
Nope. This isn't relevant to driving at all. A lot of driving boils down to "don't run into that." Especially if you don't know what it is.
Sure, that's the part of self driving that was solved a decade ago...
It's a lot harder to get "iPod written on paper in front of an apple sitting on a board in a fenced-in backyard".
Or "there's a person walking behind that car", or "there's a moving car behind those bushes".
 
  • Like
Reactions: diplomat33
In what sense is this related to the apple and ipod written on the piece of paper?

(Seriously--take a toy app, get a "wrong" output, and use it to declare "AI needs work" therefore self-driving is impossible. 😄😅 )

Sure, that's the part of self driving that was solved a decade ago...

Or "there's a person walking behind that car", or "there's a moving car behind those bushes".
 
  • Love
Reactions: rxlawdude
In what sense is this related to the apple and ipod written on the piece of paper?

(Seriously--take a toy app, get a "wrong" output, and use it to declare "AI needs work" therefore self-driving is impossible. 😄😅 )
I didn't post it. I was just saying that it is a real problem with computer vision systems. Looking at any single task and saying "AI can't do that as well as a human" and therefore declaring self-driving "impossible" is silly. Machines don't need to drive exactly the way humans do.
 
Here is a real-world example of how vision can be tricked. Tesla Vision does not know the difference between a real car and a car on a billboard:

[Image: FSD visualization rendering the billboard car as a real vehicle]


This is from one of Frenchie's FSD Beta V9 videos:

But is it tricked? Certainly it looks like a car, and it is rendered like a car. But if it doesn't move, and it is not in the roadway, is it a car? Does it impact planning at all?

Also, because you cite "Tesla Vision," you imply that behavior might be different with radar. How so? To the best of my knowledge, a radar return could be used to confirm presence of an object, but lack of a return would not confirm lack of presence.
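To illustrate that asymmetry, here's a hypothetical sketch; the boost value and function name are invented, not from any production fusion stack:

```python
# Hypothetical sketch of the asymmetry: a return confirms presence,
# but no return does not confirm absence.
def fused_confidence(vision_conf: float, radar_hit: bool) -> float:
    """Combine a vision confidence in [0, 1] with a binary radar return."""
    if radar_hit:
        # A corroborating return raises confidence that the object is real.
        return min(1.0, vision_conf + 0.3)
    # No return: keep the vision estimate rather than zeroing it out,
    # since plenty of real objects produce no usable radar return.
    return vision_conf

print(fused_confidence(0.6, radar_hit=True))   # 0.9
print(fused_confidence(0.6, radar_hit=False))  # 0.6
```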
 
  • Like
Reactions: WarpedOne
But is it tricked? Certainly it looks like a car, and it is rendered like a car. But if it doesn't move, and it is not in the roadway, is it a car? Does it impact planning at all?

It is tricked in the sense that it thought that a car on a billboard was a real car when it wasn't. No, it might not have affected planning in this particular case. But it shows that vision can be tricked so it could be tricked in other cases that would affect planning.

Also, because you cite "Tesla Vision," you imply that behavior might be different with radar. How so? To the best of my knowledge, a radar return could be used to confirm presence of an object, but lack of a return would not confirm lack of presence.

I did not say anything about radar. I don't think radar would help in this case. Lidar could help with this case.
 
  • Disagree
Reactions: WarpedOne
It is tricked in the sense that it thought that a car on a billboard was a real car when it wasn't. No, it might not have affected planning in this particular case. But it shows that vision can be tricked so it could be tricked in other cases that would affect planning.
Right. My point is that just because the perception layer sees something as a real car, it is not necessarily processed as a real car higher in the stack. Of course, I don't have evidence to support my case, but I think it is an unwarranted assumption that this object is treated as a real car (although you may very well be correct).
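As a rough sketch of that separation (all names and thresholds here are made up for illustration), a planner can simply gate perception outputs on whether they are inside the drivable corridor or moving:

```python
# Hypothetical sketch: names and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    lateral_offset_m: float   # distance from the ego lane centerline
    speed_mps: float

def relevant_to_planning(det: Detection, corridor_halfwidth_m: float = 3.0) -> bool:
    """Ignore detections that are both stationary and outside the drivable corridor."""
    inside_corridor = abs(det.lateral_offset_m) <= corridor_halfwidth_m
    moving = det.speed_mps > 0.5
    return inside_corridor or moving

billboard_car = Detection("car", lateral_offset_m=8.0, speed_mps=0.0)
lead_car = Detection("car", lateral_offset_m=0.2, speed_mps=12.0)
print(relevant_to_planning(billboard_car))  # False: rendered, but planning can ignore it
print(relevant_to_planning(lead_car))       # True
```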

Lidar could help with this case.
Undoubtedly.
 
  • Like
Reactions: diplomat33
Despite Tesla's recent leaps-and-bounds progress with the latest FSD Beta 9 and its pure-vision, radarless approach, it still can't recognize stationary obstacles: the undrivable space under a series of columns is erroneously treated as a drivable lane with no obstacles to the left of the car:

[Image: FSD visualization showing the space under the columns as a clear, drivable lane]


Below are the real-life column obstacles that Tesla's system missed above:

[Photo: the real-life columns missed by the system]



The system was on a collision course with the column during the right turn below, and the driver had to manually steer it away:

[Screenshot: the planned right turn heading toward the column]



Recognizing obstacles should be a basic task, so this failure may indicate that it will take Tesla a long time to get there. And it happens despite the advance to pure vision, the radarless approach that eliminates the sensor-fusion issue and led Tesla to stop installing radar in recent Model 3 and Model Y builds in North America.
 
Your responses sound like hand-waving and deflection.

My take:
That example wasn't applicable to "AI" or autonomous driving. I don't see it as a "real" problem in computer vision systems, either.

I didn't post it. I was just saying that it is a real problem with computer vision systems. Looking at any single task and saying "AI can't do that as well as a human" and therefore declaring self-driving "impossible" is silly. Machines don't need to drive exactly the way humans do.
 
This shows that the vision stack is working. The Tesla would probably register it as a stopped/parked vehicle and avoid running into it.

Radar probably wouldn't help since it's not moving and would likely get filtered out.
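Here's a hypothetical sketch of why that filtering happens: automotive radars commonly reject returns whose Doppler matches the stationary background, so a stopped car (or a billboard) looks like roadside clutter. Thresholds and names are invented:

```python
# Hypothetical sketch: thresholds and field names are invented.
def keep_radar_return(range_rate_mps: float, ego_speed_mps: float,
                      threshold_mps: float = 0.5) -> bool:
    """Keep a return only if the target is moving relative to the ground."""
    ground_speed = range_rate_mps + ego_speed_mps  # rough correction for a target dead ahead
    return abs(ground_speed) > threshold_mps

# A stopped car (or a car on a billboard) closes at exactly -ego_speed,
# so it looks like roadside clutter and gets dropped.
print(keep_radar_return(range_rate_mps=-20.0, ego_speed_mps=20.0))  # False: filtered out
print(keep_radar_return(range_rate_mps=-25.0, ego_speed_mps=20.0))  # True: oncoming traffic
```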

But is it tricked? Certainly it looks like a car, and it is rendered like a car. But if it doesn't move, and it is not in the roadway, is it a car? Does it impact planning at all?

Also, because you cite "Tesla Vision," you imply that behavior might be different with radar. How so? To the best of my knowledge, a radar return could be used to confirm presence of an object, but lack of a return would not confirm lack of presence.

This could be an interesting sensor fusion case.

I did not say anything about radar. I don't think radar would help in this case. Lidar could help with this case.
 
Right. My point is that just because the perception layer sees something as a real car, it is not necessarily processed as a real car higher in the stack. Of course, I don't have evidence to support my case, but I think it is an unwarranted assumption that this object is treated as a real car (although you may very well be correct).
This image from Frenchie's other drive shows a small construction vehicle/zone wrongly rendered as a huge semi truck. FSD was happily driving through it. Certainly didn't slam on the brakes.

Objects shown on the screen can't be trusted to be real, nor can you trust that the car will actually avoid them.

[Screenshot: a small construction vehicle/zone rendered as a huge semi truck]
 
Your responses sound like hand-waving and deflection.

My take:
That example wasn't applicable to "AI" or autonomous driving. I don't see it as a "real" problem in computer vision systems, either.
It's hand-waving because I'm not an expert in the field. I see that experts in the field say it's a real problem, and there's plenty of recent research on it, so I assume it still is. (Google Scholar search: https://scholar.google.com/scholar?...clusion&hl=en&as_sdt=0,5&as_ylo=2020&as_vis=1)
I viewed it as an example of object occlusion, though I guess it was meant as an example of object misrecognition.
 
Despite Tesla's recent leaps-and-bounds progress with the latest FSD Beta 9 and its pure-vision, radarless approach, it still can't recognize stationary obstacles: the undrivable space under a series of columns is erroneously treated as a drivable lane with no obstacles to the left of the car:

[Image: FSD visualization showing the space under the columns as a clear, drivable lane]


Below are the real-life column obstacles that Tesla's system missed above:

[Photo: the real-life columns missed by the system]



The system was on a collision course with the column during the right turn below, and the driver had to manually steer it away:

[Screenshot: the planned right turn heading toward the column]


Recognizing obstacles should be a basic task, so this failure may indicate that it will take Tesla a long time to get there. And it happens despite the advance to pure vision, the radarless approach that eliminates the sensor-fusion issue and led Tesla to stop installing radar in recent Model 3 and Model Y builds in North America.


Actually, follow-up tests showed the system does see the objects and puts them in non-drivable space... then proceeds to turn into them. Path planning sux
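A rough sketch of the check that seems to be failing (the grid, resolution, and names are all illustrative): even if perception marks the columns as non-drivable, the planner still has to verify that every point of a candidate path stays out of those cells:

```python
# Hypothetical sketch: the grid, resolution, and names are illustrative.
import numpy as np

def path_is_collision_free(occupancy: np.ndarray,
                           path_xy: list[tuple[float, float]],
                           cell_size_m: float = 0.5) -> bool:
    """Reject any candidate path that passes through an occupied cell."""
    for x, y in path_xy:
        i, j = int(y / cell_size_m), int(x / cell_size_m)
        if occupancy[i, j]:          # True means non-drivable (e.g. a column)
            return False
    return True

grid = np.zeros((40, 40), dtype=bool)
grid[14:16, 20:22] = True            # the column, correctly marked non-drivable
right_turn = [(k * 0.5, 5.0 + 0.1 * k) for k in range(30)]  # candidate turning path
print(path_is_collision_free(grid, right_turn))  # False: this path must be rejected
```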
 
  • Informative
Reactions: diplomat33
Interestingly, China thinks even L3 is still some time away:

"However, experts say that even achieving L3 on public roads is some time away and will take a large amount of money.
“Even if L3 is achieved, the cost will increase steeply, making it harder to commercialise [the technology],” Chen said."

So there are two issues: 1) time: I guess it takes time for the technology to gain competency, and 2) money: even when the technology is mature, the cost is still an issue.

 
I like to apply something similar to the Technology Readiness Level scale to things like this. Technology readiness level - Wikipedia

Everywhere it says "space", replace it with whatever your target is; the assessment model holds fairly true for everything. Even after we see successful tech demonstrations, we're many years away from products landing in consumer hands. Otherwise, consumers end up with half-functional products that end up failing or being outright dangerous.
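As a rough illustration (the level wording is paraphrased from the commonly cited NASA/EU scale, and the AV reading in the comments is just my own, not an official mapping), swapping "space" for driving automation might look like this:

```python
# Level wording paraphrased from the commonly cited NASA/EU TRL scale;
# the AV example in the comments is my own illustration, not an official mapping.
TRL = {
    1: "basic principles observed",
    2: "technology concept formulated",
    3: "experimental proof of concept",
    4: "technology validated in the lab",
    5: "technology validated in a relevant environment",
    6: "technology demonstrated in a relevant environment",
    7: "prototype demonstrated in the operational environment",
    8: "system complete and qualified",
    9: "actual system proven in operation",
}

def describe(level: int) -> str:
    return f"TRL {level}: {TRL[level]}"

# A supervised city-streets demo with safety-driver interventions looks more
# like TRL 6-7 than a finished consumer product (TRL 9).
print(describe(6))
print(describe(9))
```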