
For anyone who wants to know why we *won't* have self-driving cars soon...

Thanks, dondy, for agreeing with me.

Obviously, what Tesla is doing is just NNs -- not AI, as you agree -- which are valid ONLY on the data sets they're trained on -- as you agree. So they can't be used for general-purpose driving. The other car companies are doing the same thing, except Waymo, which is running on rails and so is even less general-purpose. Quod Erat Demonstrandum.

You are right that a deep NN is just a form of regression... but so what?

They are not valid only on their training sets; a well-trained model is valid on the distribution its data spans. If the data variability is large, then unseen examples should generalize well. Managing this training/test variability is a key consideration in best practices.
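A toy sketch of that point (nothing Tesla-specific; the function, ranges, and model size are all mine): a regressor trained on a narrow input range scores well inside that span and typically collapses outside it.

```python
# Toy sketch: a model is valid on the distribution its training data spans.
# Train on x in [-1, 1], then score inside and outside that span.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, (500, 1))        # narrow training distribution
y_train = np.sin(3 * x_train).ravel()
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(x_train, y_train)

x_in = np.linspace(-1, 1, 100).reshape(-1, 1)   # inside the training span
x_out = np.linspace(2, 3, 100).reshape(-1, 1)   # outside the training span
print(model.score(x_in, np.sin(3 * x_in).ravel()))    # high R^2
print(model.score(x_out, np.sin(3 * x_out).ravel()))  # usually collapses
```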

Deep reinforcement learning algorithms are able to succeed in games like Go or chess without knowing the rules explicitly. They just need a well-defined reward function.

Tesla could apply all these techniques to the "game" of driving a car.
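To make the "game" framing concrete, here is a purely hypothetical reward function of the kind an RL formulation of driving might use. Every field and weight below is invented for illustration; nobody's actual system is being described.

```python
# Hypothetical sketch of a reward function for the "game" of driving.
# All fields and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class StepOutcome:
    progress_m: float      # meters advanced along the planned route
    collision: bool        # any contact at all
    lane_violation: bool   # left the lane without intending to
    jerk: float            # m/s^3, harshness of the control inputs
    ran_red_light: bool    # rules-of-the-road violation

def reward(s: StepOutcome) -> float:
    return (1.0 * s.progress_m
            - 1000.0 * s.collision
            - 50.0 * s.lane_violation
            - 5.0 * abs(s.jerk)
            - 20.0 * s.ran_red_light)

print(reward(StepOutcome(12.0, False, False, 0.4, False)))  # 10.0
```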

The main limitation IMO is static object detection. The sensitivity and specificity need to be insanely high so that the car never runs into static objects and never has random slowdowns from false positives.
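A rough back-of-the-envelope (all numbers are my assumptions, not Tesla's) shows why "insanely high" is the right phrase: at camera frame rates, even a one-in-a-million false-positive rate per frame still means phantom braking every few hours.

```python
# Back-of-the-envelope: every number here is an assumption for illustration.
fps = 36                 # assumed camera frame rate
fp_per_frame = 1e-6      # assumed false-positive probability per frame
frames_per_hour = fps * 3600
print(frames_per_hour * fp_per_frame)  # ~0.13 phantom events/hour, one per ~8 h
```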
 
Reliability of machine vision is the most important and arguably the hardest part of a self-driving application. The physics of driving and path prediction for other road users have already been solved; all that is needed is the right amount of computational power.
 
  • Disagree
Reactions: neroden
Deep learning is not a panacea, although none of the failure examples I have seen would be a show-stopper for a machine-vision application used for driving. Where it could be an issue is reasoning and prediction.

Therefore I think that eventually we will end up with a hybrid: symbolic AI for driving/path planning and multiple learning algorithms (ensemble learning) for vision.
 
  • Like
Reactions: neroden
Thanks, dondy, for agreeing with me.

Obviously, what Tesla is doing is just NNs -- not AI, as you agree -- which are valid ONLY on the data sets they're trained on -- as you agree. So they can't be used for general-purpose driving. The other car companies are doing the same thing, except Waymo, which is running on rails and so is even less general-purpose. Quod Erat Demonstrandum.
Obviously Tesla self-driving is not designed for NASCAR. It could probably be used there, but only if retrained and repurposed; it would be a new system.
Obviously, since Tesla is using "behavior cloning" -- i.e., copying driver behavior when drivers disengage from EAP -- self-driving systems won't mimic or even compete against professional drivers. It is not designed for that. It is designed to drive you from point A to point B on public roads, hopefully without killing you or anybody else in the process.
Obviously, for now they focus on the interstate only, because interstate roads are more or less well maintained, i.e. there is much less chance of running into non-standardized features.

Interesting and relevant detail: the lane divider on that California road was totaled right before the Tesla accident. Since we do not know who caused that earlier accident, it was not a Tesla for sure, and definitely not a car on Autopilot. Unsurprisingly, badly designed road features are dangerous for us human drivers too.

Opposite example: people use EAP very successfully (you hear nothing about it = no major accidents) in the Netherlands and northern Germany on all types of 2+ lane roads. They can do it because roads 100 km apart are pretty much identical, and EAP is left to do what it does best: handle known, well-trained situations.
 
Am I correct that the linked article was based on a presentation to the IMF, a very different use case from driving? The IMF would be trying to predict longer-term future trends, whereas driving is current action based on current conditions.

Any driving system is analogous to a hash function, with sensor input mapping to a force vector (steering, braking, accelerating) and signaling as outputs.
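As a sketch of that mapping (all type and field names below are hypothetical, just to pin down the analogy):

```python
# Hypothetical sketch of the sensor-input -> control-output mapping.
from dataclasses import dataclass

@dataclass
class Controls:
    steering: float   # rad, + left / - right
    braking: float    # 0..1
    throttle: float   # 0..1
    signals: int      # bitmask: left, right, hazards

def drive(sensor_frame: bytes) -> Controls:
    """The entire self-driving problem is the body of this function."""
    raise NotImplementedError
```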

Object categorization is important for dealing with special cases like trolleys and emergency vehicles, but object-existence recognition is the critical function. If there is an object in the driving path, it needs to be acted on regardless of whether it is a dog or a calf. Further, the recognition of free space (the noise pattern of bare roadway) can serve as a gating function on object identification (also useful for distance estimation against standard low-clearance vehicles, but not against overhanging loads).
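A minimal sketch of that gating idea (my formulation, not any shipping system's): anything on the planned path that the free-space model does not mark as drivable is an obstacle, whether or not the classifier has a label for it.

```python
# Hypothetical sketch: free-space recognition gating object identification.
import numpy as np

def blocked_cells(freespace_mask: np.ndarray, path_cells: list) -> list:
    """Return path cells that are not free space -- obstacles by definition,
    labeled or not. Classification happens later, only for special handling."""
    return [cell for cell in path_cells if not freespace_mask[cell]]

freespace = np.ones((10, 10), dtype=bool)
freespace[4, 5] = False                     # something (anything) sits here
print(blocked_cells(freespace, [(2, 5), (3, 5), (4, 5)]))  # [(4, 5)]
```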

Rules of the road are important for base driving, but cannot be hard coded lest the system be too brittle. Examples:
Detours on the wrong side of the road
People who don't do 4-way stops priority correctly
Dual lane roundabout with construction in one lane forcing a merge and travel in the 'wrong' lane
Pedestrians lacking in self preservation skills
Other drivers in general

These may require an illegal lane change, thus changing a rule ("shall not") into guidance ("should not").

Regarding alien failure modes: if you only look at the final output, the NN is opaque in its "reasoning". However, if you look at the layers and the kernels therein, along with running heat maps, then you can see what the NN is keying on and whether that makes sense. You would also set up your test cases to verify internal performance. Then, running the final result against as much data as possible (with fuzzing), you find out whether you are overfitted or weak in some area.
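For the curious, a hedged PyTorch sketch of what "looking at the layers" can mean in practice, using forward hooks to pull a crude per-layer heat map (the model and layer are arbitrary stand-ins, not anyone's production network):

```python
# Hedged sketch: inspect what a CNN keys on via forward hooks (PyTorch).
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # stand-in network
activations = {}

def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.layer4.register_forward_hook(save("layer4"))

frame = torch.randn(1, 3, 224, 224)   # stand-in for a camera frame
with torch.no_grad():
    model(frame)

# Channel-averaged activation ~ a crude spatial "heat map" of attention.
heat = activations["layer4"].mean(dim=1)
print(heat.shape)  # torch.Size([1, 7, 7])
```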

Interesting and relevant detail: the lane divider on that California road was totaled right before the Tesla accident. Since we do not know who caused that earlier accident, it was not a Tesla for sure, and definitely not a car on Autopilot.

It was totaled many days before by a drunk driver who survived... (long thread elsewhere on TMC)
 
  • Like
Reactions: Vitold
Examples:
Detours on the wrong side of the road
People who don't do 4-way stops priority correctly
Dual lane roundabout with construction in one lane forcing a merge and travel in the 'wrong' lane
Pedestrians lacking in self preservation skills
Other drivers in general
I too have been trying to build a list of things that Autopilot doesn't do well. Certainly, emergency vehicles were one, what with there not being a microphone onboard (that we know of) to listen for sirens. Maybe when everyone else pulls over it will get a clue.

The only ones I had came from my own experience with AP:
Wide load - the car seems unconcerned about intrusions into your lane, instead focused on staying in the center unless ultrasonics are tripped
Pilot car - I cannot imagine HOW it will work if you come upon a flagman who wants the car to stop and then to follow a pilot car
City merge - When EVERYONE is trying to gain advantage and merging from 12 different sources to get onto the bridge, geez

-Randy
 
  • Love
  • Like
Reactions: mongo and neroden
I too have been trying to build a list of things that Autopilot doesn't do well. Certainly, emergency vehicles were one, what with there not being a microphone onboard (that we know of) to listen for sirens. Maybe when everyone else pulls over it will get a clue.

The only ones I had came from my own experience with AP:
Wide load - the car seems unconcerned about intrusions into your lane, instead focused on staying in the center unless ultrasonics are tripped
Pilot car - I cannot imagine HOW it will work if you come upon a flagman who wants the car to stop and then to follow a pilot car
City merge - When EVERYONE is trying to gain advantage and merging from 12 different sources to get onto the bridge, geez

-Randy

My additions:
- suburban unlined roads (the functional lane moves side to side due to cars parked on either side, oncoming traffic)
- stuff on the road (live and dead animals, trees / branches, chunks of disintegrated tire, etc..)
- double parked cars (mostly in really big cities), and the need at times for the autonomous vehicle to double park itself
- Parking in a grass / dirt lot (at say my local softball field when out for an afternoon of softball)
- crooked hilly road - it cuts medium sharp corners more sharply than it should
- narrow roads - sometimes you need to bias yourself to the far side of your narrow lane so the oncoming semi has room
- My driveway (paved, but unmarked single lane, with sharp corners and a narrow 'bridge' where being precisely in the middle of the driveway is a good idea)
- The parking garage at work (parking garages in general)


My take is that we're going to see radical improvements in driver assist, and I welcome all of them. I find the current Navigate on Autopilot to be an excellent forward step, while not particularly changing the driving experience on the freeway for me (it suggests lane changes - it doesn't change lanes on its own).

Beyond freeway driving, I don't expect my Model X to be a hands off self driving vehicle for years. Full autonomy I can imagine being possible, but I'm thinking decade(s) rather than year(s).
 
  • Love
Reactions: neroden
Obviously, for now they focus on the interstate only, because interstate roads are more or less well maintained, i.e. there is much less chance of running into non-standardized features.
This is the critical point.

Opposite example: people use EAP very successfully (you hear nothing about it = no major accidents) in the Netherlands and northern Germany on all types of 2+ lane roads. They can do it because roads 100 km apart are pretty much identical, and EAP is left to do what it does best: handle known, well-trained situations.
Rural roads (and urban streets) will never be as standardized as the roads you refer to, which has always been one of my main points.
 
Read the damn article I linked, please. Special attention to the section on "alien failure modes".

I read it now and understand everything he is talking about.

Again, alien failure modes in self-driving can occur; hopefully both kinds below can be mostly mitigated.

Like: whoops, we didn't train on refrigerators in the middle of the road when it's both rainy AND sunny outside. Self-driving failure, until we have that data (or at least other similar objects in that environment).

Adversarial examples like adding white noise to pixels are less relevant to a real-world system. But analogs to them could be. What happens if some weird stuff gets on the camera sensors? What about glitter?

GANs can help make the detection manifolds more robust and less sensitive to random crap.
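In the same spirit (though the sketch below is FGSM-style adversarial augmentation rather than a GAN, and purely an illustration of the general technique): perturb inputs along the loss gradient, then train on the perturbed copies as well.

```python
# Hedged sketch: FGSM-style adversarial augmentation, a cousin of the GAN
# idea above -- not anyone's actual pipeline. Assumes images scaled to [0, 1].
import torch

def fgsm(model, loss_fn, x, y, eps=0.03):
    """Return a worst-case-perturbed copy of batch x for robust training."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```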

More data across a more varied set of conditions will reduce the potential for "alien" failure modes.
 
  • Like
Reactions: neroden
Regarding alien failure modes: if you only look at the final output, the NN is opaque in its "reasoning". However, if you look at the layers and the kernels therein, along with running heat maps, then you can see what the NN is keying on and whether that makes sense. You would also set up your test cases to verify internal performance. Then, running the final result against as much data as possible (with fuzzing), you find out whether you are overfitted or weak in some area.

Sure. So far it's consistently simultaneously overfitted and weak even on the basics, and they haven't even come close to specifying the problem scope.

They don't have a decent suite of test cases, and they don't even have a sound evaluation method for how it's performing in test cases.

Right now, all this stuff works as a driver-assist, because there's always a driver to notice the alien failure modes and take over. It will probably act as a better and better driver assist. Enhanced Autopilot has a bright future as a driver assist system.

But it will be a very, very long time before it allows you to sleep in your car for a whole trip. Full self-driving is not happening before your Model 3 rusts out.

I wish to emphasize the contrast between the two. Driver assist -- effective; full self-driving -- fantasy material.
 
  • Helpful
Reactions: Mader Levap
I too have been trying to build a list of things that Autopilot doesn't do well. Certainly, emergency vehicles were one, what with there not being a microphone onboard (that we know of) to listen for sirens. Maybe when everyone else pulls over it will get a clue.

The only ones I had came from my own experience with AP:
Wide load - the car seems unconcerned about intrusions into your lane, instead focused on staying in the center unless ultrasonics are tripped
Pilot car - I cannot imagine HOW it will work if you come upon a flagman who wants the car to stop and then to follow a pilot car
City merge - When EVERYONE is trying to gain advantage and merging from 12 different sources to get onto the bridge, geez

-Randy

For argument's sake: microphones would be useful, but deaf people are allowed licenses. If going with audio detection, you would also need two or more mics for directional cues.
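To illustrate the two-mic point (the spacing and delay numbers below are made up): the time difference of arrival between two microphones a known distance apart gives a bearing to the siren.

```python
# Hedged sketch: bearing from time difference of arrival (TDOA) between
# two mics. Spacing and delay values are illustrative assumptions.
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 C
MIC_SPACING = 0.2       # m, hypothetical mounting width

def bearing_deg(delta_t: float) -> float:
    """delta_t: arrival-time difference in seconds (left mic minus right)."""
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * delta_t / MIC_SPACING))
    return math.degrees(math.asin(s))

print(bearing_deg(0.0003))  # ~31 degrees off-axis
```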

Agree on the atypical loads; suspended things are hard to judge the distance of (see also kangaroos).

Level 5 self-driving seems beyond NP-hard. Level 4 with grass-lot / verbal-direction restrictions seems possible.
 
  • Like
Reactions: neroden
Sure. So far it's consistently simultaneously overfitted and weak even on the basics, and they haven't even come close to specifying the problem scope.

They don't have a decent suite of test cases, and they don't even have a sound evaluation method for how it's performing in test cases.

Right now, all this stuff works as a driver-assist, because there's always a driver to notice the alien failure modes and take over. It will probably act as a better and better driver assist. Enhanced Autopilot has a bright future as a driver assist system.

But it will be a very, very long time before it allows you to sleep in your car for a whole trip. Full self-driving is not happening before your Model 3 rusts out.

I wish to emphasize the contrast between the two. Driver assist -- effective; full self-driving -- fantasy material.

Ha, my Model 3 will never rust out! (I ain't gots one).

I wonder if people are trying to get too fancy with FSD. People could play Pole Position on a 2600, and that was severely low resolution.

Determine if there is an object, if so avoid it. (medium)
Stay in lane (medium)
Stop for pedestrians (hard)
Don't hit overhanging loads (hard)
Know when you aren't sure (depends how well it does normal driving)
Deal with other drivers (impossible)

So yeah, Level 4 with caveats about weird situations (verbal directions, grass lots) I can see. Gets my Tesla pickup to Texas for the BFS launch. But even humans can't do Level 5 (if it is defined as always succeeding; some humans can't drive at that level normally...).
 
  • Like
Reactions: Sudre and neroden
The FSD car doesn't need to be able to handle extreme border cases. It just needs to quit gracefully and call a human. Waymo has both a "help" and "pull over" button for the passenger.

I think the author has tied AI too closely to FSD. FSD is an engineering problem that probably doesn't require even narrow AI.

IIRC the author was even pessimistic about driverless trucks on the expressway. But I don't see why that subset isn't doable, as Waymo has probably already exceeded that difficulty level with their cars in Phoenix.
 
  • Like
Reactions: mongo
So we're getting more realistic now. Driver at all times.
This has certainly been my assumption for the next 5-10 years, since there are so many edge cases.
And let's not forget snow. Many times I've managed to get through snow storms by driving in normally unconventional ways, based on what has or hasn't been plowed and how I can best avoid the deepest snow on the road.
 
I have read often that one of the reasons why self-driving cars will not happen soon is that modeling human behavior is hard. I have been of the opinion that fluid dynamics could be applied to this problem, since people driving are constrained in how they express their individuality.

I was happy to stumble upon a recent research paper saying something similar; here it is if anyone is interested:
Dynamic response and hydrodynamics of polarized crowds
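For context (my gloss of the general approach, not the paper's specific model): hydrodynamic treatments describe the crowd by a density field and a velocity field and start from mass conservation,

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\,\mathbf{v}) = 0,$$

where $\rho$ is the crowd density and $\mathbf{v}$ the local velocity. The modeling work is in the constitutive law relating $\mathbf{v}$ to the crowd's state; the same program could in principle be attempted for constrained traffic flow.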
 
  • Informative
Reactions: replicant