BBC Autopilot video

I'd say this also kind of shows a weakness of neural-net-based driving - it's difficult to enforce "corner case behavior". E.g., if the car ahead of you moves out of the way and a "noise" cloud appears on radar moving very quickly towards you, something is fishy - do something (e.g., the same thing the car ahead of you did). I know there can be false positives, but RELIABLE obstacle avoidance is necessary before even thinking about FSD. At least that's how we did it in our (model-scale) robotics projects.
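
Something along these lines (a toy Python sketch of such a hand-coded reflex; all the field names and thresholds here are invented, nothing like Tesla's real stack):

    from dataclasses import dataclass

    @dataclass
    class RadarTrack:
        distance_m: float         # range to the return
        closing_speed_mps: float  # positive = approaching us
        confidence: float         # tracker's belief it's a real object, 0..1

    def fishy_situation(lead_car_changed_lanes: bool, track: RadarTrack) -> bool:
        # The case from above: the car ahead dodges something, and a
        # fast-closing radar return appears where it used to be.
        return (lead_car_changed_lanes
                and track.confidence > 0.8          # suppress obvious false positives
                and track.closing_speed_mps > 10.0  # closing fast
                and track.distance_m < 80.0)        # close enough to matter

    def reflex(lead_car_changed_lanes: bool, track: RadarTrack) -> str:
        if fishy_situation(lead_car_changed_lanes, track):
            return "EVADE"   # e.g. mirror the lead car's lane change, or brake
        return "NORMAL"      # otherwise leave control to the usual (NN) policy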

That shouldn't be a problem for neural net training. Companies like Google, Tesla, or GM run millions of simulations just like that to train their neural nets for vehicles/people suddenly entering the road and such. The AI learns pretty quickly that it's better to steer away from an object, if possible.
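
Very roughly like this (a toy sketch; the scenario parameters and the crude labeling rule are invented, just to show the shape of simulation-generated training data):

    import random

    def random_cut_in_scenario():
        # One simulated episode: something suddenly enters our lane.
        return {
            "ego_speed_mps": random.uniform(10, 35),
            "intruder_type": random.choice(["vehicle", "pedestrian", "debris"]),
            "intruder_gap_m": random.uniform(5, 60),     # how far ahead it appears
            "free_adjacent_lane": random.random() < 0.5,
        }

    def label_best_action(s):
        # Crude "teacher": steer away when an adjacent lane is free and the
        # obstacle is close, otherwise just brake.
        if s["free_adjacent_lane"] and s["intruder_gap_m"] < 30:
            return "steer_away"
        return "brake"

    dataset = []
    for _ in range(1000):  # in practice, millions
        s = random_cut_in_scenario()
        dataset.append((s, label_best_action(s)))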

As long as the sensors detect it, an AI can be trained to avoid it. In real life, though, you need to be really certain that what your sensors tell you is actually true. Because if your car suddenly changes lanes and brakes just because it saw some shadow on the road and the vehicle in front happened to change lanes at the same time, then you can't put that in a consumer vehicle.

And since shadow braking still happens, I don't think shadow lane changing plus braking is a good idea.
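
If you did want to try it, you'd at least gate any evasive maneuver on sensor agreement over several frames, something like this sketch (invented window sizes, and the assumption that radar and camera must agree):

    from collections import deque

    class EvasionGate:
        def __init__(self, window=5, needed=4):
            self.history = deque(maxlen=window)  # last few frames of detections
            self.needed = needed                 # frames that must agree

        def allow_evasion(self, radar_sees_obstacle: bool,
                          camera_sees_obstacle: bool) -> bool:
            # A shadow fools the camera but not the radar, so require both,
            # and require the detection to persist across frames before
            # permitting a lane change plus hard braking.
            self.history.append(radar_sees_obstacle and camera_sees_obstacle)
            return sum(self.history) >= self.needed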
 
I'm not an NN expert (at all), but my understanding is that you have no guarantee what an NN does in a situation that differs even slightly from the training/testing data (based on some unrelated input). E.g., it stops at a red light normally (tested), but if there is someone in a blue jacket waiting by the crossing (untested situation), it suddenly decides not to stop. Or the behavior may vary from version to version.

So I'd expect some low-level (not NN-based) "reflexes" (if wall, then stop), with the intelligence running on top of them.
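
Something like this (hypothetical names and thresholds, just illustrating the layering - the hard-coded reflex can veto the NN, never the reverse):

    BRAKING_DISTANCE_M = 25.0  # invented constant

    def nn_policy(observation):
        # Stand-in for the learned planner; in principle it could propose anything.
        return {"steer": 0.0, "throttle": 0.3, "brake": 0.0}

    def reflex_layer(observation, proposed):
        # Hard-coded "if wall then stop": overrides the NN output.
        if observation["nearest_obstacle_m"] < BRAKING_DISTANCE_M:
            return {"steer": 0.0, "throttle": 0.0, "brake": 1.0}
        return proposed

    def control_step(observation):
        return reflex_layer(observation, nn_policy(observation))

    # The NN wants to cruise, but a wall 10 m ahead trips the reflex:
    print(control_step({"nearest_obstacle_m": 10.0}))  # -> full brake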