Wouldn't neural networks easily solve bridge/overpass phantom braking?

If there really were neural networks, wouldn't identifying and solving phantom-braking events be the easiest problem to solve?

TBH, while I don't doubt that Tesla looks at video from consumers' cars, I'm not convinced a "neural network", as the industry defines one, truly exists.

Perhaps the ingredients for a NN exist, but the evidence certainly doesn't point to a system that is "learning" to any measurable degree. The USB port power loss, loss of audio, and associated bugs in the latest release indicate a software team continuing to chase its tail on a multitude of small fires it keeps introducing.
 
The bugs you list wouldn't be part of a NN system, I don't think. There are several separate CPUs with their own programming.

The primary NN learning loop would be keyed on when the driver disengages AP. I've had recurring NOA errors that were eventually solved.

Of course, nobody outside the top-level developers knows exactly.
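
To make the disengagement-keyed idea concrete, here's a minimal sketch (Python, purely speculative; names like `on_ap_disengaged` and `upload_clip` are made up, not Tesla's software) of how a car could ring-buffer recent camera frames and flag the clip for upload the moment the driver takes over:

```python
# Speculative sketch of a disengagement-keyed capture loop. Everything
# here (names, rates, windows) is hypothetical, not Tesla's software.
import time
from collections import deque

BUFFER_SECONDS = 10   # assumed pre-event window worth keeping
FRAME_RATE = 36       # assumed camera frame rate (fps)

snapshot_buffer = deque(maxlen=BUFFER_SECONDS * FRAME_RATE)

def on_camera_frame(frame):
    """Continuously ring-buffer the most recent frames."""
    snapshot_buffer.append((time.time(), frame))

def on_ap_disengaged(reason):
    """Driver takeover is the training signal: it marks a moment where
    the car's behavior disagreed with a human's judgment."""
    clip = list(snapshot_buffer)
    upload_clip(clip, label_hint=reason)

def upload_clip(clip, label_hint):
    # Placeholder: in reality this would queue the clip for upload.
    print(f"queued {len(clip)} frames, hint={label_hint}")
```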
 
How well the software works has nothing to do with whether or not it is a neural network. You can have well-programmed neural networks and poorly programmed ones. But all of that is beside the point: it doesn't matter what the underlying software is, only how well it works.

And the current implementation obviously does have some limitations, including recognizing overpasses (things that are obvious to humans, but not to the software). I suspect part of that is due to how much trust Tesla is willing to place in the cameras vs. the radar. It is the radar that is confused by overpasses, and when there is a discrepancy between the radar and the cameras, the current implementation sometimes seems to trust the radar (which can tell that there is a blockage ahead, but not exactly where it is).

This is one reason why some people say that LIDAR is necessary. LIDAR gives a more precise 3D mapping and can tell the difference between an overpass and a blockage on the road. I used to believe this as well (feeling that LIDAR is the best way to prevent crashes), but after reading more I am no longer certain.
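
A toy illustration of that radar-vs-camera trade-off, with invented thresholds and data types (this is a sketch of the logic described above, not Tesla's actual algorithm):

```python
# Toy fusion heuristic, not Tesla's algorithm: all thresholds and data
# types are invented to illustrate the radar/camera discrepancy above.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float        # distance to the strongest reflection
    closing_speed: float  # m/s, positive means approaching

@dataclass
class VisionEstimate:
    obstacle_in_path: bool  # does the camera NN see a blocker?
    confidence: float       # 0..1

def should_brake(radar: RadarReturn, vision: VisionEstimate) -> bool:
    # Radar alone can't tell an overpass (high, harmless) from a
    # stopped vehicle (in-path, dangerous): both give strong returns.
    radar_alarm = radar.range_m < 120 and radar.closing_speed > 20
    if not radar_alarm:
        return False
    # A confident "path is clear" from vision vetoes the radar alarm;
    # anything less and the system errs toward braking.
    if not vision.obstacle_in_path and vision.confidence > 0.9:
        return False
    return True

# Overpass in good light: vision confidently vetoes the alarm.
print(should_brake(RadarReturn(80, 30), VisionEstimate(False, 0.95)))  # False
# Same overpass in shadow: vision is unsure, so the car brakes anyway,
# which is exactly the phantom-braking case.
print(should_brake(RadarReturn(80, 30), VisionEstimate(False, 0.60)))  # True
```

The key point the sketch captures: radar sees *something* but can't localize it in height, so the only way to avoid braking under every overpass is to let a sufficiently confident vision estimate veto the radar alarm.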
 
My understanding is that they've been training the NN to identify specific things such as other vehicles, road markings, signs, traffic signals, etc. They haven't really trained it to recognize snowy conditions, as mentioned in their Autonomy Day presentation, and they likely haven't been focusing on things like recognizing overpasses. The NN's primary purpose is image recognition and prediction.

All of the learning is done on Tesla's side of things. They collect camera images of various road conditions from the fleet, tag the images, and run them through the NN so that it can learn from the new data it has been fed. There is no actual learning done on the vehicle side. The Tesla fleet is the eyes and ears, but the brain resides at Tesla HQ.
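
For what that server-side loop might look like in the abstract, here's a generic supervised-training sketch in PyTorch. The dataset and model are placeholders; only the shape of the process (collect, label, train at HQ, ship frozen weights) reflects the post above:

```python
# Sketch of the offline, fleet-fed training loop described above: cars
# only collect data; the learning happens server-side, and cars receive
# frozen weights. Generic PyTorch; `labeled_fleet_dataset` and `model`
# are placeholders, not Tesla internals.
import torch
from torch.utils.data import DataLoader

def train_offline(model, labeled_fleet_dataset, epochs=10):
    """Server-side training: images from the fleet, labels from human
    or automated tagging, gradient descent only at HQ."""
    loader = DataLoader(labeled_fleet_dataset, batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model.state_dict()  # frozen weights, shipped to cars OTA

# On the vehicle side there is no backward pass, inference only:
#   with torch.no_grad():
#       detections = model(camera_frames)
```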
 
It's really hard for Tesla's NNs to identify and model the cropped world in 3D through camera vision. Give them more time, more power, and more data, and eventually they will make it. /s

 
Hmmmm

I wish there were a way for us non-AP and non-EAP owners to freeze our firmware. Like iOS 12: if it's the perfect software for your phone, why not freeze it and stop the installs? I just want an electric car that is stylish, efficient, and fun to drive. No gas used. That's it. I'm happy. Enuff said.
 
My speculation is that it is not the NN that causes the braking at bridges, but the automatic accident-avoidance system. Radar picks up the bridge and starts to apply the brakes before the NN has a chance to analyse the camera input, see that the bridge is not in the way, and cancel the emergency braking. Perhaps the shadow conditions under the bridge force the NN to examine more frames before it can decide that the road is really clear, all the while the car is braking.

Different times of day produce different visibility under the bridge and affect the amount of time it takes to analyse the images.
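
Here's a toy model of that timing argument, using a naive independent-evidence accumulation rule and made-up confidence numbers, just to show how lower per-frame confidence in shadow could stretch out the braking window:

```python
# Toy model of the timing argument above: AEB reacts to radar almost
# instantly, while the vision NN needs several frames to build enough
# confidence to veto it. All numbers are invented for illustration.
FRAME_PERIOD = 1 / 36  # assumed 36 fps camera

def frames_to_veto(per_frame_conf, threshold=0.99):
    """Frames until accumulated belief that the road is clear passes
    the veto threshold, using a naive independent-evidence model
    (belief after n frames = 1 - (1 - p)^n)."""
    belief, n = 0.0, 0
    while belief < threshold:
        belief = 1 - (1 - belief) * (1 - per_frame_conf)
        n += 1
    return n

for conf, label in [(0.6, "good light"), (0.2, "shadow under bridge")]:
    n = frames_to_veto(conf)
    print(f"{label}: {n} frames, about {n * FRAME_PERIOD * 1000:.0f} ms of braking")
```

With these made-up numbers, good light cancels the braking in roughly 170 ms, while the shadow case takes over half a second, which would feel like a real brake grab at highway speed.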
 

I wonder how much of this will change with HW3 in wide deployment... as I understand it, that's what will enable full-resolution processing of all of the camera inputs.

I'm hopeful (and Elon's tweet somewhat confirms) that we'll get a quasi-360° camera view with HW3 as well. Being able to process all the video streams at line rate will be a game changer, and will actually let the camera+radar equation you mention above change a bit.

Just my wishful thinking!
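
A back-of-envelope (with assumed, unofficial camera counts, resolutions, and frame rates) of why "all cameras at full resolution, at line rate" is a real compute jump:

```python
# Back-of-envelope only: camera count, resolution, and frame rate are
# assumed ballpark figures, not official HW3 specs.
cameras = 8
width, height = 1280, 960  # assumed ~1.2 MP per camera
fps = 36                   # assumed frame rate

pixels_per_second = cameras * width * height * fps
print(f"{pixels_per_second / 1e6:.0f} Mpixels/s to ingest at line rate")
# ~354 Mpixels/s before the NN does any work; dropping frames or
# resolution is the usual compromise when compute falls short.
```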
 
You mean go off the grid, go free, go rogue? No chain, no ties, no tracking? Wow, that's a tall order, Citizen @Rottenapplr ;-)

Well, the reason I say this is that there are still minor bugs in the system, such as cruise not working or screen freezes. I'd love to have a stable "Windows XP final build" for the Tesla (lol, jk), one that's bug-free. Steering is good. Brakes are good. Charging is good. You know, stable stable. That's where I'm coming from, hehe. Maybe someday, when the model stops receiving updates, it will get a final software patch that's extremely stable.
 
I can predict with near-100% probability that there won't ever be a "final software" version unless Tesla sinks (much to the joy of the bad/fake-news disseminators, i.e. the other automakers and the short traders on NASDAQ).

Specifically, a distributed system like this requires corrections to the display subsystem alongside changes to the control subsystem. In theory they are (or try to be) programmed as distinct modules communicating over well-defined interfaces; that's the ideal. In practice, expedient short-term fixes always cause unanticipated side effects. That's the reality of software-based machines. Something like a toaster stabilizes only because development is abandoned.
 