Some interesting results testing FSD 12.5 on HW3 versus HW4.
The most annoying part is we don't even know how many variables their data model is taking into consideration. There could be 1,000 parameters visible to the eye (vision); that doesn't mean we need to account for all of them before making a decision.

I can see that. Too many variables involved in each run.
It would be really interesting if we could just understand SOME of the decision tree. Within the modeling and the NN/AI/whatever logic, there has to be some decision tree and stack ranking of risks vs. rewards (technically it IS rewards when training) that the system has developed on its own, or that the system has been given as a tiny set of commandments.
Doing a Google search for "neural net black box" might help you understand this a bit better.
I’d like to know how it compares vehicle distance and speed from the LEFT vs. vehicles and approach speed from the RIGHT. It SHOULD rank and compare these two differently, for certain. There is higher risk from the LEFT (driver-side approach) than the RIGHT, and there are more opportunities for corrective action from the RIGHT than from the LEFT, or than when LEFT and RIGHT both have approaching vehicles.
I’d also really like to know: is it interpreting things like a median or a median turn/merge lane vs. no median, and how does something like THAT affect the higher-level decision tree logic and risk vs. reward?
Certainly, SOMEWHERE this type of logic argument and analysis must be visible to (and understandable by) Tesla engineering.
There has been research into getting neural networks to show their work, but it's still mostly experimental with limited practical use (especially for something as complex as the FSD neural network).
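The "black box" point can be made concrete. Below is a minimal, self-contained sketch (a toy XOR task in plain numpy; none of this is Tesla's code or data): a tiny network learns the task, yet its learned weights are just a grid of numbers with no readable rules, and a crude permutation-importance probe is roughly the level of "showing its work" that the experimental interpretability techniques offer.

```python
# Toy illustration of the "neural net black box": a tiny MLP learns XOR,
# but its learned weights are just numbers with no human-readable rules.
# Permutation importance (one experimental interpretability probe) at least
# tells us which inputs matter. All names and numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)        # XOR labels

# 2-8-1 network, sigmoid activations, plain full-batch gradient descent
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # grad of 0.5*sum((out-y)^2)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(0)

pred = lambda X: sig(sig(X @ W1 + b1) @ W2 + b2)
print("predictions:", pred(X).round(2).ravel())  # should approximate 0,1,1,0
print("W1:\n", W1.round(2))  # correct behavior, but the weights explain nothing

# Permutation importance: shuffle one input column, watch the error change.
base_err = np.mean((pred(X) - y) ** 2)
for i in range(2):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])
    print(f"feature {i}: error {base_err:.3f} -> "
          f"{np.mean((pred(Xp) - y) ** 2):.3f} when shuffled")
```

The probe only says *which* inputs matter, not *why* the decision came out the way it did, which is exactly the limitation being discussed.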
It always surprises me that folks think speed bumps are not designed to be driven over at the speed "limit" of the road. I mean, otherwise the idea is to speed down the road at more than the speed limit, slow for the speed bump, and then speed right back up. That's definitely the whole purpose behind them being there. For years I have gone over speed bumps at practically the speed limit of the road and been fine.
Well, if the training data shows people acting differently, it will show up in the NN; not otherwise. As we know, it's all action-based, not "understanding"-based.
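A toy illustration of that point, with invented numbers: when the same situation appears in training with different driver behaviors, a model fit on that data just lands between them; the data, not any "understanding", decides.

```python
# If the same situation appears in training with conflicting driver actions,
# a fitted model lands between them. Minimal sketch with made-up numbers
# (plain least squares standing in for any trained model).
import numpy as np

# Feature: distance to a speed bump (m). Label: speed driven over it (mph).
# At each distance, half the demonstrators slow down, half barrel over.
dist = np.array([4.0, 4.0, 6.0, 6.0])
speed = np.array([15.0, 30.0, 16.0, 29.0])

# Fit speed = w*dist + b by least squares
A = np.c_[dist, np.ones_like(dist)]
w, b = np.linalg.lstsq(A, speed, rcond=None)[0]
print(f"model's speed at 5 m: {w * 5 + b:.1f} mph")  # 22.5: the average behavior
```

The fitted "policy" drives at ~22.5 mph, a speed no demonstrator actually used, which is why curating the training data matters so much.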
Don't try to understand the decision tree, that's impossible. Instead, realize there is no decision tree. (Points if you get the reference.)
So why does it drive so poorly? Is Tesla showing it both terrible and great drivers? Seems they would somehow filter these down to only desirable types of training.

You feed it data. This can be anything from speed, to object locations and vectors, to pedal position, steering angle, steering force, distance to the stop sign... anything and everything you can think of, including tons of parameters extracted from the camera feeds.
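The "feed it data" description above, plus the filtering question in the reply, can be sketched in a few lines. Everything here is hypothetical: made-up feature names, a made-up "good driver" flag, and plain least squares standing in for the actual neural network.

```python
# Behavior-cloning sketch: feature vectors from demonstrations -> an action.
# The "good?" flag shows the curation idea: drop undesirable demos before
# fitting. All numbers and names are invented for illustration.
import numpy as np

# Each row: speed (m/s), distance to stop sign (m), steering angle (deg),
# then the label (brake pedal position in [0, 1]) and a quality flag.
demos = np.array([
    # speed  dist  steer  brake  good?
    [10.0,  20.0,  0.0,   0.60,  1],
    [ 5.0,  30.0,  0.0,   0.20,  1],
    [15.0,  10.0,  2.0,   0.90,  1],
    [20.0,   8.0,  0.0,   0.10,  0],   # ran the stop sign: filter out
])

good = demos[demos[:, 4] == 1]                  # curate the training set
X = np.c_[good[:, :3], np.ones(len(good))]      # features plus bias term
y = good[:, 3]
weights = np.linalg.lstsq(X, y, rcond=None)[0]  # "training"

# Query the learned "policy": 12 m/s, 15 m from the stop sign, wheel straight
state = np.array([12.0, 15.0, 0.0, 1.0])
print(f"brake command: {state @ weights:.2f}")
```

In this toy, dropping the bad demonstration is a one-line filter; at Tesla's scale the open question in the post (how, and how well, they filter) is the hard part.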
Plus, we're dealing with a task far more complex than anything anyone has previously done with a neural network, especially accounting for edge cases.

Oh, the answer is WAY more simplistic. I mean, they call them neural nets for a reason. It's extremely similar to teaching a human: you show them the right way to do it; that doesn't mean they will do it right. It's not binary logic (if this, then that). It's "given this input set, what's the closest match to an answer based on what we were trained on?"... where most of the time, the training did not contain those exact inputs.
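The "closest match to what we were trained on" idea can be taken literally with a nearest-neighbour lookup. A real network generalizes far more smoothly than this, but the toy below (invented numbers) shows the same edge-case failure mode: an input unlike anything in training still gets whatever answer happens to be nearest.

```python
# 1-nearest-neighbour version of "closest match to an answer based on what
# we were trained on". All numbers are invented for illustration.
import numpy as np

# Training inputs: [vehicle_speed_mps, obstacle_dist_m] -> action label
train_X = np.array([[10.0, 50.0], [10.0, 5.0], [30.0, 50.0]])
train_y = ["cruise", "brake hard", "cruise"]

def closest_match(x):
    """Return the trained answer for the most similar seen input."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(d))]

print(closest_match(np.array([11.0, 6.0])))   # near a seen case: "brake hard"
print(closest_match(np.array([80.0, 2.0])))   # never trained on anything like
                                              # this; nearest neighbour still
                                              # answers "cruise" -- dangerously
```

The second query is the edge-case problem in miniature: the system always produces *some* answer, and for inputs far from the training set there's no guarantee it's a sensible one.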
Sure looks that way...

Just checking in on the progress. From what I gather, all new FSD software updates are stalled right now. Is that correct? We are still on 12.3.6.
Thanks.
I've experienced this too. If I recall, it moved over as well, but it may have been doing the normal move-over just to get out of the right lane.

12.5.1.5 / HW3 / legacy
On a few past occasions I've seen the "emergency lights detected" pop-up but never saw the car do anything as a result; maybe I've just missed it. Interesting, since the interstate where this happened is, I believe, still on the v11 stack. But tonight it clearly did something.
Driving on a clear night: interstate, 71 MPH max/current speed, 65 MPH limit, heavily patrolled, light traffic, center lane. Came across an emergency vehicle with flashing lights parked on the shoulder, impossible to miss, almost blinding. Being in the center lane already, with nobody in the right, I had no intention of changing anything. But FSD gave the pop-up and slowed to about 63 MPH before we passed, then resumed speed. Actually nicely done in this case.