Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

FSD v12.5 on HW3 and HW4/AI4


diplomat33

Some interesting results testing FSD 12.5 on HW3 versus HW4.

 
the most annoying part is we don't even know how many variables their data model is taking into consideration. There could be 1,000 parameters visible to the eye (vision). That doesn't mean we need to account for all of them before we make a decision.
It would be really interesting if we could just understand SOME of the decision tree. Within the modeling and NN/AI/whatever logic, there has to be some decision tree and stack ranking of risks vs. rewards (technically it IS rewards when training) that the system has developed on its own, or that the system has been given as a small set of commandments.

I’d like to know how it compares vehicle distance and speed from the LEFT vs. vehicle distance and approach speed from the RIGHT. It SHOULD rank and compare these two differently, for certain. There is higher risk from the LEFT (driver-side approach) than the RIGHT, and there are more opportunities for corrective action from the RIGHT vs. the LEFT vs. when the LEFT AND RIGHT both have approaching vehicles.

I’d also really like to know whether it interprets things like a median or median turn-merge lane vs. no median, and how something like THAT affects the higher-up decision-tree logic and risk vs. reward.

Certainly, SOMEWHERE this type of logic and analysis must be visible (and understandable) to Tesla engineering.
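To make the left-vs-right idea concrete, here's a toy sketch of the kind of explicit risk ranking imagined above. To be clear, FSD's actual network does not expose logic like this; the function, weights, and numbers here are all invented for illustration.

```python
# Hypothetical left-vs-right approach risk ranking. Nothing like this is
# exposed by FSD; all names and weights here are invented.

def approach_risk(distance_m: float, closing_speed_ms: float, side: str) -> float:
    """Score the risk of a vehicle approaching from one side.

    Lower time-to-arrival means higher risk; the driver-side (left)
    approach is weighted heavier because there is less room to correct.
    """
    if closing_speed_ms <= 0:  # the other vehicle is not actually closing
        return 0.0
    time_to_arrival_s = distance_m / closing_speed_ms
    side_weight = 1.5 if side == "left" else 1.0  # invented weighting
    return side_weight / time_to_arrival_s

# A car 40 m away closing at 15 m/s from the left outranks the same
# car approaching from the right.
print(approach_risk(40, 15, "left") > approach_risk(40, 15, "right"))  # True
```

Even if the real network never computes anything this legible, this is the shape of ranking one would hope to see if the decision process could be inspected.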
 
It would be really interesting if we could just understand SOME of the decision tree. Within the modeling and NN/AI/whatever logic, there has to be some decision tree and stack ranking of risks vs. rewards (technically it IS rewards when training) that the system has developed on its own, or that the system has been given as a small set of commandments.

I’d like to know how it compares vehicle distance and speed from the LEFT vs. vehicle distance and approach speed from the RIGHT. It SHOULD rank and compare these two differently, for certain. There is higher risk from the LEFT (driver-side approach) than the RIGHT, and there are more opportunities for corrective action from the RIGHT vs. the LEFT vs. when the LEFT AND RIGHT both have approaching vehicles.

I’d also really like to know whether it interprets things like a median or median turn-merge lane vs. no median, and how something like THAT affects the higher-up decision-tree logic and risk vs. reward.

Certainly, SOMEWHERE this type of logic and analysis must be visible (and understandable) to Tesla engineering.
Doing a Google search for "neural net black box" might help you understand a bit better.
 
It would be really interesting if we could just understand SOME of the decision tree. Within the modeling and NN/AI/whatever logic, there has to be some decision tree and stack ranking of risks vs. rewards (technically it IS rewards when training) that the system has developed on its own, or that the system has been given as a small set of commandments.
There has been research into getting neural networks to show their work, but it's still mostly experimental with limited practical use (especially for something as complex as the FSD neural network).
 
Some interesting results testing FSD 12.5 on HW3 versus HW4.

It always surprises me that folks think that speed bumps are not designed to go over them at the speed "limit" of the road. I mean, otherwise the idea is to speed down the road at more than the speed limit, slow for the speed bump, and then speed right back up. That's definitely the whole purpose of them being there. For years I have gone over speed bumps at practically the speed limit of the road and been fine.
 
It SHOULD rank and compare these two differently, for certain. There is higher risk from the LEFT (driver-side approach) than the RIGHT, and there are more opportunities for corrective action from the RIGHT vs. the LEFT vs. when the LEFT AND RIGHT both have approaching vehicles.
Well, if the training data shows people acting differently, it will show up in the NN, and not otherwise. As we know, it's all action-based, not "understanding"-based.
 
My limited understanding is this:
It would be really interesting if we could just understand SOME of the decision tree.
Don't try to understand the decision tree; that's impossible. Instead, realize there is no decision tree. (Points if you get the reference.)
Versions before 12 were code-based decision trees. V12 is a NN, meaning it doesn't make decisions so much as recall answers.

You feed it data. This can be anything from speed, to object locations and vectors, to pedal position, steering angle, steering force, distance to the stop sign ...anything and everything you can think of, including tons of parameters extracted from the camera feeds. During training, you say "these are the inputs" and you also feed it the "answer", which is essentially the user inputs the model will need to generate; this forms matrices of outputs for given inputs. Feed it enough data and you have yourself a model.

That model is then fed the same kind of data and it returns its answer. Or in this case, answers: desired torque, steering angle, etc.

There is no "hey, this is a divided highway, I can't make a left"; it's more like "every time there was a curb in the center of the road, I never made a left."
Obviously this is a super simple explanation, but that's the problem with NNs: what the engineers understand is how it learns, not how it makes decisions. So you try to teach it with the right data so that it makes the right decisions.

* I am by no means an expert, and the above may contain errors.
 
You feed it data. This can be from speed to object location and vectors, to pedal position, steering angle, steering force, distance to the stop sign ...anything and everything you can think of including tons of parameters extracted from the camera feeds.
So why does it drive so poorly? Is Tesla showing it both terrible and great drivers? It seems they would somehow filter these down to only desirable types of training.

And why does it so often drive under the speed limit? It would be hard to find a dozen people in CA who drive that slowly, let alone millions of miles to use for training.

Your answer sounds too simplistic, but I don't know.
 
Oh, the answer is WAY simplistic. I mean, they call them neural nets for a reason. It's extremely similar to teaching a human: you show them the right way to do it, but that doesn't mean they will do it right. It's not binary logic (if this, then that). It's "given this input set, what's the closest match to an answer based on what we were trained on?", where most of the time the training did not contain those exact inputs.

I suspect the model also has "knobs", either during training or after, that can be tuned to a certain degree to give certain inputs more or less weight in coming to an answer, but that's just complete speculation. I barely know what I'm talking about. lol.
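The "closest match to what we were trained on" intuition can be illustrated with a literal nearest-neighbor lookup. Real networks interpolate through learned weights rather than storing examples, so this is only an analogy, and all the data below is invented.

```python
# Nearest-neighbor illustration of "closest match to the training set".
# Real NNs do not store examples; this is only the intuition, with
# invented data.
import math

# (speed, gap to the car ahead) -> the driver action seen in training
TRAINED = [
    ((30.0, 50.0), "hold speed"),
    ((30.0, 10.0), "brake"),
    ((10.0, 50.0), "accelerate"),
]

def closest_action(speed: float, gap: float) -> str:
    """Return the action of the most similar training situation."""
    def dist(example):
        (s, g), _ = example
        return math.hypot(speed - s, gap - g)
    return min(TRAINED, key=dist)[1]

# An input set never seen in training still maps to the nearest answer.
print(closest_action(28.0, 12.0))  # prints "brake" (nearest to (30, 10))
```

This is also why odd inputs can produce odd outputs: the system always returns *some* nearest answer, even when nothing in training was truly close.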
 
It's been said that FSD 12.5.2 is supposed to have the same neural network for HW3 and HW4. I wonder if Tesla has stopped the rollout of 12.5.1.x in favor of 12.5.2, as they set aside 12.4.

Oh, the answer is WAY simplistic. I mean, they call them neural nets for a reason. It's extremely similar to teaching a human: you show them the right way to do it, but that doesn't mean they will do it right. It's not binary logic (if this, then that). It's "given this input set, what's the closest match to an answer based on what we were trained on?", where most of the time the training did not contain those exact inputs.
Plus we're dealing with a task far more complex than anything anyone has previously done with a neural network, especially accounting for edge cases.
 
12.5.1.5/HW3/legacy

On a few past occasions I've seen the "emergency lights detected" pop-up, but never saw the car do anything as a result. Maybe I've just missed it. Interesting, since the Interstate where this happened is, I believe, still on the v11 stack. But tonight it clearly did something.

Driving on a clear night: interstate, 71 MPH max/current speed, 65 MPH limit, heavily patrolled, light traffic, center lane. Came across an emergency vehicle with flashing lights parked on the shoulder. Impossible to miss, actually; almost blinding. Being in the center lane already, with nobody in the right, I had no intention of changing anything. But FSD gave the pop-up and slowed to about 63 MPH before we passed. It then resumed speed. Actually nicely done in this case.
 
12.5.1.5/HW3/legacy

On a few past occasions I've seen the "emergency lights detected" pop-up, but never saw the car do anything as a result. Maybe I've just missed it. Interesting, since the Interstate where this happened is, I believe, still on the v11 stack. But tonight it clearly did something.

Driving on a clear night: interstate, 71 MPH max/current speed, 65 MPH limit, heavily patrolled, light traffic, center lane. Came across an emergency vehicle with flashing lights parked on the shoulder. Impossible to miss, actually; almost blinding. Being in the center lane already, with nobody in the right, I had no intention of changing anything. But FSD gave the pop-up and slowed to about 63 MPH before we passed. It then resumed speed. Actually nicely done in this case.
I've experienced this too. If I recall, it moved over as well, but it may have been doing the normal move-over just to get out of the right lane.