Yeah, I find that FSD Beta jerks the wheel a lot during turns, like it is kind of feeling its way through the turn. Also, sometimes, when coming to a complete stop at a red light where the car will need to make a turn, it turns the wheel before stopping instead of keeping the wheel straight and only turning when it starts the turn.
It *is* feeling its way through the turn (at least, that is my guess based on observations). The car starts the turn based on the path prediction, which is in turn based on the BEV generated by the cameras/NN. As the car turns, the new view causes updates to the BEV and path, and the car applies corrective steering to match the new path. What seems to happen (again, my take based on observations) is that the car tries to correct to the new path too aggressively (short-distance), rather than plotting a path that gradually moves from the old path to the new one. Since this whole process is very dependent upon lighting and atmospheric conditions, that goes a long way toward explaining the variability in the car when making the same turns (again, speculation on my part, but I'm probably not far wrong).
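To put the "gradual vs. aggressive correction" idea in concrete terms (pure speculation on my part - nobody outside Tesla knows what the planner actually does, and every name and number below is invented):

```python
import numpy as np

# Toy sketch: each perception cycle yields a revised path from the
# updated BEV. Instead of snapping straight to it (the aggressive,
# short-distance correction), ease toward it over several cycles.
def blend_paths(current_path, revised_path, alpha=0.3):
    """current_path, revised_path: (N, 2) arrays of x/y waypoints.
    Smaller alpha = gentler correction toward the revised path."""
    return (1.0 - alpha) * current_path + alpha * revised_path

path = np.array([[0.0, 0.0], [5.0, 0.5], [10.0, 2.0]])     # plan before the update
revised = np.array([[0.0, 0.0], [5.0, 1.5], [10.0, 4.0]])  # plan after the BEV update
path = blend_paths(path, revised)  # moves only 30% of the way per cycle
```

Setting alpha = 1 would snap straight to the revised path - which is exactly what the over-correction described above would look like.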
 
Yes, not bugs.

You're right. We have a different concept of bugs.
In a certain sense I tend to agree. A "bug" is generally defined as a software defect that causes the software to behave in an unintended manner. But what is the "intended" manner? Ultimately, that must come from some design document(s) that specify the intent of the software. So, in a certain formal sense, you cannot have "bugs" in design documents, though you can of course have flaws/errors/omissions in the design.

When discussing this issue with developers, I often half-jokingly note that software that has no design specification is, by definition, bug-free, since the only specification is the source code, and so whatever the software does is correct according to the specification.

(Way off-topic, I know)
 
Ultimately, that must come from some design document(s) that specify the intent of the software.
In the old days, the Functional Design Specification (FDS) defined the software. Anything not adhering to it was a bug.

Of course, that was in the old waterfall days. Over the decades people figured out that analysis/design itself can have serious errors of omission and commission, leading to a high rate of software project failures. One of the answers to that was agile - where you quickly ship the software to users, ask for feedback, and act on that feedback. The feedback becomes your bug list.

BTW, in this case I don't think any "design" would have specified anything about jittery, twitchy behavior of the steering wheel. But probably one of the design goals was to minimize "discomfort" (from AI day) - and that is what the jittery behavior is breaking.
 
BTW, in this case I don't think any "design" would have specified anything about jittery, twitchy behavior of the steering wheel. But probably one of the design goals was to minimize "discomfort" (from AI day) - and that is what the jittery behavior is breaking.
That would be a pretty detailed design document, to specify all the possible behaviors of every event down to that level. A more concise and appropriate early design would be to determine a path through the drivable area and follow it. There are an infinite number of paths, so some constraints must also be given, such as minimizing the length and the curvature (the latter would be part of comfort), among others. Each of these is given a weight, and if the number of constraints is more than the degrees of freedom, not all of them can be satisfied simultaneously. The larger weights are given to whatever the developer considers most important at the current stage of development.
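As a toy illustration of that weighting scheme (the weights and cost terms here are invented - nobody outside Tesla knows the real ones):

```python
import numpy as np

# Invented weights for illustration only.
WEIGHTS = {"length": 1.0, "curvature": 4.0, "lane_offset": 2.0}

def path_cost(path):
    """Weighted sum of competing criteria for one candidate path.
    path: (N, 2) array of x/y waypoints through the drivable area."""
    seg = np.diff(path, axis=0)
    length = np.sum(np.linalg.norm(seg, axis=1))
    headings = np.arctan2(seg[:, 1], seg[:, 0])
    curvature = np.sum(np.abs(np.diff(headings)))  # proxy: total heading change
    lane_offset = np.mean(np.abs(path[:, 1]))      # proxy: lane center at y = 0
    return (WEIGHTS["length"] * length
            + WEIGHTS["curvature"] * curvature
            + WEIGHTS["lane_offset"] * lane_offset)

def pick_path(candidates):
    # With more constraints than degrees of freedom, no candidate
    # satisfies everything; the weights decide the tradeoff.
    return min(candidates, key=path_cost)
```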

So clearly some weight has been given to comfort, for the path could certainly be much more tortuous. Therefore the design isn't broken; it's what they have currently spec'd. You personally might have assigned a higher weight to comfort. But the assignment of weights is a tradeoff and highly interrelated: giving comfort more weight means other criteria are met less accurately.

I'm sure the design of the Conestoga wagon didn't specify that the trip across the country should take 9 months. Certainly everyone would have wanted it to take less time, so by your criterion the design was broken. There were faster modes of travel at the time.
 
That would be a pretty detailed design document, to specify all the possible behaviors of every event down to that level. A more concise and appropriate early design would be to determine a path through the drivable area and follow it. There are an infinite number of paths, so some constraints must also be given, such as minimizing the length and the curvature (the latter would be part of comfort), among others. Each of these is given a weight, and if the number of constraints is more than the degrees of freedom, not all of them can be satisfied simultaneously. The larger weights are given to whatever the developer considers most important at the current stage of development.

So clearly some weight has been given to comfort, for the path could certainly be much more tortuous. Therefore the design isn't broken; it's what they have currently spec'd. You personally might have assigned a higher weight to comfort. But the assignment of weights is a tradeoff and highly interrelated: giving comfort more weight means other criteria are met less accurately.

I'm sure the design of the Conestoga wagon didn't specify that the trip across the country should take 9 months. Certainly everyone would have wanted it to take less time, so by your criterion the design was broken. There were faster modes of travel at the time.
Probably (obviously) the highest weight is given to safety. But equally likely, the car is VERY cautious when it comes to safety, so it over-reacts to things that it thinks are an issue and jerks the wheel. This is mostly a sign of the maturing NN making safety predictions about other vehicles (I suspect). We currently have first-order rules like "don't stray into the predicted path of another car", but have not yet got enough second-order rules that handle special cases where it's OK to do this. As that second-order rule set grows, it's likely we will see less jerkiness, but it will take a long time for it to get to 99%.
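To be concrete about what I mean by first- vs. second-order rules (a toy sketch of my mental model, with invented names and thresholds - and, as pointed out further down, AI day suggests the real planner is cost-based rather than an explicit rule list like this):

```python
from dataclasses import dataclass

@dataclass
class OtherCar:
    is_parked: bool
    is_yielding: bool
    seconds_until_conflict: float  # when its predicted path crosses ours

def crosses_predicted_path(car: OtherCar) -> bool:
    # Stand-in for a real geometric intersection test.
    return car.seconds_until_conflict < float("inf")

def path_allowed(car: OtherCar) -> bool:
    # First-order rule: don't stray into the predicted path of another car.
    if not crosses_predicted_path(car):
        return True
    # Second-order exceptions: special cases where crossing is actually OK.
    if car.is_parked or car.is_yielding:
        return True
    if car.seconds_until_conflict > 3.0:  # made-up time-gap threshold
        return True
    return False

print(path_allowed(OtherCar(False, False, 1.2)))  # False: conflict too soon
print(path_allowed(OtherCar(False, True, 1.2)))   # True: the other car is yielding
```

Every second-order exception added shrinks the set of situations where the car flinches.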
 
Elon said the sensitivity for the voxel map still needs some tweaking. Perhaps FSD is detecting phantom unclassified objects because of this.
Do we have some evidence from somewhere that the voxel map is actually being used? I know it was operating in the background for some time without being used and there were a bunch of Twitter threads about how it wasn't being used for driving at that time. But that was a while back. Do we have some direct evidence that they're actually using it in the released FSD Beta builds now (note - turned on and using it are two distinctly different things)? I did some brief poking around the other day but in the end I didn't find anything conclusive.

For example, this tweet does not say they are using it in FSD 10:

 
Do we have some evidence from somewhere that the voxel map is actually being used? I know it was operating in the background for some time without being used and there were a bunch of Twitter threads about how it wasn't being used for driving at that time. But that was a while back. Do we have some direct evidence that they're actually using it in the released FSD Beta builds now (note - turned on and using it are two distinctly different things)? I did some brief poking around the other day but in the end I didn't find anything conclusive.
I'm pretty sure they would have to be, now that they no longer use radar. Speculating, of course.
 
Probably (obviously) the highest weight is given to safety. But equally likely, the car is VERY cautious when it comes to safety, so it over-reacts to things that it thinks are an issue and jerks the wheel. This is mostly a sign of the maturing NN making safety predictions about other vehicles (I suspect). We currently have first-order rules like "don't stray into the predicted path of another car", but have not yet got enough second-order rules that handle special cases where it's OK to do this. As that second-order rule set grows, it's likely we will see less jerkiness, but it will take a long time for it to get to 99%.
But from AI day - the planner is not rule-based, it's cost-optimization based.

That is why I ask - what exact cost is being minimized when jerking the wheel back and forth without moving?

Elon said the sensitivity for the voxel map still needs some tweaking. Perhaps FSD is detecting phantom unclassified objects because of this.
That would be a pretty detailed design document, to specify all the possible behaviors of every event down to that level. A more concise and appropriate early design would be to determine a path through the drivable area and follow it. There are an infinite number of paths, so some constraints must also be given, such as minimizing the length and the curvature (the latter would be part of comfort), among others. Each of these is given a weight, and if the number of constraints is more than the degrees of freedom, not all of them can be satisfied simultaneously. The larger weights are given to whatever the developer considers most important at the current stage of development.

So clearly some weight has been given to comfort, for the path could certainly be much more tortuous. Therefore the design isn't broken; it's what they have currently spec'd. You personally might have assigned a higher weight to comfort. But the assignment of weights is a tradeoff and highly interrelated: giving comfort more weight means other criteria are met less accurately.

How come it doesn't detect these when taking the same kind of right turn without first stopping for a stop sign?

This is the reason I think there is an issue somewhere in the code to do with this specific scenario - turns from a stop. Either the cost optimization, or the planner code that uses the optimization, is not working correctly.

Yes - we all understand how cost optimization works. But see above for why I think there is an actual issue - it's not a simple result of properly applied cost optimization. Maybe there are some parameters that don't apply to this condition and need to be reset ... or there is some memory corruption in this condition. Who knows ...

The paths drawn themselves look fine and don't move around all that much - so I think the optimization may be working fine here. But the planner's interpretation or implementation of how to put the cost-optimized path into action has issues here.
 
I'm pretty sure they would have to be, now that they no longer use radar. Speculating, of course.
From what I understand, this is not true, because Tesla Vision has been around for months and it was indicated (by rice_fry on Twitter as I recall) that the voxels might exist but they weren't being used for guidance at that time. There was some tweet about the pillars.

I believe their distance estimation, which is what you'd need for Tesla Vision, doesn't have to use the voxel map. Could be wrong though. I'm just looking for the evidence from a reasonably reliable source; I'm not making claims.
 
Do we have some evidence from somewhere that the voxel map is actually being used? I know it was operating in the background for some time without being used and there were a bunch of Twitter threads about how it wasn't being used for driving at that time. But that was a while back. Do we have some direct evidence that they're actually using it in the released FSD Beta builds now (note - turned on and using it are two distinctly different things)? I did some brief poking around the other day but in the end I didn't find anything conclusive.

For example, this tweet does not say they are using it in FSD 10:

That was the impression I got from his latest (that I'm aware of) tweet on the subject. Also there was a video of FSD avoiding overhanging branches. But I don't know for sure.
 
How come it doesn't detect these when taking the same kind of right turn without first stopping for a stop sign?
Are you trolling now? I've already answered this question twice.
That is why I ask - what exact cost is being minimized when jerking the wheel back and forth without moving?

Distance from the current path. It's unable to effect a correction, though, because the car is not moving.

The jerking isn't being suppressed because suppressing it carries a relatively low weight. They don't care at this point.
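A toy version of what I mean (all weights invented): jerk suppression would just be one more weighted term in the cost, and with a small enough weight the optimizer barely distinguishes a jerky steering sequence from a smooth one.

```python
import numpy as np

def control_cost(deltas, target=0.5, w_tracking=10.0, w_steering_rate=0.05):
    """Cost of a steering-increment sequence (radians per timestep).
    Tracking term: how far the accumulated steering ends from the target
    heading change. Rate term: penalizes back-and-forth reversals."""
    tracking = (np.sum(deltas) - target) ** 2
    rate = np.sum(np.diff(deltas) ** 2)
    return w_tracking * tracking + w_steering_rate * rate

smooth = np.array([0.1, 0.1, 0.1, 0.1, 0.1])   # reaches the target gently
jerky = np.array([0.3, -0.2, 0.3, -0.2, 0.3])  # same endpoint, lots of reversals

print(control_cost(smooth))  # ~0.0
print(control_cost(jerky))   # ~0.05 - with w_steering_rate this low, the
                             # optimizer barely prefers smooth over jerky
```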
 
Are you trolling now? I've already answered this question twice.

Distance from the current path. It's unable to effect a correction, though, because the car is not moving.
Doesn't make sense. What does "distance from the current path" even mean?

You have a current fixed position. The car needs to go to another location - and needs to turn for that. The target location is a very short distance away and very easily seen. There is simply no reason why there would be a lot of uncertainty in this scenario compared to when the car is already moving.

The jerking isn't being suppressed because suppressing it carries a relatively low weight. They don't care at this point.
Nope - even if something has a low weight, the cost optimizer won't ask for jerks if they're not helping with something else. What would jerking help?

BTW, just had a drive - FSD did 2 unprotected lefts and a protected right so smoothly, I had to wonder - is it the same software that makes all those jerks!
 
Doesn't make sense. What does "distance from the current path" even mean?
FSD calculates a path, and the car is not on that path. Let's say the path is 1 ft to the left of where your car is; then the distance to the path is 1 foot.
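In other words, the cross-track error. A minimal sketch of the geometry (illustrative only):

```python
import numpy as np

def cross_track_distance(car_xy, path):
    """Perpendicular distance from the car to the nearest segment of the
    planned path. car_xy: (2,) position; path: (N, 2) waypoints."""
    best = np.inf
    for a, b in zip(path[:-1], path[1:]):
        ab = b - a
        t = np.clip(np.dot(car_xy - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(car_xy - (a + t * ab)))
    return best

path = np.array([[0.0, 0.3], [10.0, 0.3]])  # path runs 0.3 m to the car's left
print(cross_track_distance(np.array([0.0, 0.0]), path))  # 0.3
```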

Nope - even if something has a low weight, the cost optimizer won't ask for jerks if they're not helping with something else. What would jerking help?
The optimizer isn't asking for the jerks. The jerks are a natural consequence of noise and slow speed. If you don't do something to suppress them, they will occur.

You have a current fixed position. The car needs to go to another location - and needs to turn for that. The target location is a very short distance away and very easily seen. There is simply no reason why there would be a lot of uncertainty in this scenario compared to when the car is already moving.
It's not a difference in uncertainty.

If you are not currently on the path, you have to move either left or right to get to it. You have to cover a certain distance in a certain amount of time; that's your speed toward the path. The control loop will have at least proportional control. That means the larger the distance from the car to the path, the more speed it will request to get there. So for a given distance, FSD will specify a certain speed.

How do we move to the left or right? We turn the steering wheel. How much do we have to turn the wheel? Enough so that we are now moving toward the path at the requested speed. How much is enough? The sine of the angle between the direction of the path and the direction of the car, times the speed of the car, has to equal the speed the control loop asked for. So we have two parameters, speed and angle. The faster the speed, the smaller the angle needed to achieve the requested speed toward the path. So even if the distance to the path is the same (the uncertainty), it takes a larger angle when you are going slowly.
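Plugging in toy numbers (the gain is made up - this is just the geometry of the argument, not anything we know about Tesla's actual controller):

```python
import numpy as np

KP = 0.5  # made-up gain: (m/s of lateral speed) per meter of cross-track error

def heading_angle_deg(cross_track_m, car_speed_ms):
    v_lat = KP * cross_track_m                    # proportional control
    ratio = np.clip(v_lat / car_speed_ms, -1, 1)  # sin(angle) = v_lat / speed
    return np.degrees(np.arcsin(ratio))

# The same +/-0.3 m of noise in the perceived path position...
for err in (0.3, -0.3):
    print("crawl  (1 m/s):", round(heading_angle_deg(err, 1.0), 2), "deg")
    print("cruise (15 m/s):", round(heading_angle_deg(err, 15.0), 2), "deg")
# ...swings the commanded heading by ~8.6 deg at crawl speed but only
# ~0.57 deg at 15 m/s. The same path noise reads as wheel-jerking
# when the car is barely moving.
```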

There's another control loop involved. The one that takes us from our current heading to the one required to intercept the path. I hesitate to cover that now, since the post is already getting pretty long.
 
I think what @EVNow is saying is the car’s position is the starting point - period. The path planner is supposed to calculate the path to the “endpoint,” whatever / wherever that is (which is sort of the problem). So the path can’t possibly exclude the current position of the car; otherwise it’s planning a path for a different car. If the planner comes back with “start with a jump to the left and a turn to the right,” well, then we have a horror show of a path planner.

Now, the vector of the path from the current position - well, that can go all over the place based on whatever the planner posits from time to time, so the wheel would turn. But unless something drastic changes in the environment, it really shouldn’t.
 
From what I understand, this is not true, because Tesla Vision has been around for months and it was indicated (by rice_fry on Twitter as I recall) that the voxels might exist but they weren't being used for guidance at that time. There was some tweet about the pillars.

I believe their distance estimation, which is what you'd need for Tesla Vision, doesn't have to use the voxel map. Could be wrong though. I'm just looking for the evidence from a reasonably reliable source; I'm not making claims.
That's quite possible; as I noted, I'm speculating, as we all are to an extent :)
 