This is because new paths are being calculated, and Beta is aligning the wheels along the new direction in preparation for when the car starts to move.
Scrubbing the tires at zero speed isn't good for the tires or the steering system. I've designed autosteer systems, and I explicitly freeze the steering when almost stopped and wait until the wheels are rolling to begin steering again. Doesn't take much distance to do it this way, and it's much smoother.
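For what it's worth, here is a minimal sketch of the freeze-at-low-speed logic described above, with a hysteresis band so the freeze state doesn't chatter near the threshold. The speed values and the class name are illustrative assumptions, not from any production system:

```python
class SteeringFreeze:
    """Hold the steering command when nearly stopped (hypothetical thresholds)."""

    def __init__(self, freeze_below_mps=0.3, release_above_mps=0.8):
        # Hysteresis: freeze below one speed, release above a higher one,
        # so the state doesn't flip back and forth while creeping.
        self.freeze_below = freeze_below_mps
        self.release_above = release_above_mps
        self.frozen = False
        self.held_angle_rad = 0.0

    def filter(self, speed_mps, commanded_angle_rad):
        if self.frozen and speed_mps > self.release_above:
            self.frozen = False              # rolling again: resume steering
        elif not self.frozen and speed_mps < self.freeze_below:
            self.frozen = True               # nearly stopped: latch the command
            self.held_angle_rad = commanded_angle_rad
        return self.held_angle_rad if self.frozen else commanded_angle_rad
```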
 
Don't they let you use the steering wheel for arcade games input?
 
Oh, a comedian! Must explain why your comments in this thread are mostly silly :cool:

I have actually designed the algorithms and implemented the software for autosteer systems for multiple large production vehicles. Details about which vehicles are proprietary. But in general, behavior at very low speeds is a bugaboo of an autosteer system, because the path following control law is typically outputting a yaw rate command, and the steering angle control law gain that relates yaw rate command to steering angle is inversely proportional to speed, so that gain goes to infinity as speed goes to zero. Obviously, you don't actually let the gain go to infinity, but how to limit that gain, and how to smoothly reduce steering commands to a steady-state value as the vehicle comes to a stop (and do the opposite when starting from a stop), are nontrivial problems.

Whether that's what's going on with FSD steering wheel jerkiness, or something else, I have no opinion; I don't have FSD (and have no interest in being a beta tester for it), and have no direct insight into its steering control law, other than having done autosteer for other vehicles, knowing the physics of turning motion, and having successfully eliminated the steering jerkiness at low speeds (and at high speeds too, but that's another tale for another day…).
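As a concrete (and purely illustrative) sketch of that gain-limiting problem: under the kinematic bicycle model, yaw rate ω = v·tan(δ)/L, so the steering angle for a commanded yaw rate is δ = atan(L·ω/v), and the effective gain L/v blows up as v → 0. One common workaround is to clamp the speed used in the division and slew-limit the output. The wheelbase, floor speed, and rate limit below are assumed values, not anything from Tesla's stack:

```python
import math

WHEELBASE_M = 2.9      # typical sedan wheelbase (assumed)
V_FLOOR_MPS = 1.0      # never divide by less than this: caps the 1/v gain
MAX_STEER_RATE = 0.5   # rad/s slew limit to keep commands smooth (assumed)

def steer_from_yaw_rate(yaw_rate_cmd, speed_mps, prev_steer_rad, dt):
    """Convert a yaw rate command to a steering angle (kinematic bicycle model).

    delta = atan(L * omega / v); the gain blows up as v -> 0, so clamp v
    and slew-limit the output instead of letting the command spike.
    """
    v = max(speed_mps, V_FLOOR_MPS)                  # limit the 1/v gain
    raw = math.atan(WHEELBASE_M * yaw_rate_cmd / v)  # unlimited command
    step = raw - prev_steer_rad
    step = max(-MAX_STEER_RATE * dt, min(MAX_STEER_RATE * dt, step))
    return prev_steer_rad + step                     # smooth approach to steady state
```

The same clamp-plus-slew structure also gives a smooth ramp to a steady-state angle when coming to a stop, which is one way to address the jerkiness described above.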
 
Okay, regarding "Auto-steering on city streets" FSD Beta, I think I found a way to use the software without being completely stressed out going on errands. I configured the Autopilot speed offset to -5 mph under the speed limit so the car doesn't dart around like Speed Buggy. Then I feather the accelerator up to the appropriate deceleration points and allow the car to come to stops, use the accelerator to continue through right and left turns with appropriate timing, and take over at the first sign of anything outside how I want Tesla Vision driving. My question is: is Dojo monitoring these actions? Is it learning anything? Will it eventually start driving more fluidly as I force it to drive, or does it take an update for any improvements? I want one improvement at the end of each day. Chop chop, Tesla team!
 
Apparently, Tesla doesn't think steering wheel input while stopped is a big deal. They allow you to do it while playing some of the arcade games. So I guess this will be about the 9th time I've said that they will have to ultimately address this, but it's not of great concern to them now, obviously.

"But in general, behavior at very low speeds is a bugaboo of an autosteer system"

This is what I've been saying, but trying to avoid jargon.

"the path following control law is typically outputting a yaw rate command"

Ultimately.
This is the other control loop I referred to, but I said my post was getting too long. The class wasn't ready to move on yet.
 
the steering angle control law gain that relates yaw rate command to steering angle is inversely proportional to speed
Doesn't dTheta/dt = steering deflection × speed / (distance between front and rear wheels)? So the gain from steering input to yaw rate goes to zero when the speed is zero, which would require an infinite steering deflection for any requested yaw rate. But I was going to try to avoid things like yaw, yaw rates, gains, etc.

I guess you mean the gain from yaw rate command to steering input, which makes sense when you're trying to determine how much to turn the wheel. That gain would become infinite (see the numeric check after this post).

Honestly, this is more of the type of discussion I was hoping for, rather than why Tesla shouldn't be doing what they are doing. I welcome your input.
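A quick numeric check of both directions of the small-angle bicycle-model relationship discussed above (the wheelbase value is just an example):

```python
L = 2.9  # wheelbase in meters (example value)

for v in [10.0, 1.0, 0.1]:  # speed in m/s
    fwd = v / L             # yaw rate per unit steering deflection
    inv = L / v             # steering deflection per unit yaw rate
    print(f"v={v:5.1f} m/s  steer->yaw gain={fwd:6.3f}  yaw->steer gain={inv:7.2f}")

# As v -> 0, the forward gain goes to zero and the inverse gain goes to
# infinity, matching both posters' statements.
```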
 
The class wasn't ready to move on yet.
That is odd because control laws and control theory definitely seem simpler conceptually (and much more theoretically grounded!) than path planning and perception, which seem to be where the difficulty lies.

But, it is understandable to stick with what is easier to capture in a closed-form solution.
 
I agree that perception and path planning have issues, but the exaggerated steering movements at low speed are a consequence of the path following control problem.
 
I feel like we are going in circles. I think it was pretty obvious to everyone why the steering wheel jerks (and it seems like it could be addressed relatively easily). But why does the path move so much? Even if you fix the jerking steering, that would remain a problem. Seems like a lot of work left to be done. It will be interesting to see how different it is in 12-24 months.
 
There's another layer between perception and path planning that isn't getting mentioned, which I'll call the inference or prediction layer (I don't know what Tesla calls it). Once you've identified everything in the relevant area - this is perception - you have to estimate the state (in the physical/mathematical sense of state) of each identified element, which, for many of the elements, means you have to predict what they will do over the relevant time horizon (which could be anywhere from fractions of a second to a minute or more). "Jitter" in the inference layer is likely part of what's driving jitter in the path planning (which can be ameliorated in the path planner but maybe isn't Tesla's focus right now?). Boyd's OODA loop is a good framework for the problem, so what I'm calling the inference layer is part of the second "O" - the Orient step. (A toy sketch of this prediction step follows after this post.)

The inference or prediction layer is (IMO) the hardest part of the whole autonomy stack. Several years ago, I was the lead autonomy engineer on a program looking at coordinating groups of autonomous vehicles in pursuit of high-level team goals in a non-cooperative environment, and in the first phase of the program, we made very simplistic assumptions about the "intent prediction" part of the problem, just so we could make progress on other aspects, such as the team planning function. We got to about TRL 3 with the team planner in about 18 months and did some very impressive demonstrations, but the customer had an internal disagreement and pulled the funding before the next phase. But even if funding had continued, we were still just at the playpen level on the intent prediction stuff, and I don't think the state-of-the-art has advanced enormously since then, which is why I think we're still years if not decades out from real L5 autonomy for self-driving cars. Waymo is the closest, but from what I've seen, I think they still have quite a ways to go to a solution that can scale to displacing a large portion of the current population of human-driven cars in dense urban environments. I think Tesla's current approach is decades out and will have to be revamped many times to get to L5. But I'll be happy if I'm wrong :cool:
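To make the first paragraph concrete: the crudest possible stand-in for that prediction layer is constant-velocity extrapolation of each tracked object's state. This toy sketch (names and numbers invented for illustration) shows the kind of per-object output a path planner consumes; real systems use far richer motion and intent models:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float   # position ahead of ego, m
    y: float   # lateral offset, m
    vx: float  # velocity estimate, m/s
    vy: float

def predict_path(obj, horizon_s=3.0, dt=0.5):
    """Constant-velocity extrapolation: the simplest possible 'intent' model."""
    steps = int(horizon_s / dt)
    return [(obj.x + obj.vx * dt * k, obj.y + obj.vy * dt * k)
            for k in range(1, steps + 1)]

# Oncoming car 30 m ahead, one lane over, closing at 20 m/s:
print(predict_path(TrackedObject(x=30.0, y=3.5, vx=-20.0, vy=0.0)))
```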
 
Yeah, I find that FSD Beta jerks the wheel a lot during turns, like it is kind of feeling its way through the turn. Also, sometimes when coming to a complete stop at a red light where the car will need to make a turn, it turns the wheel before stopping instead of keeping the wheel straight and only turning when it starts the turn.
My experience as well 🤘🏽
 
It's just a bug they need to fix.

Essentially, they run a lot of simulations and come up with an optimized path to take. I guess there is a threshold "ok to go" probability they want to exceed before starting to apply any acceleration. Until then, they should not apply any torque to the steering wheel. As simple as that.
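A sketch of the gating suggested here: suppress actuation until the planner's confidence clears a bar. The threshold value and names are made up for illustration:

```python
GO_PROBABILITY_THRESHOLD = 0.9  # hypothetical confidence bar

def actuator_commands(plan_probability, steer_torque_cmd, accel_cmd):
    """Suppress steering torque and acceleration until the plan is trusted."""
    if plan_probability < GO_PROBABILITY_THRESHOLD:
        return 0.0, 0.0  # keep the wheel still while the planner is still deciding
    return steer_torque_cmd, accel_cmd
```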
 
There's nothing fundamentally different happening when you are stopped. One of the control parameters, speed, is just set to zero. The control loops continue to operate. It's best to keep them running continuously to avoid transients and instability caused by starting and stopping the digital control algorithms.

Let's say you've come to a stop, and while you were travelling at a slow speed FSD had to turn the wheel sharply to the right to follow the path. Now someone in the lane to your right pulls a yoohoo and edges over close to or into your lane because they are in the wrong lane. If the control loop hasn't been tracking the required changes in steering continuously, there is now a big step change, which might produce an exponentially decaying sinusoidal response in the steering. Not to mention there probably won't be enough room to move forward while you align the steering for the new direction.

Now FSD has to decide whether to screw the other driver (which is what seems to happen), or wait for them to move forward and into your lane.

There are untold, unforeseen possibilities.
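A minimal sketch of the "keep the loop running, gate the actuator" idea from this post: a PI heading loop that always updates its state, but freezes its integrator and outputs a zero steering-rate command (i.e., hold the wheel) when the car is stopped, so there is no step transient or windup kick on restart. Gains and thresholds are illustrative assumptions:

```python
class HeadingPI:
    """Toy PI loop on heading error; output is a steering-rate command."""

    def __init__(self, kp=0.8, ki=0.2):
        self.kp, self.ki = kp, ki
        self.integ = 0.0

    def update(self, heading_err_rad, speed_mps, dt, moving_thresh_mps=0.5):
        moving = speed_mps > moving_thresh_mps
        if moving:
            # Integrate only while rolling: a simple anti-windup measure.
            self.integ += heading_err_rad * dt
        # The loop runs every cycle, so its state tracks the evolving path;
        # only the actuator output is gated, avoiding a restart transient.
        cmd = self.kp * heading_err_rad + self.ki * self.integ
        return cmd if moving else 0.0  # zero rate = hold the wheel still
```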
 
Yup.

I noticed this yesterday while stopped at a red light waiting for a left turn. The path planning was continuously being updated for the cars coming from the opposite direction, seemingly to steer around them. It was impressive, seeing as these cars were going ~50 mph.

I re-watched some of the AI Day slides about control/planning and it seems that the current path planner isn't long for this world. They mentioned moving to some learning-based approach to handle more complicated driving environments (cue photo of traffic in India) but there weren't any real details given. I'm assuming they mean some RL-like approach is under development.