How to give the Mothership good data regarding disengagement?

I'd like some guidance on how FSD classifies a disengagement to the mothership when the human input differs from the FSD choice. I want my choice of how to disengage to reflect the level of danger.

I suspect that training a NN requires good data. How does the mothership reporting program differentiate between an error FSD made that leads to danger (very bad) and a case where the human simply changed their mind (a disengagement, but no error)? There are also FSD choices that make the human uncomfortable but are not yet unsafe (in between).

As a human who wants FSD to get better and wants to send good data to the mothership, is there any difference in the severity of disengagement between hitting the brakes and pushing up on the drive selector? Does the mothership want me to report a total failure one way, maybe with the brake pedal, and perhaps not count disengagements made by moving the drive selector up to cancel FSD?
What about when my steering input differs, FSD disengages, and TACC is still in control: is that ranked at the same level of disengagement as the other types? If FSD is downright dangerous, should I hit the brakes rather than just disengage with the steering wheel?

This leads to other questions, like whether I should allow FSD a little or a lot of latitude when it seems to be making a questionable but not yet dangerous choice. My standards are pretty high: when it crosses the double yellow around a blind turn, I want to disengage immediately and have the mothership know this is a failure, even at 20 mph. However, crossing the double yellow is required every day to get around landslide hazards on my narrow two-lane road. It is interesting to test how good each iteration is at making that choice. I want my car to try hard to stay in the lane, but to exit it safely when required.

I also wonder how human accelerator-pedal input is integrated into mothership reporting, if at all. I use the accelerator when I have confidence in a turn and cannot afford to let FSD hesitate in front of fast-moving traffic.
 
...like whether I should allow FSD a little or a lot of latitude...

In theory, the machine learns from humans, so I don't see how it can learn if you are hesitant to disengage each time it doesn't drive as a normal human would.

So by giving it latitude, you are reinforcing behaviors that are not human-like--you are teaching it to drive as a drunkard machine does!
 
...How does the mothership reporting program differentiate between an error FSD made that leads to danger (very bad) and a case where the human simply changed their mind?... is there any difference in the severity of disengagement between hitting the brakes and pushing up on the drive selector?...

Tesla has about 250 preset conditions they call triggers; if one applies to your driving, that event is automatically sent to the mothership for labeling and training purposes. Not every disengagement will "trigger" this. But based on my network traffic, if you press the AP snapshot button, it will send that event to the mothership for evaluation. There's no guarantee that what you think is worthy of reporting is actually acted upon, and there's no transparency from Tesla into how they evaluate snapshots. We've seen that they seem to focus on narrow situations for each point release. Your snapshot could be saved for future labeling once they decide to improve the situation you reported. The car has enough storage for 5 snapshots; in my experience they are sent to Tesla the same day.
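For what it's worth, here's a rough Python sketch of how a trigger/snapshot flow like that could behave, based only on what's described above. None of these names, conditions, or thresholds are real Tesla code; the ~250 triggers are stood in for by two toy examples, and the 5-snapshot buffer and same-day upload are just my observations encoded as assumptions.

```python
# Hypothetical sketch only -- Tesla's real trigger/snapshot pipeline is not public.
# It mirrors the behavior described above: a fixed set of preset trigger conditions,
# a manual snapshot button that always queues a clip, a small on-car buffer,
# and a periodic upload to the mothership.
from collections import deque
from dataclasses import dataclass

@dataclass
class DriveEvent:
    kind: str          # e.g. "disengage_brake", "disengage_stalk", "manual_snapshot"
    speed_mph: float
    clip_id: str

# Toy stand-ins for the ~250 preset conditions ("triggers") mentioned above.
TRIGGER_CONDITIONS = {
    "hard_brake_disengage": lambda e: e.kind == "disengage_brake" and e.speed_mph > 25,
    "manual_snapshot":      lambda e: e.kind == "manual_snapshot",
    # ...the real list would be far longer and is entirely unknown to us
}

SNAPSHOT_BUFFER = deque(maxlen=5)   # the car reportedly holds about 5 snapshots

def record_event(event: DriveEvent) -> bool:
    """Queue a snapshot if any trigger matches; otherwise the event is dropped."""
    if any(condition(event) for condition in TRIGGER_CONDITIONS.values()):
        SNAPSHOT_BUFFER.append(event)
        return True
    return False    # a disengagement that matches no trigger never leaves the car

def upload_pending() -> None:
    """Send queued snapshots to the mothership (e.g. over Wi-Fi, same day)."""
    while SNAPSHOT_BUFFER:
        event = SNAPSHOT_BUFFER.popleft()
        print(f"uploading clip {event.clip_id} ({event.kind})")

# A low-speed stalk-up disengagement matches nothing and is silently dropped,
# while pressing the snapshot button always queues a clip for upload.
record_event(DriveEvent("disengage_stalk", 18.0, "clip-001"))
record_event(DriveEvent("manual_snapshot", 18.0, "clip-002"))
upload_pending()
```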

Regarding how to disengage, I think the best rubric, in order of priority, is (sketched in code after the list):

1) disengage to ensure safety of self and others
2) disengage to ensure other cars aren't confused/annoyed by the car's driving behavior
3) disengage to ensure passenger comfort
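To make those three tiers concrete, here's a trivial, purely hypothetical Python sketch of the priority ordering above; the enum names and the idea of a driver "tolerance" threshold are mine, not anything Tesla exposes.

```python
# Hypothetical encoding of the three-tier disengagement rubric above.
from enum import IntEnum

class DisengageReason(IntEnum):
    SAFETY = 1    # danger to self or others -- highest priority, always disengage
    TRAFFIC = 2   # confusing or annoying other drivers
    COMFORT = 3   # merely uncomfortable for passengers -- lowest priority

def should_disengage(reason: DisengageReason, tolerance: DisengageReason) -> bool:
    """Disengage when the reason ranks at or above the driver's chosen threshold
    (a lower number means higher priority)."""
    return reason <= tolerance

# A driver who only acts on traffic-level issues still disengages for safety,
# but lets mere comfort complaints slide:
print(should_disengage(DisengageReason.SAFETY, DisengageReason.TRAFFIC))   # True
print(should_disengage(DisengageReason.COMFORT, DisengageReason.TRAFFIC))  # False
```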
 
...the machine learns from humans, so I don't see how it can learn if you are hesitant to disengage each time it doesn't drive as a normal human would... By giving it latitude, you are reinforcing behaviors that are not human-like...
Is that what you know from your experience with NNs? I agree that what you say is likely true-ish; I was just hoping for any information on how Tesla wants the NN trained.

However, it clearly needs some latitude, otherwise learning is hindered.

For instance, you are at a stop sign creeping forward to turn right; the wheel clocks around 90 degrees to the right, backtracks 45 degrees, then continues to about 75 degrees of total rotation to the right. It completed the turn, everything was safe, but it didn't drive as a human would. Is this a failure that should be logged?

How about another case: You are approaching a blind corner on a 35 mph road that you usually travel at 40 mph. You are traveling 35 mph, and the car loses confidence and slows to 32 then 30 mph. It has done something a human wouldn't, is it time to disengage?

Another case: you are waiting at a stoplight to go straight, which turned green 2 seconds ago. The truck behind you is impatiently rolling towards your bumper, but FSD hasn't moved forward yet. Do you intervene to accelerate immediately, mimicking the impatient and emotional human driver who may accelerate just to get away from the large truck rolling at your bumper?

Humans drive with many habits that are poor for a computer, so I don't totally agree that we need to disengage 100% of the time as soon as FSD does something we wouldn't.

I drive and babysit FSD with my hands right on the wheel 100% of the time, letting it slide through my grip or be grabbed as needed. When it makes questionable but not dangerous choices, like delaying at a stop light, turning right awkwardly, spinning the wheel oddly at slow speeds, or slowing down in places where it might have lost confidence, I give it a little latitude to see if the errant behavior is corrected. If it persists, it's time to report it through the on-screen button and email.

However, maybe this ain't the best attitude? Maybe it is truly better to disengage 100% of the time, the first time the car veers off course by 12 inches. This seems like too high a bar to expect without a Neuralink terminal.
 
...For instance, you are at a stop sign creeping forward to turn right; the wheel clocks around 90 degrees to the right, backtracks 45 degrees, then continues to about 75 degrees of total rotation to the right. It completed the turn, everything was safe, but it didn't drive as a human would. Is this a failure that should be logged?...

Sometimes Autosteer goes crazy and spins the steering wheel in both directions; if I were holding a yoke, I fear it would whack my hands!

That's bad steering behavior, and it should be corrected so the car steers as a human does.


How about another case: You are approaching a blind corner on a 35 mph road that you usually travel at 40 mph. You are traveling 35 mph, and the car loses confidence and slows to 32 then 30 mph. It has done something a human wouldn't, is it time to disengage?

In this scenario, manually increasing speed does not "disengage" the autosteer.

Even on a straight, clear road, the car can slow down for no reason I can think of, and I just manually increase the speed without "disengaging" Autopilot.

I guess what you mean is that if you disengage, it would create a trigger event so the AI can learn.

In theory, the system runs in shadow mode too. It calculates that the safe speed for a curve is 30 mph and compares that with the speed humans actually drive on that particular curve. So even if there's no disengagement, if the vast majority of humans take that curve at 40 mph, it would eventually learn to adopt that speed instead of its originally calculated 30 mph.

So in theory, it can learn even when there's no disengagement.
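Here's a speculative Python sketch of that shadow-mode idea: compare the calculated safe speed for a curve against what the fleet actually drives there and drift toward the human consensus. The function name, the median aggregation, and the 50/50 blend are all made up for illustration.

```python
# Speculative illustration of shadow-mode speed learning -- not a real Tesla API.
from statistics import median

def adjusted_curve_speed(computed_mph: float, observed_fleet_mph: list[float],
                         blend: float = 0.5) -> float:
    """Blend the planner's calculated safe speed with the median human speed
    observed on the same curve."""
    if not observed_fleet_mph:
        return computed_mph
    human_mph = median(observed_fleet_mph)
    return computed_mph + blend * (human_mph - computed_mph)

# The calculated 30 mph drifts toward the 40 mph the fleet actually drives:
print(adjusted_curve_speed(30.0, [38.0, 40.0, 41.0, 40.0]))   # 35.0
```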



...Another case: you are waiting at a stoplight to go straight, which turned green 2 seconds ago. The truck behind you is impatiently rolling towards your bumper, but FSD hasn't moved forward yet. Do you intervene to accelerate immediately, mimicking the impatient and emotional human driver who may accelerate just to get away from the large truck rolling at your bumper?...

I would. There's nothing wrong with me driving as a human does. Just because there's FSD, I should not become a toddler driver.

...Humans drive with many habits that are poor for a computer, so I don't totally agree that we need to disengage 100% of the time as soon as FSD does something we wouldn't...
Expectations for humans and machines are different.

We already know humans make mistakes; that's why we pay lots of money for machines to solve that problem, not to exacerbate it.


...This seems like too high a bar to expect without a Neuralink terminal....

We are talking about theory, but the reality might be different.

If Tesla gave us feedback, we would know how to give better feedback as testers. As it is, our reports go into a black hole, and we have no idea if they even read them at all.
 