Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
FSD always wants to get to the destination using a shortest-time approach. I think it should use the easiest path if Tesla wants to do robotaxi.
Since yesterday I have been reducing the set speed below the speed of the adjacent lanes to prevent FSD from jumping out of the lane. I will continue to do this to see if it has any effect.
I have been doing this on freeways: scroll the speed down to match the lane speed, which minimizes auto lane changes. I usually move to exit lanes manually when I want to, and scroll the speed down to help.
On passing slow trucks on I-5, I usually move to the passing lane well before the slow down to maintain the passing speed.
I always set the speed offset to zero. This is the way to minimize auto lane change. I don't use the "minimize auto lane change" setting.
 
FSD always wants to get to the destination using a shortest-time approach. I think it should use the easiest path if Tesla wants to do robotaxi.
Right, but FSD and robotaxi are two different things, and it isn't even really FSD deciding on the path. I do think that if Tesla offers a robotaxi service, it will use a different navigation system than what we currently have, one that routes around certain situations (just as Waymo does, or at least used to, by avoiding unprotected left turns).

But it is still probably best to make FSD handle as many driving situations as possible, even if the robotaxi service tries to avoid the difficult ones.
 
Almost like labeling, "this batch of human reactions to situations all fit in this set of outputs" but with curation?
At risk of triggering certain elements here, it would be like giving the control software the current visualization that we see in the car. It is a distillation of the essential information that's needed to make control decisions. Not literally the current visualization, but something along those lines. Also not an occupancy network, but something that provides a normalized understanding of the driving environment. The current visualization is just my best point of reference. It may be something completely unintelligible to people.

That distillation could come from cameras (at various locations), LIDARs, ultrasonic sensors, drones, other cars, traffic cameras, whatever. However the sensor information comes together, the sensor system spits out that distillation, and that's what the control software works from. Given that, a simulator would be able to produce the same information without having to perform realistic rendering. It then falls to separate software to figure out how to collect all the information coming from the various sensors, distilling it and presenting it to the control network. Ultimately, an all-LIDAR vehicle produces the same outputs to the control logic as an all-camera vehicle would, but the means of producing those outputs might be wildly different. They'd be car-specific, while the control logic could be created once and used by everyone.

The machine learning guys tell me that such an approach hobbles the overall efficiency of the neural network because there can be no back-propagation to the pixel level. I'm suggesting this approach because of the larger problem of training many different sensor sets while trying to produce the same control outputs.

My approach has the problem that there may simply not be enough information coming from the sensors to satisfy the control system's needs. For example, a single one-pixel camera facing forward doesn't provide enough information to make control decisions. So even if such a distillation system were created, it might all fall apart when sensor solutions come along that blow the pants off existing stuff. At that point, you'd want to completely retrain the control system on the outputs of the new sensors because they can provide a much better distillation - and the currently-trained control system wouldn't take full advantage of it.
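To make the idea concrete, here's a minimal sketch of the interface I'm describing (all names are mine, purely illustrative): each sensor backend, camera- or LIDAR-based, reduces its raw data to the same normalized scene representation, and the car-agnostic control logic only ever sees that representation, never pixels or point clouds.

```python
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class SceneObject:
    """One normalized object in the driving scene, regardless of sensor origin."""
    kind: str                 # e.g. "car", "pedestrian", "lane_line"
    position_m: tuple         # (x, y) in the vehicle frame, meters, x = forward
    velocity_mps: tuple       # (vx, vy) in meters per second


@dataclass
class SceneDistillation:
    """The 'visualization-like' summary that the control logic consumes."""
    objects: List[SceneObject]


class SensorBackend(Protocol):
    """Car-specific: each sensor suite implements its own distillation."""
    def distill(self, raw_data) -> SceneDistillation: ...


class CameraBackend:
    def distill(self, raw_data) -> SceneDistillation:
        # Camera-specific perception would run here; stubbed out for the sketch.
        return SceneDistillation(objects=raw_data)


class LidarBackend:
    def distill(self, raw_data) -> SceneDistillation:
        # LIDAR-specific processing; different internals, same output type.
        return SceneDistillation(objects=raw_data)


def control_decision(scene: SceneDistillation) -> str:
    """Shared control logic: it works only from the distillation."""
    lead_vehicles = [o for o in scene.objects
                     if o.kind == "car" and o.position_m[0] > 0]
    return "brake" if lead_vehicles else "cruise"
```

The payoff is that a simulator could feed `SceneDistillation` objects straight into `control_decision` without rendering anything, and an all-LIDAR car and an all-camera car would share the same control logic - at the cost, as noted above, of foreclosing end-to-end back-propagation through the sensor stack.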
 
Twice recently FSD just sat at an intersection with a stop sign and never moved. It waited over a minute. The only things the two spots had in common were low-visibility corners (i.e., fencing), no shoulder, and fairly fast crossing traffic. I wonder if FSD knew that by creeping it would end up out in the roadway, affecting traffic? I'll have to go back and see if the behavior happens again.
 
Software -> Service -> Infotainment -> Software -> Autopilot Data Bank
How do you get this? I have nothing there under Software on my 2022 M3LR.

 
No. Just sat at the 2 intersections. After a minute plus I finally used the accelerator pedal both times. Odd. V12.3.4.
That's consistent with poor performance at stop signs. It apparently gets a weak "go" signal when there are no other cars around, as if it saw very few examples of empty intersections. So it learned to wait for another car to go past first. After all, the network doesn't infer the rules of the road, but the rules of the training data.
 
That's consistent with poor performance at stop signs. It apparently gets a weak "go" signal when there are no other cars around, as if it saw very few examples of empty intersections. So it learned to wait for another car to go past first. After all, the network doesn't infer the rules of the road, but the rules of the training data.
I think it's probably a bug related to the new feature to avoid blocking intersections.
 
I've always thought human reaction time is about 250ms

"Human Reaction Time in Emergency Situations​

October 1, 2021

Reaction Time For Simple Tasks:

Reaction time is defined simply as the time between a stimulus and a response. Human reaction time is sometimes quoted as a constant number: 0.2 seconds. While 0.2 seconds may be the average measured for simple tasks, reaction time is actually more complex.

Reaction Time For Complex Tasks:

For more complex tasks such as emergency braking, human reaction time has been studied and measured as three different phases: the time to perceive or sense a danger or hazard (perception phase), the time to make a response decision (decision phase), and the time to respond (response phase). The response phase (i.e. braking) is further complicated by the physical response (i.e. apply the brakes with the foot) and the system response (i.e. the time the vehicle’s braking system requires to actually apply braking force to the wheels). Under ideal driving conditions, the entire human perception reaction time for braking has been measured to be approximately 1.5 seconds (R. Limpert)."

Note that this is under ideal conditions; it can be worse if you are distracted, tired, drunk, hungry, dehydrated, etc.
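To put that 1.5 s figure in perspective, here's a quick back-of-the-envelope calculation (my own arithmetic, not from the quoted article) of how far a car travels during the perception-reaction phase alone, before any braking force reaches the wheels:

```python
def reaction_distance_m(speed_mph: float, reaction_time_s: float = 1.5) -> float:
    """Distance traveled during perception-reaction time, before braking begins."""
    speed_mps = speed_mph * 0.44704  # convert mph to m/s
    return speed_mps * reaction_time_s


# At 70 mph with the 1.5 s ideal-conditions figure, the car covers
# roughly 47 m (about 154 ft) before the brakes even start to bite.
print(round(reaction_distance_m(70), 1))
```

That's nearly three-quarters of a football field of "dead" travel before deceleration even starts, which is why the distracted/tired/drunk caveat matters so much.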
 
Remind me how to get there please. I did that once last year…


Enable Service Mode Via Vehicle Touchscreen

  1. On the vehicle touchscreen, touch CONTROLS (vehicle icon) > SOFTWARE.
  2. Touch and hold the large word "MODEL" for 2 seconds, and then release.
  3. Use the on screen keyboard to type "service" into the access code field, and then touch OK.
    Note
    The touchscreen is overlaid with the words "SERVICE MODE" in red. Newer firmware versions have a red border around the edges of the touchscreen UI.

They even have a video on how to use it: Service and diagnostic information for independent businesses and individuals involved in the professional maintenance and repair of Tesla vehicles.
 
One scenario where it consistently tailgates is when another car changes into your lane, ending up just a short distance in front of you. FSD will not slow down to create distance; that short distance seemingly becomes its set following distance. I haven't let it continue long enough to see what happens if the car in front hits the brakes.
I don't have FSD, but I notice the same with AP. A similar thing happens when the vehicle ahead slows down. It lets the gap go down, but when the vehicle speeds up, it takes its merry time to increase the speed.