
How does our driving help refine FSD?

I'm trying to understand how the Beta test program works. I understand the snapshot communication, but can someone explain how the development team "learns" from the driving experiences of all the beta testers out there? Are they notified every time we take over and disengage FSD, with transmission of all the data surrounding the event? What are they tracking and using to further refine the software?
 

I don't actually know how it works, but I think the system still runs even when FSD is not engaged. It runs in shadow mode and compares its decisions with what humans actually do.

For example, suppose it sees a traffic light that's turned off at a freeway entrance ramp during off-hours and weekends. In shadow mode it would run a model that says the car should stop at any traffic light that isn't lit. It then compares with humans, notices that humans don't stop in this particular scenario, and alters its model to imitate the humans' behavior.

If FSD is on, the principle is the same, but the shadow model is now in practice: the car actually stops for traffic lights that are off at the freeway entrance ramp. Human drivers override it and prevent it from stopping the car. The AI then notices there's a discrepancy between human and machine behavior at this particular GPS location and adjusts to imitate the human behavior in a future update.
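To make that concrete, here's a toy sketch of how a shadow-mode comparison could work. To be clear, the names, thresholds, and structure below are my own assumptions for illustration, not anything Tesla has published:

```python
# Hypothetical "shadow mode" sketch: the planner's proposed action is
# compared against what the human driver actually did, and big
# disagreements are logged for later analysis. All names and tolerances
# are invented, not Tesla's actual implementation.
from dataclasses import dataclass

@dataclass
class Action:
    braking: float   # 0.0 (none) to 1.0 (full brake)
    steering: float  # steering-wheel angle in degrees

def shadow_compare(planned: Action, human: Action,
                   brake_tol: float = 0.2, steer_tol: float = 15.0) -> bool:
    """Return True when planner and human disagree enough to log a snapshot."""
    return (abs(planned.braking - human.braking) > brake_tol or
            abs(planned.steering - human.steering) > steer_tol)

# Example: planner wants to stop for a dark traffic light, human keeps going.
planned = Action(braking=0.8, steering=0.0)
human = Action(braking=0.0, steering=0.0)
if shadow_compare(planned, human):
    print("disagreement -> upload snapshot for fleet learning")
```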
 
Interesting... As I leave my neighborhood there are two STOP signs in very close proximity. The second is on the other side of a railroad track. FSD stops at the first one, then crosses the track and blows right through the second one without stopping. I've modeled the correct behavior multiple times with it turned off, as well as sending multiple snapshots (with emails) when it's turned on. It's still happening with 10.8, which is what made me ask the question.
 

AI needs many, many samples, I don't know how many, maybe at least hundreds of thousands, before a pattern becomes significant. So I wouldn't hold my breath thinking that my overriding actions will affect the AI anytime soon.
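If that's right, one driver's overrides at one spot wouldn't move the model; a plausible backend would aggregate events per location and only act once enough independent samples pile up. Something like this made-up logic (the threshold is invented):

```python
# Illustrative aggregation sketch: a scenario is only queued for retraining
# once the count of disagreement events crosses a significance threshold.
from collections import Counter

MIN_SAMPLES = 100_000  # made-up threshold, purely for illustration

override_counts: Counter = Counter()

def record_override(location_id: str) -> None:
    override_counts[location_id] += 1
    if override_counts[location_id] >= MIN_SAMPLES:
        print(f"{location_id}: enough samples, queue scenario for retraining")
```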

I think most stop signs are GPS-coded, not vision-coded. That means if you put a shroud over a stop sign, your car still stops without ever seeing it.

I think Tesla still relies heavily on GPS, so when the car runs a stop sign, it likely means the GPS coding needs to be updated.
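If that guess is right, the stop decision might look roughly like this hypothetical fusion of a map prior with vision (the locations and logic are invented for illustration, nobody outside Tesla knows the real rule):

```python
# Sketch of combining a GPS/map prior with vision for stop-sign handling.
MAP_STOP_SIGNS = {("35.0001N", "80.0002W")}  # hypothetical coded locations

def should_stop(gps: tuple, vision_sees_sign: bool) -> bool:
    # Stop if EITHER the map says there's a sign OR the camera detects one.
    # If the second sign after the railroad track is missing from the map
    # AND vision misses it, the car blows right through it.
    return gps in MAP_STOP_SIGNS or vision_sees_sign
```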

In addition, the theory might be very different from reality. Tesla predicted in 2013 that driverless cars would be a reality in 2016:

 
Once Full Self-Driving is "feature complete," Tesla anticipates that there are two more levels of autonomy beyond that: a level where Tesla is confident that Full Self-Driving can be operated without supervision (which will open up the possibility of a robotaxi), and the ultimate level where regulators are also confident and grant it full approval.

With FSD beta, Tesla is now "Feature complete".

Prior to the release, Tesla was partially complete with:

Navigate on Autopilot
Auto Lane Change
Autopark
Summon
Full Self-Driving Computer
Traffic Light and Stop Sign Control

and it was missing:

Autosteer on city streets

"Feature complete" doesn't mean it is doing well with its features such as summon which can still scrape your rims and your car.

FSD beta has now fulfilled "feature complete" with the addition of "Autosteer on city streets," but that doesn't mean it performs that feature well, because collisions can happen if you don't override it in time.

The fundamental requirement of self-driving is that it should avoid colliding with obstacles. It seems Tesla doesn't treat that as an autonomous feature: all these years, Tesla has been pursuing "feature complete" while leaving collision avoidance to the human, relying on the driver to override the system and avoid obstacles rather than on the automation itself.
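For what it's worth, a bare-bones collision-avoidance layer on top of a planner could look like this toy sketch. The time-to-collision threshold and structure are my assumptions, not Tesla's design:

```python
# Minimal collision-avoidance sketch: compute time-to-collision (TTC) to the
# nearest obstacle and override the planned action with full braking when
# TTC drops below a safety margin.
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    if closing_speed_mps <= 0:
        return float("inf")  # not closing in on the obstacle
    return distance_m / closing_speed_mps

def safe_brake(planned_brake: float, distance_m: float,
               closing_speed_mps: float, ttc_limit_s: float = 2.0) -> float:
    ttc = time_to_collision(distance_m, closing_speed_mps)
    return 1.0 if ttc < ttc_limit_s else planned_brake  # 1.0 = full brake

print(safe_brake(planned_brake=0.0, distance_m=15.0, closing_speed_mps=10.0))
# -> 1.0: obstacle only 1.5 s away, emergency braking overrides the planner
```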
 
I suggest reading up on the basics of neural network training and watching Tesla AI Day.

Some of the above answers are partly inaccurate.
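For anyone who wants a concrete starting point on the training side, here's a toy imitation-learning loop in PyTorch: a small network is trained to match recorded human actions. Purely illustrative with fake data; the real FSD stack is obviously far more complex than this:

```python
# Toy imitation learning: map perception features to a control output and
# train the network to reproduce what human drivers did in those frames.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fake dataset: 1024 frames of perception features -> human (brake, steer).
features = torch.randn(1024, 32)
human_actions = torch.randn(1024, 2)

for epoch in range(10):
    optimizer.zero_grad()
    predicted = model(features)
    loss = loss_fn(predicted, human_actions)  # penalize deviating from humans
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```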