Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Model S and X AutoPilot Fleet Learning during Emergencies


Cyclone

Cyclonic Member ((.oO))
Jan 12, 2015
Charlotte, NC
I do not have insider knowledge about this, but I wonder if we could make some educated guesses. How does Tesla's "Fleet Learning" process handle areas in a State of Emergency?

I know fleet learning combs through multiple instances from multiple vehicles and smooths them out over time, but I wonder if Tesla has thought ahead to make that process "read-only" in areas that are under a State of Emergency. South Carolina is in a State of Emergency, and soon I-26 from Charleston to Columbia will be reversed to one-way traffic northbound, regardless of whether you are in the northbound or southbound lanes. While I hope AutoPilot doesn't freak out about going "the wrong way", I would also hope the mothership doesn't "learn" that it is acceptable.

My personal thought is that AutoPilot tiles in an area under a State of Emergency (not just a hurricane evacuation) should be "locked" from updating from the fleet while the State of Emergency is in effect. Whether Tesla has done so, I don't know.

What are your thoughts?
 
IMO the way the forums have interpreted how Tesla will do fleet learning is wrong. Either Tesla misled everyone, or people just escalated what they heard until it became the accepted story.

Again, this is all IMO. There is no "automatic" fleet learning. The data is brought back to the Tesla engineers, who then decide how and what to train to make the algorithms better. They can hand pick a subset of data, or they can use it all. They should be able to automatically screen for anomalies and to manually pick them out of the dataset.

I did training/testing/simulation/machine learning/etc. for about a decade; management would often say stupid things like "hey, let's let the operator add a <thing> into the dataset, and the algorithms will retrain themselves". Each time our team heard that, we cringed. You can't predict how the operator will add said <thing> into the dataset, and you can't automatically guarantee that it won't break something. It takes days/weeks/months of testing on the aggregate dataset to make sure your newly trained model works better than your old model. But then you're only 15% of the way there. You then need to start breaking the data up into subsets and looking at the false alarm rates of each sub-category (e.g. did your newly trained AP do better on asphalt, but worse on gravel? Is that an acceptable loss? Is any loss acceptable?), and that takes another days/weeks/months.
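To make the per-subset check concrete, here's a toy sketch of the kind of regression test I mean: a new model has to be compared to the old one on each sub-category, not just on the aggregate. All the names and data here are invented for illustration; this is nothing like Tesla's actual tooling.

```python
def false_alarm_rate(model, samples):
    """Fraction of samples where the model fires on a non-event."""
    false_alarms = sum(1 for s in samples if model(s["input"]) and not s["label"])
    return false_alarms / len(samples)

def regression_check(old_model, new_model, dataset, max_loss=0.0):
    """Compare old vs. new model per sub-category (e.g. asphalt vs. gravel).

    Returns {category: (old_rate, new_rate, passed)} where `passed` means
    the new model did not regress by more than `max_loss` on that subset.
    """
    results = {}
    for cat in {s["category"] for s in dataset}:
        subset = [s for s in dataset if s["category"] == cat]
        old_fa = false_alarm_rate(old_model, subset)
        new_fa = false_alarm_rate(new_model, subset)
        results[cat] = (old_fa, new_fa, new_fa - old_fa <= max_loss)
    return results

# Toy data: neither sample is a real event (label False). The "new" model
# wrongly fires on gravel, so it passes on asphalt but regresses on gravel.
data = [
    {"input": "asphalt", "category": "asphalt", "label": False},
    {"input": "gravel",  "category": "gravel",  "label": False},
]
old = lambda x: False
new = lambda x: x == "gravel"
report = regression_check(old, new, data)
# report["asphalt"] passes, report["gravel"] fails
```

The point of the toy is that an aggregate metric would hide exactly the asphalt-vs-gravel trade-off I described: you only see it once you slice by category.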

I think it's all hype, but like I said, I could be dead wrong; maybe they're a million times smarter than me. But IMO this whole "fleet learning" will be done at a desk, with an engineer (or a group of them) sitting and sifting through the data. And if they see an anomaly, they can start digging down to see what happened in South Carolina on October 5th, 2016.

P.S. Stay safe.
 
@Max* , I agree with you that the cars are sending data to the mothership for review and that we likely have a poor idea of how it is handled. However, absent strong exceptions (like an AEB event), my understanding was that statistical modeling automatically maps the lanes of travel (remember that detailed map Elon showed in a presentation) and normalizes some aspects, to then either incorporate them or flag them for human review.

If there is a human review, that easily addresses cases like South Carolina's coastal evacuation. But let's assume there is some automation going on (maybe just on lower "severity" things). If enough AutoPilot-capable cars participate in the evacuation, all the curve data for "on ramps" that are actually "off ramps", and the lane data for 8.1's eventual Nav-assisted highway exiting, could be statistically strong enough to have a noticeable effect on the normalized data. It would eventually get wiped out as data from normal behavior overrides the "false" data, but I would hope we never even get to that point.

So if there is automation in place (and I'm not saying there definitively is), I would hope it includes some logic to account for moments like these.

I just think back to the 7.0 reports from people in California about how the car took weird actions, as if it had some predetermined bias to swerve at certain coordinates and the like. But who knows, that could also have been bad anecdotal evidence from our limited understanding.

I appreciate your thoughts on how Tesla might be doing "fleet learning". :)
 
It's not that uncommon, in the UK, for an accident which blocks the highway to cause all the backed-up traffic to be turned around, driven back the wrong way up that carriageway, and then made to U-turn and exit at the next junction.

Something else an unattended / automated process would have to "ignore".
 
I agree completely with Max: the car sends data back to Tesla, which is then analyzed offline and used to refine the AP algorithms. Those refined algorithms are then tested against the old ones for evidence that they're actually better.

This is my speculation, but I believe that the car may be running two AP algorithms in parallel. Algorithm "A" is the active one and is controlling the car. It's a mature, tested algorithm. Algorithm "B" is a new algorithm that's running in parallel, but not controlling the car. Now, the following event happens:

The driver sees that AP is doing something it shouldn't, and takes over steering. Data is now sent back to Tesla: What the driver did (assumed to be correct), what algorithm A wanted to do (assumed to be wrong), and what algorithm B wanted to do. If repeated data shows that algorithm B's desired action more closely matched the driver's action, then algorithm B's modifications may be issued into the next "A" algorithm release.
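The comparison I'm describing could look something like this toy sketch: on each takeover event, log the driver's action alongside what A and B each wanted to do, then promote B only if it tracks the driver more closely across many events. All names, numbers, and thresholds here are invented; this is just my speculation made concrete.

```python
def mean_steering_error(events, key):
    """Mean absolute difference from the driver's steering action."""
    return sum(abs(e[key] - e["driver"]) for e in events) / len(events)

def promote_b(events, margin=0.0):
    """Promote shadow algorithm B if its error beats A's by more than `margin`."""
    return mean_steering_error(events, "algo_a") - mean_steering_error(events, "algo_b") > margin

# Steering angles (degrees) at three driver-takeover events.
# The driver's action is assumed correct; A was fighting the driver,
# B (running in shadow, not controlling the car) tracked the driver closely.
events = [
    {"driver": 5.0,  "algo_a": -2.0, "algo_b": 4.5},
    {"driver": 0.0,  "algo_a": 3.0,  "algo_b": 0.5},
    {"driver": -4.0, "algo_a": 1.0,  "algo_b": -3.0},
]
# here B's mean error is far below A's, so B would be promoted
```

The key design point is that B accumulates evidence without ever touching the controls, so a bad B costs nothing but logging bandwidth.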

I also believe that not everyone may be running the same "A" and "B" algorithms, even on the same firmware version. As an example, here in Texas most major cities use light-colored concrete for the freeways. To test different light-concrete algorithms, Tesla could push those here, whereas other parts of the country might get test algorithms for darker asphalt, or for hills, etc. This would explain why some people say firmware version X is the best AP they've seen, while others say that same version is worse.

I feel that the late 7.1 builds of AP had much better control than the new 8.0 AP, and it could be because down here in Texas we may have been running light-concrete optimized algorithms, which have been pulled out for the wide 8.0 release. They'll eventually make their way back in.
 