
Tesla Software 2019.16.3

Right, but the steering wheel actuators themselves aren't strong enough to do that without help from the steering motors under the car. They aren't very strong motors, weaker than a person. Elon has said it will always be this way while a steering wheel is in the car.
As far as I know there's only one steering motor and it's attached directly to the rack. It might actually be two motors in one housing for redundancy. There are no "steering wheel actuators". It wouldn't make much sense to have a motor that drives a torque sensor that drives another motor.
It seems like what you’re talking about is the amount of torque on the steering wheel it takes to cause the software to switch over to manual control. That may be hard coded but I don’t think anyone here knows for sure.
 
That first sentence stings! :p

Whatever size of fleet "Shadow Mode" operates on (I've seen it suggested that it requires HW3, which would have been internal to Tesla until the last month or so), I think this brings up the fundamental limitation of the concept. Without expert, external guidance it is extremely difficult to accurately assess achievement of "safe driving" goals.

It's a chicken-and-egg issue, in a sense, when you automate the learning: how does the system validate that the drivers it is watching are doing it correctly, or doing it the only correct way?

<edit> Put another way, how does the system discern a false positive when the very goal of the feature it is testing assumes the driver makes mistakes, and the feature exists to correct those mistakes?
This seems super easy to test. You run the system in shadow mode. You look at all the times the system wants to take evasive action. You look at how many times the lack of evasive action resulted in an accident. All the rest of the cases are false positives. Obviously some false positives are acceptable.
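To make that counting concrete, here's a minimal sketch in Python. The event schema (vehicle IDs, trigger records, an "accident followed" flag) is entirely my invention; nobody outside Tesla knows the real telemetry format.

```python
# Minimal sketch of the false-positive counting described above.
# All field names and data are hypothetical stand-ins for fleet telemetry.
from dataclasses import dataclass

@dataclass
class ShadowTrigger:
    vehicle_id: str
    timestamp: float          # seconds since epoch
    accident_followed: bool   # did a crash occur shortly after, absent intervention?

def classify(triggers):
    """Split shadow-mode triggers into true positives (a crash actually
    followed) and false positives (the system wanted to act, nothing happened)."""
    tp = [t for t in triggers if t.accident_followed]
    fp = [t for t in triggers if not t.accident_followed]
    return tp, fp

# Toy data standing in for real events:
events = [
    ShadowTrigger("A", 1_000.0, False),
    ShadowTrigger("B", 2_000.0, True),
    ShadowTrigger("C", 3_000.0, False),
]
tp, fp = classify(events)
print(f"{len(tp)} true positives, {len(fp)} false positives "
      f"({len(fp) / len(events):.0%} false-positive rate)")
```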
For all we know Tesla rolled out this update to early access participants and it reduced the accident rate. Now if the system itself causes an accident, as it sounds like it could do, I wonder how liable Tesla is. It would be nice if they provided some transparency on how well ELDA works. They have the data.
 
You look at all the times the system wants to take evasive action. You look at how many times the lack of evasive action resulted in an accident. All the rest of the cases are false positives. Obviously some false positives are acceptable.
"You" who? Using what data and evaluation methodology/process, with what standards?
It would be nice if they provided some transparency on how well ELDA works. They have the data.
They may have a count of how many times ELDA would have been triggered in whatever fleet they may hypothetically have been using, but there is an ocean between that and "how well ELDA works".

<edit> The potential methods for doing so would require massive manual evaluation of the events, assuming you got enough data reported for that, or mind-numbing amounts of accident data from which you could try to infer how correct ELDA might have been. When I say "mind-numbing", you're probably talking years from a very large fleet to have any sort of confidence, given the very wide range of potential conditions and the relatively low accident frequency.
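To put rough numbers on "years", here's a back-of-the-envelope sketch in Python. The per-car mileage, the fraction of accidents ELDA is even relevant to, and the number of events needed for confidence are all illustrative assumptions, not Tesla figures.

```python
# Why low accident frequency stretches evaluation into years. Every constant
# below is an assumption for illustration, not a Tesla figure.
FLEET_SIZE = 500_000          # cars contributing data
MILES_PER_CAR_PER_DAY = 35    # assumed: roughly 12,750 miles/year per car
MILES_PER_ACCIDENT = 150_000  # assumed fleet-wide average
RELEVANT_FRACTION = 0.01      # assumed: 1% of accidents are lane-departure-type
EVENTS_NEEDED = 1_000         # assumed threshold for statistical confidence

accidents_per_day = FLEET_SIZE * MILES_PER_CAR_PER_DAY / MILES_PER_ACCIDENT
relevant_per_day = accidents_per_day * RELEVANT_FRACTION
days = EVENTS_NEEDED / relevant_per_day
print(f"~{accidents_per_day:.0f} accidents/day, ~{relevant_per_day:.1f} relevant/day")
print(f"~{days:.0f} days (~{days / 365:.1f} years) to accumulate {EVENTS_NEEDED} events")
```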

P.S. All "accidents" experienced by AP-equipped Teslas are supposedly manually reviewed by a single person at Tesla. That gives you an idea of how low that number is.
 
"You" who? Using what data and evaluation methodology/process, with what standards?

They may have a count of how many times ELDA would have been triggered but there is an ocean between that and "how well ELDA works".
Well, if it's triggered in shadow mode and then there is an accident, then you know that it detected a hazardous situation.
Tesla has 500k cars on the road and they've been rolling out this feature randomly, so they'll know very quickly whether or not it reduces the accident rate. All they have to do is compare the number of accidents when it is activated in shadow mode to the number of accidents when it is activated in real life. We're all guinea pigs. Personally I'm going to wait for the results before I upgrade :D
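If the rollout really is randomized, that comparison is a straightforward A/B test. Here's a rough sketch with invented counts, using a crude two-proportion z-test that treats each mile driven as an independent trial:

```python
# Comparing accident rates between shadow-mode (control) and live (treatment)
# groups. All counts below are made up for illustration.
import math

def accident_rate(accidents, miles):
    """Accidents per million miles driven."""
    return accidents / miles * 1_000_000

shadow_accidents, shadow_miles = 120, 18_000_000   # control: feature in shadow mode
active_accidents, active_miles = 95, 18_000_000    # treatment: feature live

print(f"shadow: {accident_rate(shadow_accidents, shadow_miles):.2f} "
      f"vs active: {accident_rate(active_accidents, active_miles):.2f} "
      f"accidents per 1M miles")

# Crude two-proportion z-test, treating each mile as an independent trial:
p = (shadow_accidents + active_accidents) / (shadow_miles + active_miles)
se = math.sqrt(p * (1 - p) * (1 / shadow_miles + 1 / active_miles))
z = (shadow_accidents / shadow_miles - active_accidents / active_miles) / se
print(f"z = {z:.2f}  (|z| > 1.96 is roughly significant at the 95% level)")
```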
 
I haven't had ELDA activate for me yet, so I have no idea how dangerous it is. These reports of false positives are tempting me to try to find a safe way to activate it, just to get an idea of how hard it is to override. But I have to say that I don't see why Tesla is making it an essentially permanent option by requiring it to be deactivated on every drive.
 
<edit> The potential methods for doing so would require massive manual evaluation of the events, assuming you got enough data reported for that, or mind numbing amounts of accident data from which you could try infer how correct ELDA might have been. When I say "mind numbing" you're probably talking years from a very large fleet to have any sort of confidence, given the very wide range of potential conditions and relatively low accident frequency.
You just need a good way to detect accidents reliably. There's so much data that the car is collecting that it certainly seems possible to do. The numbers I've seen are that people get into an accident about every 150k miles on average. So with a fleet of 500k cars there are over 100 accidents a day. How many of those could be prevented by ELDA? I have no idea, but if it's more than 1% it doesn't seem like it would take very long to find out.
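Spelling out that arithmetic (the daily mileage per car is my own assumption; the fleet size and 150k-mile figure are from the post above):

```python
# Back-of-the-envelope for the "over 100 accidents a day" estimate.
FLEET_SIZE = 500_000            # from the post
MILES_PER_ACCIDENT = 150_000    # from the post
MILES_PER_CAR_PER_DAY = 35      # assumed: roughly 12,750 miles/year per car
ELDA_PREVENTABLE = 0.01         # the post's "more than 1%" scenario

accidents_per_day = FLEET_SIZE * MILES_PER_CAR_PER_DAY / MILES_PER_ACCIDENT
preventable_per_day = accidents_per_day * ELDA_PREVENTABLE
print(f"~{accidents_per_day:.0f} fleet accidents/day")
print(f"~{preventable_per_day:.1f} potentially ELDA-preventable per day, "
      f"~{preventable_per_day * 30:.0f} per month")
# -> ~117 accidents/day, ~1.2 preventable/day, ~35/month under these assumptions
```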
 
Well, if it's triggered in shadow mode and then there is an accident, then you know that it detected a hazardous situation.
That's a very small number, AND you don't actually know if Shadow Mode predicted the correct action that would have avoided the accident. The counterfactual is hard to assess. Fortunately, as per my <edit>, that part would be feasible to assess manually, since Tesla already reviews every accident. Unfortunately, for the same reason, that's a tiny slice of data that would take a lot of time to build up.

Tesla has 500k cars on the road and they've been rolling out this feature randomly, so they'll know very quickly whether or not it reduces the accident rate. All they have to do is compare the number of accidents when it is activated in shadow mode to the number of accidents when it is activated in real life. We're all guinea pigs. Personally I'm going to wait for the results before I upgrade :D
That is:
1) exactly the opposite of the concept of Shadow Mode, and a moral pitfall for what I'd hope are obvious reasons to you? The smiley face at the end means you get that?
2) relying on the fallacy of averages; it can hide a lot of bad stuff and doesn't actually address specific issues... without further unethical trial and error

The whole concept of Shadow Mode relies on the assumption that the drivers are making correct decisions. That's why trying to use it to assess how well incorrect decisions are detected is at odds with it.
 
You just need a good way to detect accidents reliably. There's so much data that the car is collecting that it certainly seems possible to do. The numbers I've seen are that people get into an accident about every 150k miles on average. So with a fleet of 500k cars there are over 100 accidents a day. How many of those could be prevented by ELDA? I have no idea, but if it's more than 1% it doesn't seem like it would take very long to find out.
We're probably seeing tens of thousands of ELDA triggers a day with no accident occurring. Are all of those "shouldn't act here"?
 
That is:
1) exactly the opposite of the concept of Shadow Mode, and a moral pitfall for what I'd hope are obvious reasons to you? The smiley face at the end means you get that?
Yes. I get it. I'm sure it was in an EULA we all clicked on. It does make me a little bit nervous, which is why I will wait to upgrade on this and all future software updates.
I guess my point is that you can know from shadow mode how many false positives the system will create and yet they rolled it out anyway. I’m hoping it’s because they’re sure that it works to avoid accidents. I think now that they’ve rolled it out they will know very quickly whether or not it works.
 
There wasn't a .15; the prior released version was .12.x. .16.x started rolling out on 5/22 according to TeslaFi.com. Not "months ago". I got 16.2 on my HW2.5 Model X yesterday, 6 days after the initial rollout. That seems reasonable to me for a gradual rollout to avoid widespread issues.

Peter+


No... I meant that 16.2 and all other updates get beta tested by thousands, weeks to months before they are officially "rolled out".
 
We're probably seeing tens of thousands of ELDA triggers a day with no accident occurring. Are all of those "shouldn't act here"?
It sounds like a huge number of false positives! On the other hand, it hasn't actually caused any accidents yet as far as we know, and it may have prevented some. The damage to people's mental health may be a major downside, though. It sounds like having a car that randomly yanks the steering wheel is very distressing. I think I would find it to be.
 
Because it can take several weeks to get statistically significant comparison data, and confidence to set a new baseline. You’ll get the update when they’re ready for you to get it.

And 16.2 has only been out for a week and a half.

16.2 has been out for months to thousands in the EAP. It was officially rolled out after months of testing. That's how Tesla does it.

Just like Advanced Summon. It has been in test mode for over a month with EAP'ers. You have seen the videos, I'm sure. Now the official rollout of Advanced Summon is a long way down the road.
 
I haven’t had EDLA activate for me yet, so i have no idea how dangerous it is. These reports of false positives is tempting me to try to find a safe way to activate it just to get an idea of how hard it is to override. But I have to say that I don’t see whyTesla is making it an essentially permanent option by requiring it to be deactivated on every drive.
I'd say it isn't any harder to override than AP. It's just unexpected, AND it could act in concert with your input rather than against it, so overcompensation is a concern.

Mostly it is annoying and unsettling; it undermines my confidence in the car to act/react as I direct it to. Maybe I'd say it's likely not an imminent danger, but it's ultimately less safe; insidiously, my guess is the outcome is the inverse of how appropriately the driver is driving.

Also highly dependent on driving environment.
 
ELDA is part of AP. If you don't have AP you don't have ELDA. I drove all over the road today, crossing solid lines and double lines, and got no warning, nor did my car try to take control as others have said it would.

Those here saying that Tesla rolled this out because they care so much about safety are deluding themselves. They rolled this out to make AP more attractive, to sell it to people like me who don't have it, and to make new cars look safer to potential customers. It's about money, not safety.