I am on HW3 and just got 16.3 today
Same here, got it 15 minutes ago.
> Right, but the steering wheel actuators themselves aren't strong enough to do that without help from the steering motors under the car. The steering wheel actuators themselves aren't very strong motors, less than the strength of a person. Elon has said it will always be this way while a steering wheel is in the car.

As far as I know there's only one steering motor, and it's attached directly to the rack. It might actually be two motors in one housing for redundancy. There are no "steering wheel actuators". It wouldn't make much sense to have a motor that drives a torque sensor that drives another motor.
> This seems super easy to test. You run the system in shadow mode. You look at all the times the system wants to take evasive action. You look at how many times the lack of evasive action resulted in an accident. All the rest of the cases are false positives. Obviously some false positives are acceptable.

That first sentence stings!

Whatever size of fleet "Shadow Mode" operates on (I've seen it suggested it requires HW3, which would have been internal to Tesla until the last month or so), I think this brings up the fundamental limitation of the concept. Without expert, external guidance it is extremely difficult to accurately assess achievement of "safe driving" goals.

Chicken-and-egg issue, in a sense, to automate the learning, because how does the system validate that the drivers it is watching are doing it correctly, or doing it the only correct way?

<edit> Put another way, how does the system discern a false positive when the very goal of the feature it is testing assumes the driver makes mistakes, and the feature is to correct the mistake?
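The counting scheme quoted above ("all the rest of the cases are false positives") can be reduced to a toy calculation. Both counts below are invented placeholders, not real fleet data:

```python
# Toy version of the shadow-mode evaluation described in the quote above.
# Both counts are invented placeholders, not real fleet data.
shadow_triggers = 10_000        # times the system *wanted* to take evasive action
accidents_after_no_action = 25  # triggers where inaction was followed by an accident
false_positives = shadow_triggers - accidents_after_no_action
fp_rate = false_positives / shadow_triggers
print(f"{false_positives} false positives ({fp_rate:.2%})")
# → 9975 false positives (99.75%)
```

Which is exactly the skeptic's point: the arithmetic is trivial, but deciding whether each trigger was genuinely unnecessary is the hard part.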
"You" who? Using what data and evaluation methodology/process, with what standards?You look at all the times the system wants to take evasive action. You look at how many times the lack of evasive action resulted in an accident. All the rest of the cases are false positives. Obviously some false positives are acceptable.
They may have how many times ELDA would have been triggered in whatever fleet they may have hypothetically been using, but there is an ocean between that and "how well ELDA works".It would be nice if they provided some transparency on how well ELDA works. They have the data.
omg omg omg I'm getting a software update! I can't tell the version from my phone.
> "You" who? Using what data and evaluation methodology/process, with what standards?

Well, if it's triggered in shadow mode and then there is an accident, then you know that it detected a hazardous situation.
> They may have how many times ELDA would have been triggered but there is an ocean between that and "how well ELDA works".

> <edit> The potential methods for doing so would require massive manual evaluation of the events, assuming you got enough data reported for that, or mind-numbing amounts of accident data from which you could try to infer how correct ELDA might have been. When I say "mind-numbing" you're probably talking years from a very large fleet to have any sort of confidence, given the very wide range of potential conditions and relatively low accident frequency.

You just need a good way to detect accidents reliably. There's so much data that the car is taking in that it certainly seems possible to do. The numbers I've seen are that people get into an accident about every 150k miles on average. So with a fleet of 500k cars there are over 100 accidents a day. How many of those could be prevented by ELDA? I have no idea, but if it's more than 1% it doesn't seem like it would take very long to find out.
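The "over 100 accidents a day" figure checks out as rough arithmetic. The per-car daily mileage below is my own assumption; the other two numbers come from the post itself:

```python
# Back-of-envelope check of the fleet accident estimate.
# fleet_size and miles_per_accident are from the post above;
# miles_per_car_per_day is an assumed average, not a Tesla figure.
fleet_size = 500_000
miles_per_accident = 150_000
miles_per_car_per_day = 40  # assumption

fleet_miles_per_day = fleet_size * miles_per_car_per_day
accidents_per_day = fleet_miles_per_day / miles_per_accident
print(round(accidents_per_day))  # → 133, i.e. "over 100 accidents a day"
```

At 1% preventable, that would be roughly one relevant accident per day across the fleet, which is why the poster argues the signal would show up quickly.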
> Well, if it's triggered in shadow mode and then there is an accident, then you know that it detected a hazardous situation.

That's a very small number, AND you don't actually know if the shadow-mode system predicted the correct action that would have avoided the accident. The counterfactual is hard to assess. Fortunately, on that point, as per my <edit>, that would be feasible to assess manually, as Tesla already assesses every accident. Unfortunately, for the same reason, that's a tiny slice of data that would take a lot of time to build up.
Tesla has 500k cars on the road and they've been rolling out this feature randomly, so they'll know very quickly whether or not it reduces the accident rate. All they have to do is compare the number of accidents when it is activated in shadow mode to the number of accidents when it is activated in real life. We're all guinea pigs. Personally I'm going to wait for the results before I upgrade :)
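The comparison described there, accident rate with the feature active versus shadow-only, could be sketched like this. All counts are invented placeholders, and a real analysis would need exposure matching and significance testing:

```python
# Hypothetical comparison of accident rates between two fleet groups.
# All numbers are invented placeholders for illustration only.

def accidents_per_million_miles(accidents: int, miles: float) -> float:
    """Normalize an accident count by miles driven."""
    return accidents / (miles / 1_000_000)

# Group A: ELDA active; Group B: ELDA running in shadow mode only.
active_rate = accidents_per_million_miles(accidents=60, miles=10_000_000)
shadow_rate = accidents_per_million_miles(accidents=70, miles=10_000_000)

print(f"active: {active_rate:.1f}, shadow: {shadow_rate:.1f} per M miles")
# A lower active-group rate would suggest the feature helps, but with
# counts this small the difference could easily be noise.
```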
> You just need a good way to detect accidents reliably. There's so much data that the car is taking in that it certainly seems possible to do. The numbers I've seen are that people get into an accident about every 150k miles on average. So with a fleet of 500k cars there are over 100 accidents a day. How many of those could be prevented by ELDA? I have no idea, but if it's more than 1% it doesn't seem like it would take very long to find out.

We're probably seeing tens of thousands of ELDA triggers a day with no accident occurring. Are all those "shouldn't act here"?
> That is: 1) exactly the opposite of the concept of Shadow Mode, and a moral pitfall for what I'd hope are obvious reasons to you? The smiley face at the end means you get that?

Yes. I get it. I'm sure it was in an EULA we all clicked on. It does make me a little bit nervous, which is why I will wait to upgrade on this and all future software updates.
There wasn't a .15; the prior released version was .12.x. The .16.x releases started rolling out on 5/22 according to TeslaFi.com, not "months ago". I got 16.2 on my HW2.5 Model X yesterday, six days after the initial rollout. That seems reasonable to me for a gradual rollout to avoid widespread issues.
Peter+
> We're probably seeing tens of thousands of ELDA triggers a day with no accident occurring. Are all those "shouldn't act here"?

It sounds like a huge number of false positives! On the other hand, it hasn't actually caused any accidents yet as far as we know, and it may have prevented some. The damage to people's mental health may be a major downside, though. It sounds like having a car that randomly yanks the steering wheel is very distressing. I think I would find it to be.
Because it can take several weeks to gather statistically significant comparison data and build confidence to set a new baseline. You'll get the update when they're ready for you to get it.
And 16.2 has only been out for a week and a half.
> I haven't had ELDA activate for me yet, so I have no idea how dangerous it is. These reports of false positives are tempting me to try to find a safe way to activate it just to get an idea of how hard it is to override. But I have to say that I don't see why Tesla is making it an essentially permanent option by requiring it to be deactivated on every drive.

I'd say it isn't any harder to override than AP. It's just unexpected, and could happen in concert with you rather than against you, so overcompensation is a concern.
> It sounds like a huge number of false positives!

The percentage of driving errors and missteps that convert into actual collision incidents is very low.