Tesla Software updates - Australia

Real world testing is the final step in Tesla's QA, and they do it in stages, seemingly sometimes also running A/B versions.

That's pretty normal behaviour for rollout. Only a small number of sub-releases go widespread.

Last year (as an SR+/non-FSD owner), when we actually had regular monthly releases, I only received two versions in a month once.
Whereas those on the fast track (typically those with FSD, including most of the YouTubers) would receive three to four versions.
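
(As an aside, staged and A/B rollouts are commonly implemented with deterministic bucketing, roughly along these lines. The hash scheme and the IDs below are invented for illustration; Tesla's actual mechanism isn't public.)

```python
import hashlib

def rollout_bucket(vehicle_id: str, release: str, buckets: int = 100) -> int:
    """Deterministically map a vehicle to a rollout bucket for a release.

    Hashing vehicle and release together reshuffles the cohorts each
    release, so the same cars are not always the early recipients.
    """
    digest = hashlib.sha256(f"{vehicle_id}:{release}".encode()).hexdigest()
    return int(digest, 16) % buckets

def is_eligible(vehicle_id: str, release: str, rollout_percent: int) -> bool:
    """A vehicle is offered the release once the rollout reaches its bucket."""
    return rollout_bucket(vehicle_id, release) < rollout_percent

# Example: a 1% wave first, widening to 10% and then 100% if nothing breaks.
print(is_eligible("5YJ3E1EA8JF000001", "2021.12.25.7", rollout_percent=1))
```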
 
Real world testing is the final step in Tesla's QA, and they do it in stages, seemingly sometimes also running A/B versions.

That's pretty normal behaviour for rollout. Only a small number of sub-releases go widespread.

Last year (as an SR+/non-FSD owner), when we actually had regular monthly releases, I only received two versions in a month once.
Whereas those on the fast track (typically those with FSD, including most of the YouTubers) would receive three to four versions.
I'm in the same boat with SR+/non-FSD. Last year was awesome, as updates arrived very regularly, every month, so if an issue came up you knew you only needed to live with it for a month. This year has been very random. So far I have had four updates, and the one with the most issues stuck around for almost three months, while the less problematic ones were replaced much sooner.
I wish they released updates at shorter intervals, as then even if they have issues, you know there is a good chance that will change shortly. For me, all updates this year have been from the 2021.4 subtree until the last one, which was 2021.12.25.7; no 2021.24 yet.
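
(A side note on those version numbers: firmware strings like 2021.12.25.7 are usually read as year.week plus point releases on that branch, which is why 2021.12.25.7 still sorts before 2021.24. A minimal sketch of that reading, which is the community's interpretation rather than an official spec:)

```python
def parse_version(v: str) -> tuple:
    """Split a firmware version like '2021.12.25.7' into comparable ints."""
    return tuple(int(part) for part in v.split("."))

# Tuples compare element-wise, so branch ordering falls out naturally:
assert parse_version("2021.4.18") < parse_version("2021.12.25.7")
assert parse_version("2021.12.25.7") < parse_version("2021.24.1")
```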
 
Real world testing is the final step in Tesla's QA, and they do it in stages, seemingly sometimes also running A/B versions.

That's pretty normal behaviour for rollout. Only a small number of sub-releases go widespread.

Last year (as an SR+/non-FSD owner), when we actually had regular monthly releases, I only received two versions in a month once.
Whereas those on the fast track (typically those with FSD, including most of the YouTubers) would receive three to four versions.
Software must be the only industry where sending unchecked work out to the public is considered acceptable. I'm guessing the lawyers haven't caught up with it yet, but they will when a negligent update causes injury. In my industry such a lazy approach results in litigation, and if it's blatant negligence, loss of licence to practice.
 
Software must be the only industry where sending unchecked work out to the public is considered acceptable. I'm guessing the lawyers haven't caught up with it yet, but they will when a negligent update causes injury. In my industry such a lazy approach results in litigation, and if it's blatant negligence, loss of licence to practice.
I worked as a software developer for over 20 years. It is impossible to test all combinations for every release - there just isn't enough time or enough people to do all the testing necessary, even with automated tools.

That said, software development is (or at least it used to be) about finding and removing defects as early in the process as possible. It is much easier to fix a defect in the design phase than in coding, which is easier than in unit test, etc. etc. I fear that with all the "modern" techniques and tools, all this has gone out the window. Just look at the crappy phone software these days (even from "reputable" brands).

The other thing that's required is a rigorous process and software version control, so that a change in one area is partitioned from a change in another.

If the process is good, there should be very few defects making it into production. It does require good programmers, though.

Do Tesla do all this? Who knows? I have heard some horror stories about the environment in which the developers work.
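
(To make "catch defects early" concrete: a toy unit test with an invented window_is_closed() check, loosely echoing the window-sensor gripe later in this thread. Nothing here is Tesla's code; it just shows the cheapest place to catch a boundary bug.)

```python
import unittest

def window_is_closed(position_mm: float, tolerance_mm: float = 0.5) -> bool:
    """Report whether a window is fully closed given its position sensor."""
    return abs(position_mm) <= tolerance_mm

class WindowSensorTest(unittest.TestCase):
    def test_exactly_closed(self):
        self.assertTrue(window_is_closed(0.0))

    def test_boundary_reading_counts_as_closed(self):
        # An off-by-one on this boundary is exactly the kind of defect
        # that is cheap to catch here and expensive to catch in the fleet.
        self.assertTrue(window_is_closed(0.5))

    def test_clearly_open(self):
        self.assertFalse(window_is_closed(12.0))

if __name__ == "__main__":
    unittest.main()
```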
 
Software must be the only industry where sending unchecked work out to the public is considered acceptable. I'm guessing the lawyers haven't caught up with it yet, but they will when a negligent update causes injury. In my industry such a lazy approach results in litigation, and if it's blatant negligence, loss of licence to practice.

But it is a different industry than yours… Are you suggesting that ALL industries should be exactly the same as yours?
That's naive. I would think that level of blatant assumption based on lazy research could cause injury, or litigation, in your industry. :)
 
It is a misconception that software is being sent out "unchecked" anyway. Software typically goes through many rounds of testing before it is released at all - what's being discussed here is staged releases, which have been shown to be effective in catching "unknown unknowns" that slipped through testing before they reach the full user base.

It's unfortunate that errors can make it through testing at all, but somewhat inevitable with a mass market product that's exposed to an incredible variety of environments and usage scenarios. That's not unique to software either - it's not like recalls for errors in the design or construction of mechanical components are unheard of.
 
Software must be the only industry where sending unchecked work out to the public is considered acceptable. I'm guessing the lawyers haven't caught up with it yet, but they will when a negligent update causes injury. In my industry such a lazy approach results in litigation, and if it's blatant negligence, loss of licence to practice.
"In MY world things work this way, therefore any other way is wrong and negligent!"

What industry are you in where litigation is so prevalent?
 
Software must be the only industry where sending unchecked work out to the public is considered acceptable. I'm guessing the lawyers haven't caught up with it yet, but they will when a negligent update causes injury. In my industry such a lazy approach results in litigation, and if it's blatant negligence, loss of licence to practice.

It's not a bunch of auditable if/then/else statements.

I'm not sure to what level it's possible to 'audit' neural net / machine learning code. I would guess that, to the best of your ability, you run it through simulations, drive it around, and look for regressions. Send it out in small batches… look for issues… bigger batches… look for issues… system-wide. Oh, shoot, an issue in Australia on paulp's cul-de-sac: the car braked unnecessarily due to some weird shadows and a bike stuck in the tray of a ute being detected incorrectly.

Send out requests for bikes in the back of utes in cul-de-sacs with poor lighting. A few hundred come back. Feed those into the neural net for the next dot-point update.
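
(Roughly, such a "campaign" could be a fleet-side trigger along these lines; everything here is invented for illustration, as Tesla's actual trigger mechanism isn't public:)

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "bicycle"
    confidence: float  # 0.0 to 1.0

def should_upload(det: Detection, campaign_labels: set,
                  low: float = 0.3, high: float = 0.7) -> bool:
    """Flag a frame for upload when the net is unsure about a label that
    an active data-collection campaign has asked for."""
    return det.label in campaign_labels and low <= det.confidence <= high

campaign = {"bicycle"}  # e.g. bikes in ute trays under poor lighting
print(should_upload(Detection("bicycle", 0.55), campaign))  # True: ambiguous
print(should_upload(Detection("bicycle", 0.98), campaign))  # False: confident
```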
 
But it is a different industry than yours… Are you suggesting that ALL industries should be exactly the same as yours?
That's naive. I would think that level of blatant assumption based on lazy research could cause injury, or litigation, in your industry. :)
I think you are reading way too much into what I said, and it's incredibly obvious that methods of checking and QA are completely different for every industry type, and, at a granular level, for every business within an industry.
I'm just fascinated that Tesla can send out an update that causes my window sensors to suddenly report errors even though the windows are fully closed. I'm not a software engineer, but that seems like something that wasn't broken and didn't need fixing.
I'm further not convinced that the rationale that software can only be tested on unsuspecting real-life people, who didn't agree to be beta testers at 100 km/h, will have any hope as an argument when the inevitable legal action starts (which will not be from me, so it's not a threat). At that point Tesla will need to convince a judge, a person untrained in the field (so someone like me), that they used best endeavours to eradicate errors before sending out software that causes damage… like last month, when they corrupted my Powerwall data, which cannot be retrieved.
I suspect everyone who thinks they are a programmer will disagree with me. It's easy to not check your work.
 
It's not a bunch of auditable if/then/else statements.

I'm not sure to what level it's possible to 'audit' neural net / machine learning code. I would guess that, to the best of your ability, you run it through simulations, drive it around, and look for regressions. Send it out in small batches… look for issues… bigger batches… look for issues… system-wide. Oh, shoot, an issue in Australia on paulp's cul-de-sac: the car braked unnecessarily due to some weird shadows and a bike stuck in the tray of a ute being detected incorrectly.

Send out requests for bikes in the back of utes in cul-de-sacs with poor lighting. A few hundred come back. Feed those into the neural net for the next dot-point update.
All elements of Autopilot and FSD are beta. We are told that, and we use them on that basis. I'm fairly sure everyone knows betas are likely to have problems… so keep your hands on the wheel and your foot near the brake.
The rest of my car, and my Powerwalls, are not a beta, and I don't expect Tesla to treat them like one.
 
I'm further not convinced that the rationale that software can only be tested on unsuspecting real-life people, who didn't agree to be beta testers at 100 km/h
This is a straw man, though: no one says it can only be tested on real-life people. It's tested extensively before release; the point is just that sometimes issues aren't found in that testing.

On your windows, have you tried the window calibration procedure?
 
This is a straw man, though: no one says it can only be tested on real-life people. It's tested extensively before release; the point is just that sometimes issues aren't found in that testing.

On your windows, have you tried the window calibration procedure?
To be fair, you don't really know how much Tesla tests. Neither do I.
But when I'm told the software that corrupted my Powerwall almost instantly only went to a handful of customers before it was stopped, I think it's reasonable for me to assume that such a significant error should have been obvious if it faulted straight away. Regardless, don't they roll it onto a company testbed Powerwall first? It broke every Powerwall it went to, so presumably it would have done the same on a test Powerwall?
Yes, I recalibrate my windows around every third "update".
 
All elements of Autopilot and FSD are beta. We are told that, and we use them on that basis. I'm fairly sure everyone knows betas are likely to have problems… so keep your hands on the wheel and your foot near the brake.
The rest of my car, and my Powerwalls, are not a beta, and I don't expect Tesla to treat them like one.
This is exactly why the option exists in Tesla cars to choose a Software Update Preference. If you do not agree to be a beta tester, you choose "Standard", and your updates will have been tested by the people who agreed to get less thoroughly tested updates by choosing "Advanced". The way I understand that option is that if you choose Standard, you do not get any limited rollouts, only full rollouts that have been proven by the limited rollouts and are considered stable.

Similar systems exist for most software, and much bigger software companies have been doing this for decades. Microsoft, for example, rolls out monthly updates to most of its products, and every now and then one basically breaks your whole computer if you deploy it in the first week of it coming out; they then recall the update and replace it with a fixed one. So you can hold off applying an update for a week if you do not want to be one of those it is tested on.
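
(In code terms, that preference could gate releases roughly like this; a sketch of the interpretation above, not Tesla's actual logic:)

```python
def is_offered(preference: str, release_status: str) -> bool:
    """Standard cars only see releases promoted to a full rollout;
    Advanced cars are also offered limited (early) rollouts."""
    if release_status == "full":
        return True
    return release_status == "limited" and preference == "advanced"

print(is_offered("standard", "limited"))  # False: waits for the full rollout
print(is_offered("advanced", "limited"))  # True: joins the early cohort
```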