Welcome to Tesla Motors Club

2022.16.3 released

@GtiMart exactly. I think that level of development should stay in the lab, and tightly controlled tests, with trained drivers.

I'm not sure how they could do that, and still integrate the millions (billions?) of miles of training data they get from the fleet as a whole.

Maybe send out updates that somehow only collect the data, but don't change the functionality for the drivers? But I assume they already do that.

IDK, it's a really hard problem, as you said.

But I'm talking about non-FSD, non-Beta, standard-release software. There should be a way to all but air-gap any unstable AI learning from the stable, well-tested releases, if that's really the issue. I suspect it's more basic than that, though. Breaking wi-fi connectivity? That's inexcusable. It's a well-understood feature and should never regress.
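One pattern that could square data collection with stable releases is "shadow mode": ship the candidate model alongside the stable one, let it compute decisions and log where it disagrees, but never let it touch the actuators. A minimal sketch of the idea; every name here is hypothetical, not Tesla's actual architecture:

```python
# Hypothetical "shadow mode" sketch: a candidate model runs alongside the
# stable one; its output is logged for training but never actuated.

from dataclasses import dataclass

@dataclass
class Decision:
    brake: float   # 0.0..1.0 requested brake pressure
    steer: float   # steering angle in degrees

def stable_model(frame):
    # stand-in for the well-tested production policy
    return Decision(brake=0.0, steer=0.0)

def candidate_model(frame):
    # stand-in for the new, unproven policy under evaluation
    return Decision(brake=0.2, steer=0.0)

def drive_step(frame, log):
    active = stable_model(frame)        # this is what the car actually does
    shadow = candidate_model(frame)     # this is only observed
    # Disagreements above a threshold become review/training data.
    if (abs(shadow.brake - active.brake) > 0.1
            or abs(shadow.steer - active.steer) > 2.0):
        log.append((frame, active, shadow))
    return active  # only the stable decision reaches the actuators

log = []
out = drive_step("frame-001", log)
print(out.brake, len(log))  # stable output is used; disagreement is logged
```

The key property is that the fleet keeps generating training data while the driver only ever experiences the stable policy.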
 
Let's not mix bad UI/UX design with buggy software. I'm not sure whether V11 happened because management asked for it, the UX group said it was better, or a product owner decided it. Or, maybe worse, the dev team was left to do whatever they thought was nice. It seems to me the basic idea was OK, but they took the liberty of making a lot of "while we're here, let's also change this" tweaks without thinking about the drivers enough. I'd bet there was no UX team involved, or that they weren't listened to; I don't think they would have let this happen.

All software has a big backlog of bugs; no company ever fixes them all. Everyone is resource-constrained. The list simply needs to be properly prioritized, and sometimes that's where things go wrong. It's the same with new requirements and features: proper prioritization is key. If you can only do one of location tracking and preventing collisions, which one will you pick (as an extreme example of my point)?
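That trade-off can even be made mechanical: score each backlog item by severity and how many users it affects, so safety issues dominate by construction. A toy sketch with invented weights, just to make the "location tracking vs. collisions" point concrete:

```python
# Toy backlog prioritization: severity dominates, so a safety bug always
# outranks a broken convenience feature. All weights are invented.

SEVERITY = {"safety": 1000, "functional": 100, "cosmetic": 10}

def priority(item):
    return SEVERITY[item["severity"]] * item["affected_fraction"]

backlog = [
    {"name": "location tracking broken", "severity": "functional", "affected_fraction": 0.9},
    {"name": "phantom braking",          "severity": "safety",     "affected_fraction": 0.2},
    {"name": "UI font glitch",           "severity": "cosmetic",   "affected_fraction": 1.0},
]

ranked = sorted(backlog, key=priority, reverse=True)
print([item["name"] for item in ranked])
# phantom braking (200) > location tracking (90) > font glitch (10)
```

Even though location tracking affects far more cars, the safety weighting puts phantom braking first, which is the extreme-example point above.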

Creating new bugs as regressions as you advance obviously makes matters worse. That's where automated tests save the day. But writing those automated tests costs a fortune upfront. Sometimes, under pressure from management to release quickly, the tests get cut short. Guess what happens then?
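The "tests save the day" point is concrete: once a bug is fixed, a small automated test pins the fixed behavior so the same regression can't silently return in a later release. A toy sketch (the function and the wi-fi scenario are invented for illustration, not Tesla's code):

```python
# Toy regression test: after fixing a hypothetical wi-fi reconnect bug,
# a test pins the behavior so a later change can't reintroduce it.

def reconnect_wifi(saved_networks, visible_networks):
    """Return the first saved network that is currently visible, else None."""
    for ssid in saved_networks:
        if ssid in visible_networks:
            return ssid
    return None

def test_reconnects_to_saved_network():
    # Regression: an earlier build returned None even when a saved
    # network was in range.
    assert reconnect_wifi(["home", "work"], {"cafe", "home"}) == "home"

def test_no_match_returns_none():
    assert reconnect_wifi(["home"], {"cafe"}) is None

test_reconnects_to_saved_network()
test_no_match_returns_none()
print("regression tests pass")
```

Cutting tests like these to ship faster is exactly how the same bug comes back two releases later.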
 
Hmmm, IDK, I think buggy software and bad UI design are so intertwined that you can't really separate them. They seem to feed each other, too: a rushed UI design is always going to have more bugs, and the SW team will drag their feet on bugs if they think UI is just going to change everything later.

I work in hardware development, and I intentionally stay out of the chaos in our own company between SW, UI/UX and Product. It's a constant drama fest; tell me when I need to change something physical, the rest is a firehose of crap (indecision and firefighting).

I love working with ID (Industrial Design, interaction with the physical aspect of the product) though, they are great, and are like a firewall against all the drama.
 
Hmmm, IDK, I think the buggy software and bad UI design are so intertwined that you can't really separate them.

Agreed.


If you can only do one between location tracking and preventing collisions, which one will you do (as an extreme example of my point)?

Understood, except location tracking *was working* until they broke it... and they don't seem to have introduced anything new to it that would explain why it was even touched. If they'd simply left it alone, there'd be no problem. The same is true of the charge-port-door problems that persisted for months.

I lean toward conspiracy theories, so I swear this is a case of corporate espionage. Somehow Ford got to the entire software team at Tesla and now half of them are chaos monkeys, LOL. Truly, the number of problems they've caused has no better excuse.
 
Yes, they can. That should always be the goal.

The fact that Ford or VW suck too does not absolve Tesla. I actually think Tesla set them up for this, by creating a competitive landscape where constant updates are expected.

This is a car. It needs to work, full stop.

That includes consistent behavior, so the driver can act the same way in a crisis situation as every other time they used it, and not discover a new flaw at the worst possible time.

For the same bug or UI-design mistake to appear, get fixed, then reappear and get fixed again is beyond the pale. It's simply not OK. (Phantom braking, for instance, comes and goes with software releases; it's very unstable, and one release can fix it for some cars while breaking it for others.)

Fixing a mistake should mean testing for that error forever afterward. They obviously don't.

Honestly, I think part of the problem is that they have too many permutations of hardware and chipsets, so they can't exhaustively test every combination.
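That permutation problem is easy to quantify: even a modest hardware matrix explodes combinatorially, which is why test teams usually fall back on sampling strategies (like all-pairs coverage) instead of exhaustive testing. A rough illustration; the option counts below are made up, not Tesla's real matrix:

```python
# Rough illustration of why exhaustive testing across hardware permutations
# is infeasible. The categories and counts are invented for the example.

from itertools import product

options = {
    "autopilot_computer": ["HW2.0", "HW2.5", "HW3", "HW4"],
    "mcu": ["MCU1", "MCU2", "MCU3"],
    "model": ["S", "3", "X", "Y"],
    "radar": ["fitted", "vision-only"],
    "region": ["NA", "EU", "APAC"],
}

full_matrix = list(product(*options.values()))
print(len(full_matrix))  # 4 * 3 * 4 * 2 * 3 = 288 distinct configurations

# Every firmware release would need the full test suite run 288 times to be
# exhaustive; add software feature flags and the count multiplies again.
```

Five small categories already produce 288 configurations; a realistic matrix with build years, battery packs, and chip revisions is orders of magnitude larger.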

I'm sitting on 16.2 for as long as I can, since it's basically working well for me. App connectivity is bad (wi-fi connectivity is super variable), but I'll take that over PB, which is really good right now (it really never slows down when it shouldn't).

I would like 20.x, if only for the SOC prediction moving back where it belongs in the navigation status, but I'm waiting until it's been out a long time to see what else is broken.

DIY regression testing.
I agree the goal is to release bug-free code, but @GtiMart summed it up more clearly than I did. It's very hard to cover all scenarios. However, I do believe the major operational aspects of the car are very reliable. The UI, IMO, is a separate topic and shouldn't be treated the same as a firmware bug that can cause unsafe behavior like that seen in other EVs.
 
I doubt that Tesla would ever do this, but they should establish an *actual* beta-testing program for software updates, where they collect feedback on bugs and issues from drivers rather than just letting us all be guinea pigs. They kind of have this via the standard-vs-advanced software-update setting, but it should be more explicit: if you're willing to let Tesla release brand-new code to your car, you get the updates first, and you get a channel other than general service requests to troubleshoot bugs and issues. That would give them an opportunity to try to fix these issues, or at least be prepared with accurate troubleshooting responses, before they push the code to the broader Tesla community.

In my experience with 16.3, the Tesla SC was not able to solve my issue and said I needed to bring the car in... I found a thread on this forum, fixed the problem myself, and then had to explain to Tesla what I did to fix it. They should have known and been prepared, particularly when so many of our issues recur post-update.
 
I doubt that Tesla would ever do this, but they should establish an *actual* new update beta testing program for software updates where they collect feedback on bugs or issues from drivers rather than just allowing us all to be guinea pigs.
They have long had the EAP (Early Access Program), where people sign an NDA and test software before general release. So they are already doing what you're asking.
 
Artificial Intelligence / Machine Learning problems are very hard. Adding new data to the training set can solve some problems while causing others in unforeseen cases. Yes, you can also add data to your test set. Still, from the (incomplete) study I've done of AI/ML, these types of problems are harder to get a handle on.

Agreed.

Machine learning technology is not ready for these safety critical applications.