Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Software testing


LongRanger (Wales):
This may seem a little odd coming from someone whose car hasn’t arrived yet, but here goes.

Having worked in computer software and IT architecture/design/delivery for almost 25 years - is there something wrong with Tesla’s software build and release quality process?

Two examples from the last month really stand out to me, based on seeing so many comments from different owners, relating to all variants of the Model 3.

1) The navigation data rollback - immediately, tens or hundreds of owners experience poor-quality speed data linked to their position. It’s not a glitch, it’s a fundamental error. Amazingly, a few hours later the rollback happens - coincidence?

2) The charging bug - not only does it significantly affect people’s user experience, but the Tesla response seems to be one of “yeah, maybe it’s us” followed by “yeah, a later release will fix it”.

If Amazon, Tesco or any other online presence ran their operations like this they would get crucified. We have seen the banks get hammered for rubbish mobile software (integration) behaviours linked to Online Banking.

At what point do examples like the above force Tesla to rethink their product development quality control?

I’m not bashing Tesla, I want their software to work as I’m a first-time EV owner-to-be - and love the whole concept, especially as my job is centred around delivering zero-carbon energy for the UK - but there seems to be no consequence (to them) for shipping poor-quality code. Are they cutting corners on testing?

At what point does a major fleet-wide incident happen where people die due to a software fault or complete systems failure that could have been picked up with proper testing?

What are people’s thoughts...?
 
During your 25 years, did you work in an agile development environment?
This just looks like classic agile development to me, with the inadequate testing that often accompanies it. Move fast, break things, and let the customer do most of the testing. Which is fine if you are developing a social media app; not so good if you are building something safety-critical like a car. I am hoping that different parts of the code are compartmentalised and the safety-critical parts are treated with more care than the charging.
I don't know about the navigation, but the charge bug looks like it only affects certain countries, i.e. the ones doing 230 V single-phase charging. Which is probably how it got missed.
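That "certain countries" point is a classic test-matrix gap. As a purely illustrative sketch (the function, current limits and configurations below are invented for illustration, not Tesla's actual code or values), a test matrix built around one home market can pass completely while never exercising the 230 V single-phase path:

```python
# Hypothetical sketch only -- invented to illustrate a test-matrix gap,
# not Tesla's real charging logic or real limits.
def max_charge_current(voltage: int, phases: int, evse_limit_amps: int) -> int:
    """Return the per-phase current (A) the car should request."""
    if phases == 3:
        # Three-phase supply, e.g. 400 V in much of Europe.
        return min(evse_limit_amps, 16)
    # Single-phase supplies (120 V / 230 V / 240 V) share this path;
    # a bug here only shows up in markets that actually use it.
    return min(evse_limit_amps, 32)

# A matrix covering only US supplies plus three-phase would pass 100%
# while leaving the 230 V single-phase path (UK/EU) untested:
TESTED_CONFIGS = [(120, 1, 32), (240, 1, 48), (400, 3, 16)]
UNTESTED_CONFIG = (230, 1, 32)  # the configuration where a bug could hide
```

The sketch's point is simply that coverage is defined by the rows in the matrix, not by the pass rate over them.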
 
Yes, I work in one now - for critical infrastructure projects - and have worked in other industries where the bottom line is on the line with each product or software release. Agile/lean product management should not mean bad testing or no testing.

Of course testing can miss things, but there doesn’t seem to be an accountability mechanism towards the customer base, with appropriate comms.

If I get something wrong for my customers I have a dialogue with them to reassure and keep them engaged with their product quality/progress.

One idea would be to engage more closely with the types of users on these forums and have a dialogue about releases.

Having a “bug report” feature through the in-car computer is a bit like Mork calling Mindy.
 
I did a lot of user testing for in-house systems in a well-known aerospace company in the 80’s
Despite IT department testing, back then a user test could be relied upon to break any software quickly and with ease.

Perhaps Tesla has the view that the same holds true today but sadly they use a live environment.
One would hope that the really bad bugs don’t make it out of the factory! (Map updates excluded!!)
 
I did a lot of user testing for in-house systems in a well-known aerospace company in the 80’s
Despite IT department testing, back then a user test could be relied upon to break any software quickly and with ease.

Perhaps Tesla has the view that the same holds true today but sadly they use a live environment.
One would hope that the really bad bugs don’t make it out of the factory! (Map updates excluded!!)
What do you think is happening when you select the "advanced" tab on the software update? You are volunteering to test the code, really bad bugs and all.
 
This just looks like classic agile development to me with the inadequate testing that often accompanies it.

This is a misconception. An item should not be released until it is 'done'. And 'done' means that it has passed QA, been accepted by the product owner and stakeholders, and is in a shippable state. It is quite clear that on many occasions Tesla's QA is either inadequate, as obvious defects often go undetected, or the product owner has knowingly released something defective that should not have shipped.

Quite a few people also incorrectly cite the iterative process as allowing an item to be shipped and then subsequently fixed in a different iteration. That is not the case. The iterative process relates to multiple iterations of work units that all form the product, but only when 'done', i.e. having fully passed QA, stakeholder and product owner expectations. You don't ship 'half done'. It's either 'done', or you don't ship it and it goes back on the product backlog for consideration in the next iteration or so. If you do subsequently find a defect or something not to spec, then that goes onto the backlog for future consideration (which may mean the defect is never resolved, one of the reasons why this should not be the usual workflow), but you should never be in that position for obvious defects, yet Tesla seem to manage it all too often.

So Tesla's QA is either inadequate, or they are cutting corners, or both. There is no excuse for inadequate testing in an Agile workflow.
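To make that 'done' criterion concrete as a gate, here is a minimal sketch (all names and checks invented to illustrate the Definition of Done, not any real Tesla pipeline):

```python
# Hypothetical Definition-of-Done release gate -- the Increment type and
# its checks are invented for illustration, not any real process.
from dataclasses import dataclass

@dataclass
class Increment:
    qa_passed: bool              # passed QA
    po_accepted: bool            # product owner sign-off
    stakeholders_accepted: bool  # stakeholder acceptance

def is_done(inc: Increment) -> bool:
    """'Done' means every check passes -- there is no 'half done'."""
    return inc.qa_passed and inc.po_accepted and inc.stakeholders_accepted

def ship_or_backlog(inc: Increment) -> str:
    # Anything not 'done' goes back on the product backlog for a future
    # iteration; it never ships in its current state.
    return "ship" if is_done(inc) else "backlog"
```

The design point is that the gate is a conjunction: one failed check is enough to send the item back to the backlog rather than out the door.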

[Professional Scrum Master hat removed]
 
What do you think is happening when you select the "advanced" tab on the software update? You are volunteering to test the code, really bad bugs and all.
This is a good point. I don’t know what version people who leave it on Basic are on, but I suspect it isn’t close to the latest.

People want all the latest updates ASAP but also complain about the issues they get with them.

I’d like to think, as said earlier, that the actual safety based stuff is compartmentalised and isolated from the feature based stuff.
 
The Tesla manuals for the S and X have been out of sync with the correct operation of voice commands for at least 6 months. Count on doing some experimentation to get voice commands to work, if you go so far as to read the manual. I'd be interested to find out if the Model 3 is similarly unable to comply with the description given in the manual.
 
I've had my car 3.5 years and it's always been the same. Indeed, just before I took delivery the then-latest software introduced a bug that prevented the charge port from unlocking; it caused chaos, as you can imagine.

The 'advanced' option is, IMO, little more than typical Tesla kidology, much like Sentry on older cars supposedly uploading the footage to Tesla.

I'm afraid the software QA, or more accurately the lack of it, comes from the same corporate mindset that believes it's OK to present new cars for collection with no proper PDI.
 
I’m another on here with loads of years of experience in software development (40 years since coding on my Acorn Atom in my bedroom).

I support many of the comments above - we are choosing to be alpha/beta testers of the software by using the advanced setting for updates, and I personally expect (as I don’t have FSD) to be able to intervene before the car does something stupid (although I can’t usually hit the accelerator fast enough when it phantom brakes on me :) )

Since the introduction of smartphones and tablets, people don’t seem to mind downloading apps that regularly need updating to fix bugs - we are in a different world from when ERP solutions were developed under a waterfall methodology, where very little code would ever “go out” with bugs.

Supporting some comments above - I’d hope Gartner’s term of bimodal development is how they develop code: robust methodology for the critical stuff but quick-to-market on other stuff, e.g. changes to backgammon skill levels.

When I bought my 2nd BMW 5GT in 2016 I was disappointed that its auto windscreen wipers weren’t as good as on the 2010 model it replaced, and in the last 4 years they haven’t even improved a little...
 
I was thinking the same things, and indeed I’d extend that to general quality. From reading the threads it does seem there is a very high reliance on fix-forward. I guess I get the general quality problems creeping through a bit more, as they’re hammering to get orders out to meet order-sheet demands/make the books look good. The software side, though, I don’t get at all; I don’t see what the pressure is that’s driving them to rush out under-tested releases. Perhaps it’s a differentiator thing? Other cars I’ve had are not OTA updated, I don’t expect them to be updated and so I never see any evolution, and it’s annoying but it’s ‘how things are’ so you live with it. They still have their issues, of course.
It’s possible that they feel there’s a pressure to keep the software moving ‘forward’ so that people register the cadence of change and speed of evolution, which creates a buzz and acts as a sales carrot. Once you’ve ordered and see what’s actually happening it’s, I suppose, a bit late - I doubt many return vehicles because of issues like this.

I was reflecting before now on how they are doing so well and why this car is so different to any other I’ve bought - I never joined a forum before, never incessantly tracked build progress before, never became obsessed with looking at add-ons and tweaks, etc. etc. Maybe I’m typical of a large number of people that choose Tesla? It feels to me like this car is as much about software as it is about a vehicle.
I’m not surprised, when I read your posts, that like me many of us are software/project people, who I’ll bet spend a lot of time with new gadgets, and are probably early on the adoption curve of new technology. I think the fact that this car is as much about the software as it is about the car (it’s almost a gadget) is one of the big draws for us. We should know better! We know we shouldn’t really be sticking the car into a beta mode, and yet I bet a lot do. I bet a lot of us put our phones/Macs into the beta programmes too, and I think we tolerate more as a result, because we know they’ll fix it, ’cos that’s how beta programmes work, right? That said, I don’t have any other gadgets that could kill me!

All that said, the level of defect is astonishing, and I think there could be a lot more caution on the software side whilst still maintaining the buzz. Like you, I also hope that the quality levels for critical driving systems have a much healthier Q/T balance than the other stuff, but then maps and speed seem like fairly important things to get right, especially in a self-driving car, so maybe not!

This may be a silly question, but I assume there are standards for quality and defects in vehicle production that manufacturers have to meet? The equivalent of certification in aerospace? Does this not also apply to vehicle software, or is it still too ‘new’ for regulators to have cottoned on?
 
What do you think is happening when you select the "advanced" tab on the software update? You are volunteering to test the code, really bad bugs and all.

The early access program is the one where some poor new-feature behaviour might be accepted, but even then, it should not be in safety-critical features.

The advanced/standard setting at times seems to make very little difference. Granted, you will probably not be in the first wave of updates if on standard, but that doesn't mean you will never get that release, and it does not mean the release is not riddled with fresh new bugs, regressed functionality or incomplete implementation - the latter being an issue inherent to how iterative development works.

Until the last update, we have run pretty much exclusively in standard mode other than the first couple of releases after this feature was introduced. We soon learned the error of our ways. The reason for recently going to advanced is because of the hope of getting a fix to a recently introduced regression.

Below is our install list with timings. IIRC the standard/advanced option got introduced around 2019.32/2019.36. You only have to look at 2020.4.50.x and 2020.16.x to see instances where Tesla's standard settings resulted in bug fixes, and in the case of 2020.4.50.x, in a short period of time, which potentially hints at the urgency of the fix.
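Incidentally, for anyone tracking builds like this: those version strings follow a year.week.patch pattern and sort wrongly if compared as plain strings. A quick sketch (assuming fully numeric components, so the 'x' placeholders above would need real numbers):

```python
def parse_version(v: str) -> tuple:
    """Split a Tesla-style version string (e.g. '2020.4.50.6') into a
    tuple of ints so builds compare numerically rather than lexically."""
    return tuple(int(part) for part in v.split("."))

# Lexical string comparison puts week 16 *before* week 4, because the
# character '1' sorts before '4'; numeric tuples compare correctly.
assert "2020.16.2" < "2020.4.50"                          # wrong order
assert parse_version("2020.16.2") > parse_version("2020.4.50")  # right order
```

Python compares tuples element by element, so the year is checked first, then the week, then the patch components.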

Unfortunately the standard setting does not make you immune from having to take an update. The releases around 2020.4 - 2020.12 were pretty much forced on us by the nags that you have to bypass before you can drive the car. They wear you down until at least one driver accepts the install. And for some updates, such as navigation, you do not even get the option.

So no matter what mode you run in, based upon our experience, you will end up with regressed functionality or operation sooner or later. That does not even touch on the subject of defective new functionality.


[Attachment: install list with timings]
 
There's not a lot that I can say that hasn't already been said by the above comments and it's fair to say I agree with them all.

I was reflecting before now on how they are doing so well and why this car is so different to any other I’ve bought - I never joined a forum before, incessantly tracked build progress before, never become obsessed with looking at add-ons and tweaks, etc etc. Maybe I’m typical of a large number of people that choose Tesla? It feels to me like this car is as much about software as it is about a vehicle?
I’m not surprised, when I read your posts, that like me many of us are software/project people, who’ll I bet spend a lot of time with new gadgets, and are probably early on the adoption curve of new technology. I think its the fact this car is as much about the software as it is about the car that almost makes it a gadget is one of the big draws to us. We should know better!

Absolutely spot on and hits the nail on the head.

This may be a silly question but i assume there are standards for quality and defects in vehicle production that manufacturers have to meet? The equivalent of certification in aerospace? Does this not also apply to vehicle software or is it still too ‘new’ that regulators haven’t cottoned on?

I think this is a really interesting point, and it perhaps highlights a really dangerous issue if it isn't being regulated properly as yet, more specifically around the software. There are clearly things out there like safety ratings and standards that have to be adhered to for the physical structures, but what about the software on a car? Does anyone know if such regulations exist?
 
This is a misconception. An item should not be released until it is 'done'. And 'done' means that it has passed QA, been accepted by the product owner and stakeholders, and is in a shippable state. It is quite clear that on many occasions Tesla's QA is either inadequate, as obvious defects often go undetected, or the product owner has knowingly released something defective that should not have shipped.
I am aware of the theory of how agile is supposed to work. And if done right it can work, but in allowing iterative development to occur at a rapid pace it opens the door for code to be shipped inadequately tested more easily and frequently than was the case with the old waterfall model, if the method you describe above is not properly implemented.
Agile takes away the independence of test and makes the people responsible for quality the same people responsible for delivery, and that is a dangerous combination in an environment where there is constant pressure from stakeholders to deliver more, more, more and faster. Does that environment sound like any company you can think of?
I am sure we could debate the theory of this all day, but let's be real: we are only having this conversation because of issues with the software, so the weight of evidence is on my side :)
[Professional Software Tester hat removed]
 
I am aware of the theory of how agile is supposed to work. And if done right it can work, but in allowing iterative development to occur at a rapid pace it opens the door for code to be shipped inadequately tested more easily and frequently than was the case with the old waterfall model.
Hear, hear.
 
Agile takes away the independence of test and makes the people responsible for quality the same people responsible for delivery, and that is a dangerous combination in an environment where there is constant pressure from stakeholders to deliver more, more, more and faster.

[rummaging through the numerous software development and project management hats in the cupboard]

Actually it doesn't make the same people responsible for quality. It makes the team responsible. Within that multi-disciplined team (not necessarily made up of multi-disciplined individuals), for non-trivial projects there should still be individuals responsible for quality, and these should not be the same as those cutting the code or performing any other function, at least not for that iteration. Next iteration, it could well be all change. But most certainly, the same individuals responsible for product delivery should not be the same individuals responsible for quality, even though the team is. In some methods, before something is 'done' it is reviewed by stakeholders etc., so it has got to pass their input too, and they also bear some responsibility if you ship shite. If you end up where the former is true, then something should be done about it.

Making the assumption that Tesla do follow some agile development method, working software is key to the Agile Manifesto - "Working Software Over Comprehensive Documentation", "Frequent delivery of working software" and "Working software is the primary measure of progress". If Tesla don't put 'done'/working software out the door, they are not making measurable progress or providing customer satisfaction.

If Tesla do end up with the situation where "constant pressure from stakeholders to deliver more, more, more and faster" occurs, it's time for their product owners to address the situation - the stakeholders, amongst others, signed up for the current method in the first place, and a good product owner will be good at stakeholder management. One would like to think that if Tesla are doing some form of agile development, they would have decent product owners et al. who can manage this and not be afraid to say no. Irrespective, these are conversations to be had outside the current iteration, not resolved by cutting corners within an iteration and compromising the product.
 
Actually it doesn't make the same people responsible for quality. It makes the team responsible.
You could save some time, you know, and just post a link to the Scrum Guide if you are going to quote it pretty much verbatim :p.
I am not disagreeing with the theory anyway, just saying there is a slight chance that they may have a few tiny flaws in their execution, as do many organisations I have seen use it. Clearly they need to hire you to sort them out :)
And we seem to be in violent agreement on the responsibilities. I said "the same people are responsible for quality and delivery". You said "It makes the team responsible." To me that is the same thing.
What I do disagree on is the idea that "they would have decent product owners et al. who can manage this and not be afraid to say no". Do you know what they call people who say no to Elon?... They call them a cab :D
 
Would be intrigued to take a look at the product WBS, burndown and DRE (defects) status for each of the "products" that they specify, and see if there's any weighting going on for different parts of the firmware, UI software, comms, API etc.

Maybe they are credible in the core firmware and fundamental operational system bits but more tolerant to defects in some of the fluffier stuff (in their view, not the users' view)

Next question - do they even have UK users with a small fleet to hard-test each rollout increment before general availability?