
[UK] Ultrasonic Sensors removal / Tesla Vision replacement performance

Doesn’t really work that way in practice as testing inevitably gets pushed to the back burner with a promise that they’ll be added later and then never done.

Having everything behind a conditional feature flag also makes testing near enough impossible as you end up with an impossible number of permutations; there's no way you can confidently test every one of those permutations, and that's why there are so many bugs in the software.
If Elon truly did assess the Twitter developers by 'lines of code' and has a similar approach at Tesla, then testing will never be done because it won't appear in his metrics. I fear such things explain a lot about what we experience - what incentive is there to fix bugs if it could get you fired? Just keep writing new features on top of the bugs.. you'll get employee of the week..

Also, side effects, even 'impossible' ones, are a thing.. you *never* develop in the mainline release code. Feature flags won't save you from that. You keep feature branches as short and concise as possible but those *always* go through at least testing before a merge.
 
The feature flags are there to prevent things from becoming broken. Large batches of change (ie feature branch merges) increase the risk of something going wrong due to delayed feedback and a larger surface area of stuff changing. Small, incremental changes with tests run on each and every commit reduce both the likelihood and impact of b0rk, and make any introduced regressions easier to manage. This approach arguably comes from vehicle manufacturing in the first place: long before Dave Farley and co wrote Continuous Delivery, Taiichi Ohno wrote about "autonomation" (automation combined with intelligence to tell whether the right thing was done, ie tests) in the Toyota Production System as a means of reducing lead times and improving the flow efficiency of a manufacturing system.
You are mixing two concepts: feature flags and agile development. Making small, incremental improvements is a good approach for reducing change risk, but it does not necessarily require bloating the main branch with inactive code. You cannot hide everything behind feature flags, and they usually lead to some very convoluted logic that needs to be managed (that is why LaunchDarkly exists) and bring additional risk (e.g. regression bugs become easier to introduce). In general, feature flags are not used in critical systems (e.g. SCADA), even though those systems embraced agile development at least a decade ago.
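To make the "convoluted logic" point concrete, here's a deliberately silly toy in Python. Every flag name and rule below is invented and has nothing to do with anyone's real code; it just shows how, once a few long-lived flags start interacting, the actual behaviour gets buried in flag plumbing.

```python
# Toy example of flag plumbing - all flag names and rules invented.
def park_assist_mode(flags):
    if flags.get("vision_park_assist"):
        if flags.get("new_distance_model") and not flags.get("legacy_beeper"):
            return "vision + new model"
        return "vision + old model"
    if flags.get("legacy_beeper") or not flags.get("vision_park_assist"):
        return "uss"
    return "disabled"  # unreachable - exactly the sort of thing that hides in flag plumbing

# Three flags -> 2**3 = 8 combinations of this one small function to reason about.
print(park_assist_mode({"vision_park_assist": True, "new_distance_model": True}))  # vision + new model
print(park_assist_mode({"legacy_beeper": True}))                                   # uss
```

Even in this tiny sketch there's a branch that can never execute, and working out which flag combinations are actually meaningful already takes more effort than the feature logic itself.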

I would not be the one judging their software team. Given Musk's change of attitude over the last couple of years, they may be getting weird and impossible tasks and scrambling to do what they can. On the other hand, if that were the case, we should have seen a talent exodus due to stress. Also, they have done some things that question their connection with reality (e.g. the victory lap video after the V11 release). So, it may be a combination of bad development practices, inadequate design, and directions from above.

In any case, we see the result and customer satisfaction is the best metric for any development org.
 
full of decade-old tech debt, meaning that this talent spends all of their time fighting fires, most likely being forced to release work to ridiculous timescales.

the removal of USS before the feature is even ready

a combination of bad development practices, inadequate design, and directions from above.

All sounds quite likely given what we get to see. It doesn't take long for a business to become just like the rest, especially when you leave your dev teams to 'just get on with it' themselves while trying to do an (often difficult or impossible) job.
 
It shouldn't surprise me that there are lots of people from the software industry in a Tesla forum :) I may have dragged this topic off-track.

Doesn’t really work that way in practice as testing inevitably gets pushed to the back burner with a promise that they’ll be added later and then never done.
Sadly I'd have to agree. Anyone who doesn't do at least test-first development, and ideally test-driven development, should not be allowed to write production systems consumed by members of the public. If digital infrastructure were physical, there's no way authorities would let it be built the way that large enterprises generally tend to.
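For anyone who hasn't seen "test first" in practice, a throwaway toy in Python - the function and its rules are invented for illustration, the point is only the ordering: the expectation gets written down before the behaviour exists, and the code only counts as done once the test passes.

```python
# Toy "test first" illustration (names and rules invented).
# The test captures the expected behaviour before the code is written.
def test_parse_speed_limit():
    assert parse_speed_limit("30 mph") == 30
    assert parse_speed_limit("NATIONAL") is None  # unreadable sign -> claim nothing

def parse_speed_limit(sign_text):
    """Return the limit in mph, or None if no digits can be read."""
    digits = "".join(ch for ch in sign_text if ch.isdigit())
    return int(digits) if digits else None

test_parse_speed_limit()
print("tests pass")
```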

You are mixing two concepts: feature flags and agile development

I'm not, although I should have been clearer that I was referring to feature flags used to disable unfinished functionality and prevent those code paths from being used in production. I'd expect this to be used to allow WIP to be committed to master/main when practicing continuous delivery. Across process boundaries it's a bit different: there you can get around it by having different minor/patch releases of services available concurrently, and relying on the routing layer to ensure clients get the behaviour they're expecting.
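Roughly this shape, as a made-up sketch - the flag name, flag store and helper functions are all invented, this is the idea rather than anyone's actual code: unfinished work is merged to master/main, but the new code path stays dark in production until the flag is flipped.

```python
# Made-up sketch: WIP merged to trunk, kept dark behind a flag.
FLAGS = {
    "vision_park_assist": False,  # flip to True only once the new path is ready
}

def flag_enabled(name):
    # Unknown flags default to off, so half-finished behaviour
    # can't be enabled by accident.
    return FLAGS.get(name, False)

def uss_estimate(uss_readings):
    return min(uss_readings)            # existing, proven path

def vision_only_estimate(camera_frame):
    raise NotImplementedError("WIP")    # merged, but not yet live anywhere

def estimate_obstacle_distance(camera_frame, uss_readings):
    if flag_enabled("vision_park_assist"):
        return vision_only_estimate(camera_frame)
    return uss_estimate(uss_readings)

print(estimate_obstacle_distance(camera_frame=None, uss_readings=[2.4, 0.9, 1.7]))  # 0.9
```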

You cannot hide everything behind feature flags

Having everything behind a conditional feature flag also makes testing near enough impossible as you end up with an impossible number of permutations
You pull out the feature flags once the code has been enabled in production. I'd agree that long-lived feature flags are bad; they should exist as long as is necessary and no longer for exactly the reason you mention.
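Just to put toy numbers on "the reason you mention" - nothing more than the combinatorics of independent on/off flags: n flags that all live forever give 2**n configurations to worry about, which is why pulling them out quickly keeps n small at any given moment.

```python
# Back-of-envelope: n independent boolean flags -> 2**n configurations.
from itertools import product

def configurations(flag_names):
    return [dict(zip(flag_names, values))
            for values in product([False, True], repeat=len(flag_names))]

print(len(configurations(["a", "b", "c"])))                    # 8
print(len(configurations([f"flag_{i}" for i in range(10)])))   # 1024
```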
what incentive is there to fix bugs if it could get you fired?
Reminds me of a gambling company that I worked with, where the testers got a bonus dependent on the number of passing tests they had. Would you believe that their tests didn't find many bugs?
Also, side effects, even 'impossible' ones, are a thing.. you *never* develop in the mainline release code. Feature flags won't save you from that. You keep feature branches as short and concise as possible but those *always* go through at least testing before a merge.
I absolutely, categorically disagree on this in the strongest possible terms :) Continuous delivery requires trunk-based development, and a stop-the-line CI/CD pipeline that means that if something is broken, no-one gets to deploy any new functionality until the blockage is cleared - again inspired by the Toyota Production System. I would agree that code changes must always be tested before being deployed, but not that they need to be tested before being merged.
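The stop-the-line rule reduced to a toy, with the inputs invented for the example: while the trunk pipeline is red, nothing new goes out, and only changes that fix the breakage are allowed through.

```python
# Toy "stop the line" gate - the build-status inputs are invented.
def can_deploy(trunk_is_green, fixes_the_break=False):
    # Only fixes for the breakage itself may go out while trunk is red.
    return trunk_is_green or fixes_the_break

print(can_deploy(trunk_is_green=True))                          # True  - normal flow
print(can_deploy(trunk_is_green=False))                         # False - line is stopped
print(can_deploy(trunk_is_green=False, fixes_the_break=True))   # True  - clearing the blockage
```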

Some continuous deployment purists would argue that testing before deployment isn't even necessary, and it's better to have very responsive rollback mechanisms and progressive rollouts. The weakness of this approach is that causality becomes hard to track in large, distributed systems, and it doesn't protect against emergent phenomena.
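A crude sketch of what those purists mean by progressive rollout plus responsive rollback - all the numbers and the error-rate source are invented: ramp traffic to the new build in steps, and bail out the moment it looks worse than the error budget.

```python
# Crude progressive rollout + rollback sketch (all values invented).
ROLLOUT_STEPS = [0.01, 0.05, 0.25, 1.00]   # fraction of traffic on the new build
ERROR_BUDGET = 0.02                        # max tolerated error rate at any step

def rollout(build, error_rate, set_traffic_share, rollback):
    for share in ROLLOUT_STEPS:
        set_traffic_share(build, share)
        if error_rate(build) > ERROR_BUDGET:
            rollback(build)                # fast rollback instead of pre-release testing
            return False
    return True                            # fully rolled out

# Toy wiring so the sketch runs end to end:
observed_error_rates = {"v2": 0.001}
ok = rollout(
    "v2",
    error_rate=lambda b: observed_error_rates[b],
    set_traffic_share=lambda b, s: print(f"{b}: {s:.0%} of traffic"),
    rollback=lambda b: print(f"rolling back {b}"),
)
print("rolled out" if ok else "rolled back")
```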

Git Flow and feature branch-based workflows work for open source projects contributed to by unknown, untrusted and uncontrollable third parties - that should not be an aspiration for an enterprise working on a product. They delay feedback (discovery of merge conflicts, semantic conflicts), increasing lead time to production and the risk of rework being needed. On top of that, they incentivise isolationist approaches from developers working solo, and a lack of shared vulnerability, because developers rebase the heck out of their feature branch before raising a PR so that no-one else on the team ever sees the mis-steps they made on the way to the perfect commit.

If you don't trust trunk-based development, that's a smell that your processes, tests and monitoring aren't good enough.
 
The basic principle is that the trunk should always be releasable. In practice you never release straight from the trunk, because a release has to go through a full release testing process, but you should never find showstopper bugs at that stage. We did have a customer who insisted on getting a daily build from the trunk, and they were paying enough to make that not a silly request.. it worked. Luckily we don't have to do that currently.

Single branch works for a solo developer, but when you've got multiple developers working on projects you absolutely can't do that.. I've been there and worked for companies that tried it and it *always* ended in complete disaster - I've done my fair share of overnight firefighting to meet a deadline because someone decided 'just one more fix' the night before a release. No way that's sustainable.

CI/CD pipeline that means that if something is broken, no-one gets to deploy any new functionality until the blockage is cleared

If we'd had that policy the company would have had periods of at least a month off, because some of the stuff I've dealt with has taken that long. We'd also be bankrupt..
 
Single branch works for a solo developer, but when you've got multiple developers working on projects you absolutely can't do that
I disagree with this so, so much.
I think we'll need to agree to disagree, as explaining it fully would take us even further off topic. I can understand your disbelief, the current status quo is taken as truth by so many. There are a lot of things that you need in place to make continuous delivery work, discipline being chief among them. If you don't have these things in place, then you're right in that it absolutely won't work. Many of my customers didn't believe the approach would work, until they'd done it themselves.

Although there's a selection bias, I often dealt with customers who came to us because their branch-based development process was so slow it threatened their business' existence. My favourite is a TV company that I can't name, who thought they had a CI server problem, but it turned out that their 200 developers had 90 open PRs at any time, each of which spun up a CI pipeline that ran heavyweight end-to-end browser-driving tests. This caused the CI server to fall over, and because it took so long to get a branch merged, developers would open a new ticket, start a new branch, and raise a new PR... You can see how this problem self-perpetuates. The developers were all miserable because their work took months to get into prod, there wasn't a team spirit, and folks had started to leave en masse. The company used to do eXtreme Programming and continuous delivery, but as it scaled this got lost. Over time the culture there got worse and worse, and developers saw it as their job just to raise PRs, not get functionality all the way into production. A classic case of prioritising resource utilisation over effectiveness - everyone being busy paddling, but not in the same direction.

I won't try and convince you here, but I'd implore you to explore and recognise that it absolutely can and does work. I started and ran a consultancy for six years, teaching and implementing proper continuous delivery (normally in the cloud infrastructure space), and it had a multi-million turnover by the time I sold it. If you have a bank account, a satellite TV subscription, a European car, or a pair of trainers, the chances are you've used the services of a company that we tried to improve. I ran the London Continuous Delivery meetup for a while, and I've given keynotes on the topic.

I think my favourite example is when we helped a security firm continuously deploy their entire infrastructure and software stack into an investment bank, through an airgap.
 
I disagree with this so, so much.
Hmm, I think you're probably right to, thinking about it. I would expect developers to run as many tests as possible before pushing changes to master/main. Often that might only be unit and integration tests, rather than full-on end-to-end tests, but you're right in that it'd be a cause for chastisement if an engineer pushed broken code when they could have found out that it was broken before doing so.
 
In the UK certainly people with new orders are being required to confirm they accept the situation with USS before they are allowed to proceed to payment. I can't see how anyone would have any case against Tesla. It would also seem very unlikely that a retrofit was at all possible, the wiring harness will have been updated and the bumpers don't have holes.
It never asked me to accept the car without USS. My build date is 13th October, delivery date is 19th Nov, and I've already made payment for the car. Not sure if it's coming without USS; per chat they said my car will come with Tesla Vision.
 
Something else that occurred to me the other day is how good TeslaVision is going to be at detecting things that move. It's all very well modelling a stationary thing that the cameras have seen, factoring in how fast the car is moving. You can take that a step further, and assume TeslaVision will be able to track the movement speed of obstacles as well as your own vehicle's speed. But what about things that change speed, direction, or were never visible to the cameras in the first place?

The thought that prompted this was being in a car park on the school pick-up. Small kids are lower than the height of a Model X bonnet, and they move in erratic ways.
 
It never asked me to accept the car without USS. My build date is 13th October, delivery date is 19th Nov, and I've already made payment for the car. Not sure if it's coming without USS; per chat they said my car will come with Tesla Vision.
Vision is used for AP amongst other things. Having vision does not mean you won't have USS. I have Vision and USS. Although today in a drive through I almost managed to play popcorn with the beeps - bloody annoying. :)
 
JLR seem to be able to use cameras to make the bonnet invisible... surely Tesla can do something similar? It's just images taken before the car was in the way, stitched together. Does it work? Land Rover’s ClearSight handy X-ray vision tech
JLR have cameras above the front number plate, so they can actually see what's in front of the bonnet. It doesn't rely on a high-level camera where the bonnet masks the view. Never thought I'd be saying a JLR tech solution just works, but this one does.