Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla Software updates - Australia

Is your voice control all good now? Mine is remarkably good; it picks up what I mean amazingly well.
I just can't use it as I leave home: the WiFi-to-wireless-broadband handoff really screws things up if I ask anything of it at that time...
The old version was bulletproof reliable. With the new version I usually have to have a couple of goes. It got very confused yesterday when I said 'drive to 22 Magil Road'. It set it as 222 Magil Road. So I thought I'd say 'drive 22 Magil Road'. It didn't like that at all and declared it an 'unrecognised command', even though that's what it thought I said the first time. In the end I had to leave a significant gap between 'to' and '2', so not fluent, unlike the original.
The day before, I asked it to call someone in my contacts. Very simple first name and a simple four-letter surname. It just wouldn't do the surname. In the end I just said the first name and scrolled through all the first names to the correct one. I never had to do that before either. It also won't recognise my PA, first name or surname. I've had to save her under a simple fake name to get it to call the number. Her surname is also four simple letters.
But it's substantially better than the first untested, in-denial botch of the new version, just not up to the original yet.
 

Navigate
I tried "take me to 22 Wall Road" and, no matter how many times I tried, it insisted on taking me to "22 Powell Road"... I gave up and typed it in...

Contacts
Any time I try "Call Mum" it refuses... it just cannot understand "Mum". I've tried every accent I can think of...
 

Same
"Call Mum"
Mom not found
I don't want to call Mom, damn it. Come on Tesla, it's Mum, not Mom.

I have had success adding a surname
"Call Mum (surname)"
The car still insists it's Mom, but it does find the right number.

I wonder if the UK has this issue
 
I consider myself not only well informed but also technologically inclined, having been professionally involved in cutting-edge scientific software development for the better part of the last 25 years. What caught me out, and what I didn't expect from Tesla, is what I call the "downgrade experience": a perfectly well-functioning subsystem being replaced by what is at best an alpha-stage release, performing random actions without any feedback as to why it misbehaved. I drove just over 1,000 km yesterday from Sydney to Thredbo and back and experienced rock-solid Autosteer performance. But TACC, which used to work flawlessly on HW1 cars, including speed-sign recognition, was misbehaving like a bloody learner driver who mixes up the accelerator and brake pedals, causing countless minor and four massive braking events yesterday without discernible cause or reason. TACC in various incarnations exists in all medium- to upper-level cars these days, and not once have I seen that sort of behaviour in another car (I also own a Lexus and owned a Prius for 150,000 km; neither adaptive cruise control has misbehaved once). This is all the more disconcerting as we get no feedback whatsoever, either directly from the car or from Tesla service, as to why these events occurred.

I'm happy to beta test all day long, but in return I need access to the logs. I need to be able to understand why the system behaved the way it did. Being at the mercy of a black box is not acceptable. And I don't do alpha testing, which is what TACC at this stage really is.

At this point I can't use TACC when my wife is in the car: she has a neck injury, and sudden decelerations of the phantom-braking kind will send her into a week of severe neck pain. My reasons for buying a current-gen HW3 Tesla were 80% the Autopilot and 20% the electric drive. I have not received the 80% of what I thought I was putting $200k into, and that expectation was based on my HW1 experience, having driven one for several thousand km prior to purchase. We've all been subjected to a massive downgrade and, frankly, inadequate engineering.

You sound like you have a bitter taste in your mouth on spending $200k and being underwhelmed by AutoPilot. It also sounds like you have a legacy way of thinking about software. Perhaps due in part to writing software in the same way for 25 years?

AutoPilot's value isn't its consumer-facing feature set; it's the data science going on behind the scenes. If you watch any of Karpathy's talks on the topic, you can see the vision for AP is to be fully "software 2.0". In essence, they put a disproportionate amount of work into curating the datasets and let the deep-learning training stack essentially "write" the consumer-facing features. At the moment they are still working through the "perception" problem (identifying everything in the environment accurately).
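
As a rough illustration of that "software 2.0" idea (this is a toy sketch, not Tesla's code; the data and labels are all made up), the classifier's behaviour is derived entirely from the labelled dataset rather than from hand-written rules, so curating the data effectively *is* the programming step:

```python
# Minimal "software 2.0" sketch: behaviour comes from the labelled
# dataset, not hand-written rules. All data below is invented.
from statistics import mean

def train_nearest_centroid(dataset):
    """Build a classifier from (feature_vector, label) pairs."""
    by_label = {}
    for features, label in dataset:
        by_label.setdefault(label, []).append(features)
    # One centroid per label: the "program" the data writes.
    centroids = {
        label: tuple(mean(dim) for dim in zip(*vectors))
        for label, vectors in by_label.items()
    }
    def classify(features):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(features, c))
        return min(centroids, key=lambda lbl: sq_dist(centroids[lbl]))
    return classify

# Toy "perception" dataset: (width, height) of a detected bounding box.
data = [
    ((1.0, 1.5), "pedestrian"), ((1.2, 1.8), "pedestrian"),
    ((4.0, 1.6), "car"), ((4.5, 1.4), "car"),
]
classify = train_nearest_centroid(data)
print(classify((1.1, 1.7)))  # pedestrian
```

Fixing a misclassification here means adding or relabelling examples and retraining, not editing the classifier logic, which is the point being made about dataset curation.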

This perception problem is a completely open problem. It's required at least two complete restarts (one for HW3 and another for PlaidNet)... and they just get on with it. It's rapid iteration with no sunk-cost fallacy, the only way a problem like Level 5 autonomy will be solved, and as a paying consumer of AutoPilot you are at least partially funding this effort. There is absolutely no guarantee you will be impressed by the results along the way, as it's the fundamentals that count. Once those fundamentals are near enough perfect, the end result will rapidly accelerate towards similar perfection.

Once the perception problem is solved, they can move on to the planning and policy problem. This is the same type of AI that can learn to play Go, DOTA or StarCraft and beat any human player 100% of the time. Eventually we will get there, but there is no way of knowing how long it will take.

I like to see this as a Boeing vs SpaceX type contrast. Boeing will build a rocket in a clean room with hundreds of vendors and suppliers. It will be very well structured and produce consistent results every step of the way. SpaceX are building steel Frankenstein-type monsters in a junkyard, and they regularly explode. They have no idea what the final details will be and are happy to make massive fundamental changes once the data indicates that's the way to go. In the end, though, the only way to actually get to Mars is the SpaceX method, as it's the one that produces a fundamental shift (a 10,000x reduction) in the cost basis for building and launching rockets.

Same goes for AutoPilot. Mobileye would possibly have made a great Level 2 system, maybe even Level 4. But true Level 5 can only be done with fundamentals-up engineering and true software 2.0.
 
I am completely NOT a software person. However, as a consumer, if something is sold to me with certain representations, then I expect to receive such a product. We may well be guinea pigs as early adopters, but why should we be funding the basic research? If it had been represented as testing at the time of purchase, I would go along with that. Unless, of course, I am misunderstanding the thrust of what you are saying.
 

It's a fair comment. However, the way I see it, the cost of FSD has been appropriately discounted to its net present value. Yes, it's not the "Full Self Driving" people would like it to be, but if you look at the option as a collection of convenience features as a whole, it's worth it to enough people. Remember that legacy OEMs routinely charge $5k for TACC alone.

My comment around "funding" the development comes through you purchasing the vehicle and providing data to their fleet learning program.
 

you’ve done a great job of convincing yourself
 

Thanks for noticing. Through qualitative and quantitative analysis on the subject, I have come to that conclusion based on the data I've seen. I'd love to see your in-depth research on the matter if you disagree.

I don't think the so-called FSD is discounted at all. In its present form I don't think it's value for money.
Autopilot does a pretty good job, though.

Many people (myself included) would disagree. Smart Summon and even Traffic Light Control are a little underbaked right now, but NoAP is absolutely worth the $8.5k by itself if you regularly drive on highways.
 
@GrimRe yes, a little bit bitter, but for the reasons I mentioned: I was expecting Tesla to be more open and customer-facing with their development and when encountering problems. I'd want to see a ticketing system allowing me to report specific problems and know the status of addressing them, especially where there are actual bugs (of which there are a few that need addressing in "linear" programming space). At the same time, I also love my car and wouldn't want to own anything else today.

I've worked extensively with neural networks and deep learning in recent years, and I'm well familiar with the difficulty of understanding why a network's output is one thing rather than another. However... there are always debug products one has to produce, especially when working with sensor data inputs. I'm not asking to see all the intermediate transforms of the deep-learning machine, but I would like access to a log of the sensor data.
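
A minimal sketch of the kind of "debug product" being asked for, under invented assumptions (the toy model, field names and thresholds are all hypothetical, not anything from Tesla): wrap the inference step so every input/output pair is recorded, letting a surprising decision, say a phantom-braking event, be replayed and inspected later.

```python
# Hedged sketch: log every inference input/output pair so surprising
# decisions can be inspected after the fact. All names are invented.
import json
import time

class InferenceLogger:
    def __init__(self):
        self.records = []

    def run(self, model, sensor_frame):
        output = model(sensor_frame)
        self.records.append({
            "t": time.time(),          # wall-clock timestamp
            "inputs": sensor_frame,    # raw sensor reading fed to the model
            "output": output,          # decision the model produced
        })
        return output

    def dump(self):
        # Serialisable log that could be handed to the owner or service.
        return json.dumps(self.records)

# Toy "model": brake if the perceived obstacle is closer than 10 m.
def model(frame):
    return "BRAKE" if frame["obstacle_m"] < 10 else "CRUISE"

log = InferenceLogger()
log.run(model, {"obstacle_m": 25.0})
log.run(model, {"obstacle_m": 4.2})  # a candidate event to investigate
print(log.dump())
```

With a log like this, a braking event can at least be correlated with what the system thought it saw at that moment, which is exactly the feedback the posts above say is missing.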

In any case, something new and cool I just noticed yesterday (2020.24.6.1): When stopped at a traffic light on autopilot behind another vehicle, it detects when the light shifts to green and starts moving towards the vehicle in front before that vehicle has started moving.
 
I’d settle for some indicator in logs of why it chose to apply the brakes (or make other safety related manoeuvre) while I have autopilot engaged. It would let me anticipate false positive situations better and give detailed feedback to Tesla.
 

The AutoPilot team is ostensibly 60 or so people. These are the best of the best in their respective specialities, and even the human annotators are experienced (i.e. they understand semantic segmentation for computer vision). Having a "ticketing" system for every quirk and bug that every consumer sees is not scalable, and it doesn't fundamentally improve the NN architecture. With the pace they are moving at, issues pop in and out of existence every few weeks.

Just today Elon tweeted that PlaidNet has been a ground-up rewrite to put everything into 3D space. All the previous "issues" raised by consumers will be completely irrelevant once the PlaidNet inference model is deployed on cars. Time spent trying to understand, isolate, reproduce and ultimately fix them would have been wasted, as it was only relevant to a now-superseded foundational architecture. I'm not saying PlaidNet will fix all the issues of AutoPilot, but the data within tickets would be completely useless and just create unnecessary operational churn.

Tesla instead has a very robust mechanism by which they can source examples from the fleet. This in itself is a very sophisticated unsupervised-learning solution, but it gets absolutely no credit from consumers (after all, it's a backend system). However, it is far more effective at solving problems. Simply ask the fleet for any and all examples matching a certain cluster-able definition, then let the human annotators work their magic. Once done, run through the whiz-bang ML-ops infrastructure to check for regressions and/or metric improvement; if it passes, launch to a select few within Tesla, then to EAP, then to a small rollout, then to a mass rollout.
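
The "ask the fleet for examples matching a definition" step can be sketched as a trigger predicate evaluated over candidate clips (everything here, the field names, thresholds and clip structure, is hypothetical, purely to illustrate the workflow):

```python
# Hypothetical fleet example-sourcing sketch: a trigger predicate is
# pushed out, and only matching clips come back for human annotation.

def hard_brake_trigger(clip):
    """Trigger: deceleration stronger than 4 m/s^2 with no lead vehicle,
    i.e. a candidate phantom-braking event. Thresholds are made up."""
    return clip["decel_mps2"] > 4.0 and not clip["lead_vehicle"]

def source_examples(fleet_clips, trigger):
    # In the described workflow, only triggered clips leave the car,
    # keeping bandwidth and annotation effort focused on one failure mode.
    return [clip for clip in fleet_clips if trigger(clip)]

fleet_clips = [
    {"id": 1, "decel_mps2": 1.0, "lead_vehicle": True},   # normal stop
    {"id": 2, "decel_mps2": 5.5, "lead_vehicle": False},  # suspicious
    {"id": 3, "decel_mps2": 4.8, "lead_vehicle": True},   # justified brake
]
for_annotation = source_examples(fleet_clips, hard_brake_trigger)
print([clip["id"] for clip in for_annotation])  # [2]
```

The annotated matches would then feed retraining and the staged regression-checked rollout described above.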

Check greentheonly on Twitter and see how he has gone about accessing debug information from the car's CAN bus. This may be the only way to get the type of information you are looking for.

BTW I did not know about the creeping on a green light, this is very interesting!
 