Model S Software/Firmware Updates

I just got 4.4 last night too.

Here's a secret feature. Press the Tesla logo and the signal-strength icon and hold until the logo flashes a second time, then immediately release your touch, enter 20130401, and press enter. The dialog will still be up. Quickly, within three seconds, press your brake three times, then open all your windows and then close them all. Ta-da! Watch the video on the big screen. :)

So how did you find out about this?

How do we know this is not the self-destruct sequence?
 
I just got 4.4 last night too.

Here's a secret feature. Press the Tesla logo and the signal-strength icon and hold until the logo flashes a second time, then immediately release your touch, enter 20130401, and press enter. The dialog will still be up. Quickly, within three seconds, press your brake three times, then open all your windows and then close them all. Ta-da! Watch the video on the big screen. :)

Hmmm, 20130401 --> 2013-April-01 --> April Fools!
... a little late...
 
Encouraging! Sensor (blind spot, backup, etc.?) and cruise control (adaptive?) improvements will really help finish off this class of car.

Hate to burst the bubble, but the writer of this article is insane! You can't add blind-spot or backup detection with a software update; you need to add sensors/cameras. I thought everyone in the Valley was required to have basic technology competency!
 
Hate to burst the bubble, but the writer of this article is insane! You can't add blind-spot or backup detection with a software update; you need to add sensors/cameras. I thought everyone in the Valley was required to have basic technology competency!

Hey Tomas, I am a software engineer, and I can tell you that with the fish-eye camera on the Model S and some pattern-recognition algorithms you could certainly implement a blind-spot monitor and lane assist, as well as pretty good rear backup detection. The car can know what is in each section of the camera's view, along with rates of change and relative sizes. The fish-eye camera on the S is quite 'wrap-around', and I can see everything in the car's blind spots just by looking at it; if I can do it with my eyes, some piece of software can simulate that for me. Even the cheapest digital cameras can now track moving faces, smiles, and focus points, so tracking large cars coming up behind or to the side would be no problem. For anything in the rear, I think they could definitely implement some features even without adding sensors. Of course, the one challenge is that a dirty camera lens could cause a problem, but I would still take those features implemented in software, with a notification on the screen if and when the lens was too dirty to function properly. :)
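To make that concrete, here is a minimal sketch of what a software-only blind-spot check on a rear-camera feed could look like, assuming OpenCV; the capture source, the blind-spot region coordinates, and the area threshold are made-up illustration values, not anything Tesla actually exposes.

```python
# Minimal sketch: watch one region of the rear-camera frame for large moving
# objects. All constants below are hypothetical and would need tuning on
# real footage from the fish-eye camera.
import cv2

cap = cv2.VideoCapture(0)                           # stand-in for the rear-camera feed
subtractor = cv2.createBackgroundSubtractorMOG2()   # separates moving pixels from background

BLIND_SPOT = (0, 200, 300, 280)   # hypothetical left blind-spot region: x, y, width, height
MIN_AREA = 5000                   # ignore small blobs (rain, shadows)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = BLIND_SPOT
    roi = frame[y:y + h, x:x + w]                   # crop the blind-spot region
    mask = subtractor.apply(roi)                    # foreground (moving) pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        print("Possible vehicle in left blind spot")  # a real system would alert on the screen

cap.release()
```

A production system would also need the dirty-lens check mentioned above (for example, flagging frames whose overall contrast stays abnormally low), but the core idea is just region-of-interest motion detection.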
 
As I've noted previously, the Audi blind-spot detection relies on the rear-view camera to function, but I believe there are additional sensors in the wing mirrors as well. Of course, there are several antennas in the wing mirrors whose function we're not 100% sure of, so I'm reasonably optimistic they could do all of this in software.
 
OK, I stand corrected. You could NOT do adaptive cruise without new sensors, but you COULD probably do blind spot and backup warning by processing the rear-camera signal. The question is, would you want to, and is it good design? I personally get nervous about all this real-time, mission-critical processing being routed through one CPU that is also doing a lot of other things, and where frequent software updates occur. A dedicated sensor whose only job is to detect things during backup is more reliable and takes less processing than a sophisticated AI program analyzing HD video... etc., etc.
 
The drivetrain and the dashboard screens are two different systems! Otherwise you would not be able to reboot them while driving.

Concerning blind spot via camera:

Watch this video of a Corvette racing. In the middle is a screen with a rear-view camera. Every time a car nears, a big arrow shows how far behind it is, and when a car passes, a big arrow shows which side it passes on.

 
As I've noted previously, the Audi blind-spot detection relies on the rear-view camera to function, but I believe there are additional sensors in the wing mirrors as well. Of course, there are several antennas in the wing mirrors whose function we're not 100% sure of, so I'm reasonably optimistic they could do all of this in software.

I'm quite sure that Audi's and other "Blind Spot" warning systems use two wide-bandwidth FMCW microwave radars operating in the 24.150 GHz ISM band. They are concealed behind the rear bumper covers and are pointed at an outward angle from each rear corner of the vehicle. They are all manufactured by a custom-application radar sensor manufacturer in Germany (whose name escapes me at the moment) and then re-labeled by large automotive system/component suppliers for inventory convenience.
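For anyone wondering why the "wide-bandwidth" part matters: an FMCW radar's range resolution is set by its swept bandwidth B, roughly ΔR = c / (2B). Here's a quick back-of-the-envelope check; the 250 MHz figure is simply the full 24.0-24.25 GHz ISM allocation, and real sensors may sweep less.

```python
# Back-of-the-envelope range resolution for an FMCW radar:
#     delta_R = c / (2 * B)
# The bandwidth values below are illustrative, not measured specs.
C = 299_792_458  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Smallest range difference the radar can distinguish."""
    return C / (2 * bandwidth_hz)

print(range_resolution(250e6))  # ~0.60 m using the full 24.0-24.25 GHz ISM band
print(range_resolution(100e6))  # ~1.50 m with a narrower sweep
```

Sub-metre range resolution is plenty for deciding whether something is closing in on the rear corner of the car.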
 
I personally get nervous about all this real-time, mission-critical processing being routed through one CPU that is also doing a lot of other things, and where frequent software updates occur.

Automotive systems aren't like that. There are many ECUs that are mostly independent of each other. A car is not like your computer, where there is often only one CPU. The proof is that you can reboot either or both displays while driving and no critical functions are affected.
 
Jerry and Todd, in the context of the thread: there was conjecture that blind spot could be added via software by using the rear-view camera and AI to interpret the signal. If it were to be added in software, then an EXISTING CPU would have to process and interpret the image to do the blind-spot detection. I personally see that as a "mission-critical" real-time thing that needs a dedicated processor, not a piece of an existing one that's also doing something else. So my assertion was that hardware would be needed to do it right, and it would be bad design to simply add the AI to an existing processor. I never said there weren't many CPUs; in fact, I have been asserting that more is better for real-time systems.

Yep. I'm sure there are dozens of independent CPUs in the Model S.
 
Jerry and Todd, in the context of the thread: there was conjecture that blind spot could be added via software by using the rear-view camera and AI to interpret the signal. If it were to be added in software, then an EXISTING CPU would have to process and interpret the image to do the blind-spot detection. I personally see that as a "mission-critical" real-time thing that needs a dedicated processor, not a piece of an existing one that's also doing something else. So my assertion was that hardware would be needed to do it right, and it would be bad design to simply add the AI to an existing processor. I never said there weren't many CPUs; in fact, I have been asserting that more is better for real-time systems.

1. Just because the video comes from the same source doesn't necessarily mean it has to use the same processor. Video can easily be split across multiple CPUs (see the sketch after this list).

2. (And this is where I think we differ.) These kinds of alerting systems are secondary. That is, the driver has the primary responsibility for detecting vehicles and pedestrians and shouldn't rely on the alerting systems as a first means of identification. The alerting systems are a good backup, but they are just a backup.

3. Adjusting the mirrors with the head-wobble system eliminates the blind spots in the Model S.
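
On point 1, here is a rough sketch of how a single camera feed could be fanned out to independent worker processes so that blind-spot analysis never shares a core with anything else. This is purely illustrative Python, with fake frame strings standing in for real video and processes standing in for what would really be separate ECUs or dedicated cores.

```python
# Illustrative only: one frame source, several independent consumers.
import multiprocessing as mp

def blind_spot_worker(frames):
    while True:
        frame = frames.get()
        if frame is None:                 # sentinel: shut down
            break
        # ... run blind-spot detection on `frame` here ...

def backup_warning_worker(frames):
    while True:
        frame = frames.get()
        if frame is None:
            break
        # ... run backup-object detection on `frame` here ...

if __name__ == "__main__":
    queues = [mp.Queue(maxsize=4), mp.Queue(maxsize=4)]
    workers = [
        mp.Process(target=blind_spot_worker, args=(queues[0],)),
        mp.Process(target=backup_warning_worker, args=(queues[1],)),
    ]
    for w in workers:
        w.start()

    for frame_number in range(100):       # stand-in for reading real camera frames
        frame = f"frame-{frame_number}"
        for q in queues:                  # the same frame goes to every consumer
            q.put(frame)

    for q in queues:                      # tell the workers to stop
        q.put(None)
    for w in workers:
        w.join()
```

The point is simply that sharing a video source does not force all of the analysis onto the CPU that drives the touchscreen; each consumer can run on its own core or, in a car, its own ECU.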