Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

When are the Matrix Headlights going to be enabled?

I just saw a video of Audi's headlight system and remembered that Tesla has matrix headlights on their newer vehicles, but they are disabled. I think I heard somewhere that Tesla would be enabling the matrix functionality via software update, but whatever happened to that?

 
I just saw a video of Audi's headlight system and remembered that Tesla has matrix headlights on their newer vehicles, but they are disabled. I think I heard somewhere that Tesla would be enabling the matrix functionality via software update, but whatever happened to that?

I was just wondering the same thing (I have matrix headlights). It’s not a trivial software feature to implement, so that could be holding it up. And, of course, Tesla would need specific regulatory approval to enable them, so that takes time.
 
A Benz with Multibeam was functioning in the US.
The Benz was in front of me on a two-lane road, lighting up everything without blinding the oncoming car.
So Tesla's matrix headlights should be able to function too, but that's not the case.
 
Our '23 MYLR (Giga Berlin) has matrix headlights, but they are "dumb". And there are no regulations against this in Norway, so why it hasn't been pushed out in a software update is baffling, tbh. Even my three-year-old e-Golf has fully functioning matrix lights.
 
It’s plausible that this is, like the USS replacement stuff, blocked on the single-stack Tesla Vision (TV) rollout.

Speaking as an IT architect, IF you have the capability for your car to model the 3D space around your car, the natural way to build these capabilities would be on top of that foundation.

How do you provide parking sensors? Well, you consult your 3D model and provide feedback based on it. How do you do matrix lighting? Well, you’ve got a 3D model that already understands where the other entities around the car are, so you would consult that and decide where to shine your lights.

What I would not do is build completely independent subsystems for all of these things in addition to the fancy 3D model capability that you still need anyway for self-driving. That would mean greater overall complexity and less efficiency long term, both in the processing in the car and in the development to support and enhance it.

I think there are exciting possibilities here if you consider what the TV model knows versus what a traditional headlight-spotting camera knows. It would be pretty simple, for example, for Tesla's matrix to also mask out pedestrians and cyclists, not just oncoming cars. It would even be simple to mask out only oncoming vehicles and leave the lights on for those facing away from you. That would be really hard to do if you’ve built a system that spots headlights, but trivial if you’ve already got a system that models everything and predicts its motion.
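To make that concrete, here's a rough Python sketch of what "consult the 3D model" could look like. Everything here is hypothetical: the track fields, segment count, and beam field of view are illustrative assumptions, not Tesla's actual perception output or LED layout.

```python
import math
from dataclasses import dataclass

# Hypothetical track record from a 3D perception model.
# Field names and units are illustrative assumptions only.
@dataclass
class Track:
    x: float            # metres ahead of our car
    y: float            # metres left (+) / right (-)
    heading_deg: float  # object heading relative to ours (0 = same direction)
    kind: str           # "car", "pedestrian", "cyclist", ...

N_SEGMENTS = 24    # assumed LED pixels across the beam
FOV_DEG = 40.0     # assumed horizontal field covered by the matrix beam
MARGIN_DEG = 1.0   # extra dimming either side of each target

def segments_to_dim(tracks):
    """Return the set of LED segment indices to switch off."""
    dim = set()
    for t in tracks:
        oncoming = abs(t.heading_deg) > 120  # roughly facing us
        if t.kind == "car" and not oncoming:
            continue  # leave the beam on for vehicles facing away
        bearing = math.degrees(math.atan2(t.y, t.x))
        # Angular half-width of a ~2 m target at this range, plus margin
        half = math.degrees(math.atan2(1.0, math.hypot(t.x, t.y))) + MARGIN_DEG
        for i in range(N_SEGMENTS):
            seg_angle = -FOV_DEG / 2 + (i + 0.5) * FOV_DEG / N_SEGMENTS
            if abs(seg_angle - bearing) <= half:
                dim.add(i)
    return dim
```

The point of the sketch: the lighting logic itself is a few lines; all the hard work (positions, headings, object classes) comes for free from the model the car needs anyway. Pedestrians get dimmed unconditionally, while same-direction cars are skipped, which is exactly the kind of rule that is trivial here and hard in a headlight-spotting design.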

I don’t know that they’re planning to do this (and obviously I don’t know how good single stack is going to prove to be in the real world), but I know that if I were on that software team, this is exactly how I would advise they approach it. It would be poor architecture to do it the other way, due to all the redundant capability.

There is a chance that ‘23 is going to be an exciting year for Tesla software updates. They could be about to leapfrog their way back in front of the competition again.
 
It’s plausible that this is, like the USS replacement stuff, blocked on the single stack TV. [...]

The proof will be in the pudding. Unless they release it in Q1 this year, I am selling the car.
 
Wouldn’t blame you. There’s a big difference between the car being technically capable of a thing and management respecting their customers enough to productionise it.
 
It’s plausible that this is, like the USS replacement stuff, blocked on the single stack TV. [...]
Yes but...

Matrix lights are not a 3D problem. Yes, if you have a reliable 3D model of what's around you, then use it. But 2D would be fine. Probably 1D would be fine: you need to know a range of angles in which not to shine light. You don't need a 3D model to solve this problem. So a separate dedicated system from somebody who specializes in this, has already built it, and can achieve greater economies of scale by selling to multiple auto manufacturers may be better than Tesla building their own from scratch. Imagine a world where these are built by Hella and Bosch, who have lighting divisions that compete with each other and constantly improve, as opposed to Tesla, who might eventually get around to this as a checkbox item and then leave it stagnant.
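For contrast, the 1D version really is that simple. A minimal sketch, assuming a hypothetical 24-segment beam over a 40-degree field, with glare intervals coming straight from a dedicated headlight-spotting camera (no 3D model anywhere):

```python
N_SEGMENTS = 24   # assumed number of LED segments
FOV_DEG = 40.0    # assumed horizontal field of the beam

def mask_from_intervals(glare_intervals):
    """Bitmask of segments to darken, given glare angle intervals in degrees.

    glare_intervals: list of (lo, hi) angles relative to straight ahead,
    e.g. reported by a simple headlight-spotting sensor. Purely 1D.
    """
    seg_width = FOV_DEG / N_SEGMENTS
    mask = 0
    for i in range(N_SEGMENTS):
        lo = -FOV_DEG / 2 + i * seg_width
        hi = lo + seg_width
        for (g_lo, g_hi) in glare_intervals:
            if hi > g_lo and lo < g_hi:  # segment overlaps a glare interval
                mask |= 1 << i
                break
    return mask
```

The entire "decide where not to shine" step is an interval-overlap test, which is why a supplier can ship it as a self-contained module without needing anything like a full perception stack.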

A good analogy is automatic wipers: it sounds good on paper to leverage an existing vision system to also decide when the windshield is wet. But that vision system is focused 50 meters in front of the car and only looks through about 1 square cm of glass, so you end up with poor results, as compared to a single IR sensor that can average a large portion of the windshield and adds an extra $1 to the BOM cost.

To be fair, there are pros and cons to both approaches. One reason I gave Tesla my money is that they have engineering that rethinks how things ought to be done, and they develop technology in house, whereas other American auto companies are more than happy to remain completely clueless and just buy technology from somebody else. I like that they are shaking up the entire industry, and I wish to support that. The downside, of course, is that outsourcing a subsystem allows it to be built by specialists who will do a better job.
 
Tesla, to quote one of their AI day presenters, are not a car company. They are a technology company, and one of the things they happen to sell is cars.

They aren’t interested in ‘fine’, they’re interested in driving (excuse the pun) things forward.
 
Tesla, to quote one of their AI day presenters, are not a car company. They are a technology company, and one of the things they happen to sell is cars.

They aren’t interested in ‘fine’, they’re interested in driving (excuse the pun) things forward.

That's a nice PR statement for AI Day, which is a recruiting event. There's plenty in the car where mediocre is good enough. Tesla excels at attacking problems that others have written off as unsolvable: commissioning the world's largest presses to do the megacastings, getting the whole industry excited about autonomy. Tesla deserves a lot of respect for these things, as they push the entire industry forward. Tesla has had a huge impact on autonomy even though they haven't solved it.

But by "fine" I meant completely sufficient for the needs. If you have a reliable 3D model and can achieve the same results, then use it: building something else would be poor engineering. But there is nothing about matrix headlights as a problem that requires a 3D model. The NHTSA will want matrix headlights to be production quality. A 3D model that blinds oncoming drivers 5% of the time, that is in perpetual beta, and that might get better in two weeks is probably not what they have in mind. Of course, it's Tesla, so they'll ship it for a year or three until the NHTSA gets around to calling B.S. on them.
 
Bump.

I really haven’t heard anything recently regarding enabling adaptive headlight features. I would hope some progress has been made.
I certainly hope so. But we’ll probably only find out shortly after an update, when someone takes a look and finds dormant code for adaptive features on the matrix headlights.

I wonder if Tesla's matrix LED headlights have the same "resolution" as other manufacturers'. When I see "TESLA" projected on my wall (10 feet away) while playing the light show, the word does not have sharply defined edges. I thought it was just my headlights, but I see many online video clips that appear the same way. I wonder how that will affect the precision of darkening a slim slice of the beam for an oncoming vehicle that is still some distance away. That's probably one of the most common uses of adaptive headlights, along with projecting info onto the road ahead of you (road warnings, speed limits, nav turns, etc.).
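The geometry behind that resolution question is easy to check. This sketch computes the angle a 1.8 m wide oncoming car subtends at various ranges; the 24-pixel / 40-degree beam used for comparison is a purely hypothetical layout, since Tesla's actual segment count isn't public.

```python
import math

def subtended_angle_deg(width_m, distance_m):
    """Full horizontal angle an object of width_m subtends at distance_m."""
    return math.degrees(2 * math.atan((width_m / 2) / distance_m))

# A 1.8 m wide oncoming car at a few ranges:
for d in (25, 50, 100):
    print(f"{d} m: {subtended_angle_deg(1.8, d):.2f} deg")

# If (hypothetically) the beam covered 40 deg with 24 pixels, each pixel
# would span ~1.67 deg, wider than the ~1 deg a car subtends at 100 m,
# so the dimmed slice would necessarily be coarser than the target.
```

In other words, at distance the limiting factor is pixel count, not software: a low-resolution matrix has to dim a slice noticeably wider than the oncoming car.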
I can only imagine the choices Tesla (who really loves their fart noises) will provide their customers to project onto the road once they flip the switch on adaptive lighting.