Dear Tesla and Tesla FSD Team

Dear Tesla and Tesla FSD,

I have owned a 2018 Tesla Model S (75D) and now own a 2019 Tesla Model 3. I’m a huge fan of the FSD Beta program, in which I am currently enrolled and testing with my Model 3.

Problem:

I do realize the software is in beta; however, I’m contacting you regarding the hardware installed in my vehicle. Ever since the rollout of FSD Beta, I have noticed that MCU 2 (Intel Atom) is not sufficient to handle the visualizations and day-to-day use of the infotainment screen. Normally I wouldn’t let this bother me, as I understand technology gets outdated (e.g., an iPhone slowing down from one generation to the next). However, with FSD Beta installed on vehicles with MCU 2, it is imperative for the visualizations to properly reflect real-world changes. This is a matter of safety: if the visualizations are laggy, delayed, or buggy, what confidence does that leave the user to utilize FSD?

Solution:

I am requesting that your team please consider offering (at least) the FSD customers who purchased the package outright, and perhaps FSD subscribers, some form of MCU 3 retrofit, similar to the Model S going from MCU 1 to MCU 2. For the Model S, that was simply a quality-of-life improvement. However, for FSD Beta and future FSD users it’s a matter of safety and accurate reporting.

Conclusion:

I hope to hear back from your team regarding this request and feedback, as I’m sure I can’t be the only one with this problem; many users on Reddit and TeslaMotorsClub are expressing visualization issues with older MCU 2 hardware.

Thank you for your time and consideration. I hope there will be a solution to this issue in the near future.

Regards,

Arya
 
1. I doubt Tesla will see this message. They do not respond to messages on this forum. If you want Tesla to see this message, you need to email it to Tesla.
2. There is no MCU3 AFAIK. Or are you saying that you want Tesla to give FSD beta testers a free upgrade to MCU3 if it comes out? I am not even aware of any plans for an MCU3.
3. Are you posting this message just to make a point that you think the FSD visualizations are laggy?
 
Visualizations are irrelevant, they don't affect safety.
 
1) I am hopeful that perhaps a Tesla employee might see this and take it back to their team.
2) I know there isn’t a retrofit kit available at this moment, but I’m hopeful that, just like with the Model S, they may offer one if there’s a large enough demand.
3) I’m posting this as an FSD Beta user who has compared the same software revision on my friend’s 2022 Model Y (with MCU 3, Ryzen). His visualizations aren’t choppy/laggy. So for those using FSD, I think it’s a fair request.
 
Much safer to keep your eyes on the road. I’ve never understood the argument that visualizations are necessary for safety at all.
Confidence in the system leads to complacency, which is the opposite of safety.
I’m not saying to depend solely on the visualizations for driving. I am saying that as I’m keeping my eyes on the road, as a tester, it’s important to ensure that what I see in the real world properly matches what the visualization is showing - an accurate depiction. The argument for safety stems from the fact that what the visualization presents is what the FSD computer is processing. If this system is in any way laggy or misrepresenting the real world, that is a problem. No form of automation should ever lead any individual to complacency; this is why safety standards are in place.
 
Visualizations are irrelevant, they don't affect safety.
Quite the contrary - when teaching an AI system, we need it to know how to interpret what it’s looking at. We use real-world examples to give it a baseline against which it can cross-reference. Does it look like a duck? Does it quack like a duck? Does it walk like a duck? Then perhaps with 75% accuracy we can say it’s a duck, since we can’t eat the live duck.

Visualizations help communicate to the driver that both the FSD system and the driver are on the same page as to what’s being presented.
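Purely as a toy sketch of that duck test (made-up checks and an arbitrary threshold, not a real classifier):

```python
# Toy sketch: average a few independent feature checks into a crude
# confidence score, in the spirit of the duck test above.

def duck_confidence(looks_like: bool, quacks_like: bool, walks_like: bool) -> float:
    """Fraction of feature checks that passed."""
    checks = [looks_like, quacks_like, walks_like]
    return sum(checks) / len(checks)

score = duck_confidence(looks_like=True, quacks_like=True, walks_like=True)
if score >= 0.75:  # arbitrary cutoff for this example
    print(f"Probably a duck ({score:.0%} of checks passed)")
```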
 
I’ve never seen anyone change their behavior based on what they see on the screen while using FSD beta.
The vast majority of disengagements seem to occur with the screen displaying an accurate representation of the real world anyway.
 
That "accurate depiction" argument makes no sense and is terribly tortured. The visualization is a representation of things for the user; it is not, nor is it meant to be, an accurate representation of what FSD is processing.

You're not testing what FSD is seeing. You're testing what FSD is doing.

If you're under the impression you're testing the visualization, you're on the wrong team and testing the wrong things. Apply to work at Tesla. They are recruiting heavily. They have an entire AI day tomorrow to recruit people for that.

If you just want a better MCU... just say so. Don't be yet another alarmist asking for a change under the guise of "safety" that has nothing to do with safety.
 
My understanding is that visual images are received from the cameras by a pre-processor that turns the raw visual data into usable blocks. Those blocks are sent to the various neural nets for decision-making. One of those nets handles image recognition; it processes the data and sends the perceived objects to the visualization system, which shows them to you on screen.

The NNs that handle decisions get the data directly from the image pre-processor and have made their decision before the net that handles image recognition gets anything to the screen. So what's on screen will always be just a little behind what's actually happening.
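Purely as a hypothetical sketch of that ordering (all names invented; this is not Tesla's actual code), the decision path acts on each pre-processed frame immediately while the visualization path draws it one tick later:

```python
# Hypothetical sketch: decision nets consume each pre-processed frame as soon
# as it arrives, while the recognition/visualization path renders it on the
# next tick, so the screen always trails the planner slightly.

frames = ["frame-1", "frame-2", "frame-3"]
awaiting_render = None  # frame that will reach the screen on the next tick

for frame in frames:
    blocks = {"features": frame}                 # stand-in for pre-processor output
    decision = f"plan for {blocks['features']}"  # decision nets act on it now
    on_screen = awaiting_render or "(blank)"     # screen still shows the prior frame
    awaiting_render = frame                      # this frame renders next tick
    print(f"{decision}; screen shows {on_screen}")
```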
 
When I first got my 3, there were no visualizations. People were "scared" because they couldn't see what the car "saw". A few updates later they implemented visualizations. From the start Tesla's been very clear that it does not represent a full picture of what the car actually sees - it never was intended to. I'd rather they save the computing cycles and put them to better use.
 
Posting on TMC in the hopes an employee will see it AND take it to work is a fool's errand.

I suggest two known direct ways to reach Tesla (and don't expect a response):

1. Email fsdbeta at tesla dot com.

2. Use the snapshot button when the car makes a wrong or inelegant move. Tesla will use that data to train their NNs.

Whatever value the visualization might provide is not worth the safety tradeoff of taking your eyes off the road.

Real world is life or death. Vectorspace is not.
 
Regarding that camera/pre-processor pipeline description: are you saying that images are transmitted to Tesla in real time? That sounds very processing- and ping-time-intensive. It would explain the poor performance, though.
 
No, not at all. The cameras send their real-time video feed to a pre-processor that takes the raw images and converts them to data points. That new data is then fed to the autonomy code / neural nets, which process it, make decisions for the car, and feed the visualizations. All of this happens in the car; there is no transmitting or receiving of data from Tesla for those functions.
 
So this neural net you speak of is contained within each individual car? What makes it a network?

What Is a Neural Network?

A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. In this sense, neural networks refer to systems of neurons, either organic or artificial in nature.
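For what it's worth, here's a minimal sketch of that definition (arbitrary, untrained weights) showing why it's called a network: many simple neurons wired together, each weighing its inputs and passing the result on.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs squashed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def tiny_network(x):
    """Two hidden neurons feeding one output neuron - the 'network' part."""
    h1 = neuron(x, weights=[0.5, -0.4], bias=0.1)
    h2 = neuron(x, weights=[-0.3, 0.8], bias=-0.2)
    return neuron([h1, h2], weights=[1.2, -0.7], bias=0.05)

print(tiny_network([0.9, 0.1]))  # one forward pass through the whole net
```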
 