Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

What we learned at the shareholders meeting......

  • Configurator open at end of July, when production starts.
  • Dual motor not available until end of 2017, or more likely early 2018. No mention of performance version. (I'm hoping for not much later than the dual motor.)
  • Very few options initially--maybe only color, interior finish and wheels.
 
Teased Model Y, said it would be cheaper (than 3) to manufacture, and would need a new factory, since Fremont is at capacity. (In part due to lack of parking space for employees and contractors!)
 
Mentioned that Tesla does the audio system for Model S, X, and 3 in-house, and that he thinks they can improve the sound quality with a software update to the "CODEC"?
Seemed a slightly odd response to me... I guess "we can improve it with software updates" is an answer for everything.
He did mention they are working on better song-recommendation (playlist management) features.
I am curious what will be offered in terms of streaming music services, and music download options.
Personally, I just want to be able to manually upload all my own preferences into flash memory somewhere.
 
In audio quality, computing power has dramatically reduced the price point for a great-sounding audio system. A good sound system and speakers are supposed to reproduce the music exactly as the source; cheap speakers usually have non-linearity across different notes, so certain notes sound louder than others, and some sharp transitions may not come out as sharp, so notes can sound muffled.

With modern computing power, instead of having to build a very fancy speaker system with good fidelity, one can take relatively basic speakers, put a mic where you would sit, map out the frequency response in very fine detail, then adjust the amplitude of different notes to compensate for the speakers' non-linearity, and even exaggerate some transitions so what comes out of the speaker actually mimics the intended sound. Think of it as an equalizer, but accurately tuned to each speaker connected to it, on a very fine-grained scale.
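The correction idea above can be shown in a toy sketch (not Tesla's actual pipeline; the dip frequency and gain cap are made-up numbers): simulate a "speaker" whose measured magnitude response sags around one band, then build the inverse EQ curve that flattens it.

```python
import numpy as np

fs = 1000                          # toy sample rate, Hz
freqs = np.fft.rfftfreq(fs, 1 / fs)

# Pretend the mic measurement showed the speaker dips to half
# amplitude around 100 Hz and is flat everywhere else.
response = np.ones_like(freqs)
response[(freqs > 90) & (freqs < 110)] = 0.5

# Inverse EQ: boost where the speaker is weak. Clamp the gain so a
# deep null doesn't demand a huge (distorting) boost.
eq = np.clip(1.0 / response, 0, 4.0)

# Speaker response times EQ curve = flat combined response.
flattened = response * eq
```

A real room-correction system does the same thing with a measured impulse response and thousands of bands, but the principle is just this per-frequency multiply.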

Similar stuff goes into those "virtual surround sound" systems: the smarter ones can map out the distance from each speaker to where you sit, figure out how the sound waves from different speakers would amplify or cancel each other, and precisely engineer what you hear.
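The distance-mapping part boils down to simple time-of-flight arithmetic. A minimal sketch (the speaker distances here are invented, not from any real car): delay the nearer speakers so every speaker's sound arrives at the listener together.

```python
SPEED_OF_SOUND = 343.0   # m/s at room temperature
SAMPLE_RATE = 44100      # Hz

def delay_samples(distance_m, farthest_m):
    """Samples of delay to add so this speaker's sound arrives at the
    listener at the same time as the farthest speaker's."""
    extra_time = (farthest_m - distance_m) / SPEED_OF_SOUND
    return round(extra_time * SAMPLE_RATE)

# Hypothetical listening position: one near speaker, one far one.
distances = {"front_left": 1.0, "rear_right": 2.5}
farthest = max(distances.values())
delays = {name: delay_samples(d, farthest) for name, d in distances.items()}
# The farthest speaker gets zero delay; nearer ones wait a few ms.
```

With arrival times aligned, the system can then choose phases so the waves add where you sit instead of cancelling.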

The computing power we have now allows all this to happen; the software followed once people realized what they could do with it.
 

I know about some of all that, but he specifically mentioned improved CODEC. Not DSP software...
Are they using some transcoding/re-compression mechanism to store media in a way that has compromised audio quality?
Are they talking about using a different CODEC to store audio files on the system?
(Say as a random example, they switch from 256kbps MP3 to 320kbps OGG)
Maybe the audio streams are passed between components in some compressed format?

Hmm, maybe they should use "middle-out" compression? :p
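The transcoding worry above is real in principle: each lossy generation throws away a bit more. A toy model (deliberately not a real codec; it stands in for lossy encoding with coarse quantization) shows that re-encoding an already-lossy signal onto a mismatched quantization grid adds error on top of the first pass.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(10000)   # stand-in for raw audio samples

def lossy(x, levels):
    """Quantize to `levels` evenly spaced steps over roughly [-4, 4].
    A crude stand-in for one lossy encode generation."""
    step = 8.0 / levels
    return np.round(x / step) * step

once = lossy(signal, 256)     # first "encode"
twice = lossy(once, 200)      # "transcode" with a different, mismatched grid

err_once = np.mean((signal - once) ** 2)
err_twice = np.mean((signal - twice) ** 2)
# err_twice > err_once: the second lossy pass compounds the damage.
```

So if the car really does re-compress audio internally, switching to a better codec (or skipping the re-compression) could plausibly be what "improving the CODEC" means.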
 
I was hoping "codec" was a slip of the tongue. I'm personally not going to notice a simple codec change unless the audio is horrible now. I would notice a DSP software change.

Your example of 256 kbps MP3 to 320 kbps OGG: I wouldn't notice the difference on standard car speakers.

The person asking the question really wanted to know if there was going to be an audio upgrade on the Model 3 like the Model S.
 
I think the kind of compression/digitization used by the CODEC can make a difference in how the sound is processed by the DSP. Based on my very old knowledge of signal processing from many years ago, equalization functions can be applied directly in the frequency domain, while edge enhancement can be applied in either the time or frequency domain. Transforming the audio signal into the frequency domain before passing it to each component may allow a simpler, lower-cost DSP built into each component to process it easily. I've definitely heard mentions that certain compression methods retain more features of the audio stream, which lets the DSP work better, while other compression methods may work better when the uncompressed output is fed directly into an analog audio system.
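The "EQ applies directly in the frequency domain" point can be made concrete with a small sketch (toy signal and cutoff, not any real car-audio pipeline): once the signal is in the frequency domain, equalization is just a per-bin multiply, whereas the equivalent time-domain filter would be a convolution.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                         # exactly 1 second
# Two tones: 100 Hz and 1 kHz, both at unit amplitude.
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Equalization in the frequency domain: one multiply per bin.
gain = np.ones_like(freqs)
gain[freqs >= 500] = 0.25      # cut everything above 500 Hz by 12 dB

y = np.fft.irfft(X * gain, n=len(x))
# In y, the 1 kHz tone is at a quarter of its input amplitude;
# the 100 Hz tone is untouched.
```

This is why a component that receives an already-transformed stream can get away with a much simpler DSP: the expensive transform has been done once, upstream.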
 

How does any of that make the FM radio sound better in the car?... Pretend you have zero control over the original audio source or the codec used to encode it.