Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla lowers Model 3 range estimates in Europe due to extra power consumption of AMD Ryzen processor

I don't think that's quite the correct math. If you look at EPA test results, page 20, the 2022 Tesla LR AWD:

UDDS cycle: 505 miles, average speed 19.59 mph, so it takes 25.8 hours. 82.067 kWh DC was used, meaning around 3200W. 35.5W/3200W = 0.0111 or 5.6 miles of range.

On Highway cycle: 475 miles, average speed 48.3 mph, 9.8 hours. 82.067 kWh DC, ~8400W. 35.5/8400W = 0.0042 or 2 miles of range.

On FTP cycle: 306 miles, average speed 21.2 mph, 14.4 hours. 76.577 kWh DC, ~5300W. 35.5/5300W = 0.0067 or 2 miles of range.
https://dis.epa.gov/otaqpub/display_file.jsp?docid=54391&flag=1
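The arithmetic above can be sanity-checked with a short script. The cycle figures come from the linked EPA test results; the 35.5 W accessory draw is the assumed worst-case CPU load, not a measured number:

```python
# Estimate range lost to a constant accessory load on each EPA test cycle.
# Cycle data: (miles, average mph, DC kWh used), from the EPA document above.
CYCLES = {
    "UDDS":    (505, 19.59, 82.067),
    "Highway": (475, 48.3,  82.067),
    "FTP":     (306, 21.2,  76.577),
}
ACCESSORY_W = 35.5  # assumed worst-case sustained CPU draw

for name, (miles, mph, kwh) in CYCLES.items():
    hours = miles / mph                  # duration of the cycle
    avg_w = kwh * 1000 / hours           # average DC power over the cycle
    frac = ACCESSORY_W / avg_w           # share of power going to the accessory
    print(f"{name}: {avg_w:.0f} W avg, {frac:.2%} -> {frac * miles:.1f} mi lost")
```

The slow UDDS cycle comes out around 3200 W average and ~5.6 miles lost, matching the numbers above; the faster Highway cycle is only ~2 miles.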

Of course, the Ryzen processor wouldn't be running at full power, so I still don't feel it'll be that much, but if it was running full bore (especially if you add in a portion of 65-90W for the discrete Navi 23 GPU) and the test cycle had very low average speeds (similar to UDDS), you can get multiple miles of range impact (not just rounding to zero).
Agreed, and thanks for that link. It isn't zero, but I think he was being facetious.

We could look at it this way, and let me know if I am missing something.

Assuming ~270 Wh per mile (rated efficiency of the Model Y Long Range), you would have to run the processor at full bore for ~8 hours (8 h × 35 W ≈ 280 Wh) to lose 1 mile of range.
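That back-of-envelope check in code (the 270 Wh/mi rated efficiency and the 35 W full-load draw are the assumptions from above):

```python
WH_PER_MILE = 270   # assumed rated efficiency of a Model Y Long Range
CPU_W = 35          # assumed sustained CPU draw at full load

# Hours of full-bore CPU needed to consume one mile's worth of energy
hours_per_mile_lost = WH_PER_MILE / CPU_W
print(f"{hours_per_mile_lost:.1f} h of full CPU load ≈ 1 mile of range")
```

It comes out to about 7.7 hours, consistent with the ~8-hour figure.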
 
The hardware and software guys say it really comes down to how the power management works. They all claim the Atom wasn't fully utilized, and it's a 4-core versus a 4-core comparison. At least most say it's a wash. Someone would need clock frequencies and true power numbers, and the chip would need to be decapped to see what's really in there. Well beyond our info.
 
I am in the chip industry, specifically power consumption.

Chip power consumption is the number one issue nowadays, since everyone’s chip basically works. Power and performance are huge issues.

I would not be surprised if the range reduction happens due to power-hungry semiconductors.

My Model S used to lose 12-15 miles overnight when I bought it in 2013, just sitting still. Some of that was due to the BMS, but even in mild spring weather it still managed to lose a lot of miles. Tesla changed the software to stop constantly communicating over Wi-Fi, and the loss dropped to about 7 miles.

It still manages to lose 7 miles overnight.

I'm not sure you have a grasp of how much wattage we're talking about here...
 
I’m not sure you understand how chip power is affected by workloads. Or how many semiconductors are in the system, total.

Even if the CPU change added a hypothetical 200 W (the Ryzen built into Teslas uses 45 W, but I'm being generous) to the infotainment system load, it would be a tiny extra consumption compared to the drive units. Assuming an average speed of 60 km/h and an average consumption of 200 Wh/km, a hypothetical extra 200 W would only add a ~1.6% increase in load (less if other auxiliary consumers are non-negligible).

The power difference in chips is a rounding error. You have absolutely no idea what you're talking about.
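For what it's worth, that percentage checks out (a quick sketch using the assumed 60 km/h average speed, 200 Wh/km consumption, and the deliberately generous 200 W extra draw):

```python
SPEED_KMH = 60    # assumed average speed
WH_PER_KM = 200   # assumed average consumption
EXTRA_W = 200     # deliberately generous extra infotainment draw

drive_power_w = SPEED_KMH * WH_PER_KM   # average drive power: 12,000 W
extra_fraction = EXTRA_W / drive_power_w
print(f"{extra_fraction:.2%} extra load")
```

That gives roughly 1.67%, i.e. the ~1.6% figure quoted.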
 
To put it another way, the power difference is comparable to turning on your headlights. It's just not going to have a big impact.
 
Again, peak power draw is not the same as average. Depending on how the software drives the hardware, peak power draw is sometimes 10 times the average. Without a detailed analysis, you would not know the actual power draw.

Also, there are other chips working in the system. Ryzen could have pushed the consumption higher than the earlier versions. While there was a range loss earlier too, this would have increased it to something more noticeable.

This is why looking at information available online and drawing quick conclusions is hazardous. Also, we are talking about a 7-mile reduction, right? (Unless I am missing something.) Before Ryzen, the loss could have been lower, maybe 2-3 miles, not very noticeable. Now it has likely become more noticeable. It's possible. However, without a detailed analysis of the actual system, it is not possible to reach a definitive conclusion.
 
TDPs on CPUs typically represent peak draws, not averages (excluding microseconds where they can exceed that). The peak should be 45W compared to Atom's 10W or whatever. Sustained should be much lower. All the calculations in this thread are assuming a worst case scenario of constant 100% load.

Considering the other hardware hasn't changed, I wouldn't expect other chips to be contributing a major difference.
 
Nope, that is incorrect. TDP operates on a larger timescale (microseconds, typically), while peaks operate on nanosecond timescales. A sharp peak does not immediately cause thermal effects, but it is very detrimental to the power supply if that is not sized correctly.

EDP (Electrical Design for Power) is what covers those nanosecond-scale events, and it is becoming a very important and critical aspect of power control, but it has to happen from the hardware side, since the time of flight to recognize and then control power draw through software is too long to be of any help. The usual current sensors work, but a more fine-grained, sophisticated approach is to design monitors into the chip at various module levels.

Thermal gradients trend more slowly, but they are detrimental to performance and, if not properly cooled, can break the die. (Nvidia had these issues years back.)

Now, at a system level, it's not just the Ryzen but all of the semiconductors put together that might start to produce power peaks (density hotspots). These are directly caused by software behavior. If the stack was rewritten for more performance, there could be more peaks, more frequently. That would lead to increased power draw.

Anyway, without knowing the system, and without the benefit of analysis, we are just whistling Dixie here.
 
I think you're both wrong. TDP is the indefinitely sustained power draw a chip can handle and the power draw one can expect when operating at the rated speed on all cores. (The rated speed is based on TDP.) In a way it's sort of like an average as it can burst higher for short periods. So if you have a 45W n-core chip rated at 2GHz and run all n cores with a full workload at 2GHz it should draw around 45W. Most modern CPUs can burst higher than the rated speeds for short periods or if some cores are idled/shut down.

Obviously, as some have pointed out it is extremely unlikely that a Tesla would be running either of these CPUs at max TDP all the time unless Tesla's engineers are particularly stupid.
 
Depends on how the software stack operates. There are stacks that ping over Wi-Fi or BT all the time and eat up the battery of a battery-limited device in very little time. Early versions of Google navigation were an example; iPhones actually used to get hot when running the app.

Modern examples are things like Webex or any inefficiently coded Java, Ruby, or Python application. Most interpreted languages are inefficient by definition. C++, on the other hand, when designed properly, can be quite efficient, especially at large scales.

Coming back to TDP. TDP is thermal, tracking microsecond-scale events; it is not meant to track peak transients. Sure, those are nanosecond-scale events, but how many are there in a time window? That depends on how the software is running and how inefficient the hardware is. If you get a large number of them, your power draw is likely to be much worse than what the vendor has quoted you. I have seen this all the time.

Like I said, without a detailed analysis, one cannot know.

That does not stop internet keyboard warriors bent on showing the world they are ‘right’!! And proclaiming the other person is ‘wrong’!! 😏

Carry on! 🖐
 
Seriously, on a car with an 82 kWh battery, who cares about sub-megawatt loads if they're on the scale of microseconds to nanoseconds? On the scale of hours to days, we're talking probably a max of 45 W, likely less.
I've pointed it out before, but for a very slow drive cycle that drags on for dozens of hours (as the EU cycles might), while having low average energy demand (like the UDDS cycle), even 45 W can add up to a significant amount and result in multiple miles of range difference.

Everyone here also seems to be forgetting the Navi 23 discrete graphics. That has a TDP of 65-90W depending on model and as far as I can find, the previous Atom infotainment system didn't have an analog to it, so this is completely new power demand. It would be interesting to see what the idle or low demand power draw of the new GPU is.
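As a worst-case sketch of that point: take the CPU at its 45 W TDP plus the discrete GPU at its assumed 65-90 W TDP, running flat out on a UDDS-like slow cycle. All of these are upper bounds, not measured draws, so treat the result as a ceiling:

```python
# Worst case: CPU at full TDP plus the discrete GPU at its rated TDP,
# on a slow UDDS-like cycle (505 mi at 19.59 mph avg, 82.067 kWh DC used).
miles, mph, kwh = 505, 19.59, 82.067
hours = miles / mph
avg_w = kwh * 1000 / hours              # ~3200 W average cycle power

for gpu_w in (65, 90):                  # assumed Navi 23 TDP range
    total_w = 45 + gpu_w                # assumed CPU TDP + GPU TDP
    lost = total_w / avg_w * miles
    print(f"CPU+GPU at {total_w} W -> {lost:.1f} mi lost on UDDS")
```

On that cycle the ceiling works out to well over 15 miles, which is why the question of what the GPU actually draws at idle matters.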