Welcome to Tesla Motors Club

Odd Consumption "30 Mi Average" Behavior

In this video, between the 40 and 42 second mark, the average Wh/mi increases even though the point being removed from the chart (left side) is lower than the point being added to the chart (right side). While this looks wrong from a basic math perspective, there could be a logical explanation. Does anyone have one?
 
I watched the video and saw the rise in the final few seconds. I wish the video had shown a bit more until the average started ramping up again. That might give some clue. The only thing that comes to mind is that the "average" is a weighted average giving the older data less weight than the recent data. I can't really think of a valid basis for doing it that way though. It's also not so straightforward to do: instead of taking an average once and then doing one add and one subtract for each sample, a weighted average would require N multiplies and N additions. Still not really taxing even on a tiny CPU given the low data rate.

I just don't see the advantage unless they are using the "average" as a filter to smooth off the sharp transitions. But that is pretty inherent in a long average like this.
 
I watched the video and saw the rise in the final few seconds. I wish the video had shown a bit more until the average started ramping up again. That might give some clue.
I really do want to understand this, so I've uploaded a much longer video with the same pattern part-way through (roughly 3 minutes in). It wasn't a big deal to get since this is on my daily commute, but I had to find time to get it uploaded. Interestingly, the next day I saw the exact opposite happen (consumption was at a peak and starting to come down while a trough was dropping off the end, but the "average" dropped in spite of that). I'm guessing both of these behaviors happen fairly regularly, but I've never paid much attention elsewhere.
 
I really don't see anything unexplainable here but readily admit that I might have missed something. How the average is computed will, of course, have an influence on what the average does. If the integrator is the "boxcar" (sliding window, FIR) type which computes, as the name suggests, the sum of all the points within a window which slides along the data, the average is as sensitive to what is sliding out of the window (the oldest data) as it is to the new data coming in. In parts of the videos the oldest data was pretty close to the average and reasonably flat. In such a case fluctuations in the average will be influenced mostly by the newest data. Conversely, if the oldest data is much smaller or larger than the average and the newest data is close to the average, the oldest data will be the main determinant in the average's fluctuations over time.

Often data like this is "smoothed" or "low pass filtered" with an IIR (recursive) filter in which the current output is the scaled previous output plus the newest data scaled by another coefficient. The coefficients determine the time constant (bandwidth) of the filter. It is called Infinite Impulse Response (IIR) because the output is a function of ALL the input data but a much stronger function of the newest data. There is some appeal to this as you immediately see the results of any changes in conditions, but the entire drive history is considered with influence decreasing with its age.

As to which of these Tesla may have chosen I have no idea. One could theoretically get an idea by rolling along at 10 miles per hour and then flooring the car briefly. The fact that the display is in terms of distance rather than time would complicate the analysis but a clever analyst might be able to do it.
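For concreteness, the two candidate averagers can be sketched in a few lines of Python. The window length and coefficient below are arbitrary illustrations, not anything known about Tesla's firmware:

```python
# Sketch of the two candidate smoothers; window size and alpha are made up.
def boxcar(samples, n):
    """Sliding-window (FIR) average: the last n samples weighted equally."""
    out, window, total = [], [], 0.0
    for s in samples:
        window.append(s)
        total += s
        if len(window) > n:
            total -= window.pop(0)   # oldest sample slides out of the window
        out.append(total / len(window))
    return out

def leaky_integrator(samples, alpha):
    """1-pole IIR smoother: every past sample contributes, decaying with age."""
    out, y = [], 0.0
    for s in samples:
        y = (1 - alpha) * y + alpha * s
        out.append(y)
    return out
```

Note the difference in sensitivity: the boxcar output moves when either the newest or the oldest sample changes, while the leaky integrator responds only to the newest sample, with the past fading exponentially.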
 
In my mind, if you "smooth" an average, it's not an average anymore, because the term average has a specific definition that doesn't include weighting. I'd accept that the responses are correct and the average is "weighted" or "smoothed", but it seems like some other word should be used or appended in such a case (i.e. "weighted average"). Then again, considering that the "Instant" Range is different on the 5, 15, and 30 mile charts, I guess the average is as average as the instant is instant.
 
I had a chance to observe this myself and while it was happening I selected the alternate average intervals. What I found was the details of the graphed data changed dramatically. This confirms the graphs are drawn from filtered data which changes in the three views. So trying to compare the average calculation to any of the graphs is pointless. I expect the average is calculated on the raw data with peaks and valleys that never show up on any of the graphs.
 
I had a chance to observe this myself and while it was happening I selected the alternate average intervals. What I found was the details of the graphed data changed dramatically. This confirms the graphs are drawn from filtered data which changes in the three views. So trying to compare the average calculation to any of the graphs is pointless. I expect the average is calculated on the raw data with peaks and valleys that never show up on any of the graphs.
I've noticed that the graphs are definitely progressively smoothed more as the X axis is extended, but I've also noticed that the average moves based on which graph you have chosen. I assumed that the average was for the graph displayed. It seems more likely to me that the average is weighted than that the smoothing of the graphs is so extreme that a real average appears to move in the wrong direction when compared to the changes (points removed vs points added) in the graph.
 
I've noticed that the graphs are definitely progressively smoothed more as the X axis is extended, but I've also noticed that the average moves based on which graph you have chosen. I assumed that the average was for the graph displayed. It seems more likely to me that the average is weighted than that the smoothing of the graphs is so extreme that a real average appears to move in the wrong direction when compared to the changes (points removed vs points added) in the graph.

Not sure what you are trying to say. I can't say anything for certain as I didn't write the code, but the average is calculated over the range of data selected, that seems pretty certain. The graph is clearly a smoothed version of the raw data. I expect the average is calculated over the raw data and so will vary with peaks and valleys not apparent on the smoothed graphed data.

Try watching your graph as you switch it between durations. You can see peaks appear and disappear, or multiple peaks combine into one; these are not artifacts of the graph resolution. If that is happening because of the smoothing and the average is calculated on raw data, then clearly the average can move in ways that can't be explained by the graph.
 
Not sure what you are trying to say. I can't say anything for certain as I didn't write the code, but the average is calculated over the range of data selected, that seems pretty certain. The graph is clearly a smoothed version of the raw data. I expect the average is calculated over the raw data and so will vary with peaks and valleys not apparent on the smoothed graphed data.

Try watching your graph as you switch it between durations. You can see peaks appear and disappear, or multiple peaks combine into one; these are not artifacts of the graph resolution. If that is happening because of the smoothing and the average is calculated on raw data, then clearly the average can move in ways that can't be explained by the graph.
I am saying that, in spite of the obvious smoothing, it seems to me that the "average" consistently favors new "data points" added to the graph versus the old "data points" being removed. While I could certainly be wrong, I infer from this that the "average" is actually a weighted average of some sort, regardless of whether it is using the smoothed "data points" or the actual raw data.
 
I think it's a lot simpler than we have been thinking. The 30 (or 15 or 5) mile Whr/mi number is no more than the watt hours consumed in driving the last 30 (or 15 or 5) miles divided by 30 (or 15 or 5). The display shows consumption vs range and displays perhaps 100 points. Thus each point represents 0.3 or 0.15 or 0.05 miles. The value displayed at a point could be nothing more than the watt hours consumed in traveling the previous 0.3, 0.15 or 0.05 miles.

Let's say you sit at a stop light for 3 minutes (0.05 hr) with the heater going at 1 kW and the display in the 5 mi setting. When the light turns green you take off and cover 0.05 miles using 0.05*400 = 20 Whr to move the car that far, but the total energy consumption over that first 0.05 miles includes 0.05*1000 = 50 Whr for the heater, for a total of 70. Dividing by 0.05 miles would give 70/0.05 = 1400 Wh/mi - off the scale. In the next 0.05 miles another 20 Whr would be consumed by the motor plus another 1000*0.05/25 = 2 Whr for the heater, assuming 25 mph and heater power still at 1 kW. Dividing by 0.05 we have 22/0.05 = 440.

If the display were set to the 30 mile range the first computation after the green light would come at 0.3 miles. The energy used to move the car would be 0.3*400 = 120 Whr which, when added to the heater energy at the stop light plus 1000*0.3/25 = 12 Whr for the heater (assuming it is still drawing 1 kW and the average speed is 25 mph), would give a total of 182. Dividing by 0.3 gives 607 Wh/mi. In the next 0.3 miles the energy used would again be 120 for the traction plus another 12 for the heater for a total of 132. Divided by 0.3 that's 440 Whr/mi.
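The arithmetic in that example is easy to check in a few lines; all the inputs (400 Wh/mi traction, 1 kW heater, a 3 minute stop, 25 mph) are the post's assumed numbers, not measured values:

```python
# Reproducing the stop-light example: 400 Wh/mi traction, 1 kW heater,
# a 3 minute (0.05 h) stop at the light, then 25 mph afterwards.
stop_heat = 1000 * 0.05                               # 50 Wh burned at the light

first_5  = (0.05 * 400 + stop_heat) / 0.05            # first point, 5-mi display
next_5   = (0.05 * 400 + 1000 * 0.05 / 25) / 0.05     # the point after it
first_30 = (0.30 * 400 + stop_heat + 1000 * 0.30 / 25) / 0.30   # 30-mi display
next_30  = (0.30 * 400 + 1000 * 0.30 / 25) / 0.30

print(first_5, next_5, first_30, next_30)   # ~1400, ~440, ~607, ~440 Wh/mi
```

The same stop-light energy spread over a 6x longer display interval produces a 6x smaller spike, which is why the 30-mile view never shows the 1400 Wh/mi excursion.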

Now we don't see any single point spikes at 1400 followed by an adjacent point at 440, so it is clear that some sort of smoothing over a few points is taking place. This could be boxcar or leaky integrator. As the latter is easier to implement I'd guess it's that, but only Tesla knows. This would also support the feelings of some that more recent data is more heavily weighted than older, but note that this smoothing only affects the appearance of the display, from which one is only intended to get an impression of what has been going on. The average is clearly a simple block average obtained by dividing the energy used in the last, e.g., 30 miles by 30.
There is a question here as to what gets counted as energy used in the last 30 miles. Clearly we want the average to be useful for prediction of achievable range and so it should include power taken from the battery in operating the car. For example, if we pre-warm the car in the parking lot of a motel with no destination charger, the energy taken from the battery, IMO, should be counted. But should we count energy used for that when we are connected to a charger? In that case it comes from the charger and should not be counted. But what I think doesn't matter. What counts is what Tesla actually did.


When I started to write this I was making it more difficult than it ought to be and came up with the following which, having done it, I'm going to include as it might be of interest.

As the car is going down the road the battery must supply energy to:

1) Push the car against a force (drag) that is proportional to the square of the airspeed, i.e. to (s + w)^2, where s is the ground speed and w the wind speed component in the direction of travel. In time ∆t the car travels s*∆t and, as energy is force times distance, this energy is Cd*(s^3 + 2*s^2*w + s*w^2)*∆t. Cd is a drag coefficient which is a function of (s + w), but let's assume it is constant here to keep things simple.
2) Change the kinetic energy of the car by m*a*s*∆t, where m is the mass of the car and a is the acceleration, which is negative when you take your foot off the accelerator.
3) Change the potential energy of the vehicle by m*g*G*s*∆t, in which g is the acceleration due to gravity and G is the grade as a fraction (G = 0.02 for a 2% grade).
4) Run the cabin and battery heaters, the coolant pump, the windshield wipers, the window motors, the seat motors, the radio, and charge your cell phone. If the power demanded by all those things is W, then the energy taken from the battery in time ∆t is W*∆t.

Dividing through by ∆t we get the total power demand on the battery, ∆E/∆t

P = ∆E/∆t = Cd*s^3 + 2*Cd*s^2*w + Cd*s*w^2 + m*a*s + m*g*G*s + W

We don't want, for the display, ∆E/∆t. We want ∆E/∆x where x is the distance traveled. That's easily obtained from

∆E/∆x = (∆E/∆t)/(∆x/∆t) where clearly ∆x/∆t = s, the ground speed of the vehicle. So

∆E/∆x = P*∆t/∆x = Cd*s^2 + 2*Cd*s*w + Cd*w^2 + m*a + m*g*G + W/s = Wh/mi

This makes it clear that your main strategy for extending range is to slow down, and to wait for headwinds to die down if they are forecast.

It also shows that if you stop for any reason the instantaneous Wh/mi becomes infinite if there is any electric consumption other than that of the traction motors. As discussed earlier, this is not a problem for the hypothesized display calculation because ∆x is never 0 when a computation is done.
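The formula is easy to play with numerically. In the sketch below every coefficient is a placeholder (not a measured value for any car) and unit-consistent inputs are assumed:

```python
# The derived per-mile consumption formula as a function. All coefficient
# values used below are illustrative placeholders, not real vehicle data.
def wh_per_mi(Cd, s, w, m, a, g, G, W):
    drag = Cd * s**2 + 2 * Cd * s * w + Cd * w**2   # = Cd*(s + w)^2
    return drag + m * a + m * g * G + W / s

# Halving an illustrative cruise speed cuts the dominant drag term by 4x:
fast = wh_per_mi(Cd=0.05, s=70, w=0, m=2000, a=0, g=9.8, G=0, W=0)  # ~245
slow = wh_per_mi(Cd=0.05, s=35, w=0, m=2000, a=0, g=9.8, G=0, W=0)  # ~61
```

The W/s term is also visible here: at low speed a fixed accessory load W contributes more Wh per mile, and as s approaches zero that term blows up, which is the "infinite at a stop" observation above.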
 
I looked up "leaky integrator" and "boxcar (function)" on Wikipedia (better than nothing). I think the moving average explanation involving the boxcar function better describes how I feel the smoothing works on the historical data, but I wouldn't be surprised if the end results of the leaky integrator method are close enough (and thus used) there.

FWIW, the only time you can accurately compare the average on the consumption chart with the average on the dash is at exactly 5/10/30 miles. I have only thought to do this once at roughly 30 miles (off by less than a 10th, as the IC said 30.0), and the average on the dash did not match the average on the consumption chart. Unfortunately, I don't remember the details more specifically than that.

Also, while I understand why you would think it should include energy consumed when preconditioning while not plugged in, I'm pretty sure it does not. If it did, the spikes at the beginning of drives following pre-conditioning would be bigger than the spikes at the beginning of drives that did not include pre-conditioning, and they are not. Further, if it was included in the average, but not in the historical view, I suspect the average would look less average vs the historical view.
 
Just came in from an attempt to measure the impulse response of the smoothing filter.

First: it appears that ∆x is 0.1 mile irrespective of whether you select 30, 15 or 5 miles for the display range. Thus the graph shows as few as 50 points and as many as 300.

It appears that the smoothing filter's impulse response is 5 points (half a mile) long. It may be a boxcar with 5 coefficients all equal to 1 or it may have first and last coefficients set to less than 1 and the middle 3 equal to 1. And I'm not sure it is 5 points either. I'm not so good at mental deconvolution as I used to be.

The model presented in #11 concerning what happens at a stop light seems to be confirmed by my observations today.

It was pretty clear from the data that the energy used to warm the battery was not coming from the battery heater. Attained Wh/mi were normal even though the regen limit lines were displayed.
 
I think it's a lot simpler than we have been thinking. The 30 (or 15 or 5) mile Whr/mi number is no more than the watt hours consumed in driving the last 30 (or 15 or 5) miles divided by 30 (or 15 or 5). The display shows consumption vs range and displays perhaps 100 points. Thus each point represents 0.3 or 0.15 or 0.05 miles. The value displayed at a point could be nothing more than the watt hours consumed in traveling the previous 0.3, 0.15 or 0.05 miles.

It's not. If you paid any attention to what people here have posted as their observations or paid attention to your own display, you would find the displayed graph is clearly not the watt hours consumed in the interval of a single point. The graph is clearly showing the effect of filtering and an observation today makes me think it is a simple box car average of the last N samples.


Let's say you sit at a stop light for 3 minutes (0.05 hr) with the heater going at 1 kW and the display in the 5 mi setting. When the light turns green you take off and cover 0.05 miles using 0.05*400 = 20 Whr to move the car that far, but the total energy consumption over that first 0.05 miles includes 0.05*1000 = 50 Whr for the heater, for a total of 70. Dividing by 0.05 miles would give 70/0.05 = 1400 Wh/mi - off the scale. In the next 0.05 miles another 20 Whr would be consumed by the motor plus another 1000*0.05/25 = 2 Whr for the heater, assuming 25 mph and heater power still at 1 kW. Dividing by 0.05 we have 22/0.05 = 440.

If the display were set to the 30 mile range the first computation after the green light would come at 0.3 miles. The energy used to move the car would be 0.3*400 = 120 Whr which, when added to the heater energy at the stop light plus 1000*0.3/25 = 12 Whr for the heater (assuming it is still drawing 1 kW and the average speed is 25 mph), would give a total of 182. Dividing by 0.3 gives 607 Wh/mi. In the next 0.3 miles the energy used would again be 120 for the traction plus another 12 for the heater for a total of 132. Divided by 0.3 that's 440 Whr/mi.

Now we don't see any single point spikes at 1400 followed by an adjacent point at 440 so it is clear that some sort of smoothing over a few points is taking place. This could be boxcar or leaky integrator. As the latter is easier to implement I'd guess it's that but only Tesla knows.

Actually the boxcar is easier to implement properly since it only requires additions, with minimal effort to prevent overflows in the calculations. A "leaky integrator", otherwise known as an IIR filter, requires multiplies and more careful attention to rounding effects and overflow.
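To illustrate that point, a boxcar kept as a running sum needs only one add and one subtract per new sample, with no multiplies at all. This is a sketch of the technique, not Tesla's code:

```python
# A running-sum boxcar: one add and one subtract per new sample.
class RunningBoxcar:
    def __init__(self, n):
        self.buf = [0.0] * n       # circular buffer of the last n samples
        self.i = 0                 # index of the oldest sample
        self.total = 0.0
        self.count = 0             # samples seen so far, capped at n

    def update(self, x):
        self.total += x - self.buf[self.i]   # new sample in, oldest out
        self.buf[self.i] = x
        self.i = (self.i + 1) % len(self.buf)
        self.count = min(self.count + 1, len(self.buf))
        return self.total / self.count
```

In fixed-point arithmetic the only hazard is sizing `total` so n worst-case samples can't overflow it, whereas the IIR form needs a multiply and careful rounding every step.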


This would also support the feelings of some that more recent data is more heavily weighted than older, but note that this smoothing only affects the appearance of the display, from which one is only intended to get an impression of what has been going on. The average is clearly a simple block average obtained by dividing the energy used in the last, e.g., 30 miles by 30.

Not sure what you are basing any of this on. The "feelings" that the average is calculated as a weighted average aren't very relevant since they aren't actually fact based, and you are talking about something different: a "weighted" average or "leaky integrator" for the displayed data.


There is a question here as to what gets counted as energy used in the last 30 miles. Clearly we want the average to be useful for prediction of achievable range and so it should include power taken from the battery in operating the car. For example, if we pre-warm the car in the parking lot of a motel with no destination charger, the energy taken from the battery, IMO, should be counted. But should we count energy used for that when we are connected to a charger? In that case it comes from the charger and should not be counted. But what I think doesn't matter. What counts is what Tesla actually did.

See my other post.


When I started to write this I was making it more difficult than it ought to be and came up with the following which, having done it, I'm going to include as it might be of interest.

As the car is going down the road the battery must supply energy to:

1) Push the car against a force (drag) that is proportional to the square of the airspeed, i.e. to (s + w)^2, where s is the ground speed and w the wind speed component in the direction of travel. In time ∆t the car travels s*∆t and, as energy is force times distance, this energy is Cd*(s^3 + 2*s^2*w + s*w^2)*∆t. Cd is a drag coefficient which is a function of (s + w), but let's assume it is constant here to keep things simple.
2) Change the kinetic energy of the car by m*a*s*∆t, where m is the mass of the car and a is the acceleration, which is negative when you take your foot off the accelerator.
3) Change the potential energy of the vehicle by m*g*G*s*∆t, in which g is the acceleration due to gravity and G is the grade as a fraction (G = 0.02 for a 2% grade).
4) Run the cabin and battery heaters, the coolant pump, the windshield wipers, the window motors, the seat motors, the radio, and charge your cell phone. If the power demanded by all those things is W, then the energy taken from the battery in time ∆t is W*∆t.

Dividing through by ∆t we get the total power demand on the battery, ∆E/∆t

P = ∆E/∆t = Cd*s^3 + 2*Cd*s^2*w + Cd*s*w^2 + m*a*s + m*g*G*s + W

We don't want, for the display, ∆E/∆t. We want ∆E/∆x where x is the distance traveled. That's easily obtained from

∆E/∆x = (∆E/∆t)/(∆x/∆t) where clearly ∆x/∆t = s, the ground speed of the vehicle. So

∆E/∆x = P*∆t/∆x = Cd*s^2 + 2*Cd*s*w + Cd*w^2 + m*a + m*g*G + W/s = Wh/mi

This makes it clear that your main strategy for extending range is to slow down, and to wait for headwinds to die down if they are forecast.

It also shows that if you stop for any reason the instantaneous Wh/mi becomes infinite if there is any electric consumption other than that of the traction motors. As discussed earlier, this is not a problem for the hypothesized display calculation because ∆x is never 0 when a computation is done.

Wow! I'd hate to see your writing when you don't try to simplify.
 
I have repeatedly observed an effect that when starting a trip there seems to be a rectangle of higher than normal power usage. Today I saw this on a road with otherwise very little variation so that the rectangle was very clear. The rise was very steep, perhaps only two measurements wide and the drop was about the same. As ajdelange said this confirms the graphing filter is a boxcar average and the data feature that produced this result was what is known as an impulse, a single data point with a large value. Seems these impulses are generated from consumption while in driving mode, but not necessarily moving. They should also be produced when there are high power usages while moving as long as the power levels are high enough and the duration is short enough to be within a single sample period. If all other power usage is constant, they will produce a clear rectangle for a boxcar average.

If the display data were being filtered by some other Finite Impulse Response (FIR) filter the result of the impulse test should create a pattern in the display showing the filter coefficients. For an Infinite Impulse Response (IIR) filter the test will show the coefficients of an equivalent FIR filter. For the "leaky integrator" the response shape would be a rectangle with rising and falling edges that start fast, slowing as they approach the level of the input. But since the input is a single high data point the leaky integrator will only produce a bump.

This may be a bit technical for some people here, but food for thought. The point is it is very clear the displays are filtered with boxcar averages of different durations (otherwise they would show the same data just different scale factors). So don't expect the calculated average to change in step with the filtered data.
 
Thinking a bit more about the formula

Wh/mi = Cd*s^2 + 2*Cd*s*w + Cd*w^2 + m*a + m*g*G + W/s

that I put in an earlier post. Clearly a term was missing and that would be the one for rolling friction, which really has a couple of parts: one due to deformation of the tire plus bearing friction, and the other due to deformation of the substrate. The former is largely a function of the weight on the tires, as is the latter, but the latter is also clearly a function of the nature of the substrate. It's going to be smaller on smooth concrete than it is on bitumen and, of course, large on loose dirt, gravel, mud, snow or surface water. ABRP clearly tries to account for this component as it asks for road conditions and load.
 
I have repeatedly observed an effect that when starting a trip there seems to be a rectangle of higher than normal power usage. Today I saw this on a road with otherwise very little variation so that the rectangle was very clear. The rise was very steep, perhaps only two measurements wide and the drop was about the same. As ajdelange said this confirms the graphing filter is a boxcar average and the data feature that produced this result was what is known as an impulse, a single data point with a large value. Seems these impulses are generated from consumption while in driving mode, but not necessarily moving.
The source of the impulse was explained in No. 11. The fact that you see a trapezoid is indicative of a FIR filter with trapezoidal impulse response. But there actually is no filtering going on at all.

If the display data were being filtered by some other Finite Impulse Response (FIR) filter the result of the impulse test should create a pattern in the display showing the filter coefficients.
As there is no filtering it's going to be hard to get filter coefficients, but the processing being used is equivalent to boxcar filtering, so the response to an impulse (could you present one to the car, which you can't) would look like a boxcar.


For an Infinite Impulse Response (IIR) filter the test will show the coefficients of an equivalent FIR filter.
There obviously is no equivalent as the impulse response of a FIR filter is of finite length (that's what the F stands for) and for an IIR filter it is infinitely long (that's what the I stands for).

For the "leaky integrator" the response shape would be a rectangle with rising and falling edges that start fast, slowing as they approach the level of the input. But since the input is a single high data point the leaky integrator will only produce a bump.
A leaky integrator is an IIR implementation of a 1 pole low pass filter. Its impulse response is well known and is a step followed by an exponential decay. You can look this up on Wikipedia. I agree that this will mean nothing to most readers but I really think you should at least check at the Wikipedia level when posting on subjects you are not familiar with.
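For anyone who would rather see that decay than look it up, feeding a unit impulse through a leaky integrator takes a few lines (the alpha here is arbitrary):

```python
# Impulse response of a leaky integrator y[k] = (1-alpha)*y[k-1] + alpha*x[k]:
alpha = 0.5
y, response = 0.0, []
for x in [1.0, 0.0, 0.0, 0.0, 0.0]:       # a unit impulse, then silence
    y = (1 - alpha) * y + alpha * x
    response.append(y)
print(response)   # [0.5, 0.25, 0.125, 0.0625, 0.03125] -- a jump, then exponential decay
```

Contrast with a 5-point boxcar, whose impulse response is a flat rectangle: five equal values, then exactly zero.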

This may be a bit technical for some people here, but food for thought. The point is it is very clear the displays are filtered with boxcar averages of different durations (otherwise they would show the same data just different scale factors). So don't expect the calculated average to change in step with the filtered data.
At this point it is pretty clear that the car is recording SoC against odometer and giving us a couple of averages depending on the display width we choose. If we choose range r (5, 15 or 30 miles) the average is simply the energy difference between the SoC now and that r miles ago divided by r. It is a single number that gets updated every 10th of a mile. We also get a historical graph of averages over a shorter time interval which appears to be 0.5 mile for r = 5 and something longer for r = 15 or 30. Each 0.1 mile the last point is pushed off the left side of the chart and a new point inserted at the origin. The value of the new point is simply the difference between the current SoC and the value d miles previous where d is the smoothing width (0.5 miles for r = 5). That's all there is to it.
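If that conjecture is right, the whole computation is just differencing a log. Here is a sketch with an invented flat history; the log format and the 0.1-mile spacing are the conjecture above, not anything confirmed by Tesla:

```python
# Conjectured scheme: the car logs cumulative energy against odometer every
# 0.1 mile; the displayed average is a difference of two entries divided by r.
def average_wh_per_mi(log, r):
    """log: list of (odometer_mi, cumulative_wh) pairs at 0.1-mi spacing."""
    steps = int(round(r / 0.1))
    x_now, e_now = log[-1]
    x_then, e_then = log[-1 - steps]
    return (e_now - e_then) / (x_now - x_then)

# A made-up flat 300 Wh/mi history gives 300 at every display width:
log = [(i * 0.1, i * 30.0) for i in range(301)]   # 30 miles of history
```

The historical graph points would come from the same routine with a much shorter r (the conjectured 0.5-mile smoothing width), which is why no explicit filter hardware or software is needed.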

The display's points represent short term averages over half a mile to a mile or perhaps even 2 (guessing). Thus the points, which are spaced 0.1 mile apart, are not independent of one another. A short event influences at least 5 and perhaps as many as 20 adjacent points. The blob we see sliding off the left side of the plot may represent an event that occurred a mile ago. Though it is out of the window and thus has no effect on the average, it still contributes to the area under the curve. Thus we cannot eyeball integrate under the curve in order to estimate the average as we could were the raw (0.1 mile average) data presented.
 
The source of the impulse was explained in No. 11. The fact that you see a trapezoid is indicative of a FIR filter with trapezoidal impulse response. But there actually is no filtering going on at all.

I really don't get what you are saying half the time. You say there is a FIR filter then say there is no filtering. What???


As there is no filtering it's going to be hard to get filter coefficients, but the processing being used is equivalent to boxcar filtering, so the response to an impulse (could you present one to the car, which you can't) would look like a boxcar.

I have no idea what you are saying by "there is no filtering".


There obviously is no equivalent as the impulse response of a FIR filter is of finite length (that's what the F stands for) and for an IIR filter it is infinitely long (that's what the I stands for).

I wish you would not be so literal sometimes. In the digital domain there is not infinite resolution and so no IIR filter has an infinitely long response.


A leaky integrator is an IIR implementation of a 1 pole low pass filter. Its impulse response is well known and is a step followed by an exponential decay. You can look this up on Wikipedia. I agree that this will mean nothing to most readers but I really think you should at least check at the Wikipedia level when posting on subjects you are not familiar with.

Yes... whatever...


At this point it is pretty clear that the car is recording SoC against odometer and giving us a couple of averages depending on the display width we choose. If we choose range r (5, 15 or 30 miles) the average is simply the energy difference between the SoC now and that r miles ago divided by r. It is a single number that gets updated every 10th of a mile. We also get a historical graph of averages over a shorter time interval which appears to be 0.5 mile for r = 5 and something longer for r = 15 or 30. Each 0.1 mile the last point is pushed off the left side of the chart and a new point inserted at the origin. The value of the new point is simply the difference between the current SoC and the value d miles previous where d is the smoothing width (0.5 miles for r = 5). That's all there is to it.

I'm not sure how you can state much about the averages with certainty, but the averages are not the issue. The issue is that filtered data is being displayed on the graph. So this graph can not be viewed to provide any expectation of what the average will show.


The display's points represent short term averages over half a mile to a mile or perhaps even 2 (guessing). Thus the points, which are spaced 0.1 mile apart, are not independent of one another. A short event influences at least 5 and perhaps as many as 20 adjacent points. The blob we see sliding off the left side of the plot may represent an event that occurred a mile ago. Though it is out of the window and thus has no effect on the average, it still contributes to the area under the curve. Thus we cannot eyeball integrate under the curve in order to estimate the average as we could were the raw (0.1 mile average) data presented.

Glad you figured this out. Great!
 
I really don't get what you are saying half the time. You say there is a FIR filter then say there is no filtering. What???
Well, to be fair, I did bring up filters in the first place because it seemed that that would be the way to go. You have rough data and you'd like to smooth it so it is easier to interpret, more pleasing to the eye, etc.




I have no idea what you are saying by "there is no filtering".
I'm suggesting (and remember as I have no software manuals from Tesla I can only conjecture based on experience and common sense) that there is no piece of hardware with a shift register and summer (FIR) nor with a delay, multiplier and feedback connections (IIR) nor any software that performs the equivalent functions. I'm proposing this as it is much easier to simply take two values from the history which we know the vehicle records, subtract one from the other and divide by the time constant. This is equivalent to filtering of course but it's just not a filter in the traditional sense of an IIR or FIR filter.

I wish you would not be so literal sometimes. In the digital domain there is not infinite resolution and so no IIR filter has an infinitely long response.
For all intents and purposes the impulse response of an IIR filter is infinite. We don't call it an IIR filter just because that sounds neat. This is what engineers call these things. And I don't see what resolution has to do with it.


Yes... whatever...
I am sure you do not post with the intention of leading people astray. You can prevent that from happening by simply checking on things you aren't sure of. In this particular case 2 minutes at Wikipedia would have set you right.


I'm not sure how you can state much about the averages with certainty,
These posts are long enough so I didn't want to get into all the details. The car tells you how much energy it is using every 0.1 mile and displays this on the instrument panel. I recorded the numbers over a 20 mile drive and then computed the averages over that history for various smoothing times.


but the averages are not the issue. The issue is that filtered data is being displayed on the graph. So this graph can not be viewed to provide any expectation of what the average will show.
OP seemingly tried to visually integrate under the graph. He saw more energy pop off one end than came on at the other and was, quite rightly, puzzled by this. The graph shows a collection of points each of which represents the average over a half mile or so. OP apparently knows enough about this to know that the expected value of a sum of averages is the average. But the 30 mile sum of the points in the display is not the expected value. It is an estimate of it. Were samples of thousands of data points being processed rather than hundreds this effect would be less noticeable.

I bolded expected value because it has particular meaning in probability and statistics. This may be muddying the waters more but it is, I believe, the "noisy estimate" nature of the full width average as compared to the integral of the set of noisy averages over sub intervals that is at the heart of the matter.
 
Or put, perhaps, more simply

(x1 + x2 + x3)/3 is the smoothed average of three 0.1 mile consumptions. Similar sliding window averages would be (x2 + x3 + x4)/3 and so on.

The sum of these averages, normalized over the full five points, is

[(x1 + x2 + x3)/3 + (x2 + x3 + x4)/3 + (x3 + x4 + x5)/3]/5 =

(x1/3 + 2*x2/3 + x3 + 2*x4/3 + x5/3)/5 ≠ (x1 + x2 + x3 + x4 + x5)/5

Now notice that one point, x3, is the same in the average of averages and in the overall average. It is the end points that cause the average of the averages to be different from the average.
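Plugging invented numbers in shows the effect immediately; the samples below are made up, with a spike near one end of the five points:

```python
# Invented 0.1-mile consumption samples with a spike near one end:
x = [300.0, 900.0, 300.0, 300.0, 300.0]

# Three sliding 3-point averages, then their sum normalized over all 5 points
# (the same normalization as above, which keeps the middle point's weight right):
windows = [sum(x[i:i + 3]) / 3 for i in range(3)]
avg_of_windows = sum(windows) / 5    # 260.0
true_avg = sum(x) / 5                # 420.0 -- the endpoint spike is underweighted
```

The spike at x2 carries only 2/15 of the weight in the windowed version versus 1/5 in the true average, so the two numbers disagree exactly as the expansion above predicts.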