ajdelange
but the total capacity is different for charging and discharging due to a buffer being calculated in full when charging and being hidden below 0% when driving.
The reason that there are two capacities is that to change the state from 0 to 100% one must store enough energy to raise the battery's open-circuit voltage from Vempty to Vfull. That requires passing current into the battery whose integral over time is the difference between the charge at full (which results in Vfull - not truly full, just the voltage that causes the display to read 100%) and the charge at empty (Vempty - not zero charge, just the voltage that causes the display to read 0%). But that current has to flow through the battery's internal resistance, R, to reach the cells where the ions are actually moved, and in doing so it dissipates energy R∫i²(t)dt. The energy the car can measure, at the battery terminals, is ∫v(t)i(t)dt. Thus in measuring the "charge capacity", the amount of energy it takes to fully charge the battery, we get a number that is bigger than the actual battery capacity (stored energy) by R∫i²(t)dt.
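Here's a little toy model of that effect, just to make the integrals concrete. All the numbers (current, internal resistance, open-circuit voltage curve) are made up for illustration; the point is only that the terminal measurement exceeds the stored energy by R∫i²(t)dt:

```python
# Toy constant-current charge into a battery modeled as an ideal source V_oc
# in series with internal resistance R. Numbers are illustrative only.

N = 1000                       # integration steps
T = 4 * 3600.0                 # 4-hour charge, seconds
dt = T / N
i = 60.0                       # charging current, A (assumed constant)
R = 0.08                       # internal resistance, ohms (assumed)

stored = 0.0                   # energy actually stored: integral of V_oc * i dt
measured = 0.0                 # energy seen at the terminals: integral of v * i dt

for k in range(N):
    soc = k / N                          # crude state of charge, 0..1
    v_oc = 320.0 + 80.0 * soc            # open-circuit voltage rising with SoC (assumed linear)
    v = v_oc + i * R                     # terminal voltage is higher than V_oc while charging
    stored += v_oc * i * dt
    measured += v * i * dt

loss = measured - stored                 # this is exactly R * integral(i^2) dt
print(f"measured at terminals: {measured / 3.6e6:.2f} kWh")
print(f"actually stored:       {stored / 3.6e6:.2f} kWh")
print(f"i^2 R loss:            {loss / 3.6e6:.2f} kWh")
```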
Conversely, when we discharge the battery, the current that we withdraw to send to the load flows through that same internal resistance, and the measured available energy, ∫v(t)i(t)dt, is less than the energy we had to take from the battery to get it by R∫i²(t)dt. Thus the "capacity" of the battery, i.e. the energy it can actually store between empty and full, is more than the energy we obtain in discharging it from Vfull to Vempty and less than the energy we must send to it in order to charge it from Vempty to Vfull. To find out what the actual capacity is we would have to evaluate R∫i²(t)dt, which means knowing R. R is easily obtained as ∂v/∂i, i.e. the change in terminal voltage divided by the small current change that caused it, but that's more trouble than it is worth. It is much easier to just speak of a "charging capacity" and a "discharge capacity". The charging capacity is easily estimated by dividing the amount of energy added to the battery by the amount of state change it produces: e.g. if adding 9.8 kWh increases the charge by 10%, the charging capacity is 98 kWh. Similarly, if taking 9.2 kWh from the battery results in a 10% discharge, the discharge capacity is 92 kWh. These are the numbers that my X shows and are, I suppose, representative. Note that the magnitude of the difference is similar to the roving buffer that some have come up with to explain this phenomenon.
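In code the estimate is just one division; the 9.8 kWh and 9.2 kWh figures below are the ones from my car quoted above, and the helper name is of course just something I made up:

```python
def capacity_estimate(energy_kwh, soc_change_pct):
    """Estimate pack capacity as energy transferred divided by the SoC change it produced."""
    return energy_kwh / (soc_change_pct / 100.0)

# Adding 9.8 kWh raised SoC by 10%; drawing 9.2 kWh lowered it by 10%.
charging_capacity = capacity_estimate(9.8, 10)     # 98 kWh
discharge_capacity = capacity_estimate(9.2, 10)    # 92 kWh

print(f"charging capacity:  {charging_capacity:.0f} kWh")
print(f"discharge capacity: {discharge_capacity:.0f} kWh")
print(f"difference (about the size of the 'roving buffer'): "
      f"{charging_capacity - discharge_capacity:.0f} kWh")
```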
Now in trying to estimate the discharge capacity there is a twist. Suppose that we assume a 10% i²R loss and drive the car up a hill, consuming 1 kWh and taking 1.111 kWh out of storage, and then drive back down the other side to the point where the 1.111 kWh is returned, which would mean delivering 1.235 kWh of regen. The net "consumption" is -124 Wh and the state change is 0. Dividing those two numbers does not, obviously, give a reasonable estimate of the discharge capacity. Thus using power consumption during driving is not a good means of estimating discharge capacity unless regen is off, and that is why programs that estimate battery health use charging data.
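To see the failure explicitly, feed the round-trip numbers from that example into the same kind of naive estimator (again, the function is just my own sketch, not anything any health-tracking program actually runs):

```python
def discharge_capacity_estimate(net_energy_kwh, soc_change_pct):
    """Naive estimate: net energy consumed divided by the SoC change (as a fraction)."""
    if soc_change_pct == 0:
        raise ValueError("SoC change is zero; the ratio is undefined")
    return net_energy_kwh / (soc_change_pct / 100.0)

# Flat-road segment with regen off: 9.2 kWh consumed for a 10% SoC drop -> a sensible 92 kWh.
print(discharge_capacity_estimate(9.2, 10))

# Hill round trip from the example above: net "consumption" of -0.124 kWh with 0% SoC change.
# The ratio is undefined here, and would be wildly wrong for any tiny nonzero SoC change,
# which is why regen makes consumption-based capacity estimates unreliable.
print(discharge_capacity_estimate(-0.124, 0))      # raises ValueError
```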
In determining rated range the car would take the discharge capacity, multiply it by the SoC (as a fraction), and convert the result to miles at the rated consumption (Wh per rated mile). Tesla is somewhat, but not totally, constrained with respect to the rated range number, but they have total flexibility in where they set Vfull and Vempty. There is a reserve which is always hidden, else it wouldn't be a reserve. That reserve is the amount of energy that can be removed in discharging from Vempty down to Vthreshold, a voltage below Vempty also chosen by Tesla according to some criterion such as the onset of battery damage. By making the reserve smaller (setting Vempty closer to Vthreshold) they obviously increase the rated range somewhat, while at the same time reducing your margin of safety with respect to being stranded.
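A rough sketch of that bookkeeping, with a made-up rated consumption figure (Tesla's actual constants aren't known to me), just to show how moving Vempty changes the displayed number:

```python
RATED_WH_PER_MILE = 350.0      # assumed rated consumption; purely illustrative

def rated_range_miles(discharge_capacity_kwh, soc_fraction, rated_wh_per_mile=RATED_WH_PER_MILE):
    """Displayed rated range: usable energy above 'empty' converted to miles at the rated consumption."""
    return discharge_capacity_kwh * 1000.0 * soc_fraction / rated_wh_per_mile

# 92 kWh of discharge capacity at 80% SoC:
print(f"{rated_range_miles(92.0, 0.80):.0f} rated miles")

# Shrink the hidden reserve (move Vempty closer to Vthreshold) so a couple more kWh count as
# usable: the displayed range goes up, at the cost of the margin against being stranded.
print(f"{rated_range_miles(94.0, 0.80):.0f} rated miles")
```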