jeffnorman
Member
I am not satisfied with using the modeled 267 Wh/mile - it is a kludge, and I think the problem lies in our "acceptance" of that number as a means to explain anything. The 267 Wh/mile solution is just an attempt to fit confusing data to a linear equation - the lost 40 Wh/mile is assumed to go to the Troll Under the Bridge. But that cannot be the final answer, and I think that is why you are seeing anomalies with the REST data not neatly coinciding with a zero point.
The fact is that we still do not have a good handle on the "true" reserve (and IMHO true battery capacity) so how can we calculate consumption rate accurately?
I also don't believe that Tesla would intentionally mislead us by stating rated miles at the outset of a trip that include the reserve, and then decrementing rated miles faster than they are consumed to account for the reserve. That seems very kludge-y and not at all how Tesla approaches engineering elsewhere (not to mention the bad PR they would get if that were proven to be true). If the problem were to account for reserve, there are a dozen other approaches Tesla could have used that would have been more accurate and not misleading. I just don't believe that Tesla would have chosen to intentionally obfuscate rated mileage and consumption in order to "hide" a reserve.
That is why I proposed ignoring the entire concept of rated range (which requires many assumptions about what is going on both in the software and under the hood) and focusing instead purely on consumption from a 100% max range charge - the only variables that way are the accuracy of the trip meter's report of consumed kWh, the true capacity of the battery, and the true maximum charge. Since the trip meter does report consumption since last charge in kWh, we can calculate the reserve solely from that. We don't need to know what the Troll Under the Bridge is doing (or why). It appears clear to me from testing JUST on the issue of reserve and battery capacity that one or more of the following is true: (1) I (we?) don't have an 85 kWh battery - mine seems to be more like 82 kWh (inclusive of the 5% reserve); (2) the battery does not charge to its full 85 kWh capacity regardless of how I set the max charge; (3) the sensing system does not accurately report power consumption; or (4) there are power losses that the power consumption sensing system is not capturing (could be as simple as AC usage, or as complex as electron tunneling). It would be really good to know which of (1) - (4) is true and to quantify it.
What we really need is a way to more directly measure and track the charge state of the battery in kWh rather than percentage charge. This would require some pretty good EE skills as well as knowledge of the electrical systems of the car. I am going to keep pressing Tesla on this until I get a good answer (no response yet to my log data from last week that was sent to my local service center by Tesla customer care).
EDIT: Here are some tests that we *can* do to test the variables (1) - (4) above:
A. Trip Meter kWh Consumption Inclusiveness (variable #4). It would be helpful to confirm whether the trip meter's report of kWh consumed includes (or does not include) the various sources of consumption that are within the driver's control or ability to measure. Specifically, the following should affect kWh consumed for a given trip and should result in changes in the kWh consumed for the same trip: AC usage, lights, bluetooth, USB charging, increased resistance due to wind and/or rain, and ambient temperature. If the trip meter is measuring actual power consumed from all of these sources, then future testing of variables #1-3 can disregard this variable. The obvious test is to measure energy consumed for a given trip (preferably > 50 miles to ensure accuracy) with the AC off/on full blast, lights on/off, bluetooth on/off, USB charging of multiple devices/no charging, against a headwind and with no headwind, and in 80 degree/70 degree/60 degree heat. If the displayed consumption for an identical trip changes with each of these consumption sources in use or not in use, then we know whether the trip meter is measuring actual consumption from all measurable sources.
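To make Test A concrete, here is a toy sketch of the bookkeeping: log the trip meter's kWh for the same route under different accessory conditions and see whether turning a load on moves the reported total. The condition names and numbers below are invented for illustration, not real readings.

```python
# Hypothetical Test A log: trip meter kWh for the same route under
# different accessory conditions (all numbers are made up).
trips = {
    "baseline":      14.20,  # AC off, lights off, no USB charging
    "ac_full_blast": 15.10,
    "lights_on":     14.30,
    "usb_charging":  14.25,
}

baseline = trips["baseline"]
for condition, kwh in trips.items():
    delta = kwh - baseline
    print(f"{condition}: {kwh} kWh (delta {delta:+.2f} kWh)")
# If each delta tracks the extra load switched on, the trip meter is
# capturing that source; a zero delta for a real load means it is not.
```

If a condition that clearly draws power (e.g., AC on full blast) shows no delta over an identical route, that source is falling outside the trip meter's measurement, which is exactly what variable #4 hypothesizes.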
B. Accuracy of Trip Meter Power Consumption (variable #3). It would be helpful to know whether the trip meter's reported power consumed accurately reflects the power actually consumed over a given mileage. To test this, we have to either model power consumption or rely on the energy app. Since the energy app does not seem to be affected by the Troll, and its report of average Wh/mile *does* seem to take into account driver-controlled variables (e.g., if I drive with AC on full blast my Wh/mile goes up; in wind/rain it goes up; etc.), I think it is probably safe to rely on the energy app's average Wh/mile as an accurate reflection of consumption. The test is to drive 30 miles at a relatively constant speed (or as close as possible to a constant speed) and record (i) the starting and ending power consumption, in order to determine total kWh consumed in the thirty-mile drive as reported by the trip meter display; (ii) the 30-mile average Wh/mile at the end of the test as reported by the energy app; and (iii) whether the energy app and the trip meter agree (i.e., 30*(reported Wh/mile)/1000 = (ending kWh consumed - starting kWh consumed)). It would also be good to do this test at various levels of battery charge to confirm that the results are consistent regardless of charge level (one of my hypotheses is that power consumption varies with percentage charge and such variance is not accurately captured by the energy app's Wh/mile report).
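The Test B cross-check in step (iii) is simple arithmetic, sketched below. The tolerance and the sample readings are assumptions for illustration, not real data.

```python
# Test B cross-check: does the energy app's average Wh/mile imply the same
# total energy the trip meter reports? (Numbers below are illustrative.)

def energy_from_wh_per_mile(miles: float, wh_per_mile: float) -> float:
    """kWh implied by the energy app's average Wh/mile over a trip."""
    return miles * wh_per_mile / 1000.0

def meters_agree(trip_meter_kwh: float, miles: float, wh_per_mile: float,
                 tolerance_kwh: float = 0.2) -> bool:
    """True if the two displays agree within a chosen tolerance (0.2 kWh
    here is an arbitrary allowance for display rounding)."""
    implied = energy_from_wh_per_mile(miles, wh_per_mile)
    return abs(implied - trip_meter_kwh) <= tolerance_kwh

# Example: a 30-mile drive averaging 310 Wh/mile implies 9.3 kWh.
print(energy_from_wh_per_mile(30, 310))  # 9.3
print(meters_agree(9.4, 30, 310))        # True (within 0.2 kWh)
```

Repeating this comparison at several charge levels (as suggested above) would show whether any disagreement is constant or varies with state of charge.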
C. Battery capacity with 5% reserve assumed. If variables #3 and #4 can be eliminated, we then know that the power consumed shown on the trip display is an accurate measure of consumption in all circumstances - a very valuable piece of knowledge. The test then is as I stated in my original response: charge to max capacity and drive (without stopping) until you reach zero miles rated range on the speedo. The total actual battery capacity (Kt) (assuming the 5% reserve, as reported to me directly by Tesla), based solely on the reported power consumed (Pc), is then: Kt = (.05*Kt) + Pc ... solving for Kt gives Kt = Pc/.95. Barring the hopefully remote possibility of major sources of non-consumption losses (electron tunneling, Heisenberg uncertainty in measurement, or battery cell explosion losses), we can determine the actual capacity of the battery from the consumption data and the knowledge (or assumption) that there is a 5% reserve (as reported to me by Tesla's technical support). This will give an individual driver an estimate of his or her effective actual total battery capacity, assuming the 5% reserve. If the test is consistent (i.e., the same vehicle consistently delivers the same total battery capacity using the above test in all circumstances), then we know we can disregard the other sources of possible non-consumption losses, at least as regards battery capacity and power consumption - which is very valuable information.
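The Test C arithmetic can be sketched in a few lines. Note that the 5% reserve figure is an assumption (as reported to me by Tesla), and the 78 kWh sample reading is invented for illustration.

```python
# Test C: solve Kt = 0.05*Kt + Pc for total capacity Kt, given Pc = kWh
# consumed from a full max-range charge down to zero rated miles.
# The 5% reserve is an assumed figure, not a verified spec.

def total_capacity_kwh(consumed_kwh: float, reserve_fraction: float = 0.05) -> float:
    """Kt = Pc / (1 - reserve_fraction)."""
    return consumed_kwh / (1.0 - reserve_fraction)

# Example: if the trip meter showed 78 kWh consumed at zero rated miles
# (an invented reading), the implied total capacity is 78/.95, about
# 82.1 kWh - in line with the "more like 82 kWh" observation above.
print(round(total_capacity_kwh(78.0), 1))  # 82.1
```

The same function also shows how sensitive the result is to the reserve assumption: a 10% reserve with the same 78 kWh consumed would imply about 86.7 kWh instead.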
D. Differentiating Max Capacity from Reserve (#1 and #2). If we have enough folks do this test, we can differentiate between variables #1 and #2 to some extent. An individual driver doing this test still won't know whether the issue lies with charging, the battery, or the reserve (i.e., he can't know whether the battery does not charge to full capacity, versus the battery simply not having the capacity, versus there being a greater-than-5% reserve). However, with enough data points (Pc from full charge, assuming Pc is consistent for a given vehicle as discussed above) from different drivers, we should be able to calculate the variance among vehicles with the same battery. If the variance is large enough to explain the "missing" 4-6 kWh, that would imply that actual battery capacity varies by vehicle and the 85 kWh is just a manufacturing target rather than a guaranteed capacity (it would be nice, from a customer relations perspective, if the median of this test among a large number of drivers turned out to be 85 kWh, but that would mean there are folks with greater-than-85 kWh batteries too). If the variance is small, that would mean that either (i) the reserve is greater than 5%, OR (ii) the batteries cannot be charged to their full capacity (possibly true if Tesla wanted to provide for loss of cells over time; perhaps some cells are shut off until other cells are lost, and then these reserved cells are activated - a different kind of reserve ...).
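The Test D aggregation could look something like the sketch below: pool each driver's implied capacity (Kt = Pc/.95) and look at the median and spread. The sample Pc readings are invented for illustration only.

```python
# Test D: pool implied capacities from many drivers and examine the spread.
# A tight spread points at the reserve or charging limit (#2 or a >5%
# reserve); a wide spread points at per-vehicle capacity variation (#1).
import statistics

def implied_capacities(consumed_kwh_samples, reserve_fraction=0.05):
    """Kt = Pc / (1 - reserve) for each driver's full-charge Pc reading."""
    return [pc / (1.0 - reserve_fraction) for pc in consumed_kwh_samples]

# Invented Pc readings (kWh consumed from full charge) from several cars:
samples = [77.5, 78.0, 78.4, 77.8, 78.2]
caps = implied_capacities(samples)

print(round(statistics.median(caps), 1))  # median implied capacity: 82.1
print(round(statistics.stdev(caps), 2))   # spread among the invented cars
```

With real data from enough owners, the median answers "what does a typical pack deliver?" and the standard deviation answers "is the missing 4-6 kWh per-vehicle variance or a systematic hold-back?"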
E. Differentiating Reserve from Full Charging Capacity. I can't think of a way to differentiate these unless the actual battery charge and resistance of the pack could be directly measured. If such could be measured, we would know for sure...