
Energy accounting

I am not satisfied with using the modeled 267 Wh/mile - it is a kludge, and I think the problem lies in our "acceptance" of it as a means to explain anything. The 267 Wh/mile solution is just a matter of fitting confusing data to a linear equation - the lost 40 Wh/mile is assumed to go to the Troll Under the Bridge. But that cannot be the final answer, and I think that is why you are seeing anomalies with the REST data not neatly coinciding with a zero point.

The fact is that we still do not have a good handle on the "true" reserve (and, IMHO, the true battery capacity), so how can we calculate the consumption rate accurately?

I also don't believe that Tesla would intentionally mislead us by stating rated miles at the outset of a trip that include the reserve, and then decrementing rated miles faster than they are consumed to account for the reserve. That seems very kludge-y and not at all how Tesla approaches engineering elsewhere (not to mention the bad PR they would get if that were proven to be true). If the problem were to account for reserve, there are a dozen other approaches Tesla could have used that would have been more accurate and not misleading. I just don't believe that Tesla would have chosen to intentionally obfuscate rated mileage and consumption in order to "hide" a reserve.

That is why I proposed ignoring the entire concept of rated range (which requires many assumptions about what is going on both in the software and under the hood) and focusing instead purely on consumption from a 100% max range charge - the only variables that way are the accuracy of the trip meter's report of consumed kWh, the true capacity of the battery, and the true maximum charge. Since the trip meter does report consumption since last charge in kWh, we can calculate the reserve solely from that. We don't need to know what the Troll Under the Bridge is doing (or why). It appears clear to me from testing JUST on the issue of reserve and battery capacity that one or more of the following is true: (1) I (we?) don't have an 85 kWh battery - mine seems to be more like 82 kWh (inclusive of the 5% reserve), (2) the battery does not charge to full 85 kWh capacity regardless of how I set the max charge, (3) the sensing system does not accurately report power consumption, or (4) there are power losses that the power consumption sensing system is not capturing (could be as simple as AC usage, or as complex as electron tunneling). It would be really good to know which of (1)-(4) is true and try to quantify it.

What we really need is a way to more directly measure and track the charge state of the battery in kWh rather than percentage charge. This would require some pretty good EE skills as well as knowledge of the electrical systems of the car. I am going to keep pressing Tesla on this until I get a good answer (no response yet to my log data from last week that was sent to my local service center by Tesla customer care).

EDIT: Here are some tests that we *can* do to test the variables (1) - (4) above:

A. Trip Meter kWh Consumption Inclusiveness (variable #4). It would be helpful to confirm whether the trip meter's report of kWh consumed includes (or does not include) various sources of consumption that are within the driver's control or ability to measure. Specifically, the following should impact kWh consumed for a given trip and should result in changes in the kWh consumed for the same trip: AC usage, lights, bluetooth, USB charging, increased resistance due to wind and/or rain, and ambient temperature. If the trip meter is measuring actual power consumed from all of these sources, then future testing of variables #1-3 can disregard this variable. The obvious test is to measure energy consumed for a given trip (preferably > 50 miles to ensure accuracy) with the AC off/on full blast, lights on/off, bluetooth on/off, USB charging of multiple devices/no charging, against a headwind and with no headwind, and in 80/70/60 degree heat. If the displayed consumption for an identical trip changes with each of these sources used/not used, then we know the trip meter is measuring actual consumption from all measurable sources (or not).
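A minimal sketch of how those A/B comparisons could be logged and diffed (all names and numbers below are hypothetical placeholders, not real measurements):

```python
# Hypothetical log of repeated runs of the same >50 mile test loop,
# toggling one load at a time. All values are placeholders.
baseline_kwh = 14.2  # trip meter kWh for the loop with everything off

runs = {
    "AC full blast": 15.1,
    "lights on": 14.3,
    "USB charging x3": 14.25,
}

# If the trip meter captures each load, every delta should be positive
# and roughly match that load's draw multiplied by the trip duration.
for condition, kwh in runs.items():
    print(f"{condition}: +{kwh - baseline_kwh:.2f} kWh vs baseline")
```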

B. Accuracy of Trip Meter Power Consumption (variable #3). It would be helpful to know if the trip meter's power consumed accurately reflects the power that is consumed for a given mileage. To test this, we have to either model power consumption or else rely on the energy app. Since the energy app does not seem to be affected by the Troll, and the energy app's report of average Wh/mile *does* seem to take into account driver-controlled variables (e.g., if I drive with AC on full blast my Wh/mile goes up, in wind/rain it goes up, etc.), I think it is probably safe to rely on the energy app's report of average Wh/mile as an accurate reflection of consumption. The test is to drive 30 miles at a relatively constant speed (or as close as possible to one) and (i) record the starting and ending power consumption in order to determine total kWh consumed in the thirty-mile drive, as reported by the trip meter display, (ii) record the 30-mile average Wh/mile at the end of the test as reported by the energy app, and (iii) confirm that the energy app and the trip meter agree (i.e., 30 * (reported Wh/mile) / 1000 = ending kWh consumed - starting kWh consumed). It would also be good to do this test at various levels of battery charge to confirm that the results are consistent regardless of charge level (one of my hypotheses is that power consumption varies with percentage charge and such variance is not accurately captured by the energy app's Wh/mile report).
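A quick sketch of that agreement check, with hypothetical readings plugged in:

```python
def meters_agree(avg_wh_per_mile, start_kwh, end_kwh,
                 miles=30.0, tol_kwh=0.2):
    """Check trip meter vs energy app: miles * (Wh/mi) / 1000 == delta kWh."""
    energy_app_kwh = miles * avg_wh_per_mile / 1000.0
    trip_meter_kwh = end_kwh - start_kwh
    return abs(energy_app_kwh - trip_meter_kwh) <= tol_kwh

# Hypothetical readings: 310 Wh/mi over 30 miles vs 9.3 kWh on the trip meter.
print(meters_agree(310.0, 12.0, 21.3))  # 9.3 vs 9.3 kWh -> True
```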

C. Battery Capacity with 5% Reserve Assumed. If variables #3 and #4 can be eliminated, we then know that the power consumed reflected on the trip display is an accurate measure of consumption in all circumstances - a very valuable piece of knowledge. The test then is as I stated in my original response: charge to max capacity and drive (without stopping) until you reach zero miles rated range on the speedo. The total actual battery capacity (Kt), assuming the 5% reserve as reported to me directly from Tesla, based solely on the report of power consumed (Pc), is then: Kt = 0.05*Kt + Pc ... solving for Kt gives Kt = Pc/0.95. Barring the hopefully remote possibility of major sources of non-consumption losses (electron tunneling, Heisenberg uncertainty in measurement, or battery cell explosion losses), we can determine the actual capacity of the battery from the consumption data and the knowledge (or assumption) that there is a 5% reserve (as reported by Tesla's technical support to me). This will give an individual driver an estimate of his or her effective actual total battery capacity assuming the 5% reserve. If the test is consistent (e.g., the same vehicle consistently delivers the same total battery capacity using the above test in all circumstances), then we know that we can disregard the other sources of possible non-consumption losses, at least as regards battery capacity and power consumption, which is very valuable information.
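That algebra in runnable form (the 77.9 kWh figure is just a made-up example):

```python
def total_capacity_kwh(consumed_kwh, reserve_fraction=0.05):
    """Solve Kt = reserve_fraction * Kt + Pc  =>  Kt = Pc / (1 - reserve_fraction)."""
    return consumed_kwh / (1.0 - reserve_fraction)

# Hypothetical: 77.9 kWh consumed from a 100% range charge down to 0 rated miles.
print(round(total_capacity_kwh(77.9), 2))  # 82.0 kWh total, inclusive of the 5% reserve
```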

D. Differentiating Max Capacity from Reserve (#1 and #2). If we have enough folks do this test, we can differentiate between variables #1 and #2 to some extent; an individual driver doing this test still won't know whether there are issues with charging, the battery, or the reserve (e.g., an individual can't know whether the issue is that the battery does not charge to full capacity, versus the battery simply not having the capacity, versus there being a greater-than-5% reserve). However, with enough data points (Pc from full charge, assuming Pc is consistent for a given vehicle as discussed above) from different drivers, we should be able to calculate the variance among vehicles with the same battery. If the variance is large enough to explain the "missing" 4-6 kWh, that would imply that actual battery capacity varies by vehicle and that 85 kWh is just a manufacturing target rather than a guaranteed capacity (it would be nice, from the perspective of customer relations, if the median of this test among a large number of drivers turned out to be 85 kWh, but that would mean there are folks with greater-than-85 kWh batteries too). If the variance is small, that would mean that either (i) the reserve is greater than 5%, OR (ii) the batteries cannot be charged to their full capacity (possibly true if Tesla wanted to provide for loss of cells over time; perhaps some cells are shut off until other cells are lost, and then these reserved cells are activated - a different kind of reserve ...).
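A sketch of the pooled analysis, using made-up capacity estimates from hypothetical participants:

```python
from statistics import mean, stdev

# Hypothetical Kt estimates (kWh) from different owners of the "same" 85 kWh pack,
# each computed as Pc / 0.95 from their own full-charge-to-zero-miles run.
estimates = [81.4, 82.0, 82.6, 81.1, 83.0, 82.3]

print(f"mean {mean(estimates):.1f} kWh, stdev {stdev(estimates):.1f} kWh")
# A spread of several kWh would point at per-vehicle capacity variance;
# a tight spread would point at a larger reserve or a capped max charge.
```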

E. Differentiating Reserve from Full Charging Capacity. I can't think of a way to differentiate these unless the actual battery charge and resistance of the pack could be directly measured. If such could be measured, we would know for sure...
 
So most everyone looking at their in-dash displays appears to come to the conclusion that there is a rather linear relationship between rated range and battery capacity, which translates to a constant Wh accounted for per mile - one number for the total rated range, and a higher number subtracted in order to build up a buffer.
I wanted to verify that and went through about two thousand miles worth of driving data that I have collected using the REST API. And there the results look quite confusing...
For any reported SOC (sadly, only reported in full percentage points) we obviously get several different reported rated ranges (as those are reported in tenth-of-a-mile resolution). But one would assume that if things were really as simple as explained here, there would be a specific cutoff between each discrete percentage "step" - i.e., 90% SOC corresponds to 184.0-186.1 miles of rated range (sorry, S60 owner here), and then 89% SOC would be 181.8-183.9 miles, or something along those lines. Yet the data that I have collected shows something rather surprising:
90% covers 180.7 - 186.1 miles
89% covers 178.0 - 184.1 miles
88% covers 175.3 - 181.7 miles
87% covers 173.2 - 179.3 miles
...
60% covers 111.9 - 120.7 miles
59% covers 109.2 - 118.3 miles
....
This is what I've been saying for a long time now. Rated range is just an estimate derived from numerous inputs like ambient temp, battery temp, SOC, and more. This is why it's so difficult to attribute losing a few rated miles to battery degradation. Without the CAC value, which Tesla hasn't provided to owners, it's all a guess by anyone here (including myself).
 
I am not satisfied with using the modeled 267 Wh/mile - it is a kludge, and I think the problem lies in our "acceptance" of it as a means to explain anything. The 267 Wh/mile solution is just a matter of fitting confusing data to a linear equation - the lost 40 Wh/mile is assumed to go to the Troll Under the Bridge. But that cannot be the final answer, and I think that is why you are seeing anomalies with the REST data not neatly coinciding with a zero point.
I'm not happy with it, but at least the line fits the data plot remarkably well...
The fact is that we still do not have a good handle on the "true" reserve (and, IMHO, the true battery capacity), so how can we calculate the consumption rate accurately?

That is why I proposed ignoring the entire concept of rated range (which requires many assumptions about what is going on both in the software and under the hood) and focusing instead purely on consumption from a 100% max range charge - the only variables that way are the accuracy of the trip meter's report of consumed kWh, the true capacity of the battery, and the true maximum charge. Since the trip meter does report consumption since last charge in kWh, we can calculate the reserve solely from that. We don't need to know what the Troll Under the Bridge is doing (or why). It appears clear to me from testing JUST on the issue of reserve and battery capacity that one or more of the following is true: (1) I (we?) don't have an 85 kWh battery - mine seems to be more like 82 kWh (inclusive of the 5% reserve), (2) the battery does not charge to full 85 kWh capacity regardless of how I set the max charge, (3) the sensing system does not accurately report power consumption, or (4) there are power losses that the power consumption sensing system is not capturing (could be as simple as AC usage, or as complex as electron tunneling). It would be really good to know which of (1)-(4) is true and try to quantify it.
My main issue is that the SOC is only available in full percentage points - so with a resolution of 0.6 kWh (or 0.85 kWh for the 85) per percentage point. And that is assuming the SOC is actually calibrated to go from 0 to 60 kWh (0 to 85 kWh) - which I am not sure I believe.
What we really need is a way to more directly measure and track the charge state of the battery in kWh rather than percentage charge. This would require some pretty good EE skills as well as knowledge of the electrical systems of the car. I am going to keep pressing Tesla on this until I get a good answer (no response yet to my log data from last week that was sent to my local service center by Tesla customer care).
That's exactly what I want - better data on "how much energy is stored in my battery at any time". Right now I'm playing with the data reported by the streaming API (it also shows power in/out of the battery) - and it turns out that those data don't agree at all with what the trip meter in the dashboard shows us.
In yet another hack-ish approximation it appears that on my car using "(power in kW - 0.25) times time" and adding that up during a drive tracks what is displayed in the dashboard reasonably well. How insane is that???
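A rough sketch of that accumulation over streaming samples (the 'timestamp' in milliseconds and 'power' in kW field names are my assumption about the record layout):

```python
# Sum "(power in kW - 0.25) * time" over consecutive streaming samples.
# Assumes each sample is a dict with 'timestamp' (ms) and 'power' (kW).
def integrate_kwh(samples, bias_kw=0.25):
    total_kwh = 0.0
    for prev, cur in zip(samples, samples[1:]):
        dt_hours = (cur["timestamp"] - prev["timestamp"]) / 1000.0 / 3600.0
        total_kwh += (prev["power"] - bias_kw) * dt_hours
    return total_kwh
```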

- - - Updated - - -

This is what I've been saying for a long time now. Rated range is just an estimate derived from numerous inputs like ambient temp, battery temp, SOC, and more. This is why it's so difficult to attribute losing a few rated miles to battery degradation. Without the CAC value, which Tesla hasn't provided to owners, it's all a guess by anyone here (including myself).
But even 'ideal miles' show the same fluctuations. Why would that be the case???
 
In yet another hack-ish approximation it appears that on my car using "(power in kW - 0.25) times time" and adding that up during a drive tracks what is displayed in the dashboard reasonably well. How insane is that???
Sounds like there's a bias going on and/or that the curve fit is not accounting for additional variables (like climate control). If you're looking over the entire set of data, you may be "oversimplifying". Try bucketing sets of data by temperature, time-of-year, average speed, or other telemetry data. I suspect you'll see different curves/lines fit the data for the different buckets.
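One way that bucketing could look (a sketch; the field names on each sample are assumptions, and the fit is a plain least-squares line per bucket):

```python
from collections import defaultdict

# Group samples into 10-degree outside-temperature buckets, then fit a
# rated_range-vs-SOC line to each bucket separately.
def bucket_by_temp(samples, width=10):
    buckets = defaultdict(list)
    for s in samples:
        buckets[int(s["outside_temp"] // width) * width].append(s)
    return buckets

def fit_line(samples):
    """Least-squares slope/intercept of rated_range vs. soc."""
    n = len(samples)
    sx = sum(s["soc"] for s in samples)
    sy = sum(s["rated_range"] for s in samples)
    sxx = sum(s["soc"] ** 2 for s in samples)
    sxy = sum(s["soc"] * s["rated_range"] for s in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n
```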
 
Sounds like there's a bias going on and/or that the curve fit is not accounting for additional variables (like climate control). If you're looking over the entire set of data, you may be "oversimplifying". Try bucketing sets of data by temperature, time-of-year, average speed, or other telemetry data. I suspect you'll see different curves/lines fit the data for the different buckets.
Excellent idea. I need someone who has more data than me. I have six weeks of driving the car, and really only have good data for the past two weeks (I tweaked the data acquisition algorithm a few times). But I really like the idea of trying to control for other factors.
 
Excellent idea. I need someone who has more data than me. I have six weeks of driving the car, and really only have good data for the past two weeks (I tweaked the data acquisition algorithm a few times). But I really like the idea of trying to control for other factors.
For those who might be interested in supporting the effort, what format would be most convenient for your consumption? And which fields do you want and/or require? For example, before handing out any data I have I'd probably strip out the GPS information (and maybe some other fields).
 
That's exactly what I want - better data on "how much energy is stored in my battery at any time". Right now I'm playing with the data reported by the streaming API (it also shows power in/out of the battery) - and it turns out that those data don't agree at all with what the trip meter in the dashboard shows us.
In yet another hack-ish approximation it appears that on my car using "(power in kW - 0.25) times time" and adding that up during a drive tracks what is displayed in the dashboard reasonably well. How insane is that???

But even 'ideal miles' show the same fluctuations. Why would that be the case???

I edited my post above to provide a series of tests that we can do. I think it would be really helpful to eliminate (or at least isolate) variables #1-4. If we have enough people doing these tests, we should be able to figure out roughly what is going on. My money is on variance in actual battery capacity, but I would love to know the real answer.
 
But even 'ideal miles' show the same fluctuations. Why would that be the case???
Because ideal miles are estimated just like rated miles, except using different input criteria.

Battery capacity requires complex code to accurately measure (and it's still not even close to being dead on). ICE cars have a definite amount of liquid on board at any given time, which makes it extremely simple to measure.
 
For those that might be interested in supporting the effort, what format would be most convenient for your consumption? And which fields do you want and/or require? For example, before handing out any data I have I'd probably strip out the GPS information (and maybe some other fields).
I currently use the streaming data to get odometer readings and power in/out and speed. And the charge_state data to get battery_level and corresponding battery_range (the latter is higher resolution in the charge_state data than in the streaming data: tenth of mile vs. mile granularity).
But that is overkill for just getting a feeling for specifically the relationship I'm discussing in the post above (yes, the tenth of a mile is nicer, but let's keep it simple):
We could derive /some/ information from simply a dump of your streaming data with EVERYTHING removed except for SOC and rated range. That would be a start and would make sure you don't share anything even remotely sensitive.
If you have this for a large sample size that would be really interesting.
 
I think our 85 kWh batteries are actually 82.95 kWh. That might make the difference in calculating the "zero" point. See http://www.fueleconomy.gov/feg/epadata/12data.zip and look at columns CP and CQ in the spreadsheet: 350 volts * 237 amp-hours = 82.95 kWh (I think). With no reserve calculated in, at 304 Wh/mile you get 272 miles of range - exactly what I see when I max charge.

Not sure what this means, but it partially explains the zero point issue.
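The arithmetic behind those numbers, for anyone who wants to check it:

```python
volts, amp_hours = 350, 237            # columns CP/CQ in the EPA spreadsheet
pack_kwh = volts * amp_hours / 1000    # 82.95 kWh
print(pack_kwh)

wh_per_mile = 304                      # EPA consumption figure cited above
print(int(pack_kwh * 1000 / wh_per_mile))  # 272 miles, matching the max-charge display
```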
 
I think our 85 kWh batteries are actually 82.95 kWh. That might make the difference in calculating the "zero" point. See http://www.fueleconomy.gov/feg/epadata/12data.zip and look at columns CP and CQ in the spreadsheet: 350 volts * 237 amp-hours = 82.95 kWh (I think). With no reserve calculated in, at 304 Wh/mile you get 272 miles of range - exactly what I see when I max charge.

Not sure what this means, but it partially explains the zero point issue.
The file you link to doesn't appear to contain what you are referencing here. CP and CQ are "Battery Charger Type" and "Comments" and I can't find any Tesla cars in it.
 
You are looking in the wrong tab... the tabs are "FEGuide" which only covers ICE, "PHEVs" which are hybrids, and "EVs" which are our cars.
:redface:
Oops.

- - - Updated - - -

OK, looking at the right tab now... and switching to the 2013 data... there they claim the exact same "Total Voltage for Battery Pack" and "Batt Energy Capacity" for the S40/S60/S85. Each time 400 V * 245 Ah, or 98 kWh...
 
Hey Dirkhh,

Great data!

You should check out this thread on TM site. It covers Tesla's response to questions of how the "reserve" is accumulated:
Sign into My Tesla | Tesla Motors
It looks like you are double counting the hidden + buffer reserve. The hidden part is not shown as part of the SOC; only the buffer reserve is. That would show that you have a 7% "reserve" remaining when you hit 0 miles remaining. Tesla has set their 0 SOC to hide any battery protective measures under it, i.e., we would see the anti-bricking protection as a negative SOC.

I should also note that all of my data, including the numbers used above, are car specific. They are specific to the particular 85kWh cars I have been investigating, and I expect them to be significantly different for 60kWh models. If you are going to look at various cars, keep in mind that numbers from one are not directly transferable to another.

I would suppose, for your particular car and from looking at your numbers, that your "rated mile" energy unit is ~287Wh/mi, and that your reserve is built up at ~20Wh/mi. It would be interesting to know if that matches your own "rated mile" energy unit calculations done by the car for projected mileage.

As to your scatter around a particular SOC: I think the SOC displayed is a very sanitized version that is sent to the Tesla app to allow it to know how "full" to make the battery displayed on your mobile device. It is most likely that you are seeing the results of two effects in your captured data.

1. Temperature correction of the CAC value by the car. If the battery is cold, the amount of available energy from it is reduced, and thus it should (and does) show a lower distance remaining for a given SOC. You might be able to see that this is happening by noting that the error scatter extends to the low side of the calculated number for data taken when cold.

2. CAC error due to inaccuracy in battery voltage -> SOC measurements. When trying to predict the amount of energy left in the battery, generally speaking, it is much, much easier to know how much energy is left when the battery is nearly full or nearly empty. This is because most of the energy is stored over a fairly narrow voltage range, and the voltage only shoots up or falls off when the battery is nearly full or empty. For example, a battery cell voltage of 4.1V to 4.2V represents about 10% of the battery's capacity, while 3.6V to 3.7V represents over 20% of the battery's capacity. This would cause more scatter around mid SOC and very little near empty and full. This is part of the reason that the best way to compare batteries is when they are charged to a 100% range charge, thus eliminating as much of this type of error as possible.
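To illustrate the effect with a toy voltage-to-SOC curve (the breakpoints are made up, chosen only to be consistent with the two examples above):

```python
# Piecewise-linear cell voltage -> SOC lookup. Illustrative numbers only:
# 4.1-4.2 V spans ~10% of capacity, 3.6-3.7 V spans ~20%, per the text above.
curve = [(3.0, 0.0), (3.4, 5.0), (3.6, 20.0), (3.7, 42.0),
         (4.0, 80.0), (4.1, 90.0), (4.2, 100.0)]

def soc_from_voltage(v):
    for (v0, s0), (v1, s1) in zip(curve, curve[1:]):
        if v0 <= v <= v1:
            return s0 + (s1 - s0) * (v - v0) / (v1 - v0)
    raise ValueError("voltage out of range")

# The same 10 mV measurement error moves the SOC estimate about twice as
# far mid-pack (flat part of the curve) as it does near full (steep part):
print(soc_from_voltage(3.66) - soc_from_voltage(3.65))  # ~2.2 SOC points
print(soc_from_voltage(4.16) - soc_from_voltage(4.15))  # ~1.0 SOC point
```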


Peter




So most everyone looking at their in-dash displays appears to come to the conclusion that there is a rather linear relationship between rated range and battery capacity, which translates to a constant Wh accounted for per mile - one number for the total rated range, and a higher number subtracted in order to build up a buffer.
I wanted to verify that and went through about two thousand miles worth of driving data that I have collected using the REST API. And there the results look quite confusing...
For any reported SOC (sadly, only reported in full percentage points) we obviously get several different reported rated ranges (as those are reported in tenth-of-a-mile resolution). But one would assume that if things were really as simple as explained here, there would be a specific cutoff between each discrete percentage "step" - i.e., 90% SOC corresponds to 184.0-186.1 miles of rated range (sorry, S60 owner here), and then 89% SOC would be 181.8-183.9 miles, or something along those lines. Yet the data that I have collected shows something rather surprising:
90% covers 180.7 - 186.1 miles
89% covers 178.0 - 184.1 miles
88% covers 175.3 - 181.7 miles
87% covers 173.2 - 179.3 miles
...
60% covers 111.9 - 120.7 miles
59% covers 109.2 - 118.3 miles
....
So there is quite significant overlap in these numbers. Below is a little plot to illustrate this more (I never ran my car below 11%, but did a couple of range charges):

What this means is that for, say, 181 rated miles of range my car sometimes reports 88, 89 or 90% SOC. I tried this with ideal miles as well and while the numbers are different, the graph looks exactly the same.

Any ideas what's up here? Is this bad reporting by the REST API, and should I just ignore the SOC value given there and calculate the 'real' SOC from the rated range? If that's the case, then why is the SOC given there at all?
For those who would like to try this for themselves, the SOC is called 'battery_level' and available from the charge_state REST call... speaking of which, I'll be happy to share the tool used to create the graph below - assuming you are collecting your REST data into a MongoDB with the teslams Javascript tools this will be trivial to use.

The other thing that doesn't seem consistent with what is reported here: if I overlay the graph with a 'good fit' approximation (in my case that's a 267 Wh/mile line) and look at where it intersects '0 rated miles', I get about 7% SOC. So that's nowhere near the 5% buffer plus about 5% (3 kWh) hidden away (209 miles of rated range * 14 Wh/mile - bluetinc even assumes 18 Wh/mile, which would get me 209 * 18 = 3.8 kWh, or an expected SOC of 11.1% when hitting range 0) - yet at 11% SOC I have rated ranges of 9.5-10.2 miles...
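For anyone who wants to reproduce the binning above from their own logs, here's a minimal sketch (it assumes you've collected the charge_state records into a list of dicts; 'battery_level' and 'battery_range' are the field names mentioned above):

```python
from collections import defaultdict

# Show the spread of reported rated range within each whole-percent SOC.
def range_spread(records):
    bins = defaultdict(list)
    for r in records:
        bins[r["battery_level"]].append(r["battery_range"])
    for soc in sorted(bins, reverse=True):
        print(f"{soc}% covers {min(bins[soc]):.1f} - {max(bins[soc]):.1f} miles")
```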
 
It looks like you are double counting the hidden + buffer reserve. The hidden part is not shown as part of the SOC; only the buffer reserve is. That would show that you have a 7% "reserve" remaining when you hit 0 miles remaining. Tesla has set their 0 SOC to hide any battery protective measures under it, i.e., we would see the anti-bricking protection as a negative SOC.
That sucks. So 100% SOC isn't 85 kWh or 60 kWh but instead (85 - something) kWh and (60 - something else) kWh. Great.
But actually, reading the thread again, I'm not sure that this is the consensus... many posts seem to assume that you can actually SEE the 5% - there is for example a fellow S60 owner who reports 0 rated miles at 10% SOC.
I should also note that all of my data, including the numbers used above, are car specific. They are specific to the particular 85kWh cars I have been investigating, and I expect them to be significantly different for 60kWh models. If you are going to look at various cars, keep in mind that numbers from one are not directly transferable to another.

I would suppose, for your particular car and from looking at your numbers, that your "rated mile" energy unit is ~287Wh/mi, and that your reserve is built up at ~20Wh/mi. It would be interesting to know if that matches your own "rated mile" energy unit calculations done by the car for projected mileage.
I don't understand how you arrive at these numbers. Would you mind explaining that a bit more? Are you simply dividing 60kWh by the rated range? That would ignore the hidden part of the battery capacity...
As to your scatter around a particular SOC: I think the SOC displayed is a very sanitized version that is sent to the Tesla app to allow it to know how "full" to make the battery displayed on your mobile device. It is most likely that you are seeing the results of two effects in your captured data.

1. Temperature correction of the CAC value by the car. If the battery is cold, the amount of available energy from it is reduced, and thus it should (and does) show a lower distance remaining for a given SOC. You might be able to see that this is happening by noting that the error scatter extends to the low side of the calculated number for data taken when cold.

2. CAC error due to inaccuracy in battery voltage -> SOC measurements. When trying to predict the amount of energy left in the battery, generally speaking, it is much, much easier to know how much energy is left when the battery is nearly full or nearly empty. This is because most of the energy is stored over a fairly narrow voltage range, and the voltage only shoots up or falls off when the battery is nearly full or empty. For example, a battery cell voltage of 4.1V to 4.2V represents about 10% of the battery's capacity, while 3.6V to 3.7V represents over 20% of the battery's capacity. This would cause more scatter around mid SOC and very little near empty and full. This is part of the reason that the best way to compare batteries is when they are charged to a 100% range charge, thus eliminating as much of this type of error as possible.
I have climate info as well - let me try to correlate that to the scatter data. It's on my todo list, but I haven't had a chance to implement that just yet.
 
That sucks. So 100% SOC isn't 85 kWh or 60 kWh but instead (85 - something) kWh and (60 - something else) kWh. Great.
But actually, reading the thread again, I'm not sure that this is the consensus... many posts seem to assume that you can actually SEE the 5% - there is for example a fellow S60 owner who reports 0 rated miles at 10% SOC.

I've had my car down to 1-5 miles left several times. I've verified that the SOC each time is ~6% and matched that against the total pack voltage. Yes, the 85kWh pack never did provide 85kWh of drivable energy; mine provided ~84kWh when it was new.


Oh, I should note that with the old software, after a cold soak at sub-25 degrees, it was possible to see very low "rated range" with a very high SOC. I was able to see 0 rated range at about 19% SOC. I'm discounting that type of data for this discussion.

As for lack of consensus, we are dealing with a fairly complicated system with very little hard data and one that makes it hard to take quality, repeatable data. I'm not surprised by the lack of consensus at all :). I have a bit of a leg up because I've designed other Li-Ion BMS systems in the past, and while Tesla's is significantly larger, the basics are all the same.

I don't understand how you arrive at these numbers. Would you mind explaining that a bit more? Are you simply dividing 60kWh by the rated range? That would ignore the hidden part of the battery capacity...

These were just rough numbers, but they went something like this:
I first calculated that you would use 267 Wh/mi * 210 miles of range = 56.07 kWh going from 100% -> 7%. If this is 93%, then 100% = 60.29 kWh of (drivable) pack size. This 60.29 kWh divided by the original 210 miles then yields 287 Wh/mi as the "rated range" energy unit. Interestingly, this would show that the 60kWh cars do have a drivable capacity of ~60kWh.
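The same derivation as a few lines of code, using the numbers from this thread:

```python
wh_per_rated_mile_fit = 267    # slope of the line fit to the REST data
rated_miles_full = 210         # approximate S60 max-range charge
soc_at_zero_miles = 0.07       # where the fit crosses 0 rated miles

used_kwh = wh_per_rated_mile_fit * rated_miles_full / 1000   # 56.07 kWh
drivable_kwh = used_kwh / (1 - soc_at_zero_miles)            # ~60.29 kWh
print(round(drivable_kwh * 1000 / rated_miles_full))         # 287 Wh per rated mile
```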

It would be quite interesting to know if this number is correct. While not the simplest thing, you could check it by driving with the energy app up, in averaging mode, and seeing whether, when the left side shows 287Wh/mi, the projected range is identical to the "rated range" displayed on the speedo.


I have climate info as well - let me try to correlate that to the scatter data. It's on my todo list, but I haven't had a chance to implement that just yet.

That would be very cool to see! Such a shame that Tesla doesn't provide any way for us to see the internal battery temperature!


Peter
 
My MS60 was sitting in the shop for several weeks awaiting repairs, so I plotted the rated range as a function of battery capacity as the vampires drank the juice. As you can see (if the image upload worked), the rated range definitely goes to zero when the battery reaches 10% capacity.
Jaggies on the line are the 1% charge increments.

[Attached plot: Screen Shot 2013-09-04 at 4.56.03 PM.png - rated range vs. remaining battery capacity]