So most everyone looking at their in-dash display seems to conclude that there is a fairly linear relationship between rated range and battery capacity, which translates into a constant Wh-per-mile figure: one number used to compute total rated range, and a slightly higher number subtracted per mile driven in order to build up a buffer.
I wanted to verify that, so I went through about two thousand miles' worth of driving data that I have collected via the REST API. And there the results look quite confusing...
For any reported SOC (sadly only reported in full percentage points) we obviously get several different reported rated ranges (as those are reported in tenths of a mile). But if things were really as simple as explained here, one would expect a specific cutoff between each discrete percentage "step" - i.e. 90% SOC corresponds to 184.0-186.1 miles of rated range (sorry, S60 owner here), 89% SOC to 181.8-183.9 miles, or something along those lines. Yet the data I have collected shows something rather surprising:
90% covers 180.7 - 186.1 miles
89% covers 178.0 - 184.1 miles
88% covers 175.3 - 181.7 miles
87% covers 173.2 - 179.3 miles
...
60% covers 111.9 - 120.7 miles
59% covers 109.2 - 118.3 miles
...
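To make the contrast concrete, here is a little sketch (in JavaScript, since that's what teslams uses) of what the bands *would* look like under a strictly linear model. The 209 miles of full-pack rated range and the floor-style rounding of the displayed SOC are my assumptions for illustration, not anything confirmed by the API:

```javascript
// Hypothetical strictly linear model: rated range is proportional to SOC,
// and the reported integer SOC is the floor of the true percentage.
// fullRange = 209 rated miles at 100% is an assumption for an S60.
function expectedBand(soc, fullRange = 209) {
  const milesPerPercent = fullRange / 100; // ~2.09 miles per % point
  // If reported SOC s means the true SOC lies in [s, s+1),
  // the rated-range band for s is:
  return { lo: soc * milesPerPercent, hi: (soc + 1) * milesPerPercent };
}

// Under this model adjacent bands meet exactly but never overlap:
const b89 = expectedBand(89); // ≈ 186.0 - 188.1 miles
const b90 = expectedBand(90); // ≈ 188.1 - 190.2 miles
console.log(b90.lo >= b89.hi); // no overlap - unlike the data above
```

With my real data, of course, the bands overlap by several miles, which is exactly what this model rules out.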
So there is quite a significant overlap in these numbers. Below is a little plot to illustrate this (I never ran my car below 11%, but did do a couple of range charges):
What this means is that for, say, 181 rated miles of range my car sometimes reports 88, 89 or 90% SOC. I tried this with ideal miles as well, and while the numbers are different, the graph looks exactly the same.
Any ideas what's up here? Is this bad reporting by the REST API, and should I just ignore the SOC value given there and calculate the 'real' SOC from the rated range? If that's the case, why is the SOC given there at all?
For those who would like to try this for themselves: the SOC is called 'battery_level' and is available from the charge_state REST call. Speaking of which, I'll be happy to share the tool used to create the graph below - assuming you are collecting your REST data into a MongoDB with the teslams Javascript tools, it will be trivial to use.
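The core of that tool boils down to grouping records by reported SOC and tracking the min/max rated range seen for each. Here's a minimal sketch of that aggregation; 'battery_level' and 'battery_range' are the actual charge_state field names, but the helper function and the sample records are just mine for illustration (in practice you'd stream the documents out of the MongoDB that teslams fills):

```javascript
// For each reported SOC ('battery_level'), track the min and max
// rated range ('battery_range') seen across all collected records.
function rangeBandsBySOC(records) {
  const bands = new Map();
  for (const { battery_level: soc, battery_range: miles } of records) {
    const band = bands.get(soc) ?? { min: Infinity, max: -Infinity };
    band.min = Math.min(band.min, miles);
    band.max = Math.max(band.max, miles);
    bands.set(soc, band);
  }
  return bands;
}

// Made-up sample records standing in for real charge_state documents:
const sample = [
  { battery_level: 90, battery_range: 180.7 },
  { battery_level: 90, battery_range: 186.1 },
  { battery_level: 89, battery_range: 184.1 },
  { battery_level: 89, battery_range: 178.0 },
];
const bands = rangeBandsBySOC(sample);
// bands.get(90) -> { min: 180.7, max: 186.1 }
// note how 89's max (184.1) already overlaps 90's min (180.7)
```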
The other thing that doesn't seem consistent with what is reported here: if I overlay the graph with a 'good fit' approximation (in my case that's a 267Wh/mi line) and look at where it intersects '0 rated miles', I get about 7% SOC. That's nowhere near the 5% buffer plus the roughly 5% (3kWh) hidden away (209 miles of rated range * 14Wh/mi - bluetinc even assumes 18Wh/mi, which would get me 209 * 18 = 3.8kWh, or an expected SOC of 11.1% when hitting rated range 0). Yet at 11% SOC I have rated ranges of 9.5-10.2 miles...
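To spell out the arithmetic I'm doing there - the 60kWh pack size and the 14/18 Wh-per-rated-mile figures are the assumptions from this thread, not anything I've measured:

```javascript
// Buffer arithmetic from the post, assuming a 60 kWh (S60) pack.
const packWh = 60000;
const ratedMiles = 209;

// Energy "hidden away" at 14 Wh and 18 Wh per rated mile:
const hidden14 = ratedMiles * 14; // 2926 Wh, i.e. ~2.9 kWh
const hidden18 = ratedMiles * 18; // 3762 Wh, i.e. ~3.8 kWh

// Expected SOC at 0 rated miles: the ~5% bottom buffer
// plus the hidden energy as a fraction of the pack:
const expectedSOC18 = 5 + (hidden18 / packWh) * 100; // roughly 11%
// ...well above the ~7% where my 267 Wh/mi fit actually crosses zero.
```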