Although this method may show the energy put into the pack, it does not take intercalation efficiency into account.
I assume it must account for intercalation efficiency to some extent, since these values from the API do seem to be a measure of energy put into the pack in some fashion. Note that the EPA indicates 78 kWh can be pulled from the battery per the Tesla-provided test results, and API pulls similar to those used above to generate the plots (which were done on an SR+) extrapolate to 76 kWh added for a full charge (0->310 miles) on the AWD vehicle.
That said, there is definitely a discrepancy between energy added to the pack and what shows up on the trip meter (for AWD it's 245 Wh/rmi for the energy you put in vs. 230 Wh/rmi for the energy you take out). It's somewhat of a semantic question, since all that matters is how far you can go for a given amount of energy from the meter. I could argue that the trip meter in the car reads low by the ratio 230/245; alternatively, it might be reading accurately and you simply can't get the same amount of energy out of the pack as you put in (though that hypothesis is not supported by the EPA numbers). But it doesn't really matter: one way or another, the energy is either undercounted by the trip meter or lost to intercalation or other inefficiencies.
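To make the discrepancy concrete, here's the arithmetic behind those figures (a quick sketch; the 245/230 Wh/rmi and 310-mile numbers are the ones quoted in this thread, not independent measurements):

```python
# Numbers from the thread: energy added per rated mile (from API pulls)
# vs. energy consumed per rated mile (from the trip meter), AWD vehicle.
wh_in_per_rmi = 245    # Wh/rmi added to the pack while charging
wh_out_per_rmi = 230   # Wh/rmi counted by the trip meter while driving
rated_miles = 310      # full-charge rated range

full_charge_kwh_in = wh_in_per_rmi * rated_miles / 1000
ratio = wh_out_per_rmi / wh_in_per_rmi

print(f"Energy added for a full charge: {full_charge_kwh_in:.1f} kWh")   # ~76 kWh
print(f"Trip meter / energy added: {ratio:.3f}"
      f" ({(1 - ratio) * 100:.1f}% unaccounted for)")
```

So a full charge works out to ~76 kWh added, matching the extrapolation above, with roughly 6% of the input energy never showing up on the trip meter.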
If you're trying to assign an exact "true" efficiency (however you decide to define that) to the charging process, then yes, it's unclear exactly what is being counted by the API pulls used here (as mentioned immediately above). But that doesn't matter much if the objective is to compare efficiency at different charge rates.
I suspect the larger "error" at low power is just that the model is imperfect; e.g., for a better fit, the -5% constant charger loss should perhaps be modeled as a slope or curve itself.
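As a sketch of what that would look like, here are the two loss models side by side: the fixed-overhead model versus a hypothetical version with an added power-proportional term (the 250 W figure is from this thread; the 2% slope is an arbitrary illustration, not a fit):

```python
# Two candidate loss models for AC charging, evaluated at the wall-input power.
def eff_constant(p_in_w, overhead_w=250):
    """Efficiency assuming a fixed overhead draw (the 250 W model)."""
    return (p_in_w - overhead_w) / p_in_w

def eff_sloped(p_in_w, overhead_w=250, slope=0.02):
    """Hypothetical variant: fixed overhead plus a loss proportional to input power."""
    loss_w = overhead_w + slope * p_in_w
    return (p_in_w - loss_w) / p_in_w

for p_in_w in (1000, 2000, 5000, 10000):
    print(f"{p_in_w:>6} W  constant: {eff_constant(p_in_w):.3f}"
          f"  sloped: {eff_sloped(p_in_w):.3f}")
```

The point of the comparison: a constant overhead dominates at low power and vanishes at high power, while a slope term keeps depressing efficiency even at high charge rates, so the two shapes are distinguishable given data across the power range.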
For sure that's a factor. Actually, I was just getting at the fact that your data points with the LCD off deviate considerably from your 250 W model fit (not even looking at the low end). That suggests there is quite a bit of static loss, more than 250 W, which is more than I expected, though it could well be correct. In any case, this additional loss really adds up and hurts efficiency at the low end, and that's not even counting the additional inefficiency from running the AC->DC converter at low input power (which, yes, was not in your model).
So: I was just wondering whether, for a more extended charge interval with the car unbothered, charging at say 2 kW input power, the resulting data would line up better with your 250 W overhead model than your existing data does. That's all. Just wondering whether the somewhat shorter measurement intervals introduced some error; they might not have.
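For what it's worth, here's a toy illustration of why a longer interval might help, assuming the API reports energy added with some fixed granularity (the ~0.245 kWh step, i.e. one rated mile on AWD, is my assumption for illustration, not a known property of the API):

```python
# Hypothetical: if energy added is quantized to one rated mile (~0.245 kWh),
# the worst-case relative error in a 2 kW charging measurement shrinks
# as the measurement interval gets longer.
granularity_kwh = 0.245   # assumed quantization step (one rated mile, AWD)
p_in_kw = 2.0             # input power in the scenario above

for hours in (0.5, 2, 8):
    energy_kwh = p_in_kw * hours
    rel_err = granularity_kwh / energy_kwh
    print(f"{hours:>4} h at {p_in_kw} kW: +/-{rel_err:.1%} worst-case error")
```

At half an hour the quantization alone could account for a double-digit percentage error; over several hours it becomes negligible, which is the kind of effect a longer undisturbed charge would wash out.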
It's all fairly unimportant, of course. Just trying to understand the minor non-idealities that aren't being captured by your model.