3.0 Battery Longevity

I do think it possible that the declining CAC values may in part be due to algorithm errors. But I have no way to prove that.

The fact that CAC jumps up a lot when you range charge is clearly due to an algorithm error. There's just no way that doing that makes the cells better. I suspect that it's also an algorithm error that the CAC drops precipitously when you run the car down to a low SOC. While it's conceivable that that damages the battery that much, it seems more likely that it's an algorithm error.

The open question is how much are the batteries really declining, and how does that compare to the CACs. I have no good answer to that question, though I suppose we could drive a car from full charge until it stops and see how the used Ah compares with CAC. Any volunteers? :)
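
For anyone who does volunteer, a minimal sketch of the comparison (with entirely made-up current samples and CAC figure, not real logged data) might look like this:

```python
# Integrate pack current over a full-to-empty drive (simple coulomb counting)
# and compare the result with the CAC the car reports. All numbers invented.

def integrate_ah(current_samples_a, interval_s):
    """Sum discharge current over time to get total Ah delivered."""
    return sum(current_samples_a) * interval_s / 3600.0

samples = [45.0] * (4 * 3600)   # pretend: 45 A average, one sample per second, 4 hours
used_ah = integrate_ah(samples, interval_s=1)
reported_cac_ah = 185.0         # whatever the car claims at the time
print(f"measured {used_ah:.0f} Ah vs reported CAC {reported_cac_ah:.0f} Ah")
```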
 
  • Like
Reactions: ICON
I do think it possible that the declining CAC values may in part be due to algorithm errors. But I have no way to prove that.
I agree.
The fact that CAC jumps up a lot when you range charge is clearly due to an algorithm error. There's just no way that doing that makes the cells better. I suspect that it's also an algorithm error that the CAC drops precipitously when you run the car down to a low SOC. While it's conceivable that that damages the battery that much, it seems more likely that it's an algorithm error.

The open question is how much are the batteries really declining, and how does that compare to the CACs. I have no good answer to that question, though I suppose we could drive a car from full charge until it stops and see how the used Ah compares with CAC. Any volunteers? :)
It seems pretty clear that the CAC is no longer accurate on the 3.0. But is battery capacity actually deteriorating at a rapid rate? How can we really know for sure?
 
I agree.

It seems pretty clear that the CAC is no longer accurate on the 3.0. But is battery capacity actually deteriorating at a rapid rate? How can we really know for sure?

This is exactly what I asked in the letter (co-signed by a big chunk of the 3.0 community). Two months and counting on what should be an easy answer for the manufacturer.
 
  • Like
Reactions: GSP and dhrivnak
Thanks for clarifying. I do not consider anecdotes to be useful data.

I do think it possible that the declining CAC values may in part be due to algorithm errors. But I have no way to prove that.

The procedure I outlined (specific modes, specific SOC% values) was supposedly documented in Tesla Service recommendations to address a bug in CAC calculation in specific circumstances, but I've never seen it written down as such. Anyway, the values may be different for R80 packs.

I too suspect an algorithm error. We know they don't have people working on the roadster firmware any more, and that algorithm must have been tuned for the behaviour of the old cells.
 
  • Like
Reactions: ICON
I too suspect an algorithm error. We know they don't have people working on the roadster firmware any more, and that algorithm must have been tuned for the behaviour of the old cells.
Even if it is an algorithm issue, I'm thinking that it may not be altogether cosmetic. What will happen when the car thinks the battery is only capable of, say, 200 miles range when it's really over 300, and you try to drive for 250? At best it will leave you with a "range cannot be determined" status for over 50 miles (the ultimate in range anxiety). At worst it will leave you stranded, either because it thinks the "bottom" is closer than it really is, or because it prevented you from fully charging before you left.

In any scenario, this needs to be understood and then, if it's algorithmic, fixed.
 
I too suspect an algorithm error. We know they don't have people working on the roadster firmware any more, and that algorithm must have been tuned for the behaviour of the old cells.

Whatever the issue is, it's not that. The algorithm behaved very, very differently with the old cells. It showed them degrading much more slowly, and if it was overestimating their capacity by too much we'd have seen owners complaining that their car died when they thought they had tons more range.
 
What will happen when the car thinks the battery is only capable of, say, 200 miles range when it's really over 300, and you try to drive for 250?

The CAC and Range algorithms concern the 'top' of the pack; what triggers charging to stop, and what estimated range is shown. This is non-trivial and usually goes way beyond mere cell voltages.

The actual range you can drive concerns the 'bottom' of the pack, and is much simpler. When the lowest brick hits the low voltage limit, you're done.

if it was overestimating their capacity by too much we'd have seen owners complaining that their car died when they thought they had tons more range.

Given the CACs being shown are at most 15% off what a brand new R80 is showing (185 vs 215), I think that would be in the range of individual drive variance; quite hard to see the effect. But, yes, the way to know for sure is to do a drive from 100% down to 0%, measuring the actual kWh used (or charge the car back up measuring at the wall, while taking into account charging losses).
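
As a rough sketch of the at-the-wall version (every number below, including the charger efficiency and nominal pack voltage, is an assumption for illustration rather than a measured R80 value):

```python
# Back-of-envelope estimate of usable pack capacity from a metered
# 0% -> 100% recharge at the wall.
wall_kwh = 80.0               # energy drawn from the wall for the full recharge
charger_efficiency = 0.90     # assumed AC-to-pack efficiency (charging losses)
nominal_pack_voltage = 375.0  # assumed average pack voltage in volts

pack_kwh = wall_kwh * charger_efficiency
estimated_ah = pack_kwh * 1000.0 / nominal_pack_voltage
print(f"~{estimated_ah:.0f} Ah usable, to compare against the reported CAC")
```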
 
  • Like
Reactions: Mark77a
The CAC and Range algorithms concern the 'top' of the pack; what triggers charging to stop, and what estimated range is shown. This is non-trivial and usually goes way beyond mere cell voltages.

The actual range you can drive concerns the 'bottom' of the pack, and is much simpler. When the lowest brick hits the low voltage limit, you're done.
So if both the top and bottom have a direct dependency on actual measurements, how can the reported CAC continue to get worse? Reality should correct the CAC during a full-range event, and the evidence for this so far has been just the opposite. The effects are temporary, and very soon you're back to where you were before.
Given the CACs being shown are at most 15% off what a brand new R80 is showing (185 vs 215), I think that would be in the range of individual drive variance; quite hard to see the effect. But, yes, the way to know for sure is to do a drive from 100% down to 0%, measuring the actual kWh used (or charge the car back up measuring at the wall, while taking into account charging losses).
But we also have before-and-after data from same driver, same car, and the slopes of the two curves are very different. The one example where the slopes matched turned out to be a bad (failing) original pack. Are you telling me that the majority of the new 3.0 packs are bad?

We really need an answer from Tesla Engineering on this. My grip on $30k will not be released until we get one.
 
So if both the top and bottom have a direct dependency on actual measurements, how can the reported CAC continue to get worse? Reality should correct the CAC during a full-range event, and the evidence for this so far has been just the opposite. The effects are temporary, and very soon you're back to where you were before.

I kind of think this is what it's doing. I suspect that it only gets useful data when it's very full or very empty. There are tons of instances where a range charge makes the CAC go up, which is what would happen if it were underestimating the top. There are also instances of CAC dropping when the car is driven a long way (i.e., to low SOC), which could mean it's wrong the other way on the bottom.

When doing several range charges in a row, the increase in CAC seems to get smaller, but it keeps going up for a while. This says to me that it's doing some kind of average between the old CAC and the new measurement, so you don't get the whole increase at once, but do get diminishing returns.
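
A minimal sketch of that idea (the smoothing factor and capacity numbers are purely illustrative, not anything known about the actual firmware):

```python
# Exponential-moving-average style blending of the old CAC with a new
# full-charge measurement. A fixed smoothing factor reproduces the pattern
# of repeated range charges giving smaller and smaller CAC increases.
def update_cac(old_cac_ah, measured_ah, alpha=0.3):
    """Blend the previous CAC with a new full-charge capacity measurement."""
    return (1 - alpha) * old_cac_ah + alpha * measured_ah

cac = 185.0                              # reported CAC before range charging
for measured in (205.0, 205.0, 205.0):   # three range charges in a row
    cac = update_cac(cac, measured)
    print(round(cac, 1))                 # 191.0, 195.2, 198.1 -- diminishing gains
```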

In addition to the experiment of running a battery all the way from full to empty and seeing how much charge (or Ah) it actually produces compared to the CAC, another thing to try would be to take a car that hasn't had all that many range charges and do a bunch in a row. If it's really doing measurements plus averaging (and if it's really consistently been underestimating the top), then the overall increase should be much bigger than what you see charging my car, since I've been doing it regularly. Probably the best car in the dataset to try this with is #33, @slcasner , since it's had few range charges and a pretty linear dropoff. If it wound up climbing back up to 200-205 Ah (roughly where my car was at the same mileage after range charging), that would indicate that the estimates just keep getting worse with no data, and also that mileage is more important than age (which is the opposite of what the CAC algorithm is saying).

We'll eventually get this figured out. I'm not convinced we'll like the answer when we do, but we'll get it.
 
  • Informative
  • Like
Reactions: markwj and Mark77a
Usually with BMS and SOC measurements/calculations you set a few voltage "drift points", which are used to more or less recalibrate the calculation. These drift points sit at the high and low sides of the voltage curve: the high-side points usually drift the SOC up, while the low-side points drift the SOC down.

To elaborate further:

The drift points are known voltage-SOC relationships for the battery cells in use. For example, 3.20V may be used as a known 10% SOC drift point and 4.10V as a known 90% SOC drift point (just examples). When a cell is discharging and hits 3.20V, the SOC will start to correct itself and drift towards 10% SOC. When the cell is charging and hits 4.10V, it will start to correct SOC and drift towards 90%...

This sort of feature may give some insight into the accounting being used by the vehicle to determine SOC. If the battery has much more capacity and you rarely get it to the drift points, the estimate may become relatively inaccurate until it gets a few full cycles in to recalibrate.

Now, I am not the guy who wrote the code for the BMS chips in the Roadster, so I can't honestly say I know what the case may be, but this seems to be consistent with what is being reported...
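
To make the idea concrete, here is a minimal sketch (example voltages, drift rate and tolerance only; not the Roadster's actual BMS code):

```python
# Between drift points the SOC is tracked by coulomb counting; whenever a cell
# voltage crosses a known drift point, the SOC estimate is nudged ("drifted")
# toward the SOC associated with that voltage.
DRIFT_POINTS = [      # (cell voltage, known SOC %) -- example values only
    (3.20, 10.0),     # low-side point pulls the estimate down
    (4.10, 90.0),     # high-side point pulls the estimate up
]

def correct_soc(soc_estimate, cell_voltage, drift_rate=0.1, tolerance=0.01):
    """Drift the SOC estimate toward a known voltage/SOC anchor point."""
    for v_ref, soc_ref in DRIFT_POINTS:
        if abs(cell_voltage - v_ref) <= tolerance:
            soc_estimate += drift_rate * (soc_ref - soc_estimate)
    return soc_estimate

print(correct_soc(25.0, 3.20))   # a 25% estimate drifts toward 10% -> 23.5
```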
 
Usually with BMS and SOC measurements/calculations you set a few voltage "drift points", which are used to more or less recalibrate the calculation. These drift points sit at the high and low sides of the voltage curve: the high-side points usually drift the SOC up, while the low-side points drift the SOC down.

To elaborate further:

The drift points are known voltage-SOC relationships for the battery cells in use. For example, 3.20V may be used as a known 10% SOC drift point and 4.10V as a known 90% SOC drift point (just examples). When a cell is discharging and hits 3.20V, the SOC will start to correct itself and drift towards 10% SOC. When the cell is charging and hits 4.10V, it will start to correct SOC and drift towards 90%...

This sort of feature may give some insight into the accounting being used by the vehicle to determine SOC. If the battery has much more capacity and you rarely get it to the drift points, the estimate may become relatively inaccurate until it gets a few full cycles in to recalibrate.

Now, I am not the guy who wrote the code for the BMS chips in the Roadster, so I can't honestly say I know what the case may be, but this seems to be consistent with what is being reported...
Good insight. So, fundamentally the system is self-correcting with use over time, meaning that the observed trend, if not the exact values, for CAC (and the battery degradation it implies), is real. Yuck.
 
Even if it is an algorithm issue, I'm thinking that it may not be altogether cosmetic. What will happen when the car thinks the battery is only capable of, say, 200 miles range when it's really over 300, and you try to drive for 250? At best it will leave you with a "range cannot be determined" status for over 50 miles (the ultimate in range anxiety). At worst it will leave you stranded, either because it thinks the "bottom" is closer than it really is, or because it prevented you from fully charging before you left.

In any scenario, this needs to be understood and then, if it's algorithmic, fixed.

Been there, or very close to it, last year after getting my 3.0 pack fitted. Did a range charge, topped out at about 329 miles, drove to Manchester (UK) and back to Oxford, about 300 miles round trip. On the way back, in range mode, the car suddenly jumped from 45 miles remaining to "cannot calculate range", leaving me fundamentally terrified that I was about to brick the pack. Got home with a heart-in-mouth moment, everything switched off to minimise power drain except headlights (it was night!) for another ten miles or so, and got the car on charge; it gave the "will charge faster" 120V message, and after 24 hrs all was normal. That sufficiently scared me that I now never use range mode except for a range charge, which I then switch back to normal mode at the 100-mile mark. My SC say they're looking at it, but what that means I have no idea. Summary: it's not fun. :-(
 
  • Informative
Reactions: Mark77a
Been there, or very close to it, last year after getting my 3.0 pack fitted. Did a range charge, topped out at about 329 miles, drove to Manchester (UK) and back to Oxford, about 300 miles round trip. On the way back, in range mode, the car suddenly jumped from 45 miles remaining to "cannot calculate range", leaving me fundamentally terrified that I was about to brick the pack. Got home with a heart-in-mouth moment, everything switched off to minimise power drain except headlights (it was night!) for another ten miles or so, and got the car on charge; it gave the "will charge faster" 120V message, and after 24 hrs all was normal. That sufficiently scared me that I now never use range mode except for a range charge, which I then switch back to normal mode at the 100-mile mark. My SC say they're looking at it, but what that means I have no idea. Summary: it's not fun. :-(
Yes, I had the same experience: when I THOUGHT I had about 20 miles of range left, the car died. No warning, it just died. It did charge just fine, but it was a rather expensive experiment. But I also set a personal record of 375 miles on a charge, and I was NOT hypermiling.
 
  • Informative
Reactions: markwj
Good insight. So, fundamentally the system is self-correcting with use over time, meaning that the observed trend, if not the exact values, for CAC (and the battery degradation it implies), is real. Yuck.

Only if people run the packs to very high or low states of charge and give the algorithm data. I don't think that this is happening all that much, so we just get the no-data assumption in the algorithm. I range charge relatively often, which is probably why my car looks better for the mileage/age.
 
  • Informative
Reactions: markwj
Been there, or very close to it, last year after getting my 3.0 pack fitted. Did a range charge, topped out at about 329 miles, drove to Manchester (UK) and back to Oxford, about 300 miles round trip. On the way back, in range mode, the car suddenly jumped from 45 miles remaining to "cannot calculate range"

I think it's supposed to display "cannot calculate range" at 10% SOC, or ~35 ideal miles. So getting it at 45 means it was only off by 10 miles, or about 3% of the battery capacity, which isn't all that bad. Still scary.
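
A quick check of that arithmetic, using only the figures from the post above:

```python
# 10% SOC is shown as ~35 ideal miles, so a full pack is ~350 ideal miles.
full_range_ideal_miles = 35 / 0.10
error_fraction = (45 - 35) / full_range_ideal_miles
print(f"{error_fraction:.1%}")   # ~2.9%, i.e. about 3% of capacity
```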
 
  • Informative
Reactions: markwj
Only if people run the packs to very high or low states of charge and give the algorithm data. I don't think that this is happening all that much, so we just get the no-data assumption in the algorithm. I range charge relatively often, which is probably why my car looks better for the mileage/age.
Thanks, but after range charging how often do you then run your pack down to a low state of charge? I have only done that a handful of times in the almost two years I’ve had my 3.0 pack, but never down to less than 65 miles of range remaining after doing a range charge.
 
Thanks, but after range charging how often do you then run your pack down to a low state of charge? I have only done that a handful of times in the almost two years I’ve had my 3.0 pack, but never down to less than 65 miles of range remaining after doing a range charge.

A few times, but not all that often. When I do, the CAC goes down. You can see the same thing in the data for other people's cars (assuming that lots of miles driven in a day is correlated with low SOC, which isn't necessarily true, but probably usually is).
 
When doing several range charges in a row, the increase in CAC seems to get smaller, but it keeps going up for a while. This says to me that it's doing some kind of average between the old CAC and the new measurement, so you don't get the whole increase at once, but do get diminishing returns.

^^ This.

The CAC measurement is not an instantaneous reading. It is an algorithm applied over time, most often based on somehow averaging previous history with the current reading. The algorithm's accuracy depends on the type of cell being modelled.

Stories of the range mode drives suddenly jumping to "cannot calculate range" at prematurely high SOC% further reinforce to me that the estimation algorithms (SOC%, CAC, etc) are not modelling the new cells that well.
 
Good insight. So, fundamentally the system is self-correcting with use over time, meaning that the observed trend, if not the exact values, for CAC (and the battery degradation it implies), is real. Yuck.

It can self-correct only if it gets the data, and the algorithm matches the actual behaviour of the cells (so the algorithm's model of what is going on can be accurate). That most likely requires multiple top-end and low-end real charge and discharge readings over time (to allow the algorithm to correct itself using averaging), and no bugs or incorrect parameters in the algorithm.

Here's an analogy: say I have a bucket of water and want to know how much you drink each day. You drink exclusively from the bucket, and after five days you've emptied it. I can get a pretty good estimate that you drink about 1/5 of a bucket a day.

Now, say the bucket is about half empty, and you drink a bit, then top it up from the tap. I can't measure exact quantities, just look at what you are doing and try to estimate the rate of water coming from that tap, and how far the bucket level falls. The estimate I am going to get for water consumption in that second case would be significantly less accurate than with the first method. And how can I estimate the size of the bucket (aka CAC)?

Now say the size of the bucket itself changes over time (as the chemical structure of the plastic degrades), and when you measure again using the first method it takes only four days to empty the bucket. Are we seeing a change in bucket size, water consumption rate, expected statistical variance, or some combination of all three? What is the size of the bucket now? You can get an insight into how difficult it is to accurately model things like CAC, SOH, SOC%, etc.
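
A toy simulation of the bucket problem (with entirely invented noise figures) shows why partial-cycle estimates scatter far more than full-cycle ones:

```python
import random

TRUE_CAPACITY_AH = 200.0   # the "size of the bucket" we are trying to estimate

def full_cycle_estimate():
    # Full discharge: total Ah out, with only a little metering noise.
    return TRUE_CAPACITY_AH + random.gauss(0, 2)

def partial_cycle_estimate():
    # Partial cycle: scale up a ~30% SOC swing, where both the SOC window
    # and the Ah measurement carry their own errors.
    delta_soc = 0.30 + random.gauss(0, 0.03)
    ah_used = TRUE_CAPACITY_AH * 0.30 + random.gauss(0, 2)
    return ah_used / delta_soc

def spread(xs):
    return max(xs) - min(xs)

random.seed(1)
full = [full_cycle_estimate() for _ in range(100)]
partial = [partial_cycle_estimate() for _ in range(100)]
print(f"full-cycle spread ~{spread(full):.0f} Ah, partial-cycle spread ~{spread(partial):.0f} Ah")
```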

I'm not saying that I'm 100% convinced what we are seeing is an algorithm error, but rather that I'm not 100% convinced it is not.