Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Wiki Sudden Loss Of Range With 2019.16.x Software

From the dissertation (what Alchemist forgot to post):

A comparison with 2.0 A CCCV charging, depicted in Figure 52, reveals a considerably faster degradation for supercharging, although the total charging time differs only in the order of 10%. Hence, the high initial charging currents have led to disproportionate degradation. The increases in Rac,1kHz, shown in Figure 71b and Figure 71d, reveal side reactions entailing a decomposition of the electrolyte. Based on these results, lithium plating can be identified as the main driver of degradation for charging with the SC protocols. Due to the poor cycle life performance, the cycling sequence was terminated already after ca. 120 EFC.

Two modifications of the basis SC test protocols were also examined. At first, the discharging voltage was increased from 2.5 V to 3.2 V to reduce the stress at very low SoC. Furthermore, the boost intensity was reduced: During the boost interval, the charging power of 14 W was replaced by a constant current of 3 A to obtain a better comparability to the experiments on CCCV and BC protocols. The boost periods ended when the cell voltage reached 3.9 V, which was identical to the first SC protocols. As illustrated in Figure 71, the increase of the discharging voltage reduces the degradation also for the SC protocols. Capacity fade and resistance increase are lower (see
SC (3.2V)). The lowering of the boost charging currents from ca. 3.7 A to 3 A (see SC-3A (3.2V)) entails a marked improvement compared to the first SC variant. However, the capacity fade and the resistance increase still remains considerably worse than for the 1.0 A CCCV reference protocol, particularly when comparing the curves to those for discharging to 3.2 V only (see CCCV (3.2V)). As
shown in Figure 71c, 500 EFC were obtained for the SC-3A protocol in combination with a charging voltage of 4.1 V before reaching a capacity fade of 20%. However, this is more than twice the capacity fade from 1.0 A CCCV charging for the same discharging voltage of 3.2 V. Overall, substantially higher stress is observed for all SC charging protocols due to their high initial charging
currents, although the charging currents were adapted to the SoC of the cell.
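For readers unfamiliar with the CC-CV terminology the quoted passage keeps referring to, here is a minimal simulation sketch of such a charge protocol. Every cell parameter below (capacity, internal resistance, the linear OCV model) is an illustrative assumption of mine, not a value from the dissertation:

```python
def cccv_charge(capacity_ah=3.1, i_cc=2.0, v_max=4.2, i_cutoff=0.1,
                r_int=0.05, dt_h=0.001):
    """Simulate a CC-CV charge; returns (total_hours, cc_phase_hours).

    Uses a crude linear open-circuit-voltage model (3.0 V empty,
    4.2 V full) -- purely illustrative, not a real cell model.
    """
    soc, t, i = 0.0, 0.0, i_cc
    t_cc_end = None
    while soc < 0.999:
        ocv = 3.0 + 1.2 * soc           # assumed linear OCV curve
        if ocv + i * r_int >= v_max:    # terminal voltage hits the limit
            if t_cc_end is None:
                t_cc_end = t            # constant-current phase ends here
            i = max((v_max - ocv) / r_int, 0.0)  # CV phase: current tapers
            if i < i_cutoff:
                break                   # charge considered complete
        soc += i * dt_h / capacity_ah
        t += dt_h
    return t, (t_cc_end if t_cc_end is not None else t)
```

The point of the reference protocols in the dissertation is visible even in this toy model: a lower constant current spends longer in the CC phase but reaches the voltage limit at a higher SoC, so less time is spent in the taper.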

You see, they already changed the protocol years ago, but were they still relying too much on the BOL behavior of the cells, with thermal management not perfectly adapted?

But I'm far from blaming them; fast charging is a compromise, especially for NCA cathode cells (NCM can go up to 138°F while charging).

Well, I did not really forget to include that section, I merely decided to keep my post a bit shorter. What to include in such a post is every bit as much of a compromise as finding the right charging speed for electric vehicles. ;)
But yes, the section is important. With those results, we can be all but certain that Li plating is to blame for any occurring problems. However, it also becomes apparent that even small reductions in amperage (from 3.7 A to 3.0 A, for example) can have a disproportionate effect on cycle life, which raises my hope that at least the packs which only suffer from Chargegate won't be reduced in capacity any time soon.
NCM does of course have its own set of problems, such as manganese dissolution and deposition on the anode which played a big role in the Nissan Leaf degradation scandal a few years back if I'm not mistaken.
Btw, what are you referring to with the BOL phase? The abbreviation escapes me for the moment.

And this is what customers should believe. From wk057's diagrams you can read that the internal resistance is in line with the datasheet.


Please don't mix up the graphite anode with SiC composite anodes!

And the recovery from anode overhang (a common feature too) is a different mechanism from Li stripping!

Concerning the chemistry, obviously only Tesla and Panasonic know for sure. The most notable difference between the NCR18650PD and Tesla cells is a higher capacity in the latter (~3100 mAh compared to ~2900 in the PD), however I would presume that difference is mostly due to elimination of safety features like the CID and PTC (which might come back to haunt them?). The discharge curves themselves are largely identical, but they don't tell much about minor changes for longer lifetime like new additives in the electrolyte.

Of course the anode from the 90 packs onward uses a different composition, but that does not necessarily imply that the graphite anode in the 85 packs was not continuously tweaked. Unfortunately, I am as of now not aware of any hard data to support my assertion. I presume testing individual cells from salvaged "A" and "B" packs would help, but obviously this is a lot of work to find out presumably small differences. Do you by chance know where to find wk057's IR charts?

I am well aware that there are multiple mechanisms for capacity recovery and never stated otherwise. I just used Li stripping as an example because you mentioned it first, sorry if I didn't make that clear. My post was admittedly a bit ambiguous.

Looks to me like, by shoving high amperage into these packs (a Wild West mentality, to me), they have damaged (prematurely Li-plated) some of these packs, i.e., the impacted owners' packs. Do you concur?

That certainly sounds reasonable. I hope that the lawsuit will uncover more information from Tesla's side. Your analogy is on point, btw. The Wild West was dangerous for sure, but you have to take some risks in order to conquer new frontiers. :)

You would have to find the posts by wk057, but I think it was the bus bars that were changed. And the issue was likely heat buildup from the length of time spent charging, while the 300 kW for driving is not continuous. (Like how they can burst ~750 A through charging infrastructure that is only rated for 350 A continuous.)

Here are the two related posts that I could find quickly:

Thank you for the quick answer. I am not one to doubt wk057, but it seems to me that one change does not negate the other: if the cells can only take 90 kW, there is no need for cables that are capable of more. Does anyone have the original mail by Jerome that is referred to in the first post? If it is indeed true that the A and B revisions have the same cells, then in theory a smaller percentage of A packs should be affected by capacity reduction, because they have always seen easier charging regimes than the B packs and later. Does anyone have data on that?
 
But I'm far from blaming them; fast charging is a compromise, especially for NCA cathode cells (NCM can go up to 138°F while charging).

Are NCM cathode cells used in any German EVs?
 
Btw, what are you referring to with the BOL phase

I did my homework, and I believe it stands for "Beginning of Life":

"As renewable power and energy storage industries work to optimize utilization and lifecycle value of battery energy storage, life predictive modeling becomes increasingly important. Typically, end-of-life (EOL) is defined when the battery degrades to a point where only 70-80% of beginning-of-life (BOL) capacity is remaining under nameplate conditions. Understanding temperature impact on battery performance is equally important to understanding degradation performance from a control or energy dispatch perspective. A battery’s capacity at 0 °C may be just 70% of that under nameplate conditions."

https://www.nrel.gov/docs/fy17osti/67102.pdf
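The BOL/EOL convention from the NREL quote can be written down as a simple state-of-health check. A minimal sketch, with the 70% default taken from the lower end of the quoted 70-80% range:

```python
def state_of_health(current_capacity_ah, bol_capacity_ah):
    """State of health as a fraction of beginning-of-life (BOL) capacity."""
    return current_capacity_ah / bol_capacity_ah

def reached_eol(current_capacity_ah, bol_capacity_ah, threshold=0.70):
    """End-of-life per the common 70-80% convention.

    The 0.70 default is an assumption; real warranties and studies
    pick their own threshold within that range.
    """
    return state_of_health(current_capacity_ah, bol_capacity_ah) < threshold
```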

On Edit: I could even get my answer posted right before one of my teachers did :)
 
Btw, what are you referring to with the BOL phase?
Begin Of Life

The IR you may calculate from his launch chart:

[Attachment: launch1-CANgraph.jpg]
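For anyone wanting to redo that calculation: DC internal resistance can be estimated from the voltage sag between two operating points in such a chart, IR ≈ ΔV/ΔI. A sketch with made-up per-cell numbers, not values read from wk057's chart:

```python
def internal_resistance(v_rest, i_rest, v_load, i_load):
    """Estimate DC internal resistance (ohms) from two operating points:
    IR = (V_rest - V_load) / (I_load - I_rest)."""
    return (v_rest - v_load) / (i_load - i_rest)

# Hypothetical per-cell numbers, NOT taken from the launch chart:
# 4.15 V at ~0 A resting, sagging to 3.90 V at 8 A during a launch.
r_cell = internal_resistance(4.15, 0.0, 3.90, 8.0)  # ohms
```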


First generation of Leaf had LMO cells afaik?
 
guarantee from Tesla that all these packs would be charged to a cell Vmax of 4.2 V forever. I have yet to see such a promise being made.

exactly


degradation is the loss of usable energy storage capacity at a full charge.

right. But the issue is how do you define full charge? You suggest:

on a Tesla charged to 4.2v [is] 100%,

No, that isn't what a full charge is. Sayeth Battery University: full charge is simply when the BMS determines the voltage threshold is reached, but that doesn't have to be at 4.2 V.
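The point, that "full charge" is whatever voltage threshold the BMS targets rather than a fixed 4.2 V, can be sketched in two lines (the threshold values below are illustrative, not Tesla's actual settings):

```python
def is_full_charge(cell_voltage, bms_threshold):
    """Full charge per the BMS: the configured threshold is reached,
    which need not be the chemistry's 4.2 V maximum."""
    return cell_voltage >= bms_threshold

# A BMS targeting 4.10 V declares this cell full, even though the
# chemistry's nominal maximum would be 4.2 V:
full = is_full_charge(4.10, bms_threshold=4.10)
```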

 
First generation of Leaf had LMO cells afaik?

True, but the problem is inherent in all cells containing manganese.

By the way, I stumbled on the forum post that led me to believe that A and B packs contain different cells. The guy salvages totalled Model S packs and sells custom welded packs made from the cells. However, unfortunately he did not post any meaningful data concerning discharge curves or IR of the A packs (or the later 90 kWh pack, for that matter), so this is anecdotal evidence at best.
FS: Tesla Model S 18650 cells! Welded Packs! Panasonic; NCA - Page 5 - Endless Sphere
 
I had a 2013 S85 and upgraded to a 2015 S85D. I’ve put about 40,000 miles on the car, and the range had always shown 271 miles when fully charged. After an update, it shows 13 miles less when fully charged... and I seldom use Superchargers (although I do have a Signature Wall Charger inside my fake Supercharger cabinet between my garage doors).
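As a side note, the drop described above works out to a simple percentage (the numbers are from the post; the arithmetic is just illustrative, and displayed rated miles are only a proxy for true capacity):

```python
# Degradation implied by the displayed-range drop in the post above.
rated_miles_before = 271   # displayed at full charge before the update
drop_miles = 13            # fewer miles displayed after the update

degradation_pct = 100 * drop_miles / rated_miles_before  # ~4.8%
```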
 

The dissertation of Peter Keil, which was linked here already. https://mediatum.ub.tum.de/doc/1355829/file.pdf


NCR18650A

Please note that the graph is at 77°F, where Li plating is the main driver of degradation at 3 A! I don't know how the thermal management at the Superchargers was back in 2014, but maybe some of you were already reading out battery temperature at that time?
In 2013 the BMS limited supercharging current below about 29.5 degrees C. It’s about the same now, except that it severely limits the current as it approaches 50 degrees C. The median supercharging temp is usually about 35 degrees C.
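The behavior described above, full current only inside a temperature window with a severe taper toward ~50 °C, can be sketched like this. The 29.5 °C and 50 °C breakpoints are taken loosely from the post; the maximum current, the upper knee, and the linear taper shapes are my assumptions:

```python
def supercharge_current_limit(temp_c, i_max=250.0,
                              t_cold=29.5, t_hot=43.0, t_cutoff=50.0):
    """Temperature-dependent charge-current limit in amps.

    Below t_cold the current is limited (here: a linear ramp up from
    0 degC); between t_cold and t_hot full current is allowed; above
    t_hot it tapers severely, reaching zero at t_cutoff. All shapes
    and i_max/t_hot values are illustrative assumptions.
    """
    if temp_c < t_cold:
        return i_max * max(temp_c, 0.0) / t_cold
    if temp_c <= t_hot:
        return i_max
    if temp_c < t_cutoff:
        return i_max * (t_cutoff - temp_c) / (t_cutoff - t_hot)
    return 0.0
```

The quoted median supercharging temperature of ~35 °C would sit in the full-current plateau of this sketch.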
 
True, but the problem is inherent in all cells containing manganese.

By the way, I stumbled on the forum post that led me to believe that A and B packs contain different cells. The guy salvages totalled Model S packs and sells custom welded packs made from the cells. However, unfortunately he did not post any meaningful data concerning discharge curves or IR of the A packs (or the later 90 kWh pack, for that matter), so this is anecdotal evidence at best.
FS: Tesla Model S 18650 cells! Welded Packs! Panasonic; NCA - Page 5 - Endless Sphere
There was someone who tested them way back in the beginning(when the A and B pack thing was a big deal), and came to the conclusion that they were different cells.
 
No, especially LMO spinel (LiMn2O4) cathodes exhibit this problem. NCM has far better stability than NCA!

Yep, that's why the first Nissan Leaf was particularly susceptible to that failure mode. You are right about the stability (which mostly means resistance to thermal runaway), but that does not say anything about Mn dissolution and deposition. NCA of course does not have that particular problem, due to its lack of manganese. I'm not saying either of those chemistries is superior per se, just that both have advantages and disadvantages.

There was someone who tested them way back in the beginning(when the A and B pack thing was a big deal), and came to the conclusion that they were different cells.

Yes, that's also what I remembered.
 
exactly

right. But the issue is how do you define full charge? You suggest:

on a Tesla charged to 4.2v [is] 100%,

No, that isn't what a full charge is. Sayeth Battery University: full charge is simply when the BMS determines the voltage threshold is reached, but that doesn't have to be at 4.2 V.
A full charge is when the car reaches the voltage, via the BMS, that it shipped with when one bought the car. Otherwise Tesla, or anyone else, could pull a bait and switch with the car specs, like they have.
 
Not saying any of those chemistries is superior per se, just that both have advantages and disadvantages.
The main disadvantage of NCA is that it degrades directly to a non-conducting rock-salt layer, while NCM degrades to a conducting spinel first. And in thermal runaway, it's the first to release oxygen.

[Attachment: DDK.jpg]


The advantage is that it's cheaper, as long as you can handle its processing from base material to cell.
 
If it is indeed true that the A and B revisions have the same cells, then in theory a smaller percentage of A packs should be affected by capacity reduction because they have always seen more easy charging regimes than the B packs and later. Does anyone have data on that?

Our A-pack 2013 S85 has 3% degradation after 140,000 km and hundreds of Supercharge sessions. It is charged daily to 90% and a few times a month to 100%.

I’ve added my details to a few online battery degradation studies. A manufacture date of 2012 or Jan/Feb 2013 is a good indicator of an A pack.
 
The full charge is when the car reaches the voltage with the BMS that was supplied with the car one bought.

Hilarious. Just writing it doesn't make it so. Even if you write "Period!" at the end.

The BMS software and all the software changes every few weeks. And Tesla can modify their software as they see fit so long as it doesn't cause a breach of the warranty terms.

And for general competitive reasons they will want to modify the software to maximize owner satisfaction across many dimensions, including increasing MTBF and minimizing risk of incidents that might make their owners have to service their cars. That is just good business sense.
 