I have had two different proposals for rooftop solar. One is 4.8kW with 16 LG Mono-X 60-cell 300W modules; the other is 4.62kW with 14 Panasonic HIT 96-cell 330W modules; both with SolarEdge optimizers and inverter. The net cost difference after the tax credit is $1055 in favor of the LG system.

Panasonic's PTC rating is 311.3W x 14 = 4.36kW vs. LG's 271W x 16 = 4.34kW, so despite the LG array's higher nameplate rating, actual production should be almost identical at my house. The average ambient high here is 68°F and we get regular afternoon offshore breezes of 4 to 5 mph - in other words, PTC conditions.

I asked Panasonic what might make the more expensive Panasonic modules worth the difference and was told that:

- LID for Panasonic is only 0.5% (n-type cells) vs. LG's claimed 2%, so after LID the Panasonic system = 4.34kW vs. LG = 4.25kW.
- Panasonic's degradation rate is 0.26% annually vs. LG's 0.55%.
- Panasonic's temperature coefficient is 0.258%/°C vs. LG's 0.41%/°C. Pretty much taken care of in PTC, I guess.
- Panasonic's 96 cells produce higher voltage (max 69.7V) at any given irradiance than LG's 60 cells (max 38.9V), so each module reaches the optimizers' 8V threshold earlier in the morning and stays producing later in the day, thus producing more energy.

So how can I quantify all that? Can the Panasonic system overcome its $1055 initial cost deficit? PVWatts is not specific enough to help much. I suppose I could enter the PTC ratings corrected for LID instead of the STC ratings - I've taken a stab at that in the first sketch below.

The degradation-rate difference is very small, 0.29% annually. Averaging it over time, the cumulative gap works out to roughly 1.5% over 10 years and 2.9% over 20; the second sketch below does the year-by-year sum instead of averaging.

The most interesting point is the voltage-threshold effect. How can one calculate how much difference that would make? Longer active time in the afternoons would mean selling more power to SCE at the higher peak TOU rate. And how does one even arrive at PV productive hours, anyway? The third sketch below is my attempt at both.
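First sketch: the PTC ratings corrected for LID, turned into a rough annual kWh and dollar figure. The specific yield and the blended rate are placeholders I invented - you'd substitute a real PVWatts yield for the actual roof and an SCE TOU-weighted export rate.

```python
# Rough comparison using PTC ratings corrected for LID.
# PLACEHOLDERS: SPECIFIC_YIELD and BLENDED_RATE are made-up numbers;
# substitute a real PVWatts yield and an SCE TOU-weighted rate.

PTC_W = {"Panasonic": 311.3, "LG": 271.0}   # PTC watts per module
COUNT = {"Panasonic": 14, "LG": 16}
LID = {"Panasonic": 0.005, "LG": 0.02}      # first-year light-induced degradation

SPECIFIC_YIELD = 1600   # kWh per kW-DC per year -- placeholder
BLENDED_RATE = 0.20     # $/kWh value of generation -- placeholder

for name in ("Panasonic", "LG"):
    kw_ptc = PTC_W[name] * COUNT[name] / 1000    # PTC array size, kW
    kw_net = kw_ptc * (1 - LID[name])            # after LID settles in
    kwh_yr = kw_net * SPECIFIC_YIELD
    print(f"{name}: {kw_ptc:.2f}kW PTC -> {kw_net:.2f}kW after LID, "
          f"~{kwh_yr:,.0f} kWh/yr, ~${kwh_yr * BLENDED_RATE:,.0f}/yr")
```

With those placeholder numbers the LID gap alone is worth only a few tens of dollars a year, so the degradation and threshold effects would have to carry most of the $1055.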
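Second sketch: instead of averaging the degradation difference, sum year-by-year output with each maker's rate compounding, starting from the post-LID sizes above. Same placeholder yield and rate as before.

```python
# Year-by-year energy with compounding degradation, starting from the
# post-LID array sizes in the previous sketch. Same placeholders.

SPECIFIC_YIELD = 1600   # kWh per kW-DC per year -- placeholder
BLENDED_RATE = 0.20     # $/kWh -- placeholder

systems = {             # (kW after LID, annual degradation rate)
    "Panasonic": (4.336, 0.0026),
    "LG": (4.249, 0.0055),
}

for horizon in (10, 20, 25):
    totals = {}
    for name, (kw0, rate) in systems.items():
        totals[name] = sum(kw0 * (1 - rate) ** yr * SPECIFIC_YIELD
                           for yr in range(horizon))
    gap_kwh = totals["Panasonic"] - totals["LG"]
    print(f"{horizon} yrs: Panasonic ahead by {gap_kwh:,.0f} kWh "
          f"(~${gap_kwh * BLENDED_RATE:,.0f} at the placeholder rate)")
```

If I've set this up right, the combined LID-plus-degradation edge takes on the order of two decades to recover $1055 at these placeholder numbers - so the real yield and TOU-weighted rate matter a lot here.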
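Third sketch: the threshold effect. What's needed is the irradiance at which each module's voltage first clears the optimizer's 8V minimum, and how long the dawn/dusk ramp spends below that irradiance. Open-circuit voltage falls roughly logarithmically with irradiance in a one-diode model, Voc(G) ≈ Voc,STC + N·n·(kT/q)·ln(G/1000), so a crude estimate needs only the STC Voc, the cell count, and an assumed ideality factor. The ideality factor, the linear ramp rate, and the 8V figure itself (quoted from the sales pitch, not a datasheet I've verified) are all assumptions.

```python
import math

# Irradiance at which each module's open-circuit voltage drops to the
# optimizer's claimed 8V minimum, via the one-diode approximation
#   Voc(G) ~= Voc_STC + N_cells * n * (kT/q) * ln(G / 1000).
# ASSUMPTIONS: the ideality factor, 25C thermal voltage, and a linear
# dawn/dusk irradiance ramp are all rough modeling choices of mine.

VT = 0.02569            # thermal voltage kT/q at 25C, volts
N_IDEALITY = 1.2        # assumed diode ideality factor
V_MIN = 8.0             # optimizer startup threshold, volts (as quoted)
G_STC = 1000.0          # STC irradiance, W/m^2

modules = {             # (Voc at STC in volts, cells in series)
    "Panasonic": (69.7, 96),
    "LG": (38.9, 60),
}

RAMP_MIN_PER_WM2 = 0.3  # assumed dawn ramp: minutes per W/m^2 gained

for name, (voc_stc, cells) in modules.items():
    # Solve Voc(G) = V_MIN for G.
    g_min = G_STC * math.exp((V_MIN - voc_stc) / (cells * N_IDEALITY * VT))
    minutes_below = g_min * RAMP_MIN_PER_WM2
    print(f"{name}: clears {V_MIN}V at ~{g_min:.3g} W/m^2 "
          f"(~{minutes_below:.2g} min below threshold per ramp)")
```

When I run this, both modules come out clearing 8V at a vanishingly small irradiance, which would make the extra productive minutes - and any peak-TOU revenue from them - essentially nil, leaving LID and degradation as the only terms worth money. Happy to be corrected if optimizer startup is governed by something other than module voltage.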