I have a primitive understanding of Joule heating, and I also understand P = I×V and the related equations based on V = I×R. However, I do not understand why, in order to deliver 213 kW (for the Roadster as an example), there are 99 batteries in series so that a voltage of 376 V is used. Could a lower voltage, say 200 V, be used instead? A lower voltage would be good for safety, among other benefits; the current delivered would just have to be proportionally higher, per P = I×V. How much resistive loss is there in this battery pack? The way the pack works now is P = 376 V × 567 A = 213 kW of max power. Could it just as well deliver the same power as P = 200 V × 1066 A = 213 kW? How does the battery pack's resistance influence the current delivered? If anyone can shed light on how I should think about this, I would appreciate it. Thanks

More current equals more heat, and more heat means less efficiency. Current is really what is dangerous; voltage is just what scares people, though the two generally go hand in hand. If you doubled the current (by halving the voltage), you would have to double the cross-sectional area of all your copper connectors (effectively more than doubling the amount of copper wire in the car), which is a large cost and weight hit. I fully expect battery packs to go to higher voltages in the future.
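The tradeoff above can be put into numbers. A minimal sketch at the Roadster's 213 kW, with an assumed, purely illustrative wiring resistance (not a measured Tesla figure):

```python
# Resistive loss at fixed power when the pack voltage is halved.
# R is an assumed placeholder for total conductor resistance.
P = 213_000.0   # target power, W (Roadster example from the thread)
R = 0.01        # assumed conductor resistance, ohms (illustrative only)

for V in (376.0, 188.0):
    I = P / V                 # current needed at this voltage
    loss = I ** 2 * R         # Joule heating in the conductors
    print(f"V={V:.0f} V  I={I:.0f} A  I^2R loss = {loss / 1000:.1f} kW")

# Halving V doubles I, so I^2R quadruples in the same copper.
```

Whatever the actual resistance is, the ratio is fixed: half the voltage at the same power means four times the conductor loss.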

I think so as well. If we also want to charge even faster, we'll need to increase the voltage. Superchargers deliver 200 A, and that cable is already becoming bulky. To double the power we'd have to go to 400 A, or we could go from 500 V to 1000 V. The problem with higher voltage, though, is the insulation of the cabling and possible arcs.
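For concreteness, the two doubling options using the figures quoted above (500 V / 200 A; these are the post's numbers, not official specs):

```python
# Two ways to double charging power from a 500 V / 200 A baseline.
V, I = 500.0, 200.0
P = V * I                     # 100 kW baseline
print(P / 1000, "kW")

# Option A: double the current -> thicker, heavier cable
assert 500.0 * 400.0 == 2 * P
# Option B: double the voltage -> more insulation, arc risk
assert 1000.0 * 200.0 == 2 * P
```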

So if I understand correctly: if you double the cross-sectional area of the wiring, then you would experience the same loss you previously had, but with double the current and half the voltage (given that you take a hit in weight and cost). Am I correct?

Almost. Doubling the cross section halves the resistance, so with twice the current the I²R loss actually doubles rather than staying the same; what stays constant is the current density, so the conductors run at roughly the same temperature. To keep the absolute loss the same you would need to quadruple the cross section. Either way, this reasoning applies to the DC circuits of the battery pack. In the AC part of the Tesla drivetrain, the skin effect increases resistance at higher frequencies (think kHz), which cannot be countered effectively just by increasing the cross section.
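The scaling is easy to check numerically. A sketch using the thread's ~567 A figure and an assumed baseline resistance (a placeholder, not a real pack value); resistance falls inversely with cross-sectional area:

```python
# How conductor area changes Joule loss when current doubles.
I0, R0 = 567.0, 0.01      # assumed baseline current (A) and resistance (ohm)

def loss(current, area_factor):
    # R scales as 1/area for a conductor of fixed length
    return current ** 2 * (R0 / area_factor)

base    = loss(I0, 1)         # original pack
doubled = loss(2 * I0, 2)     # half voltage, double current, double the copper
matched = loss(2 * I0, 4)     # quadruple the copper instead

print(base, doubled, matched)
# doubled == 2 * base: same current density, but twice the total heat
# matched == base:     equal absolute loss needs 4x the cross section
```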

Help me understand this, then. The voltage of the Tesla Roadster pack is about 366 V, based on 99 cells in series at a nominal 3.7 V each: 3.7 V × 99 = 366.3 V, so I see no voltage drop, which is in line with what we discussed above. For capacity, there are 69 cells in parallel, supplying approximately 2.2 Ah each (I read somewhere that's the Panasonic 18650 rating; I don't know if it's correct). So that's 2.2 Ah × 69 = 151.8 Ah total. The pack is rated at 56 kWh, and doing the math with the numbers above I get 151.8 Ah × 366.3 V = 55.6 kWh. Given the pack's max discharge rate of around 4C, max power at 4C is 222 kW (max current at 4C is 151.8 × 4 ≈ 607 A, and 607 A × 366.3 V ≈ 222 kW). Since the rated max power is 213 kW, maybe there is a small drop here, but nowhere close to I²R. Why do I not see the expected current loss at max power? Can someone please explain the discrepancies, and why my calculations do not reflect a sizeable loss in the battery pack in line with Joule heating of I²R? Thank you.
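The arithmetic in this post can be reproduced directly. Note that every value here is nominal (open-circuit cell voltage, rated capacity), so by construction the calculation contains no resistance term:

```python
# Reproducing the pack numbers from the post (nominal, no-load values).
cell_v, series = 3.7, 99          # nominal cell voltage, cells in series
cell_ah, parallel = 2.2, 69       # quoted 18650 capacity, parallel strings

pack_v = cell_v * series          # nominal pack voltage, V
pack_ah = cell_ah * parallel      # pack capacity, Ah
energy_kwh = pack_v * pack_ah / 1000
max_a = pack_ah * 4               # 4C discharge current
max_kw = pack_v * max_a / 1000

print(pack_v, pack_ah, round(energy_kwh, 1), round(max_kw, 1))
# -> 366.3 V, 151.8 Ah, 55.6 kWh, 222.4 kW
```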

I already mentioned in another thread the resistive losses caused by current, and how higher voltage minimizes them. There are lots of resources on the internet about this as well.

My understanding is that voltage does not affect the loss directly: high voltage implies a lower current, which in turn results in a lower loss because of I²R. But according to my math above I don't see almost ANY current loss. Why?
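One way to see the missing loss: the nominal V × Ah arithmetic above never includes an internal resistance term, so it cannot show any I²R heating. Under load, the loss appears as voltage sag at the terminals. A sketch assuming a ballpark ~50 mΩ per 18650 cell (an assumption for illustration, not a published Tesla or Panasonic spec):

```python
# Why the nominal math shows no loss: it omits internal resistance.
r_cell = 0.05                         # assumed ohms per cell (ballpark guess)
series, parallel = 99, 69
r_pack = r_cell * series / parallel   # series adds R, parallel divides it

I = 607.0                             # ~4C pack current from the post above
sag = I * r_pack                      # terminal voltage drop under load, V
loss_kw = I ** 2 * r_pack / 1000      # Joule heating inside the pack, kW

print(round(r_pack, 4), round(sag, 1), round(loss_kw, 1))
```

With this assumed resistance the pack sags by tens of volts at full current, which is exactly the I²R loss the nominal calculation hides; a lower real-world cell resistance shrinks the numbers proportionally.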

> Why is such high voltage needed for the battery pack? [curiousguy]

Forget the question of losses for now; you first need an answer to your original question. Follow this reasoning: to charge a 12 volt lead-acid battery, the generator needs to produce 14 volts or more to overcome the 12 volt reverse voltage and actually push electrons INTO the battery, i.e. 'charge' it. So, without climbing inside the Tesla pack to reconnect all the series/parallel cell arrangements into a different scheme so it could be charged at a lower voltage, you are stuck with providing a voltage higher than the nominal 376 VDC. Around 400 VDC would be a good starting point; note that the Tesla fast-charging stations supply in this ballpark.

With 440 VAC being a popular distribution voltage, the choice of 376 VDC makes sense, since rectifying 440 VAC with efficient rectifiers can easily get you over 400 VDC. Forget that EVSEs supply 240 VAC to the car's charge port (or 208 VAC - boo/hiss); that voltage goes into the PEM, not directly to the battery. The PEM converts it to ~400 VDC, which then charges the battery. This is what needs to happen.

If there are excessive IR losses anywhere in the circuits, you simply upsize the conductors that are overheating, reducing losses to an acceptable level. But (and here is your real question) if you halve the battery voltage, you must increase the conductor size to allow for the extra current, per Ohm's law. Then all the contacts need to be upsized too, especially the EVSE plugs. Tesla Motors has reached a compromise among many variables in choosing 376 VDC for the pack.
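The rectification claim above can be sanity-checked with an idealized three-phase full bridge (the post doesn't specify the rectifier topology, so that topology and the lossless assumption are mine):

```python
import math

# Idealized DC output from rectifying 440 VAC (three-phase, no losses).
v_ll = 440.0                          # line-to-line RMS, V
v_peak = v_ll * math.sqrt(2)          # peak line-to-line voltage
v_dc_avg = (3 / math.pi) * v_peak     # bridge average, ~1.35 * V_LL

print(round(v_peak), round(v_dc_avg))
# -> 622, 594: comfortably above the ~400 VDC needed to charge a 376 V pack.
```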

Ah, voltages and amperages. Electricity. The way to get around these limits is to use magnetic energy. But it's hard to control. Anyway, enough hints.