Investor Engineering Discussions

You're mushing different things together. Aero drag and rolling resistance are both part of what auto engineers call "road load". Obviously if you increase aero 10% and hold RR constant, road load will increase by less than 10%. But it's the same for EVs and ICE, thus has no bearing on this discussion.

The engine, drivetrain and parasitic losses in your graphic are not part of road load. They are a side effect of producing power. Increase power and you increase those losses.

A full hybrid like Prius has all the losses shown in your graphic. The engine losses are a bit lower, say 61-65% vs. your 71-75%. So your logic should still apply. But it doesn't -- Prius suffers almost exactly the same range hit due to headwinds, higher speeds, etc. as a BEV.

Why do Prius and BEV suffer similar range hits while conventional ICE sometimes suffers a lesser hit? It's not the absolute level of tank-to-wheel efficiency -- BEV is vastly more efficient than either Prius or conventional ICE. It's due to the way that efficiency changes with power output. Prius and BEV efficiency is pretty constant over a wide range of highway conditions. Conventional ICE efficiency can vary meaningfully at highway speeds, though, and it generally improves as road load increases. It's a second order effect, e.g. my 7.3% vs. 10% range hit example. But it can be noticeable. And in fact that was an early criticism of Prius -- MPG dropped off more dramatically at very high speeds than conventional ICE drivers were used to.

TL;DR - it's not the difference in efficiency that causes EV range hit to sometimes be worse than conventional ICE, it's how that efficiency varies. If you had a vehicle with a constant 1% power train efficiency it would suffer the same range hit as a BEV.
It's the increase compared to the total. If the total vehicle load increased 10%, then fuel usage would increase 10% (assuming constant conversion efficiency), but aero or road load alone is not the total load. If your rear brakes are dragging, 10 MPH of headwind will have a lower percentage impact on range than if they aren't.

If chemical-to-mechanical conversion is 35% efficient, the engine requires 10 HP just to turn, the transmission takes 2 HP, and a certain speed requires 20 HP of road load, that's 32 HP / 0.35 = 91.5 HP worth of gas.
10% more road load = 22 HP + 12 HP parasitics = 34 HP / 0.35 = 97 HP, or about a 6% fuel increase / range decrease.

EV: 90% battery-to-mechanical, 2 HP drivetrain, 18 HP road load = 20 HP / 0.9 = 22.2 HP from the battery.
+10% = 19.8 HP road load + 2 HP = 21.8 HP / 0.9 = 24.2 HP from the battery, or about a 9% battery usage increase.
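Here's a minimal sketch of that arithmetic in Python (same assumed efficiencies and HP figures as above; the function name is just for illustration):

```python
# Power drawn from the tank or battery for a given road load, with fixed parasitic losses
# and a constant conversion efficiency (the post's assumption).
def source_power(road_load_hp, parasitic_hp, efficiency):
    return (road_load_hp + parasitic_hp) / efficiency

# ICE: 35% efficient, 10 HP engine friction + 2 HP transmission = 12 HP of parasitics
ice_base   = source_power(20.0, 12.0, 0.35)   # ~91.4 HP of gas
ice_plus10 = source_power(22.0, 12.0, 0.35)   # ~97.1 HP of gas
print(f"ICE fuel increase: {ice_plus10 / ice_base - 1:.1%}")        # roughly 6%

# EV: 90% efficient, 2 HP of drivetrain parasitics
ev_base   = source_power(18.0, 2.0, 0.90)     # ~22.2 HP from the battery
ev_plus10 = source_power(19.8, 2.0, 0.90)     # ~24.2 HP from the battery
print(f"EV battery usage increase: {ev_plus10 / ev_base - 1:.1%}")  # ~9%
```

The EV takes the bigger percentage hit simply because its fixed overhead is a much smaller share of the total draw, so a change in road load is diluted less.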
 
I find it easiest to think of this in the following way, using examples of consumption at constant speed:
  • At idle, an ICE has 0% efficiency with respect to propulsion.
  • Maximum ICE efficiency occurs at a specific power output, x. Say x = 30 kW for a typical 200 kW gasoline engine.


Chart: Absorbed power as a function of speed. Source: Kadijk and Ligterink, ResearchGate.

As seen in this example chart, a constant speed of 40 mph (70 km/h) requires about 6 kW, which is much closer to idle than to x.
x occurs at about 85 mph (136 km/h).
If the speed increases from 40 mph to 85 mph, the engine efficiency climbs from well below its peak to near its maximum, which partly counteracts the energy lost to the increased road load. The result is fuel consumption that doesn't increase proportionally to the road load.
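To make that offset concrete, here's a minimal sketch with purely hypothetical efficiency figures (not read off the chart):

```python
# Fuel power required at the source for a given road-load power and engine efficiency.
def fuel_power_kw(road_load_kw, efficiency):
    return road_load_kw / efficiency

# Hypothetical ICE operating points: light load / poor efficiency vs. near-peak efficiency.
low_speed  = fuel_power_kw(6.0,  0.15)   # ~40 kW of fuel at the lower speed
high_speed = fuel_power_kw(30.0, 0.35)   # ~86 kW of fuel at the higher speed
print(f"Road load grew {30.0 / 6.0:.0f}x, fuel consumption only {high_speed / low_speed:.1f}x")

# A BEV (or full hybrid) with roughly constant efficiency scales almost linearly with road load:
print(f"BEV: {fuel_power_kw(30.0, 0.90) / fuel_power_kw(6.0, 0.90):.1f}x")   # 5.0x
```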
 

Would they though?

There's already mini-ITX motherboard component supply that handles DC in the 16-19v range, so getting a board built for the Ryzen MCU 3 that was native for the new 16v battery was trivial.

AFAIK nobody makes 48v-input motherboards though -- and developing one, rather than stepping down, seems a pretty significant thing they'd need a third party to make for them, versus doing a dime-a-dozen-part 48v->12v (or 16v) conversion into the MCU... (If you've got newer info I'd love to see it; last I knew they were pushing 48v for data centers, but even then it was mostly sending 48v to racks and doing 48->12v stepdowns from there.)

Yeah, the main reason not to go single stage would be lack of suitable controller chips. And then they would be just as likely to integrate it into the MCU as to use an external module and more harness.
Re-reviewing, the board may be two-stage already. The later silkscreen calls out a 12V rail.

 
What does mini-ITX have to do with anything?

It's an example of industry parts already existing to handle the 16v system on the newer cars, so "designing" a board that takes the car's 16v input to a Ryzen CPU is trivial -- those parts already exist even if the "board" itself isn't just an off-the-shelf MB.

Whereas for places that do have 48v coming in they're still stepping down to 12v, so if Tesla wanted native boards they'd need something that can take 48v native and I'm not aware of that existing.


Only reason to not go single stage would be lack of suitable controller chips. And then, they would be just as likely to integrate it into the MCU as use an external module and more harness.
Re-reviewing, the board may be two-stage already. The later silkscreen calls out a 12V rail.


That's surprising since, as I mention, there are boards today that can handle 16v native no issue -- but it kind of makes a native 48v board even less likely if they're already doing a step down.
 
Yeah, I edited my initial sentence before you posted, after I realized what you meant about motherboards.
Native 48V meaning a Tesla design that can connect to 48V and downconvert from there.

Normal motherboard-grade converters would be expecting a fairly stable input rail, whether 12V or 16V. Automotive supplies are typically not as clean, nor as tightly controlled. So a two-stage step down would provide a buffer between the outside world and the sensitive CPU/GPU/memory.

Additionally, a high-ratio step down would have a really short duty cycle. 48V to 1.2V is 2.5% on-time at steady state. That's not great in terms of ripple and control range. 48V to 16V would be 33% and 16V to 1.2V would be 7.5%; 48V to 12V is 25%, then 12V to 1.2V is 10%. One fourth the ripple current to deal with.
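Those percentages follow directly from the ideal buck-converter relation D = Vout / Vin; a quick check:

```python
# Ideal buck converter duty cycle: D = Vout / Vin (ignores losses and switch drops).
def duty(vin, vout):
    return vout / vin

print(f"48V -> 1.2V single stage: {duty(48, 1.2):.1%}")                          # 2.5%
print(f"48V -> 16V: {duty(48, 16):.0%}, then 16V -> 1.2V: {duty(16, 1.2):.1%}")  # 33%, 7.5%
print(f"48V -> 12V: {duty(48, 12):.0%}, then 12V -> 1.2V: {duty(12, 1.2):.0%}")  # 25%, 10%
```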

This photo from Green has a conversion stage in the bottom left and more on the top left and center top, but I'm not sure what feeds what.
hardware-4-2.jpg
 
TL;DR: "12V" vs "48V" is a trivial thing from a board-engineering standpoint. No modern computer uses 12V directly for anything; everything already goes through DC-DC stages, so supporting a different input voltage range is a trivial change to the voltage regulator setup.

Modern motherboards (and correspondingly CPUs, GPUs, RAM, etc.) are designed around a 12V input and generate many voltages below that. A CPU might require several voltages or just one (in most cases these days, 1.2V or lower, dynamically adjusted in response to performance demands), the RAM another (typically 1.5V for DDR3, 1.2V for DDR4), and the GPU is similar to the CPU in being both dynamic and under 1.5V. Connected PCIe devices potentially get both the same 12V input as the motherboard and 3.3V. In most designs, both 3.3V and 12V come from the PSU, but there's a new standard gaining OEM popularity, ATX12VO -- the "12VO" indicating the PSU only provides 12V and nothing else for the motherboard or peripherals, so any other DC-DC stages are on the motherboard, including 12V to 3.3V for PCIe and powering any traditional 5V devices such as fans, HDDs, etc.

Almost nothing (in most cases, nothing) actually runs on 12V in a modern computer anymore. 12V is a convenient compromise between backwards-compatible design and a reasonably high voltage vs. amperage for sane wire sizes (though with the latest Nvidia GPUs, a 48V input would probably be more sane), and it's pretty much always going through one or more DC-DC stages to power anything. Computers can usually assume the 12V rail input stays within roughly ±0.5V, which is not quite a wide enough range to be happy in a traditional automotive environment (with lead-acid batteries), so designing for automotive would have already required some minor changes. But supporting a wider voltage range is easy enough by just choosing the right components for the job, and trivial from an EE perspective.
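As a toy illustration of the "choose the right components" point (all limits below are hypothetical, not from any real part), you can sanity-check whether a rail's worst-case swing fits a regulator's rated input range:

```python
# Does a rail with the given nominal voltage and worst-case swing stay inside the
# regulator's rated input range? (Hypothetical numbers for illustration.)
def rail_ok(nominal_v, swing_v, reg_min_v, reg_max_v):
    return reg_min_v <= nominal_v - swing_v and nominal_v + swing_v <= reg_max_v

# A desktop-grade input stage rated 10.8-13.2V is happy with a 12V +/-0.5V rail...
print(rail_ok(12.0, 0.5, 10.8, 13.2))   # True
# ...but not with an automotive "12V" system that can sag or spike roughly 9-15V.
print(rail_ok(12.0, 3.0, 10.8, 13.2))   # False
```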

So with this sea of various voltages, and basically nothing likely using 12V directly to begin with, updating a 12V design (which likely had to be 12V+ tolerant in a car to begin with) to "16V" probably doesn't require much change -- perhaps some slight tolerance changes on various input-stage parts, especially if the original design was 12-16V tolerant, they want something more like 14-18V or whatever for the "16V" design, and their original choices weren't a wide enough range.

For a 48V input, you would definitely need to choose different parts (unless you actually designed originally for 12V-48V, which is doable, but the extra pennies you pay for unnecessarily wide input range support on the older "12V" and "16V" cars would add up, so unlikely), but again, doing this is super straightforward from an EE perspective. If you didn't want to redesign the board, you could slap a 48V to 12/16V DC-DC in front, but at Tesla's scale it makes more sense to build a new board revision that takes 48V input natively.

If I were designing this, as soon as I knew 48V was in the future, I would have started updating my designs (at least on the larger devices like the FSD computer and ICE boards) to be a single PCB design, but with different parts used for "48V" or "16V" (including "12V") operation. This way I could have hundreds of thousands (or millions) of PCBs run off identically, and just adjust the percentage of boards populated one way or the other to follow the production demand curve for different vehicle types. Realistically, though, you'd be producing smaller batches, as various board revisions come through to follow changes in the BOM from different changes in IC suppliers and such, even though the overall board would be functionally identical. You can likely even build a board that doesn't leave unused areas, just swapping out 48V-capable for 16V-capable parts as needed or vice versa, rather than having a "48V" area and a "16V" area with only one populated; there should be voltage regulators available in the same package type with both operating ranges.

If "HW4 for Y" and "HW4 for Cybertruck" are materially different, I wouldn't jump to assuming it is just due to "16V" vs "48V" changes. There's probably a bunch of other BOM-level changes going on, that result in changes to PCB layout, etc. Could be anything from switching to different DRAM chip density or package type, to different number of camera inputs, etc.
 
Can you please explain, for the rest of us, whether you are in wild agreement with each other or otherwise?

And then I can start to follow the logic without just doing a complete nodding-dog impression.

I'm serious, because I am genuinely interested, irrespective of being an investor.

Neural nets can be trimmed down to get a useful level of precision for engineering requirements. Tesla did an "acqui-hire" of a Silicon Valley AI company several years ago that specialized in these 'trimming' techniques:

Tesla Beefs Up Autonomy Effort With DeepScale Acqui-Hire | thedrive.com (Oct 1, 2019)

Here's a simple analogy I hope will be useful: engineers use PI all the time (the ratio of a circle's circumference to its diameter). However, PI is an irrational number, with an endless, non-repeating series of digits.

Turns out that level of precision isn't needed for many engineering purposes. Having sufficient precision matters; after that it's a waste of time, space, and effort (lost in mechanical noise in the real world). Instead you can specify useful approximations for PI like 22⁄7 (about 0.040% too high) or 223⁄71 (about 0.0238% too low). Not too bad for 3-digit precision. ;)
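A two-line check of those error figures:

```python
import math

# Relative error of two classic rational approximations of pi.
for num, den in [(22, 7), (223, 71)]:
    err = (num / den - math.pi) / math.pi
    print(f"{num}/{den} = {num / den:.6f}, error = {err:+.4%}")
# 22/7   = 3.142857, error = +0.0402%
# 223/71 = 3.140845, error = -0.0238%
```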

In a similar sense, trimming a neural net involves choosing an acceptable level of accuracy compared to the results given by some larger reference neural net, then reducing the size of the matrices (or the number of NN parameters) until just before that tolerance is exceeded. Voilà. :D
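For intuition only (this is not Tesla's or DeepScale's actual method), here's a numpy toy that prunes the smallest-magnitude weights of a single layer until the output error against the reference exceeds a chosen tolerance:

```python
import numpy as np

rng = np.random.default_rng(0)
W_ref = rng.normal(size=(64, 128))       # "reference" weight matrix
x = rng.normal(size=(128, 1000))         # sample inputs
y_ref = W_ref @ x                        # reference outputs

tolerance = 0.01                         # allowed relative output error vs. the reference
W = W_ref.copy()
for frac in np.arange(0.05, 1.0, 0.05):  # try pruning 5%, 10%, ... of the weights
    W_try = W_ref.copy()
    cutoff = np.quantile(np.abs(W_ref), frac)
    W_try[np.abs(W_try) < cutoff] = 0.0  # zero out the smallest-magnitude weights
    err = np.linalg.norm(W_try @ x - y_ref) / np.linalg.norm(y_ref)
    if err > tolerance:
        break                            # tolerance exceeded: keep the previous level
    W = W_try

print(f"kept {np.count_nonzero(W) / W.size:.0%} of weights within {tolerance:.0%} output error")
```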

This works well for autonomy because millimeter accuracy often isn't needed for navigating 4-meter-wide lanes, while faster execution time matters much more. Millimeters matter when autoparking, which has its own neural net. It's like an engineer's Autonomy toolkit (and not using a wrench like a hammer).

HTH. ;)
 
Neural nets can be trimmed down to get a useful level of precision for engineering requirements. Tesla did an "acqui-hire" of a Silicon Valley AI company several years ago who specialized in these 'trimming' techniques:

Tesla Beefs Up Autonomy Effort With DeepScale Acqui-Hire | thedrive.com (Oct 1, 2019)

And yet 2 years later the NNs had grown so large (while still clearly falling far short of anything better than L2-capable) that they were forced to use extended compute mode, using the second node for compute instead of having 2 redundant ones.

In the 2 MORE years since, they've only grown further, to the point that they can't even run all the NNs at full camera FPS using BOTH nodes for compute... and there's been no evidence of any kind that they'll ever "fit" L3-or-better code back into a single node, when even the L2 stuff, still lacking a complete OEDR, hasn't fit there in a couple of years.
 
Now here's an "Investor/Engineer" if I ever saw one: ( h/t @NicoV #423,560 )

IMO, those 3 answers are basically the same, maybe just focussing on different aspects. I’ll try to rephrase my answer:
Imagine those 300K lines of C++ code as navigating through a giant maze, carefully deciding at each step what step to take next. It runs on the CPU part of the autopilot computer, because that part is really good at such code (i.e. evaluating something and then deciding which step to take next). It is developed by carefully thinking of all the possible situations that might arise, and writing the code corresponding to those situations and what to do next. The output is e.g. the speed at which to drive, which angle the steering wheel should be at, etc.

Imagine the neural network part as a firehose of calculations, doing many calculations at once. It runs on the part of the autopilot computer that is optimised to do hundreds or thousands of calculations at once. It just adds and multiplies numbers until it outputs the same speed to drive and angle to turn the wheel. It is developed by training a neural network by providing thousands of samples of situations and what the speed and angle should be.

Replacing the C++ code with a neural network will lessen the load on the CPU part of the autopilot computer, and increase the load on the neural network part. Only Tesla knows how this affects the available neural network resources on the autopilot computer: while they are adding load on the neural network part of the computer because of the added V12 functionality, they may also optimise (i.e. lessen) the load used by the V11 functionality.
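A toy sketch (obviously not Tesla's code) of the two styles being contrasted: branchy hand-written logic suited to the CPU, vs. a small net that is nothing but multiply-adds suited to the NN accelerator. All names and numbers are made up for illustration.

```python
import numpy as np

def handwritten_controller(lead_car_gap_m, speed_limit_mps, lane_offset_m):
    """CPU-style code: explicit branches, each case thought out by an engineer."""
    if lead_car_gap_m < 10:
        target_speed = 0.0                        # too close: stop
    elif lead_car_gap_m < 30:
        target_speed = speed_limit_mps * 0.5      # close: slow down
    else:
        target_speed = speed_limit_mps            # clear road: drive the limit
    steering_angle = -0.1 * lane_offset_m         # steer back toward lane center
    return target_speed, steering_angle

def nn_controller(features, W1, b1, W2, b2):
    """NN-style code: no branches, just a firehose of multiply-adds (weights come from training)."""
    hidden = np.maximum(0.0, W1 @ features + b1)  # one ReLU layer
    target_speed, steering_angle = W2 @ hidden + b2
    return target_speed, steering_angle

# Random (untrained) weights, purely to show the shape of the computation.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 3)), np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)
print(handwritten_controller(25.0, 27.0, 0.4))
print(nn_controller(np.array([25.0, 27.0, 0.4]), W1, b1, W2, b2))
```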

Placed here in I/E Discussions so it doesn't get buried in the main thread... Thanks @NicoV

Cheers!
 
I think this is about as close to closure on LK99 as we'll get for the moment: replication in a US lab, with a battery of lab tests conducted on the positive sample. TL;DR is that the difficulty in replicating the levitation is likely due to the relatively rare iron impurities in 99.99%-purity precursors, and the resistivity drop at the claimed critical temperature is likely due to a structural phase change in the Cu2S (copper(I) sulfide) contained in the sample; but it never exhibits superconductivity at ambient pressures/temperatures.

 
Hypothetically speaking:-

Could Tesla's experience with electric motors and inductive charging give them any useful expertise when it comes to building an inductive cooktop?

Could a heat pump based electric oven have a performance edge over an oven based on resistive heating?

Could an electric oven and a household heat pump occasionally exchange useful heat, in a way that lowers the energy consumption of one of the products?

I admit that it is a long shot, and if Tesla had any intention of making a product like this, Elon would not have been able to resist the temptation to drop a hint.

I am more curious to know if this is an area where a home appliance can potentially be improved.

One problem with phasing out the domestic usage of gas is that inductive cooktops are expensive.

Gas cooktops can increase the chances of children developing asthma.
 
The main improvement for an inductive cooktop would be to separate the controls from the cooktop. Unless there is some new type of induction, I don't see much room for improvement in the heating. Same for the oven, regardless of type. Most oven failures have to do with the controls overheating. Mostly, the issue with home appliances is that they are no longer made to last -- especially the historical brands.
 
Induction cooktops are cheap.

Elon has dropped hints about this.

The octovalve would be great to recycle heat to and from your oven but not worth replumbing once walls are closed. Putting waste AC heat into hot water heating is a more obvious first step as those two units are frequently in the same room already.
 