You're right, he does make a lot of references to the power consumption of the hardware, and not as often in terms of the FLOPS or whatever computing metric people use nowadays.
In this case, however, he's talking about the cooling tower itself...so it makes sense to describe it in terms of the power it can dissipate, and therefore the power of the hardware it can cool. From a "factory/building owner" perspective, running all the electrical power, the cooling lines, etc. is the specification at the forefront of building design and construction. Similarly, for the compute integrated into a car or an Optimus bot, it is again the cooling and power requirements that have to be worked into the design. So maybe, from the perspective of Elon/Tesla, power just ends up being the metric they focus on the most.
*Edited to add more thoughts.
In the traditional world of high-performance computing (HPC), the twice-yearly Top 500 supercomputer list often
includes power ratings for each installation:
top500.org
The top installation by power draws 38.7 MW, at Argonne National Laboratory. These power budgets are for unified systems, not the dog's breakfast of installations in Texas that Elon is hinting at.
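(For a rough sense of scale, here's the usual flops-per-watt arithmetic as a quick Python sketch; the 38.7 MW is the list figure above, while the ~1 exaflop Rmax is just an assumed round number for illustration, not the exact list value.)

```python
# Back-of-envelope flops-per-watt for a Top500-style entry.
# 38.7 MW is the list figure cited above; the ~1 EF Rmax is an
# assumed round number for illustration, not the exact list value.
def gflops_per_watt(rmax_exaflops: float, power_megawatts: float) -> float:
    flops = rmax_exaflops * 1e18    # Rmax in FLOP/s (FP64 LINPACK)
    watts = power_megawatts * 1e6   # facility power in watts
    return flops / watts / 1e9      # gigaflops per watt

print(gflops_per_watt(1.0, 38.7))   # -> ~25.8 GFLOPS/W
```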
[Aside: One interesting setup is in an old church which houses Spain's largest supercomputer, listed as #8.
I just returned from a vacation there, but it was unfortunately closed for visitation.]
It might look like something out of “Mission: Impossible,” but you don’t have to sneak inside like Ethan Hunt to catch a glimpse of this supercomputer in action.
www.lonelyplanet.com
Mind, the petaflop/exaflop counts are based on the 64-bit floating-point LINPACK benchmark for standardization (using the traditional O(n³) matrix-multiplication operation count, not Strassen's method, etc.). Although these national-lab installations top out at 1.5-2 exaflops in FP64, using FP16 math they are more like 18-20 exaflops, which is closer to the marketing flops of Tesla et al. (9 petaflops per training tile?).
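To make that conversion explicit, here's a quick sketch; the ~10-12x FP16-over-FP64 ratio is just the one implied by the 1.5-2 EF vs. 18-20 EF figures above, and real ratios depend on the hardware.

```python
# Rough FP64-LINPACK to FP16 "AI exaflops" conversion. The ratio is
# the ~10-12x implied above (1.5-2 EF FP64 -> 18-20 EF FP16); actual
# ratios depend on tensor-core throughput, sparsity features, etc.
def fp16_equivalent_exaflops(fp64_exaflops: float, ratio: float = 10.0) -> float:
    return fp64_exaflops * ratio   # ratio is an assumption, not a measurement

print(fp16_equivalent_exaflops(2.0))         # 20 EF, matching the high end
print(fp16_equivalent_exaflops(1.5, 12.0))   # 18 EF, matching the low end
```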
See below for a nice article on how a better equivalence between FP64 measurements and AI-oriented workloads might be established, along with cost-performance tidbits relating to the installation contracts. Note that the AMD Instinct series (MI250 transitioning to MI300X GPUs) is well represented in the Top 500 alongside Nvidia H100 accelerators. The first 2-exaflop FP64 machine will be Livermore's El Capitan, using AMD; they are willing to spend 30-35 megawatts (and ~$500M capex) on it, and they are not short on power, given their laser fusion efforts.
The question is no longer whether or not the “El Capitan” supercomputer that has been in the process of being installed at Lawrence Livermore National
www.nextplatform.com
On power again, here is Oak Ridge (ORNL) chatting about their 30-megawatt installation using a new
power plant, while *all* of ORNL uses 150 MW for the entire campus.
Today we have the new ORNL Frontier supercomputer taking the top spot on the Top500 using HPE Cray EX with Slingshot and AMD CPUs and GPUs
www.servethehome.com
So, Elon humblebragging about 150-500 MW of peak cooling must be Texas talk, but it's fairer to normalize
against actual computing power.
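As a quick example of that normalization, using only the figures already mentioned (El Capitan at ~2 EF FP64 for 30-35 MW; Tesla's marketing numbers would need the same treatment before any apples-to-apples comparison):

```python
# Normalize megawatts against delivered FP64 compute, using the
# El Capitan figures cited above (~2 EF for 30-35 MW). These are
# the numbers quoted in this thread, not measured list values.
def exaflops_per_megawatt(exaflops: float, megawatts: float) -> float:
    return exaflops / megawatts

lo = exaflops_per_megawatt(2.0, 35.0)
hi = exaflops_per_megawatt(2.0, 30.0)
print(f"El Capitan: {lo:.3f}-{hi:.3f} EF/MW (FP64)")   # ~0.057-0.067
```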