* Posts by Lomskij

7 publicly visible posts • joined 15 Apr 2023

Additional hatch operations on a Boeing vehicle – but this time it's Starliner


Just put Boeing's space program out of its misery and stick with SpaceX...

To quench AI's thirst, the way we build, operate datacenters needs to change


A bit of nitpicking

Can you please stop posting invalid comparisons like this:

"For reference, Submer and LiquidStack, two immersion-cooling vendors, often tout PUE ratings of less than 1.05 — making them far more efficient than typical air-cooled datacenters which usually come in at 1.4-1.5."

A PUE of 1.05 is the efficiency of an immersion tank in isolation, while a data centre PUE of 1.50 is the efficiency of the whole facility, including power transmission losses, CRAH units, chillers, etc.

If you compare the PUE (or pPUE, to be precise) of a Submer immersion tank, at 1.05 on average, against a typical air-cooled OCP rack at 1.10, then your total data centre PUE will drop from 1.50 to 1.45, because power loss and chiller consumption are not affected - if your servers need 10MW, that power has to be transmitted and the heat vented into the air regardless of the heat collection method.
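A quick sketch of that arithmetic, using the figures above. The 4MW of fixed overhead is an assumption, back-derived from the 1.50 baseline PUE after subtracting the rack-level cooling share:

```python
# Worked example of swapping rack-level cooling (pPUE) while the
# facility-level overheads stay fixed. Figures are the comment's
# ballpark numbers, not measured data.

IT_LOAD_MW = 10.0  # power drawn by the servers themselves

# Assumption: transmission losses + chillers, unaffected by the
# heat-collection method (1.50 baseline minus the rack cooling share)
FIXED_OVERHEAD_MW = 4.0

def total_pue(rack_ppue: float) -> float:
    """Whole-facility PUE given the partial PUE of the rack cooling."""
    rack_cooling_mw = IT_LOAD_MW * (rack_ppue - 1.0)
    total_mw = IT_LOAD_MW + rack_cooling_mw + FIXED_OVERHEAD_MW
    return total_mw / IT_LOAD_MW

print(total_pue(1.10))  # air-cooled OCP rack -> 1.5
print(total_pue(1.05))  # immersion tank      -> 1.45
```

The point the sketch makes: improving the rack-level pPUE by 0.05 moves the facility PUE by the same 0.05, because the rest of the overhead is untouched.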

Back to the topic - lots of countries are now banning the use of adiabatic coolers outright; try to get permission to build an evaporatively cooled data centre in the UK and you'll see what I mean. Even some states in the US are doing that - an increasing number of new data centres are either being built further north or adopting new solutions like hybrid chillers with free cooling. I would take these AI water consumption figures with a big grain of salt.

European datacenters worried they can't get cheap, reliable juice


Depends on geography, unfortunately. If you live somewhere cold and can use free-cooling chillers, have access to lots of water for evaporative cooling, or can even dump your heat into a nearby river, it won't require a lot of energy. Combine that with immersion cooling and a closed water loop, and your cooling is pretty much free.

However, if you live somewhere hot, with no water, then yeah, it'll cost you a lot...


Re: Standalone vs Integrated

You make it sound so easy... Unfortunately, converting the low-quality heat that DCs output into something useful for district heating requires massive investment - who's going to pay for that? Councils in the UK with no cash? DC owners with zero ROI on this "investment"?

Why Microsoft is really abandoning evaporative coolers at its Phoenix DCs


"For example, direct liquid cooling or immersion cooling, both of which are significantly more efficient compared to air-cooled systems." - won't help, as all the heat needs to be vented into the atmosphere somehow. Data centres in the southern states either use adiabatic coolers or pretty much double their energy consumption - either way, it's not sustainable.

LiquidStack CEO on why you shouldn't ignore immersion cooling


Re: nope

We're running immersion cooling in our DCs with no problems - there are adaptors for power and network cables specifically to prevent the capillarity you described in your post. Regarding the lack of benefits - we're running 100kW worth of GPU servers in each 42U tank; good luck achieving that in air-cooled racks.


The guy has no idea what he's talking about. PUE is the ratio between the total energy consumed by the data centre and the energy delivered to the IT equipment. A PUE of 1.5 means that for every 100kW consumed by the servers, 50kW is consumed by infrastructure such as power delivery and heat rejection. As a ballpark, those 50kW break down into: 10kW transmission loss (transformers, UPS, etc.), 10kW rack cooling, and 30kW heat removal - CRAH units, chillers, water pumps, etc.

While it is correct that an immersion cooling tank has incredible power efficiency, especially compared to air-cooled racks, the consumption of transformers and chillers will stay the same. So if you take a typical HPC data centre with a PUE of 1.5 and replace all the OCP racks with immersion cooling, the PUE will drop to 1.4 at best, not to 1.05.
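The breakdown above can be checked with a few lines of arithmetic. The optimistic assumption here is that immersion eliminates the rack-level cooling draw entirely; the transmission and heat-removal figures are the ballpark numbers from the comment, not measured data:

```python
# Sketch of the PUE breakdown described above: 100kW of IT load plus
# three infrastructure overheads, before and after swapping the rack
# cooling for immersion.

it_load_kw = 100.0

overheads_kw = {
    "transmission loss (transformers, UPS)": 10.0,
    "rack cooling (fans, CDUs)": 10.0,
    "heat removal (CRAH, chillers, pumps)": 30.0,
}

pue = (it_load_kw + sum(overheads_kw.values())) / it_load_kw
print(f"air-cooled PUE: {pue:.2f}")  # -> 1.50

# Optimistic case: immersion removes all rack-level cooling draw,
# but transmission and heat-removal overheads are untouched.
overheads_kw["rack cooling (fans, CDUs)"] = 0.0
pue_immersion = (it_load_kw + sum(overheads_kw.values())) / it_load_kw
print(f"immersion PUE:  {pue_immersion:.2f}")  # -> 1.40
```

Even zeroing out the rack cooling entirely only gets the facility from 1.50 to 1.40, which is the "1.4 at best" figure - nowhere near the tank-level 1.05.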