Swedish data centre offers rack-scale dielectric immersion cooling

Dielectric fluids conduct heat but don’t conduct electricity, which is why they’re a fine way to cool electronics. But they’re seldom used to cool servers at scale because even though dielectrics won’t harm the machines, the mere idea of lots of liquids inside a data centre provokes entirely justifiable fear. So why and how …

  1. James Anderson

    Interesting but .....

    Coolants -- what exactly are they using? The PCBs they used to use in transformers were incredibly toxic.

    Fire risk -- oil leak, lots of electrical wires gotta be a risk.

    And worst of all-

    Condensation -- water condenses on cold surfaces, water dripping on to your expensive electrical toys cannot be good.

    1. Anonymous Coward

      Re: Interesting but .....

      IIRC (only on the fire part), these fluids are not flammable. Might be toxic though. :(

      These are not sub-ambient, so no water condensation.

    2. Chris G

      Re: Interesting but .....

      Dielectrics for rack cooling currently come in two main types: single-phase and two-phase. The single-phase fluids I have read about are non-toxic, have a high flash point and are comparatively long-lived. Because single-phase fluid doesn't boil off, it can be used as a coolant in an open bath, which makes for easier maintenance than a system using sealed containers.

      Still a bit more effort than a fully open rack though.

      1. teknopaul

        Re: Interesting but .....

        The idea that allowing servers to be less power efficient is Eco is pure marketing bs.

        Clever tech but "Eco" is painting a green leaf on a petrol engine.

        1. Dr. Mouse

          Re: Interesting but .....

          From reading this, how is it allowing servers to be less power efficient?

          The main thing it does is allow servers to be more densely packed. There is a limit to the amount of computation we can currently get per unit of energy, and so a limit to the amount of "work" which can be performed in a rack within a certain power envelope. To go beyond that, we have to increase the power envelope, which is what this did. It doesn't let servers be less efficient; it allows more servers to be crammed into the same rack space. As the article mentions, datacentres are basically real estate, so this is like building a block of flats instead of a bungalow.
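
          A back-of-the-envelope sketch of that point, with mostly hypothetical figures (only the 500 MW and 140 kW numbers come from elsewhere in this thread; the per-server wattage and the air-cooled rack budget are assumptions): for a fixed amount of compute, a bigger per-rack power envelope just means fewer racks, not less efficient servers.

          ```python
          # Rough density arithmetic; per-server wattage and air-cooled rack budget are assumptions.
          TOTAL_IT_LOAD_W = 500_000_000   # the "500 MW worth of servers" mentioned below in the thread
          SERVER_POWER_W = 500            # assumed draw per server
          AIR_COOLED_RACK_W = 15_000      # assumed budget for a conventional air-cooled rack
          IMMERSION_RACK_W = 140_000      # the 140 kW per-rack figure quoted later in the thread

          def racks_needed(rack_budget_w: int) -> int:
              servers_per_rack = rack_budget_w // SERVER_POWER_W
              total_servers = TOTAL_IT_LOAD_W // SERVER_POWER_W
              return -(-total_servers // servers_per_rack)  # ceiling: part-filled racks still take floor space

          print(racks_needed(AIR_COOLED_RACK_W))   # ~33,334 racks
          print(racks_needed(IMMERSION_RACK_W))    # ~3,572 racks for the same total compute
          ```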

        2. Anonymous Coward

          Re: Interesting but .....

          We will know in some 50 years which is the more eligible for a green leaf sticker, the petrol engine or the battery-fed electric engine. Needless to say, I want an electric motor fed from some kind of energy cell.

    3. Kevin McMurtrie Silver badge

      Re: Interesting but .....

      There are plenty of oils that are inert, difficult to ignite, and have a flash point above solder's melting point. Motor oil with no metal additives could work.

  2. redpawn

    With 500MW

    worth of servers, build a power plant next door to take advantage of the extra heat and generate electricity. Yes, yes, entropy and all, but that can be legislated against.

    1. HPCJohn

      Re: With 500MW

      It is in the Nordics. They probably have existing hydro electric capacity next door.

      For the UK, I have often wondered why there is not a green data centre in Kinlochleven.

      There was a hydroelectric plant there for an aluminium smelter.

      https://en.wikipedia.org/wiki/Kinlochleven

      I guess the hydro plant may no longer be active, and not worth reviving.

      1. rmacd

        Re: With 500MW

        You got me all excited there for a second.

        From Wikipedia:

        > The associated hydro-electric plant was converted into a general purpose power station connected to the National Grid

        1. Dave Pickles

          Re: With 500MW

          There is a photograph in the Kinlochleven museum showing what happened when they had a pipe burst in the power station. Not the kind of Dielectric Cooling most data centre owners would be keen on.

    2. Anonymous Coward

      Re: With 500MW

      I suppose you might get to 70-80 C in the outgoing coolant, maybe not even as much, which is pretty low-grade heat. There are methods to generate electricity from that, but not economically, even with a free heat source. More practical: data centre + tomato growing etc.

      1. rcxb Silver badge

        Re: With 500MW

        You're correct, but you can still use that "low grade heat" to at least pre-heat the incoming working fluid (water) for a power plant or similar. Would be perfect if you needed to melt snow on a massive scale.

    3. Anonymous Coward

      Re: With 500MW

      "entropy... can be legislated against "

      .... but not to any effect.

    4. Loyal Commenter Silver badge

      Re: With 500MW

      To generate electricity, you need pretty steep heat gradients (think superheated steam vs ambient).

      The waste heat from data centres is likely to raise temperatures by no more than a few tens of degrees above ambient, far too small a gradient to extract work effectively. To get a steeper gradient, the heat source (the servers) would need to run hotter, which is the very thing you're cooling them to avoid in the first place.

      The waste heat could effectively be used (and may well be) for local heating, in much the way that waste heat from geothermal energy production in Iceland is used to keep the streets of Reykjavík ice-free in winter.
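
      To put a number on the gradient point, here's a quick Carnot-limit sketch (ideal-case physics; real plants do considerably worse, and the 75 C and 550 C figures are just stand-ins for warm data-centre coolant and superheated steam): even the theoretical maximum efficiency from low-grade waste heat against ambient is small.

      ```python
      # Carnot limit: the best any heat engine can do between two temperatures.
      def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
          t_hot_k = t_hot_c + 273.15
          t_cold_k = t_cold_c + 273.15
          return 1.0 - t_cold_k / t_hot_k

      print(f"{carnot_efficiency(75, 20):.1%}")    # ~15.8% ideal maximum from warm data-centre coolant
      print(f"{carnot_efficiency(550, 20):.1%}")   # ~64.4% for superheated steam, by comparison
      ```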

  3. This post has been deleted by its author

  4. vtcodger Silver badge

    In days of yore

    In the distant past -- say two or three decades ago -- some folks cooled REALLY high performance PCs by dumping the whole device into a container of oil and cooling the oil with a heat exchanger. I never tried that, but I was told that it worked fine as far as cooling went. The problem that limited its utility was that the oil eventually found its way out into the environment through every minute opening, like the gaps between the insulation and the conductors in the wiring leading to the power switch and indicator lights. Things got quite messy after a few hours, I was told.

    1. tony2heads

      Re: In days of yore

      I saw it done a couple of years ago for a highly overclocked system. The circulation was by convection only.

      It worked well, but needed an oil change on a regular basis.

      1. John Brown (no body) Silver badge
        Boffin

        Re: In days of yore

        Caution, for best performance use only approved oil and filters, change every 30,000 Mips.

    2. Pascal Monett Silver badge

      Re: In days of yore

      You can still find YouTube videos of guys doing that - and at least one is only a few weeks old.

      I used to do watercooling when it was DIY. I still have a massive finned heatsink and two pumps that I cannot bear to get rid of.

      These days, watercooling is practically the default solution. The range of CPU coolers that use it is impressive, and motherboards use nothing else.

      1. Boothy

        Re: In days of yore

        I haven't used an air cooler on a CPU for over a decade (in systems I've built).

        All-in-one water coolers these days are so simple to fit, easier than most air coolers. They also generally provide better cooling and are normally quieter doing it than an equivalent air cooler (due to the lower fan speeds needed), and don't get in the way of your memory slots.

        Only real issue tends to be price, plus eventually they'll need to be replaced as they can bung up internally over time.

        Never done a custom loop yet, but I think I'd only look at that if I was sticking something like a 3950X and a 2080Ti in the same case, and then I'd water cool both CPU and GPU. But they are stupidly expensive, and I just can't justify that sort of money on a hobby machine.

        1. Anonymous Coward

          Re: In days of yore

          In tests, water cooling is not "better" as such. It has a larger heat sink (the water, compared to a copper fin or two), so it's harder to saturate. Air cooling is still the simplest. And I assume even the noise benefit comes from not saturating it as quickly, so the fans don't ramp up as fast. Unless you have more fans on the water cooler's radiator, in which case it's just more fans, and a big copper heatsink with the same fans would do much the same at a similar dB level.

          Is it more flexible in positioning in cases? Does it allow for larger radiators? Yes. Is it "better"? Results may vary. ;)
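
          A rough sketch of the "harder to saturate" point, with assumed masses (roughly a litre of loop water versus a kilo of copper fins), textbook specific heats, and ignoring whatever the fans remove in the meantime:

          ```python
          # Temperature rise from a short heat burst; masses are assumptions, specific heats are textbook values.
          WATER_MASS_KG, WATER_C = 1.0, 4186    # ~1 litre of coolant, J/(kg*K)
          COPPER_MASS_KG, COPPER_C = 1.0, 385   # copper fin stack, J/(kg*K)

          burst_joules = 100 * 60               # a 100 W spike sustained for 60 seconds

          dt_water = burst_joules / (WATER_MASS_KG * WATER_C)
          dt_copper = burst_joules / (COPPER_MASS_KG * COPPER_C)
          print(f"water loop rises ~{dt_water:.1f} K, copper heatsink ~{dt_copper:.1f} K")  # ~1.4 K vs ~15.6 K
          ```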

          1. Dr. Mouse

            Re: In days of yore

            "Unless you have more fans on the water cooler sink, in which case, it's more fans and a big copper heatsink would also do the same and similar dB noise."

            I think it comes down to surface area. There's a limit to the amount of heat you can extract using just a copper heatsink because of the distance the heat must travel (very much simplifying it). Heat pipes do much better (better than water cooling, in fact), but they are limited by not being flexible.

            A liquid cooler, however, can be positioned and configured in such a way that it gives a massive surface area by comparison, and much better heat transfer.
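
            A minimal sketch of the distance point, using the standard 1-D conduction formula R = L / (k x A); the dimensions are made up purely for illustration:

            ```python
            # Fourier conduction resistance R = L / (k * A); dimensions are illustrative assumptions.
            K_COPPER = 400.0   # W/(m*K), approximate thermal conductivity of copper

            def conduction_resistance(length_m: float, area_m2: float) -> float:
                return length_m / (K_COPPER * area_m2)

            # Pushing heat ~5 cm up a solid copper column over a 40 x 40 mm footprint,
            # versus ~2 mm through the thin base of a water block with the same footprint.
            print(f"{conduction_resistance(0.05, 0.04 * 0.04):.3f} K/W")   # ~0.078 K/W
            print(f"{conduction_resistance(0.002, 0.04 * 0.04):.3f} K/W")  # ~0.003 K/W
            ```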

          2. Boothy

            Re: In days of yore

            @TechnicalBen

            Quote: In tests water cooling is not "better" as such.

            Granted 'better' is subjective, but a good water cooler will always * beat a good air cooler. Water is just more efficient at moving heat around, and normally you'd mount the water cooler rad on the top of a case, venting heat directly outwards, unlike an air cooler that vents within the case and so then needs that heat removed via case fans. (You still have case fans in a water cooled system, they just have to do less work as they are only moving GPU and motherboard heat, so they tend to run slower, so quieter.)

            * 'beat' is only by something like 2-4 deg c. Which may not matter to you, and so may not be worthwhile depending on your use case, but could matter to someone else.

            * Also a good aircooler will beat a bad, or mediocre water cooler.

            Gamers Nexus did a good video on this only a few months ago. Liquid Cooling vs. Air Cooling Benchmark In-Depth

            That includes running the coolers at 100% fan speed (so best cooling performance irrespective of noise), and also noise normalised testing (so same db for all coolers).

            Quote: Is it more flexible in positioning in cases?

            Yes, although that's kind of obvious ;-) An aircooler can only go in one location, whereas a water cooler rad can be (depending on case) mounted on the top, front or bottom of the case. Most modern mid tower cases are designed with water cooling in mind.

            Personally, I'd also say that overall an All-In-One is easier to fit, at least compared to a good (i.e. large tower) air cooler, based on experience with both types. Although a simple small air cooler (like the sort AMD provide with their CPUs) is easier still.

            Quote: Does it allow for larger radiators?

            Yes, the standard sizes (for an All-In-One) are normally 120, 240 or 360, which have 1, 2 or 3 x 120 mm fans respectively, or 140, 280, 420, which have 1, 2 or 3 x 140 mm fans respectively. With the radiators matching the area that the fans cover. So a 280 would have a rad double the size of a 140 etc.

            Radiators are usually slightly wider than the width of the fans, and are longer than the posted size.

            For example, a '280' (one of the more common ones in use) uses 2 x 140 mm fans, so the rad would be around 143 mm wide, 315 mm long (as it needs to accommodate the fans plus a small reservoir at one end and the pipe connections at the other), and around 30 mm thick. Although these sizes vary by model and make.
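
            Purely restating those sizes (nominal fan coverage only; as noted, actual rads run a little wider and longer):

            ```python
            # Nominal fan-covered area for the standard AIO radiator sizes listed above.
            SIZES = {"120": (120, 1), "240": (120, 2), "360": (120, 3),
                     "140": (140, 1), "280": (140, 2), "420": (140, 3)}

            for name, (fan_mm, fans) in SIZES.items():
                area_cm2 = fan_mm * fan_mm * fans / 100.0   # mm^2 -> cm^2
                print(f"{name}: {fans} x {fan_mm} mm fans, ~{area_cm2:.0f} cm^2 of fan-covered rad")
            ```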

            Quote: Yes. Is it "better"? Results may vary. ;)

            Indeed they do vary :-)

            Edit. And just to mention, all the above is related to AIO (All-In-One) rather than a custom loop.

        2. squigbobble

          Re: In days of yore

          I've been using XPSC EC6 non-conductive coolant (smells like rapeseed oil and probably is) for about 4 years and I've never had any sort of buildup. The only thing it seems to do is form condensation in the air spaces in the loop, which is odd seeing as it's s'posed to be a waterless coolant... maybe I got some moisture into the loop. It stains things if it's left to dry on them and leaves a sticky mess if there's enough of it.

          I've unintentionally tested its non-conductivity with my graphics card, and it really is non-conductive. I've also been reusing the same litre of it by draining the loop back into the bottle, without any problems.

          1. teknopaul

            Re: In days of yore

            Our hacklab had a fish tank* filled with oil and a 486 sunk in it that ran for years. CPU fan and all.

            * Complete with sand, placcy fish, lights and a sunken galleon.

      2. Steve K

        Re: In days of yore

        Linus TechTips had an oil-cooled one in a fish tank (without fish but with gravel and ornaments), with wires exiting via the top of the tank. They used a pump, rather than just relying on convection.

  5. Julz

    You

    can have liquid cooling without putting your computer in a bath. You have to design for it, but it's not too hard. ICL's SX mainframe range was all liquid cooled, even though that wasn't much talked about, as it was deemed it might scare off customers given the not-so-good experiences with IBM's attempts at liquid cooling.

    If you would like a trip down memory lane: link (pdf)

    1. vtcodger Silver badge

      Re: You

      you can have liquid cooling without putting your computer in a bath.

      Indeed -- CDC's supercomputers of the 1960s were water cooled. I know that because I was thrown off a CDC 6400 in one of their labs on a chilly May afternoon because -- as an agitated engineer informed me -- the computer had "sprung a leak".

      1. Keith Oborn

        Re: You

        Cray 2. I always had a hankering to chuck some fake goldfish in--

  6. tr7v8

    Coolants used are typically things like 3M Novec (which is actually used as a fire-extinguishing liquid) or synthetic oils. No chance of condensation, as you design the system so it doesn't go through the dew point.

    As regards heat out, if you can achieve an inlet temp in the 50s then you can use it to provide building heating via an underfloor system, generally using a heat exchanger.

    Even at lower temps you can use a heat exchanger to raise the outlet temp to enable heat reuse. The minor problem is that you generally want maximum DC cooling in the summer, and you don't need building heating then.
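
    A rough sizing sketch of the building-heating idea; the 100 kW of recovered heat and the ~75 W/m2 underfloor demand are both assumptions, not figures from this thread:

    ```python
    # How much floor area a slug of recovered data-centre heat might keep warm (all numbers assumed).
    recovered_heat_w = 100_000    # heat recovered via the heat exchanger
    underfloor_w_per_m2 = 75      # typical-ish underfloor heating demand

    print(f"~{recovered_heat_w / underfloor_w_per_m2:.0f} m^2 of heated floor")   # ~1333 m^2
    ```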

    Some issues are technology lock-in: you're beholden to whatever server model and technology they use. Also, if using oil, servicing is a truly horrible job. Novec is much better but horrifically expensive, $50 a litre when I last looked.

  7. martinusher Silver badge

    De-ionised water works OK

    Back in the early 80s I worked on a radio transmitter that used water cooling. The water was de-ionised because the main source of heat was sitting some 10 kV above earth, so it needed a fair bit of plastic tubing between it and the rest of the system. The only snag with it was that pure water freezes at 0 C 'on the money' and, Murphy's Law being what it is, there was a cold snap over the Christmas break that froze up everything. (You know what it's like -- you dream of a White Christmas but all you get is mud and maybe a bit of slush.)

    I've also used informal water cooling on the loads used to test high-power audio amplifiers. (It's a highfalutin way of saying "put the load resistors in a bucket of water".)

    1. TDahl

      Re: De-ionised water works OK

      In the late 1990s at DEC, we had a prototype AlphaServer that was water-cooled. The server (a 6U or 8U rack-mounted model as I recall) was sealed in a clear plexiglass/perspex box which was fitted with a number of spray nozzles. Deionized water was circulated directly in, on, and through the complete server chassis, circuit boards, everything soaking wet under a fine blast of water. It was freaky to see it running.

      I think "SprayCool" was the planned branding, but it was not brought to market by DEC. A brief web search reveals a product line with the same name offered by Parker Aerospace for electronics gear today; I don't know if there is any relation to DEC's earlier work.

  8. Anonymous Coward

    This would only work for cloud computing

    Having to drain an entire rack of coolant in order to replace faulty memory or whatever would be too difficult. It seems like you could only use it for truly "lights out" datacenters that fail in place, with a full rack as your minimum unit size for repair/replacement.

    1. Dr. Mouse

      Re: This would only work for cloud computing

      https://xkcd.com/1737/

    2. tr7v8

      Re: This would only work for cloud computing

      Nope, you just remove the one server. Then, depending on the coolant: if it's something like Novec you fix it immediately; if it is oil then you let it drain and then service it. The latter is one of the issues with using oil as a cooling medium.

  9. HPCJohn

    Energy use

    Having worked for a company which has liquid-cooled servers, I can say that a lot of the energy use in a data centre is due to the small whirring fans inside servers.

    So going for OpenCompute with fanless servers and large fans in the rear is a plus already.

    Then go for water cooled rear doors.
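
    A rough sketch of the fan overhead being described, with entirely made-up counts and wattages (real figures vary a lot with server model and load):

    ```python
    # Per-rack fan power: many small in-server fans versus a few large shared fans (all numbers assumed).
    servers_per_rack = 40
    small_fans_per_server, small_fan_w = 6, 10   # small 40 mm fans working hard in a 1U box
    large_shared_fans, large_fan_w = 4, 50       # big, slower fans shared across an OpenCompute-style rack

    print(servers_per_rack * small_fans_per_server * small_fan_w)   # 2400 W of in-server fans per rack
    print(large_shared_fans * large_fan_w)                          # 200 W of shared fans per rack
    ```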

  10. HamsterNet

    Does this mean

    With 140 kW per rack, does this mean they could cool Intel's latest chips?
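
    For scale, a quick division; the 400 W per-socket TDP is an assumption, not a figure from the article:

    ```python
    # How many very hot CPUs a 140 kW rack budget could notionally absorb (TDP is assumed).
    rack_budget_w = 140_000
    assumed_cpu_tdp_w = 400
    print(rack_budget_w // assumed_cpu_tdp_w)   # 350 sockets' worth of heat
    ```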
