You're all wet: Drippy chips to help slash data centre power consumption and carbon costs

Researchers in Switzerland have demonstrated an approach that may go some way to addressing the sweltering heat and worrying carbon emissions from large computer systems: integrating liquid cooling into microprocessor fabrication. Professor Elison Matioli of the Swiss university EPFL and his team have demonstrated …

  1. Missing Semicolon
    Boffin

    No physics again.

    The energy saving is in the pumps required to force coolant through the microchannels. The overall energy consumption of the device is unchanged, as is the amount of energy required in the heat-pumps to transfer the waste heat from the devices to the air (that is always the final destination for data centre heat, unless you are on the coast and use seawater cooling). So the energy saving is marginal at best, and won't lead to the devices under the microchannels using any less energy.

    Remember, every Joule of energy that comes in from the power grid into a data centre leaves it in the form of heat. Every single one.

    1. Annihilator Silver badge

      Re: No physics again.

      "Remember, every Joule of energy that comes in from the power grid into a data centre leaves it in the form of heat. Every single one."

      Or noise. Or light. But I take your point.

      I'd argue that with micro-channels of cooling though, you'd be able to use capillary action and heat pipes to do it. These chips wouldn't necessarily require a pump.

      1. katrinab Silver badge
        Boffin

        Re: No physics again.

        "Or noise. Or light. But I take your point."

        No, everything ends up as heat, even the light and noise.

    2. Cuddles Silver badge

      Re: No physics again.

      "So the energy saving is marginal at best"

      I'm not sure where you get this from. They claim to reduce the power needed for pumping coolant by over 50 times. That's likely to be quite a significant saving in a data centre. Just because it doesn't make the actual chips any more efficient (and no-one claimed it would, so I'm not sure why you felt the need to bring it up) doesn't mean it's not useful.

      1. Missing Semicolon

        Re: No physics again.

        Because moving the cooling fluid about is only a part of overall cooling costs. I'd suggest that the majority is the heat pumps (i.e., chillers) that expend energy to "push energy uphill", making the heat-exchangers that cool the coolant with fresh air run hotter, and so more effective.

        This is why data centres in arctic countries work so well - when the air is sub-zero, you don't need a heat-pump.

      2. MatthewSt Bronze badge

        Re: No physics again.

        Depends who you are. Google have a reported PUE of 1.06, which means that for every 100 watts of power going to a server, 6 watts is spent on cooling and other infrastructure. A 50% cut* knocks them down to 3 watts. It's not uncommon to see PUE numbers of 1.22 or higher though, so the savings for those datacentres are more significant. Also, economies of scale mean that if you can reduce energy usage by even 1%, that's a significant saving across the number of datacentres.

        * I know it says "50 times greater", but what does that even mean? Does it mean it now uses 2% of the power that it did before?
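        As a back-of-the-envelope sketch of the PUE arithmetic above (the 50% pump share of the overhead is my assumption, not a figure from the article):

```python
# PUE = total facility power / IT power, so the cooling-and-infrastructure
# overhead per watt of IT load is (PUE - 1). Numbers below are illustrative.

def overhead_watts(pue: float, it_watts: float) -> float:
    """Watts spent on cooling and other infrastructure for a given IT load."""
    return (pue - 1.0) * it_watts

google = overhead_watts(1.06, 100)   # ~6 W of overhead per 100 W of IT load
typical = overhead_watts(1.22, 100)  # ~22 W for a less efficient facility

# If pumping were, say, half of that overhead (an assumption) and its power
# fell by a factor of 50, the saving per 100 W of IT load would be:
pump_share = 0.5
saving = google * pump_share * (1 - 1 / 50)   # ~2.94 W per 100 W of IT load
```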

        1. Dave Pickles

          Re: No physics again.

          "I know it says "50 times greater", but what does that even mean."

          (According to Slashdot) The original article states that previous attempts at water cooling with microchannels in the silicon used more power to pump the water than was being dissipated in the chip. Presumably it is this figure which is being reduced by a factor of 50.

        2. Ian Mason

          Re: No physics again.

          Somebody is being economical with the truth there with their claims. Assuming a fairly normal data centre air conditioning scheme, using just 6W to get rid of 100W of heat would imply a Carnot coefficient of performance of nearly 17. Practical heat pumps achieve COPs in the region of 4, i.e. for every 100W of heat you need to get rid of, you'll use another 25W to do it.
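          The COP sanity check above can be sketched numerically (the cold-side and hot-side temperatures below are illustrative assumptions, not figures from the article):

```python
# COP for cooling = heat removed / work input. The Carnot (ideal) limit for
# a given temperature lift is T_cold / (T_hot - T_cold), in kelvin.

def cop(heat_removed_w: float, work_in_w: float) -> float:
    return heat_removed_w / work_in_w

def carnot_cop(t_cold_k: float, t_hot_k: float) -> float:
    return t_cold_k / (t_hot_k - t_cold_k)

implied = cop(100, 6)               # ~16.7: what 6 W per 100 W would require
limit = carnot_cop(300, 318)        # ~16.7 for an assumed 27 C to 45 C lift
practical_cop = 4                   # typical real-world chiller, as above
extra_watts = 100 / practical_cop   # ~25 W of work per 100 W of heat
```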

          1. Richard 12 Silver badge

            Re: No physics again.

            As far as I recall, they do a lot of "run hot" and passive environmental cooling, only using radiators instead of chillers.

            Presumably they do have the chillers available if needed though.

  2. John Savard Silver badge

    The Technology We Need

    Obviously this is the technology that is needed to allow true 3D integrated circuits. None of this just putting one die on top of another, and still cooling them with a fan on top (i.e. Intel's Foveros). With this, we could have dies that are 100mm by 100mm... in a stack 100mm high.

    That would definitely leave a lot of growth room for the continuance of Moore's Law!

    Of course, keeping a 100mm by 100mm die cool would require a lot of water, so I may have let my optimism get away with me. But even 10mm by 10mm, 10mm high would be an incredibly massive improvement on what we have now!

    1. DS999

      Re: The Technology We Need

      This is only going to be useful in a datacenter; it will not be used for your home PC or phone. Just because you can get the heat out of the chip much more efficiently, it doesn't make it any more efficient to get the heat out of the PC/home, or the building it is in.

      Even looking at their example, 1700 watts is more than the typical residential circuit (120V 15A, which has to be derated to 80% load) is designed to carry for extended periods of time. You'd need a commercial setup with a non-standard 20A plug to use that in your house. And I sure as hell don't want a 1700 watt heat source in my house 8 months of the year.
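      The circuit arithmetic above, with the usual 80% continuous-load derating (the UK figure is my addition for comparison):

```python
# Continuous loads on a US branch circuit are conventionally limited to
# 80% of the breaker rating; the UK 13 A plug figure is shown for comparison.

def continuous_capacity_w(volts: float, amps: float, derate: float = 0.8) -> float:
    return volts * amps * derate

us_15a = continuous_capacity_w(120, 15)              # 1440 W, below 1700 W
us_20a = continuous_capacity_w(120, 20)              # 1920 W, enough headroom
uk_13a = continuous_capacity_w(230, 13, derate=1.0)  # ~2990 W
```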

      1. DS999

        Re: The Technology We Need

        Before someone points it out, I meant to say the "typical US residential circuit". I know you have 240v in the UK.

      2. John Savard Silver badge

        Re: The Technology We Need

        I certainly don't expect the CPU in my smartphone to be drawing 1700 watts of power!

        But I think the usefulness of this technology does not lie in allowing a chip to draw more current, but in allowing a chip to be more compact. If you take an existing microprocessor die, and split up its circuitry into multiple fins, with water cooling between them, on a tiny substrate, now a lot of the signal paths will be shorter.

        1. DS999

          Re: The Technology We Need

          It is only useful if we figure out how to build 3D circuits (and no, the way '3D NAND' is built will absolutely not work for logic circuits)

          Wire delays aren't the main limiting factor for speed; it is the switching speed of the transistors. There are some other materials aside from silicon that can switch a lot faster, but they use a LOT more power. So now we'd be able to cool them, but they won't be faster per watt, so they're only better where single-thread performance is paramount. In the datacenter these days it is almost all about performance per watt, and most datacenters are limited by rack power density, so using a lot more power per rack simply isn't feasible no matter how much performance it would provide.

      3. katrinab Silver badge
        Mushroom

        Re: The Technology We Need

        I have a 3000W heat source sitting next to me, and attached to the mains with a UK domestic plug.

        Such things are pretty widely available

        https://www.argos.co.uk/search/3kw-heater/?clickOrigin=searchbar:home:term:3kw+heater

  3. John Smith 19 Gold badge
    Coat

    First implemented around 1980.

    Petersen. "Silicon as a mechanical material"

    What appears to have changed is that they have found a way to lower the pumping losses. An obvious way (borrowed from rocket combustion chamber cooling practice) is to use different sizes of channels in different patterns: a few wide and low for low-heat areas, many and narrow for the high-heat areas. The base of those fins will be highly stressed, so it might be an idea to round the base of them with a liquid isotropic etch.

    BTW 1.7 kW/cm^2 may not sound like much.

    100 W/cm^2 was the thermal limit for the Apollo heat shield for a re-entry from lunar orbit (post-landing analysis indicated it was actually around 78 W/cm^2. NASA believed in having substantial margin).
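    For scale, the heat-flux comparison works out as follows (figures taken from this comment):

```python
# Comparing the claimed chip hot-spot heat flux against the Apollo
# heat-shield figures quoted above (all in W/cm^2).

chip_flux = 1.7e3        # claimed dissipation: 1.7 kW/cm^2
apollo_design = 100.0    # heat-shield design limit
apollo_actual = 78.0     # post-landing estimate

ratio = chip_flux / apollo_design   # the chip flux is 17x the design limit
```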

  4. Eclectic Man Bronze badge

    'Waste' heat?

    I was just wondering whether the 'waste' heat from the chips could be used to generate electricity? After all, classical power stations (i.e. not solar or wind) basically boil water into steam to turn turbines which generate electricity (which is passed into the national grid down aluminium* cables to transformers down more cables to my home where I use it to heat an element in a kettle to boil water to make a cup of tea). A Stirling engine attached to the system would surely be whirling away.

    *( "aluminium" because I am English, not North American. )

    1. Danny 2 Silver badge

      Re: 'Waste' heat?

      Thermoelectrics are what you are after, not a Stirling engine. They are not as efficient, but have no moving parts to fail, and are cheaper to produce and integrate. You don't need to be efficient with waste energy; it's quantity, not quality.

      [My niece would tell me what the Cardi B song WAP meant. I get it now, Wet Assisted Processor]

      1. katrinab Silver badge
        Unhappy

        Re: 'Waste' heat?

        If you can get 5% efficiency out of one of them, you are doing well, and you still need a cold side of the thermocouple for it to work.

    2. DS999

      Re: 'Waste' heat?

      I really doubt it could ever be made efficient enough to be worth it on the scale of an individual PC. For a huge datacenter, sure, but you can often find better things to do with that heat, like using it to heat buildings during the heating season. Even if you only need heat 4 months out of the year, that beats anything with a <33% conversion efficiency to electricity.

      Another possibility would be providing heat for commercial processes, like if you put a giant commercial laundry service next door to a datacenter you could use all that waste heat to provide hot water for the washing machines and hot air for the dryers. Maybe Amazon needs to compete with Aramark in uniform services, and build a giant laundromat into their next datacenter...

      If you can concentrate the heat enough, you can make cement. We can never get enough cement, and that's not something that will ever be reasonable to import from China...

  5. stevebp

    Tackling the problem from the wrong direction?

    This is all very interesting. However, a good professor friend of mine told me that the reason chips emit so much heat is that when the electrons flow along the logic pathways, as soon as they hit a full stop (i.e. a decision pathway which involves a change of flow, essentially any logic gate), the current flowing down the unused branch is dissipated as heat.

    The answer then - and apparently this is being worked on - is to redirect that current through the chip, reducing the power requirement for processing and emitting less heat as a consequence!

    I would be interested to know if any of the good people on here have come across this research and know where we are with it?
