How does it work currently?
Is cooling just on or off? Is temperature monitor and cooling adjustment a new thing?
Datacenters make a lot of hot air, and IBM Japan and NTT Group's in-house integrator NTT Comware think they can use it to calculate power consumption and CO2 emissions – and maybe reduce both. Their technique uses temperature sensors placed around a datacenter – an approach that means servers don't need to be reconfigured to …
> Is cooling just on or off?
"Inverter" chillers use variable-frequency drives to vary their power output; these have been common for at least 20 years ...
> Is temperature monitor and cooling adjustment a new thing?
No. Move along.
Why is the Reg wasting our time with rubbish like this? There have been lots of interesting recent events that I was hoping the Reg would cover, but alas, nothing.
El Reg is turning into Blocks & Files ...
“found a strong correlation between exhaust heat temperature and power consumption”
Duhhh.
You need to track humidity as well, as it makes a difference to efficiency.
Power consumed:
Cooling system (including air handling)
Wiring loss (depends on bean counter involvement)
Servers and other appliances
Network gear
Lights
Power switchgear / contact points (breakers, plug connectors, and transformers, which should never be allowed in a datacenter).
Power factor.
UPS and battery losses.
And wall warts.
If you use a FLIR unit and look at your rack and power panels for hotspots, you will find massive losses from wall warts.
There needs to be a study on which is more efficient: 120 V, 208 V, or in some cases 480 V.
Power factor is another efficiency item. Three-phase is best; 120 V single-phase is worst. Without correction you can lose as much as 20% or more. Reading the temperature at the rack would at best only give a reference.
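To put a rough number on the power-factor point above: a quick sketch (with hypothetical voltages and a made-up wiring resistance) of why a low PF wastes power. For the same real load, a lower PF means more line current, and resistive losses grow with the square of the current.

```python
# Rough sketch, illustrative numbers only: why low power factor costs you.
# Real power P = V * I * PF, so for the same real load a lower PF
# draws more current, and I^2 * R wiring losses grow with the square.

def line_current(p_watts, volts, power_factor):
    """Line current drawn for a given real power at a given PF (single phase)."""
    return p_watts / (volts * power_factor)

def wiring_loss(p_watts, volts, power_factor, r_ohms=0.05):
    """I^2 * R loss in the supply wiring; 0.05 ohm round trip is an assumption."""
    i = line_current(p_watts, volts, power_factor)
    return i * i * r_ohms

load = 10_000  # a 10 kW rack
for pf in (1.0, 0.9, 0.8):
    print(f"PF {pf}: {wiring_loss(load, 208, pf):.0f} W lost in wiring at 208 V")
```

At PF 0.8 the wiring loss is 1/0.8² ≈ 1.56 times what it is at unity, before you even count what the bean counters call "wiring loss" elsewhere.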
Yep. Who'd'a Thunk it, eh?
The only interesting thing going on here is that apparently all we have to do is drop some buzzwords into a naff press release, and the poor clueless hacks at El Reg think it must be worth writing a piece about.
Unless Simon Sharwood has actually been sacked and replaced with Simon LLaMa
You can treat basically all electrical devices as close-to-100%-efficient heaters. All the energy has to go somewhere: whether it's GPUs or CPUs, it all ends up as heat in the end. A tiny amount will leave via the network cables, but that still ends up as heat in the next device.
AC works by transferring the heat to a medium, pumping it elsewhere, and releasing it. Conveniently, it can move more units of heat than it consumes.
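The "moves more heat than it consumes" bit is the coefficient of performance (COP). A back-of-envelope sketch, with an illustrative COP rather than a measured one:

```python
# Back-of-envelope sketch: a chiller moves more heat than it consumes.
# The COP value here is illustrative, not a measured figure.

def cooling_electricity(heat_kw, cop):
    """Electrical input needed to remove heat_kw of heat at a given COP."""
    return heat_kw / cop

rack_heat = 10.0  # a 10 kW rack emits ~10 kW of heat (see above)
elec = cooling_electricity(rack_heat, cop=3.0)
print(f"{elec:.1f} kW of electricity to move {rack_heat:.0f} kW of heat")
```

So a COP-3 chiller spends roughly 3.3 kW to haul 10 kW of heat out of the room, which is why cooling is a big but not dominant slice of the power list above.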
I had always just assumed that AC dynamically scaled with the workload. There have been temp probes in every DC I have been in. Why wasn't this done from the start?
That's exactly what I was thinking - that (almost) all input electrical power ends up as heat.
I was trying to think beyond the words in the article and was wondering if they are monitoring the output temperature as a proxy for processor temperature. This could allow them to save energy and money on cooling by allowing the processors to run as hot as they dare.
Yes, that's how I read it. Instead of running the cold aisle at "add layers or you'll get frostbite" temperatures, they can allow them to rise when workloads are low without cooking the servers.
This isn't really anything new, it's been fairly obvious - not to mention that most servers are "comfortable" at much higher temperatures than many places run. All that's new is throwing in a few current buzzwords and tossing out a PR piece.
If they wanted to "get ahead of the curve" as it were, turning on additional cooling before the temperature sensors could register the additional load, there's a FAR easier way - monitor the total amount of power going into the racks. As you said, electrical devices turn effectively 100% of incoming power into heat, so if a rack is consuming x kW, then it's going to generate x kW of heat. When the power consumption goes up, increase the cooling.
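That feedforward idea can be sketched in a few lines. Everything here is an assumption for illustration (function names, the COP, the headroom margin); the point is just that rack power draw is a leading indicator, whereas temperature is a lagging one.

```python
# Sketch of feedforward cooling control: size the cooling from measured
# rack power instead of waiting for temperature sensors to react.
# All names and figures (COP, headroom) are assumptions for illustration.

def chiller_input_kw(rack_power_kw, cop=3.5, headroom=1.1):
    """Chiller electrical input to request for a given rack power draw.

    The rack's power draw is (near enough) its heat output, so ask the
    cooling plant to remove that much heat, plus a small margin, and
    divide by COP to get the chiller's own electrical input.
    """
    heat_kw = rack_power_kw             # ~100% of input power becomes heat
    return heat_kw * headroom / cop

# A 20 kW rack calls for roughly 6.3 kW of chiller input at these figures.
print(f"{chiller_input_kw(20.0):.1f} kW")
```

A real setup would still keep the temperature sensors as the trim/safety loop; the power reading just gets the chillers ramping before the hot aisle does.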