
Turn down the lights!
There is a virtual tour of LD4...
It's very bright inside. They could save a bit more energy by using lower-output light strips.
In the hopes of cutting its power bills, Equinix says it's turning the thermostat up in its datacenters. The colocation provider expects to increase the operating temperature of its server halls to as much as a rather balmy 27C – that's about 80F in freedom units – to align with American Society of Heating, Refrigerating, and Air …
There is a virtual tour of LD4...
It's very bright inside. They could save a bit more energy by using lower-output light strips.
At least where equipment is, I'd assume almost all data centers use motion-activated lighting. I remember the first data center I visited in 2003 was an AT&T facility; really liked that place. I don't recall if they used motion sensors or simple timers on their floor lights (it was one giant warehouse floor with a huge raised roof). Heard stories about that place when it first opened: there was so little equipment that gear was running too cold, and they actually put space heaters in for some customers to get the temperature up to more normal levels. By the time I saw it they had plenty of customers and no heaters to be found.
The practice is so annoying that I install my own LED lighting (utility clamp lamps from a hardware store). I don't go on site often (currently in probably my longest stretch, not having been on site in just over 3 years), but there have been times when I was on site for a dozen hours straight. Not only did my LED lights provide much better lighting, but I didn't have to walk around the cage every hour (or more often) waving my arms to trigger the motion sensors.
Nothing was more annoying than a Telecity data center I dealt with in Amsterdam for a few years, though: the only place I've been at with hot/cold aisle isolation. Sounded neat in theory, but it was so annoying to have to walk ~120 feet between the front and the back of the rack. Almost as bad was that they required you to put these little elastic "booties" on your shoes before walking on the raised floor, another practice I've never seen anywhere else (and completely crazy). I hated that place. Ironically, Equinix acquired them eventually (after my org cut ties with them). Then it took another 3 years to get off their mailing lists.
But of course the power drawn by the lights is probably a rounding error given how efficient LEDs are.
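A rough back-of-envelope makes the point; every figure below is an assumption for illustration, not a measured number:

```python
# Back-of-envelope: lighting vs IT load in a server hall.
# All figures below are assumed for illustration, not measured.

hall_area_m2 = 1000            # assumed hall size
led_w_per_m2 = 5               # assumed LED lighting density (W/m^2)
racks = 200                    # assumed rack count
it_kw_per_rack = 10            # assumed average IT load per rack (kW)

lighting_kw = hall_area_m2 * led_w_per_m2 / 1000
it_kw = racks * it_kw_per_rack

print(f"Lighting: {lighting_kw:.1f} kW vs IT load: {it_kw:.0f} kW "
      f"({100 * lighting_kw / it_kw:.2f}% of IT load)")
# -> Lighting: 5.0 kW vs IT load: 2000 kW (0.25% of IT load)
```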
Sensible. 80 degrees F is not that hot. In days of yore they'd run the whole datacenter at like 68F, but that really is a waste of power, and I think part of the reason was you just generally had a big room with computers in it, no planned airflow or anything. The thing to watch out for would be hot spots, but I expect these facilities are designed to avoid having hot spots.
We need to stop converting AC to DC at the rack unit and deliver DC power rails to the racks instead. Even with our very efficient PSUs, they generate a good chunk of waste heat, right at the point where it's most burdensome (within the server unit).
Standardisation of DC input modules on server frames would allow this to be moved out of the most vulnerable locations and enable efficient heat harvesting.
Move the AC-DC conversion out to the same space as the air-con heating battery and harvest it there to offset the localised heat requirements within the air con & heat recovery systems.
That would prevent a lot of heat being generated in the server rooms, and the subsequent need to extract it again, with lossy recovery or completely wasteful exhaust systems, leaving more capacity for the actual cooling needs of the processors etc. within those server rooms.
Could we ever get server manufacturers to provide DC-DC swap-in modules to replace our current PSUs to facilitate this?
I think if there was enough research into the potential savings in more efficient centralised AC-DC conversion/distribution and the potential offset through waste heat harvesting, we might get a build-up of sufficient customer demand to prod the manufacturers.
The downside is that without proper industry / IEC standards at the outset, we'd have them all trying to do proprietary systems and that would take years to settle down.
I wrote an internal paper on this for our corporate group 15 years ago, but then they off-loaded us from their mega corp to another mega corp and all the momentum was lost. The copyright and patents would I suppose now be disputed between both mega corporations if anyone ever rattled that cage. Inertia wins again :(
Are you sure you want to distribute HVDC to the rack? You'll still need to drop the voltage locally in the rack, probably several times, before it's down at the sub-volt, thousands-of-amps level the silicon needs. And you can't afford or manage the copper or aluminium busbars to shift seriously high currents from your proposed external power supplies to the racks, if you want to feed the racks low voltages. The powers are just eyewatering: many tens of kW per rack, hundreds of racks. Getting this stuff to scale, and be tolerably efficient, reliable, installable and maintainable, is exercising a lot of folks, hence orgs like OCP (the non-RoboCop one).
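To put very rough numbers on the busbar problem (every figure here is an assumption, just to show the scaling of I²R loss with feed voltage):

```python
# Rough illustration of why distribution voltage matters for a rack feed.
# All numbers are assumptions for the sake of the arithmetic.

rack_kw = 20                   # assumed rack load
run_m = 20                     # assumed one-way feeder length from supply to rack
copper_resistivity = 1.72e-8   # ohm*m, copper
cross_section_mm2 = 300        # assumed busbar/cable cross-section

r_loop = copper_resistivity * (2 * run_m) / (cross_section_mm2 * 1e-6)  # out and back

for volts in (12, 48, 240, 400):
    amps = rack_kw * 1000 / volts
    loss_w = amps ** 2 * r_loop
    print(f"{volts:>4} V feed: {amps:7.0f} A, I^2R loss ~{loss_w:8.0f} W "
          f"({100 * loss_w / (rack_kw * 1000):.1f}% of the rack load)")
```

At 12 V the feeder alone eats tens of percent of the rack load; at a few hundred volts it is noise, which is why the last low-voltage hop has to happen in or next to the rack.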
All you'll be pushing out of the rack is the AC-DC stage and PFC, and that's pretty darn efficient. To be fair, big in-rack PSUs are pretty efficient full-stop.
HVDC is unfunny in many ways.
https://www.opencompute.org/wiki/Open_Rack/SpecsAndDesigns
V3 has power shelves ingesting 3-phase AC and putting 48 V DC at hundreds of amps onto vertical busbars running the height of the rack, which equipment shelves connect to as you slam them into the rack. No need to rummage at the back of the rack to make power connections. Quite sweet.
Definitely better to move the heat out of the enclosures. With proper standards you can have something like 12, 9, 5, 3 V distributed to outlets, with in-unit modules only picking up the required voltages. Higher voltages can be added on more robust outlets, as those are required less frequently and are often further converted to lower voltages, a step which could be eliminated by proper provision within the input module. You can even design the distribution rails in the rack with plug-in modules which are only hooked up at required points: no more unnecessary provision of outlets just in case they may be needed.
The actual to-rack distribution model we used was a roof-rail system with shuttered rack-rail drop inserts. You could even get to the point where the racks are assembled incorporating the rail hook-up / outlet distribution drops.
What we found was that the PSUs were both a point of failure and added a chunk of heat which could be easily eliminated. By the time the multiple PSUs were totted up it was a real eye-opener. Not having to provide & maintain multiple PSUs across whole rack rooms was also a potential cost saver.
The other advantage was that UPS maintenance and switchover on DC voltage rails would be more robust.
It's a bird that never flew because of circumstances back in the day, but there were many advantages there for the picking.
Move the AC-DC conversion out to the same space as the air-con heating battery and harvest it there to offset the localised heat requirements within the air con & heat recovery systems.
That would prevent a lot of heat being generated in the server rooms, and the subsequent need to extract it again, with lossy recovery or completely wasteful exhaust systems, leaving more capacity for the actual cooling needs of the processors etc. within those server rooms.
Out of interest (genuinely no idea), what's the power loss on the AC-DC conversion stage vs. loss on the step-down transformer from 120/240V to 12V? Because obviously you can't supply power around the building at low voltages - the copper you need to carry that sort of current (hundreds/thousands of amps per rack/row) would be prohibitive. So you could do AC-DC outside, but it's going to need to come to the rack at high voltage and be stepped down at (or very close to) the server.
There are places where I have long thought that there are obvious inefficiencies - such as a UPS full of batteries converting 12/24/28VDC into 240VAC to shunt round the building and then back down to 12VDC. That sort of thing makes you think about whether it's more efficient to have a per-rack UPS outputting current at DC close to the server. Some people have also played around with the idea of rack-top hydrogen fuel cells in lieu of backup diesel generators. Piping hydrogen into a datacentre is its own risk assessment, but again - you're going to get DC out of a fuel cell, which means you really want your servers to consume that directly without additional conversion steps. Which means you want the mains supply AC-DC conversion done prior to the server PSU.
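One crude way to frame the question is to multiply out the conversion stages. The efficiency figures below are assumptions for illustration, not vendor data:

```python
# Crude comparison of conversion chains; every efficiency figure is an assumption.

def chain(*stages):
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Classic double-conversion path: rectify -> battery bus -> invert -> server PSU
classic = chain(0.96,   # UPS rectifier/charger (assumed)
                0.95,   # UPS inverter back to AC (assumed)
                0.94)   # server PSU, AC -> 12 V DC (assumed)

# Hypothetical DC-fed path: one facility-level AC-DC stage -> DC bus -> in-rack DC-DC
dc_fed = chain(0.97,    # central AC-DC rectifier (assumed)
               0.97)    # in-rack DC-DC stage (assumed)

print(f"Classic AC chain : {classic:.1%} end-to-end")
print(f"DC-fed chain     : {dc_fed:.1%} end-to-end")
```

The absolute numbers are made up, but the shape of the argument is that every stage you remove is a few percent you don't have to pay for twice (once as electricity, once as heat to extract).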
I do note though that the fact that the hyperscalers don't seem to have done this in their (homogeneous, bespoke-hardware) datacentres suggests it's not the most pressing concern in DC design.
I can't see the local fire marshals allowing the amount of hydrogen needed for a data center, because of the size of the fire inside the building they'd have to fight if something goes wrong. And considering hydrogen can leak from a steel pipe (atom size, doncher know), causing the metal to become brittle (and unable to hold pressure) over time, the system would always have a slow leak all over. Until it had a sudden, short-lived, huge leak. Then a loud Boooom!
Lots of gear can come with DC power supplies. For a while companies like Rackable, who were popular in the hyperscale space back before 2010, had rack-based AC-DC systems. They were bought by SGI, then HP bought SGI, and those product lines are long dead. I was interested back in 2009 in one of their products called "CloudRack", which was neat in theory; never got to see it in action though. Built for hyperscale, the servers had no power supplies or fans; there was a rack level of both that supplied the servers (sample server picture from my blog at the time http://www.techopsguys.com/wp-content/uploads/2011/05/c2-tray.jpg). I wanted to get it for a Hadoop build-out at the company I was at. I wouldn't dare use them for anything mission critical of course.
I think going beyond rack-based DC distribution is likely to be wasteful/inefficient because of the loss in energy over distance? Thought I was told/read something along those lines at one point. Also I think I was told/read that DC is much more dangerous than AC.
Another efficiency gain is increasing the voltage. I've never seen it used myself, but the PDU vendor I use (ServerTech) at one point was pushing 415V (https://www.servertech.com/solutions/415v-pdu-solutions). Unsure how much savings that higher voltage can bring.
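Roughly, the higher line-to-line voltage means less current per phase for the same load, so smaller conductors and lower resistive losses. An illustration with an assumed load and power factor:

```python
# Same rack load fed three-phase at 208 V vs 415 V line-to-line.
# Assumed load and power factor; only the arithmetic is the point.
import math

rack_kw = 15     # assumed rack load
pf = 0.95        # assumed power factor

for v_ll in (208, 415):
    amps = rack_kw * 1000 / (math.sqrt(3) * v_ll * pf)
    print(f"{v_ll} V 3-phase: ~{amps:.0f} A per phase")
# Higher line-to-line voltage -> lower current -> smaller copper and lower I^2R losses.
```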
You can already get DC PSUs for both HP and Dell servers that fit the "common slot", although they are stupidly expensive new, and they say you can't mix DC and AC PSUs, which is a bit crap.
I bought a second-hand 48V Dell one so I could run a server directly from a solar battery, but it's got some random connector, and neither pin says whether it's + or -, so I'm a bit scared to try it as I can't find any documentation about it.
I guess I've got a 50% chance of being right first time, and 50% chance of it letting out the magic smoke......
Wasn't moving the AC/DC converter out of the server blade something Google did a few years back with its own-build servers?
I suspect this is part of the problem: vendors such as Dell, HP, Lenovo et al. all produce 'conventional' rack servers which, convention says, come with an AC socket and not a DC socket. However, if you build your own rack servers, you can rearrange the PSU components so that the PSUs can be located outside of the cabinet.
I suspect similar considerations apply to the problems of cooling (server) CPUs; build your own servers and you can standardise on a particular liquid cooling system and fit out the cabinet accordingly.
It would be interesting to know a bit more about OVH's servers, as they build their own server cabinets.
Google did something more creative than that. At one point, back in 2009 it looks like, they released info showing that they were building servers with batteries built in (instead of large centralized UPSs), with the justification being that most power events only last a few seconds, so they could cut cost/complexity with that design.
Don't know how long that lasted or maybe they are still doing that today. Never recall it being mentioned since.
I wrote an internal paper on this for our corporate group 15 years ago, but then they off-loaded us from their mega corp to another mega corp and all the momentum was lost. The copyright and patents would I suppose now be disputed between both mega corporations if anyone ever rattled that cage. Inertia wins again :(
Not sure the patents would hold anyway. Telecomms has traditionally been a DC shop, pretty much for the reasons you state. I've installed many BFRs from Cisco, Juniper and stuff like DWDM switches that have all had DC PSUs. Which was sometimes FUN!, because contractors (even NY union ones) would occasionally rock up with DC cable that could handle 50V... but not at >150A. Their cable may be fine for a cooker, but not in a room protected by very-expensive-to-refill fire suppression systems. I've also done VoIP switches like Nortel's (OK, Avaya's) CS2K that typically had (very expensive) 'PC' servers, mostly with DC PSUs. Or sometimes plain ol' AC and inverters... but those generally failed evaluation. Companies like Sun, IBM, HP, Dell etc. also offer DC options for sensible markets. The problem is usually with the customers, though, as they're more familiar with AC.
That's really the problem. In a managed service, providers like Rackspace, Equinix etc. should just offer DC-powered options. With 'cloud' services it can be less of a problem, because the HP or equivalent servers can (arguably should) be DC-fed anyway. Then it's just dealing with colo customers, and it was the job of oiks (OK, consultants) like me to encourage them to spec DC. Often managed with the help of sales making DC feed prices more attractive.
Data centers do not have a stable temperature. The Equinix SLA states the maximum temperature at the bottom of the cold side of the rack: it is not supposed to exceed 27 degrees Celsius for more than 15 consecutive minutes. In practice the temperature currently fluctuates between 24 and 30 degrees. The hot side of the rack is a lot warmer. The top of the cold side is a lot colder, since this is where the cold air is coming from.
Now the main problem: if you increase the average temperature on the cold side of the rack, you automatically reduce the amount of equipment you can put in a rack. The limiting factor on the number of servers you can put in the rack is cooling, and with the current setup racks are already often half empty. If your servers need to perform some number of tasks, they need a certain amount of energy, and that energy needs to be dissipated as heat. That amount of heat translates to the amount of energy needed for cooling. Modern servers are energy efficient per task but very energy dense per rack unit. To cool them at a higher ambient temperature you will need to spread them over more data center space.
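A rough sketch of the airflow side of that argument (all figures assumed): for a fixed heat load, the airflow you need scales inversely with the temperature rise you can take across the kit, so as the workable delta-T shrinks the fan effort per rack climbs.

```python
# For a fixed heat load, required airflow scales inversely with the air temperature
# rise across the servers (delta-T). Figures assumed for illustration.

rack_kw = 20             # assumed rack load
air_density = 1.2        # kg/m^3, approximate at typical hall conditions
cp_air = 1005            # J/(kg*K)

for delta_t in (15, 12, 9, 6):   # workable delta-T shrinks as inlet temps rise toward limits
    flow_m3s = rack_kw * 1000 / (air_density * cp_air * delta_t)
    print(f"delta-T {delta_t:>2} K -> ~{flow_m3s:.2f} m^3/s (~{flow_m3s*2119:.0f} CFM) per rack")
```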
That's always been the case. Each cabinet will have a power allowance.
I was doing datacentre stuff around the advent of blade systems, which could get huge compute density (by the standards of the day), along with equally huge power consumption. I recall some deals where customers were having to rent 10 full-height racks, eight or even nine of which were left basically empty.
GJC
You can get a good idea how the servers etc. are coping by monitoring fan speeds and outlet temps. I have found the outlet temps do not really rise with inlet until the fans are maxed out. Similarly as you drop the inlet temps the fans ramp down until at minimum at which point the outlet temps start to drop. That does assume you are running cold side, hot side rather than room cooling.
You can also monitor the power drawn as fans ramp up to look for a balance in the cooling costs. Well, if it is your own room and you are paying for both cooling and server loads.
There does not appear to be much advantage running below 24C for us and, as I found out when the AC packs shut down in summer, most of the kit did not even start to complain of high temps until inlet got over 35C. Exhaust on a larger router hit 65C and it just flagged up that internal temps were moderately high.
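If you want to watch the same thing on your own kit, something along these lines will pull temperatures and fan speeds from the BMC via ipmitool. Sensor names and output format vary by vendor, so treat this as a sketch rather than a drop-in:

```python
# Minimal sketch: poll temperature and fan sensors from a BMC via ipmitool.
# Sensor names (e.g. "Inlet Temp", "Exhaust Temp") differ between vendors.
import subprocess

def read_sdr(sensor_type):
    out = subprocess.run(
        ["ipmitool", "sdr", "type", sensor_type],
        capture_output=True, text=True, check=True).stdout
    readings = {}
    for line in out.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 5:
            readings[fields[0]] = fields[4]   # e.g. "24 degrees C" or "5640 RPM"
    return readings

temps = read_sdr("temperature")
fans = read_sdr("fan")
print(temps, fans, sep="\n")
```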
We are looking at moving to using filtered external air for most of the cooling and even seeing if another unit close to us would like the exhaust air to heat their warehouse.
Be sure to deploy your own environmental sensors. Most good PDUs have connections for them. I have at least 4 sensors (2 front / 2 back) on each rack (2 PDUs x 2 sensors each). They monitor temperature and humidity.
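For anyone setting this up, a minimal polling sketch using the net-snmp CLI. The hostname and OID below are placeholders, not real ServerTech values; the real OIDs come from your PDU vendor's MIB:

```python
# Sketch of polling a PDU temperature/humidity sensor over SNMP with net-snmp's snmpget.
# The host and OID are placeholders - look up the real OIDs in your vendor's MIB.
import subprocess

PDU_HOST = "pdu-a1.example.net"          # hypothetical hostname
COMMUNITY = "public"                     # read-only community string
TEMP_OID = "1.3.6.1.4.1.99999.1.2.3.0"   # placeholder OID, not a real vendor OID

def snmp_get(host, community, oid):
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, check=True).stdout
    return out.strip()

print(f"Rack front temp: {snmp_get(PDU_HOST, COMMUNITY, TEMP_OID)}")
```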
I remember the first time my alarms for them tripped, I opened a ticket with the DC asking if anything had changed. It wasn't a big problem (the humidity had either dropped below or exceeded a threshold, I forget which); I was just curious about the dramatic change in readings. They responded that they had just activated their "outside air" cooling system or something, which was the cause of the change in humidity.
Had major thermal issues with a Telecity facility in Amsterdam: no equipment failures, just running way too hot in the cold aisle. Didn't have alerting set up for a long time; then, when I happened to notice the readings, it started a several-months-long process to try to resolve the situation. It never got resolved to my satisfaction before we moved out.
I remember another facility in the UK, at another company, that was suffering equipment failures (well, at least one device: their IDS failed more than once). The facility insisted the temperature was good, then we showed them the readings from our sensors and they changed their tune. They manually measured, confirmed the temps were bad, and fixed them. I was never on site at that facility so I'm not sure what they did, perhaps just opened more floor panels or something.
But just two facilities with temperature issues over the past 19 years of using co-location.
Power and cooling: two things most people take for granted when it comes to data centers (myself included). Until you've had a bad experience or two with either, that is; then you stop taking them for granted.
Here is some hard-earned info:
When you are deciding who will install your cooling system, ask them if they will install an upflow or downflow cooling system. If they say upflow, show them the door and get someone that knows how to install systems in a datacenter.
Upflow systems mix the hot and cold air in the room. The idea is that heat rises, so blow cold air at it. Correct for home systems, bad for datacenters. You want to remove heat, and the less you contaminate it with cold air, the more effective you are, hence "hot and cold aisles".
Fresh air systems. We do this and it is a great way to save money and energy. If your outbound air (hot aisle) is 90F and you are pulling in 70F, you are saving a bunch. Humidity becomes an issue with this the colder it gets. Don't count it as part of the redundancy: there are always hot days.
You want to have between 40% and 55% humidity. Servers don't sweat, so the only way to cool them is by dumping the heat into the moisture in the air as it goes by. The lower the humidity, the more it decreases the utilization of the cooling systems, but it increases the heat in your gear.
At 74F inlet to the Dell servers, the fans will just start to ramp up. That is the sweet spot. You may have to be at 72F in the cold aisle to achieve it.
Inline power is usually considered a luxury. It in fact actually saves money and cooling. In power systems there is a power factor; if your power company has their act together it will be no better than 0.8. In a 3-phase system (your cooling systems need to be this too) you are able to use almost 100% of the power, and when it comes out at 208V it is at 1, no loss. If you have a computer running at 120V directly from the grid you will lose 20%, which has to be made up by a higher current draw by the server. I know some will disagree, but I am going by putting a meter on it.
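For anyone wanting to see the arithmetic behind the power factor point (illustrative numbers only): the same real load at a worse power factor means more apparent power, and therefore more current, upstream.

```python
# Real vs apparent power at different supply voltages and power factors.
# Assumed 1 kW real load; only the relationship is the point.
real_kw = 1.0

for volts, pf in ((120, 0.8), (208, 0.95), (208, 1.0)):
    apparent_kva = real_kw / pf
    amps = apparent_kva * 1000 / volts
    print(f"{volts} V, PF {pf}: {apparent_kva:.2f} kVA, {amps:.1f} A")
```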
Another way to save energy is for the datacenter to charge the customer by the watt used, not the capacity of the circuit. The bean counters will see the bill and ask the tech how to decrease it. The correct response is to upgrade the hardware, which will be more efficient, and install SSDs. Replacing hardware four generations back with new gives a 10-month ROI, and the techs get sleep at night. I have seen as much as 5kW drop to 170W with new equipment.
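The payback sum the bean counters would run looks roughly like this; every figure here is an assumption, so plug in your own:

```python
# Rough payback sketch for replacing old kit with newer, more efficient gear.
# Every number here is an assumption; substitute your own.

old_kw, new_kw = 5.0, 0.17          # metered draw, old vs new (from the comment above)
energy_price = 0.12                 # $/kWh, assumed
cooling_overhead = 1.5              # assumed multiplier for cooling and distribution losses
new_hw_cost = 6000                  # $, assumed purchase price

saved_kw = (old_kw - new_kw) * cooling_overhead
monthly_saving = saved_kw * 24 * 30 * energy_price
print(f"~${monthly_saving:.0f}/month saved, payback in ~{new_hw_cost / monthly_saving:.1f} months")
```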
Oh, and if the raised-flooring guy comes by, show him the door too. It is much better to have a higher ceiling for the heat to concentrate in than the restrictive action of raised flooring; that is also where the inlets for your cooling systems go. The higher the better. And don't buy into the "we use tiles with holes in them". Use the "flood" method into the cold aisles and spend the money on redundancy to protect your techs' mental stability.
It is well worth the money to buy one of those FLIR devices that attach to your phone. Take a look at your rack: you will see the hot spots, loose power cables, bad connections on breakers and the white-hot wall warts.
Also, be wary of electricians. Don't let them put the step-down transformer in your server room or datacenter.
Total greenwashing. "We pay less so you pay more to run your fans at higher RPMs." Win-win. Equinix use less energy, but also charge you more for the privilege of using more power. Digital Realty have been deploying the same approach this year, and we're already seeing thermal throttling on CPUs with fans running full tilt.
This may work for under-utilised servers, but when you're running kit hard, it becomes a problem.
While HVDC has issues, they are manageable.
All modern PSUs will run off 240 V DC, and will actually do so more efficiently, so no changes needed in the racks (unless they are using AC fans). The UPS(es) can also be of a type that has a 240V battery stack - they do exist. I've worked on them. Apart from charge balancing circuitry they are actually much simpler than conventional ones, and again more efficient.
The initial cost would be greater, but would be quickly offset by improved performance.
P.S. In the ones I saw, there was no power wasting inverter. Also, the AC mains input to the UPS was harmonically 3rd tuned to flatten the waveform and give the rectifiers an easier time.
I have seen attempts to use ambient atmospheric air (outside air) in data centers. My experience is in Minnesota. One would guess below zero F air would make for great cooling. The problem with such cold air is that it tends to be very dry. The unmentioned part of data center environment control is maintaining stable humidity. Low humidity causes static problems and other issues.
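The arithmetic behind "cold air is dry": once you heat sub-zero air up to room temperature, its relative humidity collapses. A quick sketch using the Magnus approximation, with an assumed winter day:

```python
# Why sub-zero outside air ends up bone-dry indoors: heating air raises its
# saturation vapour pressure, so the same moisture content is a much lower RH.
# Magnus approximation for saturation vapour pressure over water, in hPa.
import math

def e_sat(t_c):
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

outside_t, outside_rh = -18.0, 70.0   # assumed Minnesota winter day, about 0 F
indoor_t = 22.0

vapour_pressure = e_sat(outside_t) * outside_rh / 100   # moisture content stays the same
indoor_rh = 100 * vapour_pressure / e_sat(indoor_t)
print(f"{outside_rh:.0f}% RH at {outside_t:.0f} C becomes ~{indoor_rh:.1f}% RH at {indoor_t:.0f} C")
# -> single-digit RH indoors, i.e. static-electricity territory without humidification
```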