
BBU charging temps
Not sure about the 11G, but the 10G would refuse to charge the RAID BBU at any temperature over 34 °C.
A lot of new hardware features were rolled up into Dell's PowerEdge 11G servers, which came out in 2009 and were enhanced throughout 2010. But up until now, Dell has not discussed an important feature of the plain vanilla PowerEdge servers that it has kept in its back pocket in reserve as it did further testing: They can run …
At least their laptops have. It's nice that they've now designed their hardware to take it, though. I would still worry about the hard drives. Despite Google liking to run them a little hotter, I can't see 90 degrees with high humidity being great for a drive array. I also can't see high humidity being wonderful for tape backups.
A couple questions come to mind... do the machines contain a Flash-based data logger, keeping track of temperatures over the service life / warranty period of the machine? As part of AMT 12.0 maybe? :-)
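In the meantime, rolling your own logger is easy enough on anything with a BMC. A minimal sketch, assuming ipmitool is installed and talking to the local BMC - sensor names and output formatting vary by vendor, and the log path is just an example:

```python
#!/usr/bin/env python3
"""Poll BMC temperature sensors via ipmitool and append them to a CSV log.

Assumptions: ipmitool is installed and the local BMC is reachable in-band.
Sensor names and output formatting vary by vendor, so the parsing below is
deliberately loose.  The log path is only an example.
"""
import csv
import subprocess
import time

LOG_FILE = "/var/log/temp_history.csv"   # hypothetical location
INTERVAL_S = 300                         # one sample every 5 minutes

def read_temperatures():
    """Return {sensor_name: degrees_C} parsed from 'ipmitool sdr type temperature'."""
    out = subprocess.run(
        ["ipmitool", "sdr", "type", "temperature"],
        capture_output=True, text=True, check=True,
    ).stdout
    temps = {}
    for line in out.splitlines():
        # Typical line: "Ambient Temp | 0Eh | ok | 7.1 | 22 degrees C"
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 5 and "degrees C" in fields[4]:
            try:
                temps[fields[0]] = float(fields[4].split()[0])
            except ValueError:
                pass  # sensor present but no numeric reading
    return temps

if __name__ == "__main__":
    while True:
        readings = read_temperatures()
        with open(LOG_FILE, "a", newline="") as fh:
            writer = csv.writer(fh)
            for name, value in sorted(readings.items()):
                writer.writerow([int(time.time()), name, value])
        time.sleep(INTERVAL_S)
```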
High-temp computing is interesting stuff. If you pay attention to board-level design, there is a lot you can do to help your design survive longer when operating at higher (or a broader range of) temperatures. And there's a lot you can *spoil* through careless board-level and system-level design.
MLCC capacitors (for power-rail decoupling) are made with several different dielectric materials that have quite different temperature sensitivity - even among "comparable" parts in the tens-of-uF-per-unit range, the kind typically used to decouple low-voltage, high-current CPU power. Some drop to ~40% of their nominal capacitance at -20 °C; some are much more stable.
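For a feel of the numbers, a rough sketch - the derating factors below are only illustrative ballpark values for class-2 dielectrics (X7R vs. Y5V); the real curves come from the datasheets:

```python
# Rough effective-capacitance estimate for a bank of parallel MLCCs.
# The derating factors are illustrative ballpark values for class-2
# dielectrics (X7R vs. Y5V) - real curves come from the datasheet.

def effective_capacitance(nominal_uF, count, temp_factor, dc_bias_factor):
    """Capacitance actually available after temperature and DC-bias derating."""
    return nominal_uF * count * temp_factor * dc_bias_factor

# 10x 22 uF X7R: maybe -15% over temperature, -40% under DC bias (illustrative)
x7r = effective_capacitance(22, 10, temp_factor=0.85, dc_bias_factor=0.60)

# 10x 22 uF Y5V: can lose most of its capacitance at temperature extremes and under bias
y5v = effective_capacitance(22, 10, temp_factor=0.40, dc_bias_factor=0.30)

print(f"X7R bank: {x7r:.0f} uF effective of 220 uF nominal")
print(f"Y5V bank: {y5v:.0f} uF effective of 220 uF nominal")
```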
Cheap aluminum electrolytic capacitors also lose capacitance at low temperatures, and their ESR can increase maybe tenfold; but even in conventional Al elyts the chemistry can be slightly modified (alcohol added?) to make them perform better at low temperatures. (I hope the capacitor plague is over by now.) More importantly nowadays, solid-polymer elyts don't seem to have that low-temperature problem at all, and they don't dry out at higher temperatures either - they last much longer. The downside is that solid-polymer elyts are not made for voltages above, say, 30 V, so you cannot use them on the mains primary of a PSU :-( So the PSU may well be the weakest spot in any computer, especially the primary side, which has to contain conventional Al elyts and typically sits in the hot exhaust airflow - which certainly doesn't help the caps' longevity.
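To put a number on "much longer": the usual rule of thumb for aluminum electrolytics is that service life roughly doubles for every 10 °C of operation below the rated temperature (a crude Arrhenius approximation). A quick sketch with made-up but datasheet-typical figures:

```python
# Rule-of-thumb lifetime estimate for an aluminum electrolytic capacitor:
# rated life roughly doubles for every 10 degC of operation below the rated
# temperature (a crude Arrhenius approximation - example numbers only).

def estimated_life_hours(rated_life_h, rated_temp_c, ambient_temp_c):
    return rated_life_h * 2 ** ((rated_temp_c - ambient_temp_c) / 10.0)

rated_life = 2000      # hours at rated temperature (a typical datasheet figure)
rated_temp = 105       # degC rating

for ambient in (45, 55, 65, 75):
    hours = estimated_life_hours(rated_life, rated_temp, ambient)
    print(f"{ambient} degC near the cap: ~{hours:,.0f} h (~{hours/8760:.1f} years)")
```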
Next, in order to compensate for low-temperature effects and gradual ageing, there may be room to design in more capacitance on a motherboard, just to be on the safe side, to have some headroom. Connecting more caps in parallel brings the added bonus of decreasing the ripple current through each capacitor, which decreases each capacitor's internal heating and so allows operation at a higher ambient temperature. (The ESR decrease from adding caps might actually translate quadratically into the per-cap temperature rise / derating, which in turn translates into the cap's service life along a roughly exponential curve.)
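To make that concrete, a back-of-the-envelope sketch - the ESR, ripple and thermal numbers are made up purely to show the scaling:

```python
# Why paralleling capacitors helps: the ripple current splits N ways, so the
# I^2 * ESR self-heating of each individual cap falls roughly as 1/N^2
# (and the total dissipation in the bank as 1/N).  All values below are
# made up purely to show the scaling.

ESR_PER_CAP = 0.025    # ohm, per capacitor
TOTAL_RIPPLE = 6.0     # A RMS carried by the whole bank
THERMAL_RES = 50.0     # degC per watt, core-to-ambient, per capacitor (illustrative)

for n in (1, 2, 4, 8):
    i_per_cap = TOTAL_RIPPLE / n
    p_per_cap = i_per_cap ** 2 * ESR_PER_CAP    # watts dissipated in each cap
    delta_t = p_per_cap * THERMAL_RES           # self-heating above ambient
    print(f"{n} caps: {i_per_cap:.2f} A each, "
          f"{p_per_cap*1000:.0f} mW each, ~{delta_t:.1f} degC self-heating")
```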
Effectively it all translates into attention to detail, and into board space - the space occupied by the caps and the space set aside for on-PCB heat dissipation. Any additional heatsinks (e.g. on the VRM FETs) mean mechanical design, which means substantial added cost (apart from designer headache) - so board designers typically let the FETs run rather hot, because they're relatively insensitive... And space is always at a premium, especially in ever-more-compact datacenter gear.
Let me suggest an interesting concept: if you could let your gear run at hotter ambient temperatures, you wouldn't have to use air conditioning (active refrigeration) nearly as much - you could rely on plain heat transfer (free cooling) more of the time. As far as I can tell, what prevents higher ambient temperatures is the relatively steep temperature gradient inside the hardware, i.e. poor heatsinking.
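The gradient point can be made concrete with a simple steady-state thermal budget - the numbers below are invented just to show the trade-off:

```python
# Simple steady-state thermal budget: T_junction = T_ambient + P * R_total,
# where R_total is the junction-to-ambient thermal resistance.  Lowering
# R_total (better heatsinking) directly raises the ambient temperature the
# part can tolerate.  The numbers are invented just to show the trade-off.

P_DISSIPATED = 95.0    # W, e.g. a CPU package
T_J_MAX = 95.0         # degC, maximum allowed junction temperature

def max_ambient(t_j_max, power_w, r_junction_to_ambient):
    """Highest ambient temperature that keeps the junction at or below t_j_max."""
    return t_j_max - power_w * r_junction_to_ambient

for r_ja, label in [(0.60, "mediocre heatsink + weak airflow"),
                    (0.45, "decent heatsink"),
                    (0.30, "good heatsink + strong airflow")]:
    print(f"R_ja = {r_ja:.2f} degC/W ({label}): "
          f"max ambient ~{max_ambient(T_J_MAX, P_DISSIPATED, r_ja):.0f} degC")
```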
Heatsinking seems to be a nasty can of worms for PCB and case designers. Fanless designs in particular are highly suspect in principle... It's interesting to see how different hardware vendors deal with this - to deduce who's willing to take radical, effective, systemic steps, and who resorts to eyewash... (put shiny galvanized heatsinks on the chipset and VRM, run some pointless heatpipes among them, then cover the biggest heatsink with a company logo badge).
I have to admit that in this respect, the top three name-brand hardware makers are generally in a higher league than the no-name market - and have been for many years.
One last observation, maybe: even no-name motherboards that started coming with solid-polymer elyts in the VRM last much longer. The transition from the "plagued" Al elyts to solid-polymer elyts also coincided with the transition from P4 Netburst to Core 2 Duo (I'm Intel-based, sorry), which overall ran much cooler. My favourite way of building a long-lasting computer used to be: take an LGA775 motherboard that has enough VRM oomph to support the 130 W P4s, and slot in a 45 nm low-end C2D or Celeron :-) It tends to take a BIOS update if the board has an older chipset.
I don't know the Dell servers, but with HP ProLiants (the ML300 series and most of the DL300/500 series as well) you can set the fan control behaviour ("Full speed" for rack operation in a data center, or "Thermally controlled" for offices) in the BIOS, as well as the power settings (e.g. a power-efficient mode). With the iLO 2 Advanced license you can also monitor and even limit the power consumption (in watts).
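If you want the power figure without going through the iLO web UI, something along these lines usually works over IPMI - assuming IPMI-over-LAN is enabled on the iLO; the host and credentials below are placeholders, and sensor names differ between iLO generations, so this just filters for anything power-related:

```python
#!/usr/bin/env python3
"""Read power-related sensors from a ProLiant iLO over IPMI-over-LAN.

Assumptions: ipmitool is installed, IPMI-over-LAN is enabled on the iLO,
and the host/credentials below are placeholders.  Sensor names differ
between iLO generations, so this simply filters for anything power-related.
"""
import subprocess

ILO_HOST = "ilo.example.com"   # placeholder
ILO_USER = "Administrator"     # placeholder
ILO_PASS = "changeme"          # placeholder

out = subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", ILO_HOST,
     "-U", ILO_USER, "-P", ILO_PASS, "sensor"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    # ipmitool 'sensor' output is pipe-separated: name | reading | unit | ...
    if "power" in line.split("|")[0].lower():
        print(line)
```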