Dell PowerEdgies built like Marilyn Monroe

A lot of new hardware features were rolled into Dell's PowerEdge 11G servers, which came out in 2009 and were enhanced throughout 2010. But until now, Dell has not discussed an important feature of the plain vanilla PowerEdge servers, one it has kept in its back pocket while it did further testing: they can run …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Mushroom

    BBU charging temps

    Not sure about the 11G, but the 10G would refuse to charge the RAID BBU at any temperature over 34°C.

  2. Anonymous Coward
    Flame

    Dell has always run hot anyway...

    At least their laptops have. It's nice that they've now designed their hardware to take it, though. I would worry about the hard drives: Google may like running them a little hotter, but I can't see 90 degrees with high humidity being great for a drive array, and I can't see high humidity being wonderful for tape backups either.

  3. Alphabet Soup 1
    Unhappy

    I'm disappointed

    I was hoping the vibration-damping mechanism on the disks might be jello on springs.

  4. Frank Rysanek
    Coat

    interesting stuff

    A couple of questions come to mind... do the machines contain a Flash-based data logger, keeping track of temperatures over the service life / warranty period of the machine? As part of AMT 12.0, maybe? :-)

    High-temp computing is interesting stuff. If you pay attention to board-level design, there is a lot you can do to help your design survive longer in operation at higher (broader) temperatures. And there's a lot you can *spoil* by careless board-level and system-level design.

    MLCC capacitors (for power-rail decoupling) are made of several dielectric materials with quite different sensitivity to temperature - even among "comparable" models in the tens-of-uF-per-unit range typically used to decouple low-volt, high-amp CPU power. Some drop to ~40% of their capacitance at -20°C; some are much more stable.

    Cheap aluminium electrolytic capacitors also lose capacitance at low temperatures, and their ESR can increase maybe tenfold - though even in conventional Al elyts the chemistry can be slightly modified (alcohol added?) to make them perform better in the cold. (I hope the capacitor plague is over by now.) More importantly nowadays, solid-polymer elyts don't seem to have that low-temp problem at all, and they don't dry out at higher temperatures either - they last much longer. The downside is that solid-polymer elyts are not made for voltages above, say, 30V, so you cannot use them on a mains PSU primary :-( So the PSU may well be the weakest spot in any computer, especially the primary side, which must contain conventional Al elyts and typically sits in the hot-air exhaust, which certainly doesn't help the caps' longevity.

    Next, to compensate for low-temp effects and gradual ageing, there may be room for designing in more capacitance on a motherboard than strictly needed, just to have some headroom. Connecting more caps in parallel brings the added bonus of decreasing the ripple current through each capacitor, which reduces its internal self-heating and so allows operation at a higher ambient temperature. (Each added cap cuts the per-cap ripple heating quadratically - P = I² x ESR - and that temperature difference then feeds into the cap's service life along a roughly exponential derating curve, as the sketch below shows.)
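
    To put rough numbers on that, here's a quick back-of-envelope sketch in Python. All the figures (ripple current, ESR, thermal resistance, rated life) are made-up illustrative values, and the "10-degree rule" (service life roughly doubles for every 10°C drop in core temperature) is just the usual rule of thumb, not any particular datasheet:

        # Back-of-envelope: how paralleling caps extends service life.
        # Illustrative values only; real datasheets give per-part ESR,
        # ripple ratings and life curves.
        def cap_life_hours(n_caps, i_ripple_total=10.0, esr_ohm=0.01,
                           theta_c_per_w=20.0, t_ambient=45.0,
                           rated_life_h=2000.0, t_rated=105.0):
            """Estimated life of each of n_caps parallel capacitors, in hours."""
            i_per_cap = i_ripple_total / n_caps       # ripple splits evenly (idealised)
            p_heat = i_per_cap ** 2 * esr_ohm         # I^2 * ESR self-heating, watts
            t_core = t_ambient + theta_c_per_w * p_heat
            return rated_life_h * 2 ** ((t_rated - t_core) / 10.0)

        for n in (1, 2, 4):
            print(n, "caps:", round(cap_life_hours(n)), "hours each")

    With those numbers, a single cap runs about 20°C above ambient and lasts ~32,000 hours; doubling up drops the rise to 5°C and roughly triples the life per cap. The quadratic I² x ESR term is why the first extra cap buys you so much.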

    Effectively it all comes down to attention to detail, and to the board space occupied by the caps and by on-PCB heat-dissipation area. Any additional heatsinks (e.g. on the VRM FETs) mean mechanical design, which means substantial added cost (apart from designer headache) - so board designers typically let the FETs run rather hot, because they're relatively insensitive... And space is always at a premium, especially in ever-more-compact datacenter gear.

    Let me suggest an interesting concept: if you could let your gear run at hotter ambient temperatures, you wouldn't need air conditioning (active refrigeration) nearly as much - plain heat exchange with the outside air would do for more of the year. As far as I can tell, what prevents higher ambient temperatures is the relatively steep temperature gradient inside the hardware, i.e. poor heatsinking.

    Heatsinking seems to be a nasty can of worms for PCB and case designers. Fanless designs in particular are suspect in principle... It's interesting to see how different hardware vendors deal with this - to deduce who's willing to take radical, effective, systemic steps, and who resorts to eyewash (put shiny galvanised heatsinks on the chipset and VRM, run some pointless heatpipes between them, then cover the biggest heatsink with a company logo badge).

    I have to admit that in this respect the top three name-brand hardware makers are generally in a higher league than the noname market - and have been for many years.

    One last observation, maybe: even noname motherboards that started coming with solid-polymer elyts in the VRM last much longer. The transition from the "plagued" Al elyts to solid-polymer elyts also coincided with the transition from P4 Netburst to Core2Duo (I'm Intel-based, sorry), which overall ran much cooler. My favourite way of building a long-lasting computer used to be: take an LGA775 motherboard with enough VRM oomph to support the 130W P4s, and slot in a 45nm low-end C2D or Celeron :-) It tends to need a BIOS update if the board has an older chipset.

  5. Matt Bryant Silver badge
    Thumb Up

    Nice!

    Should give some savings on cooling. On the downside, I'm sure there are some twits who will try running the things under their office desks.

  6. Santa from Exeter
    Joke

    Tit el

    "Insert "nice rack" joke here"

    Well, I'd serve 'er

    Yeah, the one with the copy of Gentlemen Prefer Blondes in the pocket, Ta

  7. Velv
    FAIL

    Great, but ...

    What about the rest of the kit in the DC that isn't certified to the same level?

  8. Anonymous Coward
    Boffin

    no title

    It may not reduce lifetime, but there's no mention of whether running hot affects performance.

  9. devlinse

    I'd favour quiet over performance

    I've a T610 at home and boy is it loud. While it's nice that the Dell tech is robust, I'd actually prefer quieter operation at the expense of performance. Guess you can't please all the people all the time though.

    1. Davidoff
      Thumb Up

      Quiet and slower

      I don't know the Dell servers, but with HP ProLiants (the ML300 series and most of the DL300/500 series as well) you can set the fan-control behaviour in the BIOS ("Full speed" for rack operation in a data center, "Thermally controlled" for offices), as well as the power settings (e.g. a power-efficient mode). With the iLO 2 Advanced license you can also monitor and even limit the power consumption (in Watts).
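
      If you'd rather script that than click through the iLO pages, here's a minimal sketch over plain IPMI. The hostname and credentials are placeholders, and it assumes ipmitool is installed and the BMC actually exposes fan and power sensors over IPMI-over-LAN:

          # Minimal sketch: dump fan/power sensor readings from a BMC via ipmitool.
          # Placeholder host/credentials; assumes the BMC exposes these sensors.
          import subprocess

          def read_sensors(host, user, password):
              out = subprocess.run(
                  ["ipmitool", "-I", "lanplus", "-H", host,
                   "-U", user, "-P", password, "sensor", "list"],
                  capture_output=True, text=True, check=True).stdout
              for line in out.splitlines():
                  fields = [f.strip() for f in line.split("|")]
                  if len(fields) >= 3 and ("Fan" in fields[0] or "Power" in fields[0]):
                      print(fields[0], "=", fields[1], fields[2])

          read_sensors("ilo.example.com", "admin", "password")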

  10. Anonymous Coward
    Linux

    Good stuff

    Good to know for when the aircon in one of our DCs fails (like it does every summer) and we wonder what we're paying Level 3 for!

    On the downside, most of our kit is 1850, 2650, 1950 and 2950...

  11. Anonymous Coward
    Coat

    "Insert "nice rack" joke here"

    I was trying to see the rack,

    but some strange woman's standing in the way.

  12. G Olson
    Pint

    DC protection more than HVAC

    That's nice. Will Dell now spend time researching how to protect servers against dust, gravel, mice, spiders, crickets, snakes, lizards, Vice Presidents, and Marketing? Cushy controlled datacenters provide more than just heat and humidity control.
