Blade servers 101

How are blade servers different from their rack-mounted counterparts? The blade server trend started about ten years ago when RLX launched its system of servers built into a chassis that slotted into standard 19-inch racks. The idea is that you can install a blade server or any other type of device that would fit into a …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    virtualised

    I saw an array of HP blades the other day that don't have any hard disks, just a port for a USB key on the inside so you can stick in your hypervisor of choice. The main advantage is that it allows them to pack in double the number of memory slots, so you can use twice the amount of cheap RAM and still get both speed and capacity advantages on a budget.

  2. John Brookes

    Double that density!

    Actually you can get 32 servers in a c-class chassis - 16 2x220 blades....

    Other than that, you've got the pros and cons bob-on

  3. ToddRundgren

    RLX did not invent the blade server, I did

    "The blade server trend started about ten years ago when RLX launched its system"

    Wrong. I invented the blade server in 1999, a good two years before RLX.

  4. danolds

    Rear door heat exchangers

    ...can do a great job of handling the high heat output from blades. The downside is that you have to plumb them into a chilled water source. The upside, however, is that they really work well. I had a vendor demo one for me a few years ago. The ambient temp in the room (at the front of the system) was about 75F. The temperature coming out of the back of the rear door was a delightful 68F. If I remember correctly, the input temperature of the water wasn't all that cold - maybe in the 50s F. The cost wasn't too bad either, around $15k for the hardware - plumbing extra, of course.

    1. The Cube
      Stop

      Erm, no, they don't

      Rear door heat exchangers are a very expensive retrofit, and they obstruct the airflow out of the servers, which causes a whole range of problems: air recirculation within the rack is very hard to stop with rear door clutter, and the server fans may not be able to drive enough air through the rear door obstruction. Many of them come with extra (non-resilient) fans to try to mitigate this, but that just means more fan power in your cooling system; the big fans in CRAC units move a lot more air per kW than the little ones on the door units.

      Of course, doing this also scuppers any attempt you had made to improve the efficiency of the main CRAC units, because they need a high supply-to-return temperature difference to be efficient. The blades gave you that high delta-T, but you've buggered it up by taking the hot air from the blades and cooling it down locally.

      The other big problem is that they generally need a heat exchanger to feed them water, meaning the chiller plant has to work harder to provide colder water, which then ends up warmer thanks to the heat exchanger and so does less cooling - and don't forget the extra water circulation pumps either...
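
      To put rough numbers on the delta-T point, here is a back-of-the-envelope sketch (the airflow and temperature figures are made-up illustrations, not measurements): the heat an air stream carries scales with its delta-T, so pre-cooling the exhaust at the rack roughly halves what the same CRAC airflow can remove.

        # Back-of-the-envelope sensible-heat sums: Q = rho * V * cp * dT.
        # All figures below are illustrative assumptions, not measurements.

        RHO_AIR = 1.2     # kg/m^3, density of air at roughly room conditions
        CP_AIR = 1005.0   # J/(kg*K), specific heat of air

        def heat_removed_kw(airflow_m3_per_s, delta_t_k):
            """Heat (kW) carried by an air stream with the given delta-T."""
            return RHO_AIR * airflow_m3_per_s * CP_AIR * delta_t_k / 1000.0

        airflow = 5.0  # m^3/s through the CRAC (assumed)

        # Hot aisle fed straight from the blades: big supply-to-return delta-T.
        print(heat_removed_kw(airflow, 15.0))  # ~90 kW

        # Same airflow after a rear door has already cooled the exhaust locally.
        print(heat_removed_kw(airflow, 7.0))   # ~42 kW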

  5. Brennan Young
    Thumb Up

    Thanks!

    I had already worked out the general details from reading the Reg for nearly 10 years, but seeing as I never have to deal with exactly this area of IT (I'm in software), I really appreciate that El Reg takes the time to define and illustrate a rather common kind of occult technology. I assume it's not exhaustive, nor even entirely accurate, but who cares! I am definitely better off than "none the wiser" after reading this article. Another, please!

  6. Anonymous Coward
    Linux

    Blade systems

    Deployed 2 full racks of HP P class blades ages ago. Single biggest issue was heat.

    Learned many lessons from that installation, including the most important one - hammer out details before you deploy (something management still has trouble with).

    Blades are great for utility computing - we've gotten away from one-blade-one-app deployments; we now manage them just like any other system and add apps to utilize all the resources. And you can put some serious resources on both HP's and IBM's newer blades. One territory we're now looking at is using blades + virtualization to deploy smaller apps. This is producing some very nice utilization numbers and fairly good ROI.

    With tools like cfengine, RH KVM and/or VMware ESXi, Altiris for winders, etc., we can roll out substantial application installations in days, given that the hardware was rolled out with solid planning behind it.

    I would not suggest blades for every computing environment, but in large enough enterprises with sufficient standardization of deployment requirements they can reduce turnaround and complexity, resulting in improved ROI. It DOES need solid planning and architecture behind it, though.

  7. Anonymous Coward
    WTF?

    Err, what?

    "if one system breaks, there is another ready to can take over."

    Ignoring the bizarre English, since when were blades not tied to the OS running on them? If a server fails, whatever was running on it stops. There's no magic dust which moves it to another piece of hardware - that's virtualisation's job.

    On another note, you do need to know what you're doing with blades. It's still somewhat easy to back yourself into a corner, options- and infrastructure-wise.

    1. Anonymous Coward

      Err...

      Cluster two blades: if one fails, a spare can be automatically deployed and brought online with no service loss.

    2. Michael Duke

      Autoprovision + Boot from SAN

      With tools like HP's Insight Control Environment (ICE) and FC Boot from SAN configurations, it is a matter of seconds to get a spare blade to take the load of a failed blade running a non-virtualised application.

      Yes, there will be a small outage, but for applications that are not cluster-aware or have to run on a non-cluster OS it is a great solution.

      30-60 seconds + OS boot time to recover an application.

  8. blaine gaither
    Happy

    Wasn't Superdome HP's first blade server?

    I think that it was introduced in 2000.

    1. Clutch
      Boffin

      Re: Superdome

      HP's Superdome has only become blade-based in its most recent (Tukwila) iteration, and even then it is several blades plus other stuff in one or more cabinets. From when it was first introduced in 2000 on PA-RISC up until last year, the Superdome was a cell-board-based system housed in multiple cabinets. I think HP's first blade was the bc1100 in December 2001. The first ProLiant blade (pre-merger) was the Compaq ProLiant BL10e in January 2002.

  9. Anonymous Coward
    IT Angle

    AdvancedTCA

    How do you all feel about AdvancedTCA (ATCA)?

    http://en.wikipedia.org/wiki/Advanced_Telecommunications_Computing_Architecture

  10. Mikel
    Boffin

    No mention of HP blades?

    @article - The new architectures don't burn nearly as many watts as the old ones do, and DC power helps a bit, as do the new Platinum power supplies. HP sells more blades than anybody, and has more BladeSystem choices than anybody too. Modern processors and RAM are available in LV versions that don't take nearly as much power. Consider the maximum capacity of 4 10U C7000 chassis in a 42U rack, each populated with 16x BL2x220c G7 blades. Each of those blades is two servers. Each server can be configured with up to 48GB of LV RAM (double that when the LV 32GB DIMMs are validated) and, for example, a pair of the 60W 6-core Intel Xeon L5640 2.4GHz processors with hyperthreading and all the usual goodies. That's 4x16x2x2x6 = 1536 Intel Xeon cores or 3072 threads in one rack. You're going to burn the industry-standard 24K watts for the rack, yes, but you'll be able to get a lot for those watts too - VM hosting will oversubscribe those 4x at least, and serve hundreds or thousands of accounts on each shared VM.

    Say in a web hosting VM setup as described you put 1000 accounts per core (500 per thread), with some software to migrate accounts between VMs to balance demand (most accounts do nothing at all, but the active ones need to be distributed between VMs to give good performance). One rack would be good for 1,536,000 accounts, of which 7500 were very active. It would pretty much be paid for by subscriptions within the first week, yielding great profits from then on.
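
    For anyone who wants to check the arithmetic, here it is as a quick sketch (the figures are the ones quoted above, so treat them as illustrative rather than a sizing guide):

      # Quick sketch of the rack capacity sums above. Figures are the ones
      # quoted in this comment and are illustrative, not a sizing guide.

      chassis_per_rack = 4        # 10U C7000 chassis in a 42U rack
      blades_per_chassis = 16     # BL2x220c G7
      servers_per_blade = 2       # each BL2x220c is two servers
      sockets_per_server = 2      # pair of Xeon L5640
      cores_per_socket = 6

      cores = (chassis_per_rack * blades_per_chassis * servers_per_blade
               * sockets_per_server * cores_per_socket)
      threads = cores * 2                  # hyperthreading

      rack_power_w = 24_000                # quoted rack power budget
      accounts_per_core = 1_000            # web-hosting assumption above

      print(cores)                         # 1536
      print(threads)                       # 3072
      print(rack_power_w / cores)          # ~15.6 W per core
      print(cores * accounts_per_core)     # 1,536,000 hosted accounts per rack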

    Modern datacenters are moving to 28K watts per rack. That C7000 solution is about 15.6 watts per core (and 4GB of RAM per core) for the whole system. It's no competition for the K computer in flops/W, but it does a respectable job of hosting industry-standard VMs without setting your rack on fire. It's almost as if HP and Intel are thinking a bit about this watts-per-rack issue - whodathunkit? There are even management controls so you can tell your C7000 not to burn more than 6kW or whatever limit you prefer, and it will downstep processors if it has to in order to prevent excursions.

    At 160Gbps stock with FlexFabric or Flex10 interconnects, a single C7000 chassis exceeds the network bandwidth of Wikimedia, so upstream connectivity isn't an issue unless your switches ain't got the grunt. A rack of them is 640Gbps of upstream bandwidth, which is - to be subtle - quite a bit. In that case each server has four "virtual NICs" to divvy up its bandwidth in 10Mb increments and, if you're totally retro, up to one virtual FC channel at up to 8Gbps, depending on what's left after Ethernet is allocated (most folks will allocate 4Gbits for FC and the rest to Ethernet). To enable FC on the blades, FC uplinks from the interconnect are of course required, which diminishes the net available Ethernet bandwidth - which is not in scarce supply, so it's a small loss. If you enable FC, it's a one-hop deal - the FC port on your FlexFabric interconnect has to connect to an FC switch. Routed or switched FCoE is an emerging standard that isn't fully baked; for now, Data Center Ethernet is a one-hop deal.
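
    A rough sketch of how those bandwidth numbers fall out (the 10Gb-per-port figure and the 4Gb FC allocation are assumptions taken from the description above):

      # Rough bandwidth sums for the FlexFabric figures above.
      # Per-port size and the FC allocation are assumptions for illustration.

      chassis_per_rack = 4
      uplink_per_chassis_gbps = 160                       # quoted stock figure per C7000
      print(chassis_per_rack * uplink_per_chassis_gbps)   # 640 Gbps per rack

      # Carving one (assumed) 10Gb server port into virtual NICs, Flex-10 style.
      port_gbps = 10.0
      fc_alloc_gbps = 4.0                                 # "allocate 4Gbits for FC"
      print(port_gbps - fc_alloc_gbps)                    # 6.0 Gbps left for Ethernet vNICs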

    @AC 22nd June 2011 17:22 GMT - Yes, all the new HP blades have internal USB and SDHC for booting from USB or SD. USB and SDHC go up to 32GB or more, so a respectable OS is possible, but most people PXE boot or use ESXi, which doesn't need much. They can also come standard with 10G FlexFabric (FCoE), and have InfiniBand QDR and FusionIO-based PCIe-attached SSD storage available. They support PCIe-attached sidecars if you're into the GPGPU thing, though that cuts your CPUs per rack in half. Most of the HP blades now support the new SFF SAS SSDs, which go up to 800GB each and really pump the IOPS, but they are quite spendy.

    @everybody talking about thermals - If you feed the racks with outdoor ambient air and vent to stacks, then there is no thermal issue unless you chose, for some reason, to locate your datacenter in Phoenix, AZ. In some cases the delta between ambient and exhaust can be economically recaptured, or the hot exhaust can be used in other ways. There is no good reason for chilled datacenters. The servers work at 40C (104F), and the hotter they are, the better they work, right up until they fail. Just put your servers where it doesn't get that hot and feed them filtered ambient air with a blower. For example, if you're somewhere that gets really freaking cold, put ducts under the sidewalks and parking lot and use your servers to heat the patio. If the load is light and the snow is deep, run benchmarks or fold some proteins. If you're somewhere that gets hot, and you're not like Google and can't shift load to a cooler datacenter, a heat pump with its cold side on the inlet, between the filter and the server intake, should do on the rare occasions it gets that hot - and it can be OFF the rest of the time. There is no good reason why I need to wear a sweater in your datacenter. That's just wasteful. Servers are not some odd alien species that thrives in cold air.

  11. Little Me

    time warp?

    Can someone tell me if it is still 2011? I feel like I am back in 2002 when such issues were new news.

    I remember the wonderful expressions on datacentre managers' faces when they realised just how many kW a rack could potentially draw. Priceless.

  12. John Brookes

    Thermals

    @Mikel - You're right in your area of interest, but the point of the article was that how appropriate blades are for you is application- and datacentre-specific. If you're hammering the cores and memory constantly, small deltas can mean significant spot heating, leading to throttling, leading to crappy performance - if one of 1000+ cores is throttling during a tightly-coupled parallel job, you're wasting half your machine. Which might only last half as long - or, equivalently, require twice the maintenance - because you're pushing the envelope. Datacentre (thermal) environment is therefore a crucial, expensive and non-trivial consideration.
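
    To illustrate the throttling point with made-up numbers: in a tightly-coupled, bulk-synchronous job every step waits for the slowest participant, so a single core throttled to half speed drags the whole machine down to half throughput.

      # Toy model of a tightly-coupled (bulk-synchronous) parallel job:
      # every iteration waits for the slowest core, so one throttled core
      # sets the pace for all of them. Numbers are made up for illustration.

      def job_throughput(core_speeds):
          """Relative throughput of a synchronised job across the given cores."""
          return len(core_speeds) * min(core_speeds)

      healthy = [1.0] * 1024            # 1024 cores at full speed
      one_hot = [1.0] * 1023 + [0.5]    # one core thermally throttled to 50%

      print(job_throughput(healthy))    # 1024.0
      print(job_throughput(one_hot))    # 512.0 -- half the machine wasted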

    @danolds, The Cube - you're both right: water-cooled rear doors CAN do a great job, but you DO have to be damn sure that the fans can pull (at the very least) sufficient air through. If they can't, they're worse than useless due to internal recirculation.

    Oh, and speaking of doors: for maximal-density, heavily-hammered blades, front doors considered silly.

  13. Captain TickTock
    WTF?

    WTF

    I thought this would have made a good WTF series article.
