Supermicro knocks big boys with Superblades

Supermicro appears to have outclassed the Tier 1 server vendors with its latest blade design. Unveiled at Computex, the SuperBlade system holds 10 blades in a 7U chassis. That compares favorably with Sun Microsystems' latest blade chassis, which boasts 10 blades in a 10U chassis. It also stacks up well against HP's c-Class …

COMMENTS

This topic is closed for new posts.
  1. Rune Moberg

    I hope Supermicro have improved

    We have some Supermicro servers that are between one and four years old.

    Besides the non-working SATA RAID controllers (mirrors suddenly broken for no apparent reason, an OS that wouldn't boot, flawed Adaptec drivers, etc.), it struck me that paying extra for their remote management product was a pointless exercise. Their 6013A-T model is a cruel mistress.

    Enter HP. I can now hook up using a web browser, fiddle with BIOS settings and install an OS from scratch, as if I were actually at the same physical location. (I'm not; the site is 15 km from here.)

    I manage six HP blades with ease, whereas our Supermicro servers depend on Remote Desktop Connection at all times. With tweaking we could perhaps get control over the power button (IPMI), but... no cigar.
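
    The kind of out-of-band power control described above is usually scripted against the BMC with ipmitool. A minimal sketch, assuming the BMC is reachable over the LAN and ipmitool is installed; the address and credentials below are hypothetical placeholders:

        # Minimal sketch of remote power control over IPMI, assuming ipmitool is
        # installed and the server's BMC is reachable over the LAN. The host, user
        # and password are hypothetical placeholders.
        import subprocess

        BMC_HOST = "10.0.0.42"   # hypothetical BMC address
        BMC_USER = "admin"       # hypothetical credentials
        BMC_PASS = "secret"

        def ipmi(*args):
            """Run an ipmitool command against the remote BMC and return its output."""
            cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
                   "-U", BMC_USER, "-P", BMC_PASS, *args]
            return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

        print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
        # ipmi("chassis", "power", "cycle")         # hard power-cycle a hung box
        # ipmi("chassis", "power", "on")            # bring it back up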

    Maybe Supermicro have improved. I don't know. I sure hope so.

  2. Anonymous Coward

    Haven't I seen this before?

    Another clone of a Dell / Fujitsu-Siemens OEM'd product? There's some resemblance... http://www.quantatw.com/Quanta/english/product/qci_es.aspx

  3. David Braswell

    Blade article with no mention of IBM!?!

    You cannot compare any new blade system to the "Big Boys" if you don't include IBM. I have worked with HP, Dell, Penguin Computing and IBM BladeCenters. I know I would never buy a system of this type from a second-tier vendor such as SuperMicro or Penguin Computing (horror stories for sure). I wouldn't touch Dell either; HP rates a little higher but still doesn't compare to IBM. And I don't want to hear anything about price, because IBM gear is cheaper than you think. The years of engineering IBM has put into this and other technologies to create a holistic system like the BladeCenter make it simply a no-brainer. Just for starters, compare the power supply and fan modules between IBM's and the other vendors'. IBM stepped back, looked at the need and came up with an approach specifically for the BladeCenter. Most other vendors seem to be retooling existing technologies and hardware to fit an enclosure.

  4. Fazzi Auro

    CPU density isn't everything

    Typically, blade installations don't populate all the blade slots because the datacenter can't handle that much power and cooling. What's more important is blade management, servicing, etc. That's where IBM/HP have an edge over everyone else. Not many customers would trust anyone else, even Dell, let alone Supermicro, on blades. Sun's new 10U blade offerings are definitely superior to this. Come on, only 8 DIMM slots for 4-socket quad-core CPUs? If I need 32G of RAM, I have to pay an 80% premium for 4G DIMMs over 2G DIMMs. I can buy a 2-socket 32G blade a lot cheaper from Sun than from Supermicro.

    I can go to 64G of RAM with Sun's blade, which allows me to run a lot more virtual machines.
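
    A back-of-the-envelope illustration of the memory-cost point above. The per-DIMM price is a made-up placeholder, and the calculation assumes the quoted 80% figure is a per-gigabyte premium, which is one reading of the comment:

        # Rough cost of reaching 32 GB on an 8-slot blade vs. a 16-slot blade.
        # PRICE_2GB is a hypothetical placeholder; the 80% premium on 4 GB DIMMs
        # (assumed here to be per gigabyte) is the figure quoted above.
        PRICE_2GB = 100.0                  # hypothetical price per 2 GB DIMM
        PRICE_4GB = 2 * PRICE_2GB * 1.8    # twice the capacity at an 80% per-GB premium

        cost_8_slots  = 8 * PRICE_4GB      # 8 x 4 GB  = 32 GB on an 8-DIMM blade
        cost_16_slots = 16 * PRICE_2GB     # 16 x 2 GB = 32 GB on a 16-DIMM blade

        print(f"32 GB with  8 slots: ${cost_8_slots:.0f}")    # 2880 with these placeholders
        print(f"32 GB with 16 slots: ${cost_16_slots:.0f}")   # 1600 with these placeholders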

    What about the chassis design? IBM and HP have built an entire ecosystem with a variety of offerings in plug-in IO cards, switches, blade management modules and a strong blade management framework. Sun is relying on the PCI Express ExpressModule (EM) standard, which is at least as good an offering as IBM's and HP's; overall the Sun blade chassis design is commendable, and it supports IO hotplug.

    Sun's x86 server management software seems to be well reviewed, and plays nicely with IBM's and HP's management tools.

    As far as I know, IBM's blade chassis is OEM'd from Intel. 80% of the blade market belongs to IBM and HP, so they are experienced enough in this market. Not easy for a newcomer to crack - and there is little chance for Supermicro to break in here.

  5. Jack Pastor

    There are really two Tier-1 vendors dividing the market today.

    Hey Dave - does that IBM BladeCenter come with a spell-checker?? <g>.

    Anyway, to your point, blades are an investment as much in the vendor as in the technology. No sense buying a bunch of chassis just to have the vendor decide they're not getting enough market share to continue producing blades to fill them.

    For all their advantages, they are, and will likely remain, a highly proprietary architecture. IBM and HP have pretty much divided up the market. That doesn't mean that Sun won't survive, but I would hate to have made the decision to buy a bunch of 8000 chassis only to have the 6000 come out less than a year later.

    Dell has already exited and re-entered the market, and is now allegedly planning a whole new architecture in the fall.

    As for DIMM density, I assume everyone will eventually provide the maximum allowed by chipsets, but what happens when Intel dumps Fully Buffered DIMMs (the technology, with its associated chipset, that allows that killer density)?

  6. Fazzi Auro

    DIMM density

    I don't think the Sun Blade 8000 chassis is a conventional blade. The blade market mostly plays in the 2-socket region, so the Sun Blade 8000 is not really in competition with HP/IBM blades. I doubt Sun did well with that chassis; not many customers are deploying dedicated 4-socket blades, and the price premium you pay over traditional 2-socket blades makes it cost-prohibitive. The Sun Blade 6000 is in the same field as the HP/IBM blades, and it does have some inherent advantages over the HP/IBM blade design, so customers will surely take a look at the Sun 6000 blade design before making a decision.

    With regard to DIMM density, Opteron supports 8 DIMMs/socket today - so it's not unique to FBDIMM (in fact Sun's new AMD blades have 16 DIMM slots as well). With multi-core CPUs, DIMM density matters more than ever. So even if Intel does dump FBD, 8 DIMMs/socket is guaranteed to be supported. What's more important is for the system vendors to make space for all those DIMM slots on their boards and supply the required power and cooling. HP's full-height c-Class blades have 12 DIMM slots. If the blade form factor is small, it just becomes harder to increase the DIMM density, and even harder to cool it. So while HP can easily come out with a 16-DIMM full-height blade in their 10U chassis (provided they can cool it with their cooling mechanism), it's harder to do so in Supermicro's 7U form factor. But surely kudos goes to Sun for being amongst the first to support the highest-configuration blades.
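
    The slot arithmetic behind the paragraph above, using only figures already mentioned in the thread (8 DIMMs per Opteron socket, 12 slots on a full-height c-Class blade, 2 GB and 4 GB DIMMs):

        # Maximum memory per blade = sockets x DIMM slots per socket x DIMM size (GB).
        def max_memory_gb(sockets, dimm_slots_per_socket, dimm_gb):
            return sockets * dimm_slots_per_socket * dimm_gb

        print(max_memory_gb(2, 8, 4))   # 64 GB: 2-socket blade, 8 slots/socket, 4 GB DIMMs
        print(max_memory_gb(2, 6, 4))   # 48 GB: a 12-slot full-height blade with 4 GB DIMMs
        print(max_memory_gb(2, 4, 4))   # 32 GB: an 8-slot blade with 4 GB DIMMs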

  7. Jack Pastor

    First, but not the last...

    No, the 8000 was NOT a conventional blade, but it represented Sun's entry into the market a year ago. When it did not go well, they came up with a more viable (incompatible) alternative. Anyone purchasing blades would naturally be wary of this strategy when investing in infrastructure.

    Bechtolsheim is a smart guy, and the 6000 is a nice design, but they still lose on overall density. Not all blade customers are running full-blown virtualization clusters. Doubtless, HP will be able to shoehorn 16 DIMMs into their next generation (and cool them). If they went diskless, they could probably do this in the half-height models and just boot from SAN.

    Sun did some nice I/O design, but HP's Virtual Connect is pretty unique. IBM and HP each own approximately 40% of the blades market, and have each been in it for over five years.

    Also, they are both on second-generation designs, having learned a few things from their earlier attempts. Sun may have an advantage for a couple of months with hard-core VMware shops that want bargain memory. In the long run, it will probably be less expensive to stick with IBM or HP, pay a premium for 4GB DIMMs and take advantage of the mature manageability tools and robust infrastructure (switches, SAN certification testing, etc.).

    Customers are likely to use Sun's checklist item as pricing leverage against IBM and HP, but very few server RFPs attach 90% of their weight to memory capacity.

    Tier-2 vendors can buy blade products from Taiwanese design houses, and maybe sell a few to Tier-3 customers that HP and IBM (and Sun) don't want to bother with anyway. Any enterprise of reasonable size and stature will likely have a whole team of decision makers add their $.02 so that no single feature will make or break the adoption of a blade standard.

    No room for one-trick ponies, when you bet your business on a given technology.

  8. Fazzi Auro

    Sun blade 8000 and 6000

    Jack, the Blade 8000 was introduced in July 2006, and now the Blade 6000 has been introduced in June 2007. Are you saying that Sun had time to see whether the Blade 8000 was doing well, then start designing the Blade 6000, and do all this within an 11-month period? That can't be true. They were probably working on both designs at the same time, but decided to bring the 8000 to market first with the resources they had (for whatever reason, I don't know). Each of these servers is a very well-engineered design, and there is no way you can do them in a short time, taking into account the many rigorous steps the Tier-1 OEMs take - design, verification, testing, beta testing and early access, certifications, etc. There is no indication that Sun is going to EOL the Blade 8000 design; both coexist, with the Blade 6000 aimed at the volume market and the Blade 8000 being a niche product.

    Both the HP and IBM blade designs are centered around the chassis. Just because HP and IBM have been doing this for years and are on their second-generation designs, does that mean they have the best blade designs? From a product maturity perspective, I agree they are mature enough - but that can't be the only reason why a new customer shouldn't explore alternative designs and choose the one that suits them best. Please give me some reasons why you think HP's and IBM's second-generation designs are better than Sun's first-generation design.

    HP Virtual Connect only fixes part of the problem; this feature is already built into the Sun blade design (Ethernet NEM and Fibre Channel NEM). Still, how do you service a blade when an IO card installed in it fails? It's a cakewalk on Sun's blade chassis (hotplug from the rear of the chassis), not so on HP/IBM. The overall IO design is much more open in Sun's blade: you get to buy third-party plug-in IO modules, with no need for HP/IBM-specific switches or mezzanine cards. I am pretty sure HP/IBM make a heck of a lot more money on these added modules than on the blades alone. Why would the HP/IBM solutions be less expensive overall? And what about the ability to use a unique IO card in each blade? Certification aside, the Sun blade can easily integrate into an HP or IBM management infrastructure. Do you have any data to suggest that HP can indeed make a half-height blade with 2 CPU sockets and 16 DIMM slots - and power it and cool it effectively? And I guess they can also easily do a 4-socket full-height blade with 32 DIMM slots :-) Sure, DIMM density isn't everything, but it plays a good role when you want to quote a lower price on the RFQ.

    With any new product, I don't deny it's hard to break into a market owned by two giants. I truly believe Sun's Blade 6000 is a breakaway design and very different from current offerings. I see no reason why a new RFP should simply skip over the Sun offering without doing a detailed cost-benefit analysis, basing the decision instead on how long HP/IBM have been doing blades and how much of the market they control. I am not predicting that Sun will be able to capture even 5% of the blade market in the next 2 years; I only believe that this is a product that deserves some credit for being different and will surely influence how the blade market develops going forward.

  9. Jack Pastor

    You are obviously a Sun fan!!

    I have nothing against Sun. They do some brilliant stuff (despite the fact they are run by software guys who don't see the need for RAID controllers because SOLARIS can do that in SOFTWARE...).

    I'm not suggesting that they designed the 6000 specifically to address shortcomings in the 8000 series. Hopefully they made this roadmap clear to their customers, but they sure did not publicize it much before the announcement.

    As for IBM's and HP's designs, obviously longevity plays a role in maturity of support. Both IBM and HP (via Compaq) have a long history of providing "commodity" x86 servers, specific skill sets that Sun lacks. They both have long histories of partnerships with OS vendors (beyond their own Unix flavors) and of server-specific management features (especially software tool sets), which are necessary when deploying servers in volume and around the globe. Fitting into an IBM or HP infrastructure is one thing, but reporting up to a Configuration Database (as ITIL suggests is a best practice) can only be done with MIBs that reach deep enough into the hardware to provide such detail. Proprietary they may be, but both IBM and HP spend a fortune developing their respective management tools, and Sun and Dell do not. Perhaps CIOs who make purchasing decisions don't care, but when admins in the trenches who want to get home on time have a say, they will pick a server with a robust management infrastructure.
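
    As a rough illustration of what reporting hardware detail up to a CMDB via MIBs involves, here is a minimal sketch that shells out to net-snmp's snmpwalk against the standard SNMPv2 system subtree. The hostname and community string are hypothetical, and real vendor management MIBs (where the deep hardware detail lives) would need their own OIDs:

        # Minimal SNMP inventory sketch, assuming net-snmp's snmpwalk is installed.
        # Host and community string are hypothetical; 1.3.6.1.2.1.1 is the standard
        # 'system' subtree (sysDescr, sysName, ...). Vendor MIBs expose far more
        # hardware detail under their own OIDs.
        import subprocess

        HOST = "blade-enclosure.example.com"   # hypothetical management address
        COMMUNITY = "public"                   # hypothetical community string

        out = subprocess.run(
            ["snmpwalk", "-v2c", "-c", COMMUNITY, HOST, "1.3.6.1.2.1.1"],
            capture_output=True, text=True, check=True,
        ).stdout
        print(out)   # the shallow end of what a CMDB would ingest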

    I'm not sure you really understand what Virtual Connect provides... NEMs are just aggregation modules. Please read up on what VC actually does. It truly is a unique value proposition (and even IBM has to cobble together a bunch of Cisco gear and shims to make anything similar work). In the HIGHLY unlikely event that an I/O module should fail, there are sufficient redundant ports on any given blade to accommodate this, and with VC it is pretty simple to migrate any specific workload to a "hot spare" blade while you repair the module. The MAC and WWN belong to the ENCLOSURE SLOT and not the physical blade.
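
    A toy model of that last point, that the network identity follows the enclosure slot rather than the physical blade. Every slot number and address here is made up purely to illustrate the concept, not taken from any real Virtual Connect configuration:

        # Toy illustration of slot-owned identity: the MAC/WWN profile is keyed by
        # enclosure slot, so swapping the physical blade in a slot does not change
        # what the LAN and SAN see. All values below are invented.
        slot_profiles = {
            1: {"mac": "00:17:a4:77:00:01", "wwn": "50:06:0b:00:00:c0:00:01"},
            2: {"mac": "00:17:a4:77:00:02", "wwn": "50:06:0b:00:00:c0:00:02"},
        }

        def replace_blade(slot, new_blade_serial):
            """A replacement blade inherits the slot's profile; fabric zoning stays untouched."""
            p = slot_profiles[slot]
            print(f"Blade {new_blade_serial} in slot {slot} presents MAC {p['mac']} / WWN {p['wwn']}")

        replace_blade(1, "SGH12345XY")   # hot-spare swap; identity stays with the slot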

    I have no data to suggest anything, but the real estate I have seen on half-height blades suggests that removing two hot-plug drives and a BBWC module could conceivably leave room for 16 DIMM sockets in that form factor. Perhaps it WOULD be impossible to cool, but this is all hypothetical.

    As far as cost-effective I/O modules are concerned, it appears to me that, at least when you look at HBAs, the blade form factors cost HALF what a stand-up PCI card costs. In 25 years of working with servers, I/O component failure has probably been one of the LEAST significant issues I have witnessed. Again, give Sun points on an RFP, but not enough to necessarily seal the deal as a "must-have" (unless they write the RFP themselves).

    I do maintain that a portfolio of half- and full-height blades gives a customer more choices to accommodate specific tasks.

    Sun will obviously sell some of these into their installed base, and prevent some further erosion of their x86 business. They certainly deserve that for their efforts.
