* Posts by Philip Buckley-Mellor

5 publicly visible posts • joined 3 Jun 2008

IBM boasts of full 8Gb Fibre Channel for blades

Philip Buckley-Mellor

HP have a c-class 8Gbps mezz in final testing now

So, given this is just a minor rework of an existing third-party chip, it's no big deal now, is it?

HP hunts down 'rare' BladeSystem problem

Philip Buckley-Mellor

This is, again, very old news indeed

We replaced all of ours in November last year after this advice was released.

Rackable stays horizontal with x64 servers

Philip Buckley-Mellor

I think the market needs both approaches

and both approaches can be implemented in good or bad ways. It really all comes down to budgets, time to deliver working systems and the integration of an organisation's server & LAN/SAN teams.

For my company, the ability to pre-fill racks with relatively cheap empty blade chassis, power them up, run the very few LAN/FC cables required back to the central switches, pre-configure the switches and slam in blades as they're needed/delivered outweighs the 'cable-as-you-go' approach that I believe Rackable kit generally uses. That's largely because there's a lot of latency in my company between the server and LAN/SAN teams, which isn't a problem at smaller or more integrated companies.

Also, and I'm happy to be wrong here, I'm pretty sure you can get more servers into the same space with blades than with Rackable's kit (HP C-class = 160 servers/1920 cores in a 50U rack, Rackable = 100 servers/1200 cores).
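
Very rough envelope sums behind those HP numbers, purely as a sketch - the enclosure height (10U c7000), the 32 double-density servers per chassis and the cores-per-server figure are my own assumptions rather than anything from the article, and the core total obviously moves with whichever CPUs you actually fit:

# Back-of-the-envelope blade density sums (assumed figures, not vendor specs)
RACK_U = 50                # rack height quoted above
CHASSIS_U = 10             # assumed height of one HP c7000 enclosure
SERVERS_PER_CHASSIS = 32   # assumed: 16 half-height bays x 2 servers per double-density blade
CORES_PER_SERVER = 8       # assumed: two quad-core Xeons; change to suit the CPUs fitted

chassis_per_rack = RACK_U // CHASSIS_U                      # 5 enclosures
servers_per_rack = chassis_per_rack * SERVERS_PER_CHASSIS   # 160 servers
cores_per_rack = servers_per_rack * CORES_PER_SERVER        # 1280 cores with these CPUs

print(chassis_per_rack, servers_per_rack, cores_per_rack)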

Why blade servers still don't cut it, and how they might

Philip Buckley-Mellor

I'm not normally one to criticise... but

One socket to rule them all - I see this happening very shortly after world peace. Just because it makes sense (does it really?) doesn't mean it gives the chip-makers any advantage at all, and without an advantage they won't do it.

Modular motherboards - most people buy blades because they're cheap, and one of the cheapest parts of a blade is its motherboard. Introducing the connectors/interfaces required between the various parts of a modular motherboard would inevitably make them more expensive, more prone to failure and even harder to configure and buy.

Blade and sub-blade standards - you want standards AND innovation? I'm not saying it can't be done, but again, what's in it for the manufacturers who are making a fortune on their custom NIC/HBA/etc. cards?

No more AC power inside the data center - changing out 32A AC commando PSU feeds in an existing data centre for DC ones makes little sense, as the majority of other devices will still need AC, so you end up leaving those running in each rack too - doubling the cabling you rely on, plus the power-switching circuitry itself. If it were a greenfield data centre and everything you wanted to put inside was DC, then fine, good idea - but that rarely happens. Plus, in my experience, M&E work is the slowest part of any change, so you end up with a very long lead time in your delivery plan.

Integrated refrigerant cooling for key components - so you suggest that data centres have normal air conditioning for most things and direct coolant for the hotter things? I worry enough about a single cooling system, never mind two of the things. Isn't it cheaper, more resilient, quieter and just more '2008' to draw in ambient air and expel the exhaust air at force, just as Switch in Vegas do (http://www.theregister.co.uk/2008/05/24/switch_switchnap_rob_roy/)?

HP launches siamese-twin server blade

Philip Buckley-Mellor

Fuzz is right

Fuzz is right - it's 16GB per server, and they're not being aimed at heavy VM usage, more at compute-heavy or forced scale-out environments.

Also, in reference to the IBM iDataPlex, they're not in the same league - two racks' worth of this HP blade model equals 2048 Xeon cores and 4TB of memory, and it's already shipping. It's hard to find details on the IBM system, but it looks like they only get either 800 or 1600 cores into the same space, plus of course you end up building your environment two racks at a time - not always 'do-able'. Oh, and they're coming in July.
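
The same sort of envelope arithmetic for the two-rack figure, again just a sketch - the four-enclosures-per-rack layout and the per-server spec (two quad-core Xeons, double-density blades) are my assumptions for illustration, with only the 16GB per server taken from the comment above:

# Two-rack totals for double-density blades (assumed layout and per-server spec)
RACKS = 2
CHASSIS_PER_RACK = 4       # assumed: four 10U enclosures per rack
SERVERS_PER_CHASSIS = 32   # assumed: 16 half-height bays x 2 servers per double-density blade
CORES_PER_SERVER = 8       # assumed: two quad-core Xeons per server
GB_PER_SERVER = 16         # the 16GB-per-server figure mentioned above

servers = RACKS * CHASSIS_PER_RACK * SERVERS_PER_CHASSIS   # 256 servers
cores = servers * CORES_PER_SERVER                         # 2048 cores
memory_tb = servers * GB_PER_SERVER / 1024                 # 4.0 TB

print(servers, cores, memory_tb)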