* Posts by Lee McEvoy

8 publicly visible posts • joined 20 Oct 2011

Quick as a flash: A quick look at IBM FlashSystem

Lee McEvoy

Re: I think I know the answer

Here's an interesting article I'd point people at when they had made assumptions about relative costs - you may have read it already:


Lee McEvoy

Re: I don't think FlashSystem can do anything unique.


You've got the right sort of idea, but I think you've made it sound like a negative, which I'd disagree with.

Spectrum Virtualize (what used to be called SVC code) runs on a variety of controller "engines".

If you want to use it to enhance* a large estate of existing (perhaps heterogeneous) FC block storage, you'll likely run it on SVC nodes. If you want some of your environment to be AFA, you'll likely have a V9000 (there's a licence cost saving in most cases). If you want a midrange array, you'll most likely go for a Storwize V7000 (although a lot of V7000 customers virtualise FlashSystem 900 too). If you want low-cost block storage, the V3700 and V5010 are there for that.

Given the common code base, you can replicate between different members of the family, which is good if your DR doesn't need to be as fast or big as production. Just pick the "engine" or "engines" based on your performance/capacity/function requirement.

So, you are correct when you say "if you really want SVC you don't have to buy Flashsystem", but you've missed the point that amongst AFAs, that functionality is unique.

If you don't like SVC, that's fine! Get yourself along to one of the Flash events IBM are running at the end of the month (I'll be at the one in London) - you'll probably like what you'll hear.

*"enhance" could include enabling existing kit to stretch/hyperswap between sites (or just normal replication as sync/async/async with snapshots), and to be real-time compressed and/or encrypted, pretty much irrespective of the backend storage (which is nice if you have a heterogeneous environment following an acquisition). Having a "legacy" also means that you can work with (and be supported on) OSes that are also legacy - not many AFAs can do that, as they tend to only support what is mainstream when they're designed/launched.

Want to know the key to storage happiness? It’s analytics. Yeah, it’s that simple

Lee McEvoy

Analytics separated from the storage system

Multivendor analytics is often harder than we'd hope (not everyone bothers doing anything above the "bare minimum" for SMI-S), but I know a lot of people are working on it.

IBM have something that works for all their arrays - I've had good feedback from customers that have tried it.


Fibre Channel over Ethernet is dead. Woah, contain yourselves

Lee McEvoy

FCoE was going to take over the world....

....it hasn't, and I've pretty much only seen it used as described by Enrico to reduce spend in chassis (not very often as ToR).

That's not how it was being positioned - it was going to be the protocol that took over the FC world. It hasn't even taken over all the networking in chassis based compute (where there is a potential case to use it).

Saying that people have bought it and therefore it can't be dead misses the point - a lot of people bought Betamax (and some even bought HD-DVD). Did that stop those formats turning out to be technical cul-de-sacs with no future?

How to get the best from your IOPS

Lee McEvoy

agree with above comment

"stick with what you already use" sounds a bit dogmatic to me.

If you're consolidating a decent number of servers (some with big application workloads) onto a small number of virtualised boxes, sticking with iSCSI might not be the best option - making the decision based on circumstances and requirements tends to be the better bet!

I also loved the comment about the overhead on FC - does anyone drive 10GbE at "rated speed" in the real world? Overhead / inefficiency on Ethernet is a bigger issue than on FC!
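A rough back-of-envelope sketch of that framing-overhead point (my illustrative numbers, not from the article, assuming a standard 1500-byte Ethernet MTU with no jumbo frames, and full-size 2112-byte FC data frames):

```python
# Framing efficiency: iSCSI over Ethernet vs native Fibre Channel.
# Illustrative only; ignores CPU cost of the TCP/IP stack, which is
# often the bigger real-world overhead for iSCSI.

def efficiency(payload: int, overhead: int) -> float:
    """Fraction of on-the-wire bytes that are user data."""
    return payload / (payload + overhead)

# Ethernet: preamble(8) + MAC header(14) + CRC(4) + inter-frame gap(12)
# = 38 bytes, plus IP(20) and TCP(20) headers inside the frame,
# leaving 1460 bytes of iSCSI payload per 1500-byte MTU.
iscsi = efficiency(1460, 38 + 20 + 20)

# Fibre Channel: SOF(4) + header(24) + CRC(4) + EOF(4) = 36 bytes
# around a full 2112-byte payload.
fc = efficiency(2112, 36)

print(f"iSCSI/Ethernet framing efficiency: {iscsi:.1%}")  # ~94.9%
print(f"FC framing efficiency:             {fc:.1%}")     # ~98.3%
```

Per frame, FC carries noticeably more payload per wire byte - and that's before you count the host-side protocol processing that iSCSI adds.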

NextIO punts I/O virtualizing Maestro

Lee McEvoy

apples vs apples

To start off - I don't work with NextIO or sell their stuff...

Do you think that NextIO may have used 1U servers with a little more "oomph" than the single dual-core processor with 2GB memory blades that you configured?

Where we've been involved in building infrastructure for hosting (including one project that used NextIO), we've been using multicore processors (minimum hexacore, sometimes 12-core) with multiple sockets in use (sometimes quad), with tons of memory - VM density is normally limited by the memory you put in.
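To illustrate that memory-bound point with some hypothetical numbers (all of these are assumptions, not figures from either configuration):

```python
# Sketch: VM density on a host is usually capped by RAM, not cores.
# Hypothetical host: dual-socket 12-core box with 256 GB of memory.
host_ram_gb = 256
per_vm_ram_gb = 8           # assumed average VM size
hypervisor_reserve_gb = 16  # assumed overhead for the hypervisor itself

vm_density = (host_ram_gb - hypervisor_reserve_gb) // per_vm_ram_gb
print(f"~{vm_density} VMs per host before RAM runs out")  # ~30
```

With 24 cores that's barely more than one VM per core - the memory runs out long before the CPUs do, which is why skimping on RAM (as in a 2GB blade) kills consolidation ratios.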

In NextIO's example they had approx $200K on the "grunt" server hardware itself (i.e. excluding switches, blade chassis, etc.) based on this part of the article:

"the cost of the servers and the Maestro PCI-Express switch together, it costs $303,062, with about a third coming from the Maestro switches and 60 PCI cables."

The "non-compute" blade infrastructure according to the basket you produced had a cost of ~$130K, so I'd be comparing that against the ~$101K for NextIO - is it enough of a saving? It might be for some people, and it's certainly lower cost and doesn't have the vendor lock-in that blades do...
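Working that comparison through (taking the article's "about a third" as exactly one third for illustration; the $130K blade figure is from the basket discussed above):

```python
# Rough check of the NextIO vs blade "non-compute" cost comparison.
# Figures from the article/comment; "about a third" treated as 1/3.

total = 303_062               # NextIO servers + Maestro switch + cables
maestro = total / 3           # "about a third" -> Maestro switches/cables
servers = total - maestro     # leaves ~$200K of "grunt" server hardware

blade_non_compute = 130_000   # chassis, switches, etc. in the blade basket
saving = blade_non_compute - maestro

print(f"NextIO switch + cable cost: ~${maestro:,.0f}")
print(f"NextIO server cost:         ~${servers:,.0f}")
print(f"Saving vs blade chassis:    ~${saving:,.0f}")
```

So on these numbers the "non-compute" saving is roughly $29K - which is the "is it enough?" question in a nutshell.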