When does spindle count, er, count?
Honesty in Commenting: I'm a happy Pillar customer, although I'm very fond of my EMC storage as well, and Mike's one of the most engaging men I've had dinner with in recent years. However, what follows is my long-standing take on storage, formed more during my days as an Auspex customer.
Spindle count matters (ie you need more of them) when you either have a random read rate greater than the aggregate operation rate of your drives, or a sustained write rate that likewise exceeds it, over a period longer than you can cover with array cache.
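To put rough numbers on that test (the per-drive figure and the workload below are assumptions I've pulled out of the air; substitute your own measurements):

    # Back-of-envelope: do the spindles run out before the cache does?
    # All figures here are illustrative assumptions, not measurements.

    def spindles_are_the_bottleneck(random_read_iops, sustained_write_iops,
                                    burst_seconds, cache_bytes, write_bytes,
                                    drives, iops_per_drive=150):
        aggregate_iops = drives * iops_per_drive
        # Writes the cache can soak up over the burst, expressed as ops/sec
        absorbed = cache_bytes / write_bytes / burst_seconds
        effective_writes = max(0, sustained_write_iops - absorbed)
        return (random_read_iops + effective_writes) > aggregate_iops

    # e.g. 5000 random reads/s plus 4000 8KB writes/s for five minutes,
    # against 48 drives and 8GB of write cache
    print(spindles_are_the_bottleneck(5000, 4000, 300, 8 * 2**30, 8192, 48))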
Back in the day, central storage took a pounding on the read side, because clients were mostly very memory-poor. Read was hard to optimise away: if it wasn't in cache (and it usually wasn't) you had no choice but to go to spindle, and then you needed lots of them. So my Auspexes used to be something like ~70% read, and most of those reads were serviced from disk. 84 x 36GB RAID5 seemed like a good way to provide ~2TB (or, for those with very long memories, 60 x 1.3GB RAID 0+1 seemed like a good way to provide ~40GB).
But these days, a not-dissimilar workload is write-heavy. The clients have plenty of RAM, which means they rapidly stabilise at a point where the read load they impose is relatively small, and most of the central disk time is spent coping with writes: the clients, properly, issue writes as soon as they have dirty pages. Write can always be reduced to a cache operation, so long as you have enough cache, and RAM (even mirrored ECC RAM with battery back-up) is dirt cheap.
So yes, if you have a burst of writes which exceeds the capacity of your cache to the point that neither the portion that does fit nor the write re-ordering the cache allows helps, you'll need more spindles. But my experience of sizing storage is that most people grossly over-estimate the scale of their problem in terms of duration: you may need oodles of random write performance over a few gigs, or perhaps a few tens of gigs, rarely more.
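A crude way to put a number on "how long before the cache stops saving you" (all the figures below are made up for illustration):

    # How long can a write cache hide a burst that arrives faster than
    # the spindles can drain it?  Numbers are illustrative assumptions.
    cache_gb   = 16       # mirrored write cache
    burst_mb_s = 400      # incoming write rate during the burst
    drain_mb_s = 250      # rate the spindles can de-stage at

    fill_mb_s = burst_mb_s - drain_mb_s        # net rate the cache fills
    seconds   = cache_gb * 1024 / fill_mb_s    # time until the cache is full
    print(f"cache absorbs the burst for ~{seconds:.0f}s")   # ~109s with these numbers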
If you need to do random writes over many hundreds of gigs such that you can't re-order them in any useful way, then you're going to need spindles, and lots of them. Even then it's an open question whether in that environment you're better off with N 300GB FC disks or 2N 1TB SATA disks short-stroked to 300GB. Mike would argue the latter, and he'd argue that you can probably get some mileage, if only for snapshots and long-term archive, out of the 700GB that are not part of your short-stroking.
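For what it's worth, here's the shape of that comparison in crude numbers; the per-drive IOPS and the short-stroking gain are my assumptions, not anything off a datasheet:

    # N FC drives vs 2N short-stroked SATA drives, random I/O only.
    # Per-drive figures and the short-stroke gain are rough assumptions.
    n = 50
    fc_iops   = 180     # ~15K RPM FC disk, full-stroke random
    sata_iops = 80      # ~7.2K RPM SATA disk, full-stroke random
    ss_gain   = 1.3     # assumed benefit of confining seeks to 30% of the platter

    print(f"{n} x 300GB FC:                  ~{n * fc_iops} IOPS")
    print(f"{2*n} x 1TB SATA, short-stroked: ~{2 * n * sata_iops * ss_gain:.0f} IOPS")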
So when I look at my arrays (fairly intensive Oracle workloads on some, Clearcase on others) I don't see the limiting factor being the spindles: I see it being the ability of the controllers to keep up with the shorter bursts. And the easiest way to improve that is to throw more RAM at the problem. More spindles may allow more operations to complete within 7ms, but they still won't improve the speed of any individual operation: if it's going to disk, it's going to take 7ms. RAM allows operations to complete (essentially) in zero time. 100 lorries can carry more goods up the M6 than 1 lorry, but it'll take just the same length of time for a given package to get from Rugby to Carlisle.
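The lorry point in numbers (disk and RAM latencies assumed, and rounded brutally):

    # More spindles raise aggregate throughput, not per-operation latency.
    disk_ms = 7
    for n in (1, 10, 100):
        iops = n * 1000 / disk_ms
        print(f"{n:3d} spindles: ~{iops:5.0f} ops/s, each op still {disk_ms}ms")

    # A cache hit, by contrast, is measured in nanoseconds
    print("RAM: ~0.0001ms per access, effectively free by comparison")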
Now this analysis doesn't help for read. But if you can do your reads in a reasonably sensibly ordered way (as, say, during backup or large table reads), then 100 spindles of 1800rpm ESDI is more than enough bandwidth to be going on with, never mind 100 spindles of anything remotely modern. If you're doing random reads, not so much, and then I agree you need the fastest disks you can get.
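Rough sums behind that claim; the ESDI figure is from memory and the modern one is an assumption:

    # Aggregate sequential bandwidth: even slow spindles add up.
    spindles    = 100
    esdi_mb_s   = 1.0     # roughly what an 1800rpm ESDI-era drive could stream
    modern_mb_s = 80.0    # an unremarkable modern drive, sustained sequential

    print(f"{spindles} ESDI spindles:   ~{spindles * esdi_mb_s:.0f} MB/s")
    print(f"{spindles} modern spindles: ~{spindles * modern_mb_s:.0f} MB/s")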
But those random reads are again going to be taking 7ms each, and I would seriously question the overall design of a production environment in which you need to sustain 10K random reads per second. Yes, 100 FC disks will do it and 100 SATA disks will struggle, and yes the latency will be lower with FC, but still... 7ms, 10ms, both are lifetimes in terms of CPU speed, and isn't it God's way of telling you to either put some indexes on your tables or just buy the terabyte of RAM you know you want for your application?
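The arithmetic behind those figures, with assumed (and deliberately conservative) per-drive numbers and an assumed 2GHz CPU:

    # Disks needed for 10K random reads/s, and what each read costs the CPU.
    # Per-drive IOPS and latencies are assumed round numbers.
    target = 10_000
    for name, per_drive_iops, latency_ms in (("FC", 100, 7), ("SATA", 80, 10)):
        drives = target / per_drive_iops
        cycles = latency_ms / 1000 * 2e9     # cycles a 2GHz CPU waits per read
        print(f"{name}: ~{drives:.0f} drives, {latency_ms}ms is ~{cycles:,.0f} CPU cycles idle")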
ian