Are SANs right for converged stacks?
IBM's PureSystems single-product (converged stack) systems are built from servers with DAS and a SAN, a Storwize V7000, inside the rack. Okay. We have block-access arrays and we have Fibre Channel and iSCSI but, really, do we need a SAN infrastructure to interconnect a box of hard drives and SSDs with a bunch of servers in the same rack? It seems overkill to me. Would we set out to design a data store for a bunch of servers, with the data store and the servers all in one rack, this way?
Why not add more disk drives to the individual servers and use virtual storage appliance software, like the HP P4000 VSA, to aggregate the disks and provide a logical SAN? Then we can ditch the separate storage network. We also don't need to buy storage enclosures and array controllers; that functionality is provided by the VSA software running in the servers.
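To make the idea concrete, here is a toy model of what a VSA layer does: each server contributes its local DAS drives, and software presents the combined capacity as one logical pool, with network-level replication standing in for the array controllers. This is an illustrative sketch only, not actual HP P4000 VSA code; the class names and the 2-way replication figure are assumptions for the example.

```python
# Toy model of a virtual storage appliance (VSA): servers pool their
# local (DAS) disks into one logical block store -- no external SAN
# hardware, no separate enclosures or array controllers.
# Hypothetical sketch; not real P4000 VSA code.

class Server:
    def __init__(self, name, disk_gb):
        self.name = name
        self.disks = list(disk_gb)  # capacities of local drives, in GB


class LogicalPool:
    """Aggregates DAS across servers into one logical pool."""

    def __init__(self, servers, replication=2):
        self.servers = servers
        # Copies kept across servers (network RAID, P4000-style);
        # 2-way is an assumed default for this sketch.
        self.replication = replication

    def raw_capacity_gb(self):
        return sum(sum(s.disks) for s in self.servers)

    def usable_capacity_gb(self):
        # With N-way replication, usable space is raw / N.
        return self.raw_capacity_gb() // self.replication


# Four servers, each with eight 900 GB drives, pooled in one rack.
rack = [Server(f"node{i}", [900] * 8) for i in range(4)]
pool = LogicalPool(rack, replication=2)
print(pool.raw_capacity_gb())     # 28800
print(pool.usable_capacity_gb())  # 14400
```

The point of the model: the "array controller" is just software arithmetic and data placement running on the servers themselves, which is why the separate storage network and enclosures drop out of the bill of materials.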
The cost of goods goes down and the complexity of the system is reduced as well.
We could even use a PCIe fabric to interconnect the servers and the disk/solid state drives if we wished to drop latency down a notch. Using a SAN for a rack area network seems ... well, dumb.
Am I smoking pot?