Re: why? ... Why "expensive" storage gear will live on - in perpetuum
Important cost drivers for enterprise storage are its extensive testing effort (>50% of the cost) and its critical support structure. Maybe this is the "missing" part of the picture, besides a supposedly higher margin?
Putting some fancy code on generic x86 hardware, adding some SSDs and a GUI is the easy part; getting this combo to five nines of availability or beyond, across a fleet of ten thousand to a hundred thousand deployments, is a totally different story. Midrange storage has done a good job evolving towards tier 1 availability levels, but software-based deployments on arbitrary hardware are quite often far away from that. Why? Because every deployment is the first of its kind, with a combination of adapters, microcode levels, drivers and cables never seen before - and thus with error combinations showing up that nobody has encountered yet. People who have tried heavy-duty storage virtualization in software know what I'm talking about.
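To put rough numbers on both claims, here is a back-of-the-envelope sketch in Python; the option counts in the second part are invented purely for illustration:

    # Five nines of availability leaves a tiny annual downtime budget.
    availability = 0.99999
    downtime_min_per_year = (1 - availability) * 365.25 * 24 * 60
    print(f"Allowed downtime at five nines: {downtime_min_per_year:.2f} min/year")  # ~5.26

    # Why 'arbitrary hardware' is hard to qualify: the test matrix is the
    # product of the option counts. These counts are hypothetical examples.
    adapters, microcode_levels, drivers, cable_variants = 20, 15, 30, 10
    combinations = adapters * microcode_levels * drivers * cable_variants
    print(f"Distinct hardware/software combinations: {combinations:,}")  # 90,000

A vendor who controls the hardware list only has to qualify a handful of those combinations; a software-only product on arbitrary hardware inherits whatever combination the field throws at it.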
But there are also the "good enough" use cases, which are less sensitive to outages or data loss than, say, bank accounts. A majority of today's cloud-based workloads are of the "good enough" type, and cloud service providers additionally run highly standardized environments in which the potential multitude of error combinations will be mastered over time. Even so, there are no cloud-hosted bank accounts so far - or only on top of a hardened storage layer.
Don't be fooled: there is no such thing as bug-free (or vulnerability-free) software, and that includes storage software. This is not a "features" or "fancy redundancy algorithm" discussion; maturity is measured only in hours of operation and field exposure of each and every new 'arbitrary' setup. For my bank account, I want the opposite of novelty: not the latest and greatest, but something as hardened as possible.