pachyderm ... Someone (or something) with thick skin.
>>> Yes, elephants can run, but they can't dance very well, can't really sprint, can't turn somersaults elegantly, can't ...
but try stopping one!
Monolithic storage arrays may well claim that rumours of their death have been exaggerated, but that doesn't mean that they aren't entering the digital care-home for the soon-to-be-departed. These boxes — the mainframes of storage — are magnificent beasts, like do-everything battleships of the storage wars in an era in which …
Why would you want to move from one monolithic array (or a number of similar ones) to half a dozen different, more specialized arrays? You're going to incur higher management costs running all of those, plus increase your exposure to failures.
Now, in some cases you will save a lot of money, but in others, chasing a higher peak IOPS that you don't actually need with an all-flash array, or additional scale-out capacity you won't need for at least five years in a product optimized for nearline data, just makes your environment needlessly more complex. But I'm sure EMC will be happy to sell you a different product fit for each purpose to replace your monolithic array - then add all that capability in future versions of VMax and VNX and try to sell you one of those to "simplify" your storage infrastructure... and so the wheel turns!
The first part of this article reads a lot like a Sales Brochure from a Vendor.
Is there any relationship between the Author and a "New" storage vendor?
Also, is the "new" product really New or just a New Name for the Same Old, Same Old?
Or, like the "Cloud" does it translate to:
"Oops, we forgot Just Where we Stored Your Data but, don't worry, we can still Access it. And we know it's in One of these countries..."
IMO the fragmented storage space (one array per use case) is a temporary thing, and things need to converge.
No real reason for all-flash as a standalone product if hybrid can reach the same performance and store rarely accessed data or snapshots on much lower-cost HDDs.
Why do we need separate scale-out NAS/object and scale-out block? You can see a bunch of start-ups and open-source projects building unified scale-out storage systems which support multiple storage abstractions.
And in the Big Data space there's no reason to have lower-performance object storage with (not so fast and quite limited) Hadoop file systems, when a bunch of vendors offer faster scale-out NAS or object storage - ones where you don't have to add servers just for the sake of adding terabytes, and ones that don't limit access to co-located Hadoop apps. Take Microsoft Azure Data Lake as an example of combining cloud storage scale and cost with low latency, high throughput, and seamless Hadoop integration.
I think going forward it's more about co-located (hyper-converged) storage to store local stuff like VM images and private app data, plus large public/private data lakes to cost-effectively store huge amounts of shared data. Both hyper-converged storage and data lakes would need to be hybrid (SSD/HDD) and support various abstractions (file/object/block).
Two themes are evident in the transitions described here.
First, the workloads today are very different from the recent past. In the days of monolithic computing systems such as Mainframe-Terminal, or even Client-Server, storage workloads were more consistent, predictable, and stable. Not today. Today we have the Third Platform of computing that is driving a variety of applications with different storage workloads. Workloads are changing in real time and storage systems need to respond in real time. Think of the last time you opened an app on your phone to access information that is more than a few hours old. We live in the present.
Second, flash technology is changing the landscape of storage systems. The variety of ways that flash is deployed in new storage systems makes it clear that flash is here to stay. Don't expect flash to replace rotating disks; rather, it will complement traditional HDDs, as in a hybrid array. Flash technology is one of those inflection points that will change how storage systems are designed and deployed.
So the implication for monolithic storage systems is that they need to evolve or give way to new storage architectures that are responsive, efficient, and cost-effective.
As Chris states, there is a definite need for storage systems with large capacity, high reliability, and the ability to support multiple data types. The problem is that the cost, complexity, and size of these systems truly make them elephants that can't dance. There are alternatives with high density, continuous availability, a single-rack footprint, unified protocols, and much lower cost. Designed from the ground up for today's storage needs and flexibility, these modern arrays are continuing to make customers dance with joy.
Monolithic storage architectures were built when the underlying storage technology was different and workloads were very different from today's. These systems served customer needs well 20-25 years ago and made good use of the underlying HDD and server technology. Today, things are very different. Businesses are challenged to be much more agile and respond in real time to business needs. They need to meet the new challenges in a cost-efficient way, and the complexity, cost, and low performance of monolithic systems truly make them elephants…
The improvement in the underlying technology brings a real opportunity to build better systems that meet today's and tomorrow's challenges. In particular, flash technology is changing the landscape of storage systems. Flash is used in a variety of ways in new storage architectures, and the rapid improvement in cost and density makes it a clear winner that is here to stay. In fact, with the introduction of 3D NAND and TLC technology, flash advancement is outpacing Moore's law. This makes systems that are optimized for flash the most cost-efficient solutions for more and more use cases. Monolithic storage solutions were not designed for this pace of technology advancement, and I see an even quicker transition to modern scale-out storage architectures that are optimized for flash and flexible enough to let customers scale their storage and data needs easily and cost-efficiently.
Shachar Fienblit, CTO, Kaminario
I'm not as enthusiastic as I was in my youth, and so I'm not all swept up in the 'now'. One thing I am certain of, though, is that when the wheel turns full circle and companies are busy consolidating all their dedicated storage arrays into what I'm sure will be termed "simplified, converged" systems, they'll look a lot like the old monolithic arrays that will still be ticking along at the back of the DC.
IT is the hokey cokey. We go round in circles, putting things in, then out again. Whatever isn't the in thing gets a makeover, a wash and brush-up, and is pushed out again by vendors and salesmen with greasy smiles.
Biting the hand that feeds IT © 1998–2022