3D TLC VIMMs could play Violin's music

Before we talk about 3D TLC VIMMs possibly coming to Violin Memory, let’s talk about network storage. It developed for a reason, you know, as I was reminded by Violin Memory CTO and co-founder Jon Bennett. Servers and storage develop at different rates, and people (naturally) don’t want to get stuck with new servers hobbled by slow …

  1. Anonymous Coward

    "effective capacity"

    ugh marketing compress my brain

  2. CheesyTheClown

    Why bother?

    Large amounts of flash are only necessary in a poorly designed data center. All-flash is for when you don't properly tier your storage. For so many reasons, storage tiering is a necessity. We need it because CPU and blade memory capacity have now far outstripped our ability to move data to and from the blades effectively. With a current theoretical maximum of 960Gb/s of network bandwidth to and from a single rack and 160Gb/s to the blade (see the rough arithmetic sketch at the end of this comment), it is necessary to use more intelligent storage systems than traditional SANs can manage. This calls for storage tiering and more intelligent storage systems like Windows Storage Spaces or OpenStack Swift; if you really must use nasty old block-based SANs (meaning you're actually still using VMware... yuck), Cisco Invicta isn't a terrible idea.

    So, the central storage system running all-flash is lame. A three-tier storage system made up of 90% 7200rpm spindles and 10% high-performance SAN, spread across more servers and more drives, is optimal.

    The only case where all flash makes any sense is when mining a single massive data set.
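
    A rough sketch of the bandwidth arithmetic above, using only the figures quoted in this comment (960Gb/s per rack, 160Gb/s per blade); the blade count per rack is an illustrative assumption, not a figure from the comment:

        # Back-of-the-envelope check on the rack vs blade bandwidth figures.
        # 960 Gb/s per rack and 160 Gb/s per blade are taken from the comment;
        # the blade count is an assumption for illustration only.
        RACK_BANDWIDTH_GBPS = 960      # theoretical max to/from a single rack
        BLADE_BANDWIDTH_GBPS = 160     # theoretical max to/from a single blade
        BLADES_PER_RACK = 16           # assumed, purely illustrative

        # How many blades can run their links flat out before the rack uplink saturates?
        blades_at_line_rate = RACK_BANDWIDTH_GBPS // BLADE_BANDWIDTH_GBPS   # 6

        # Fair share per blade if every blade in the rack pushes data at once.
        fair_share_gbps = RACK_BANDWIDTH_GBPS / BLADES_PER_RACK             # 60.0

        print(f"Blades that can run at line rate simultaneously: {blades_at_line_rate}")
        print(f"Per-blade share with {BLADES_PER_RACK} active blades: {fair_share_gbps} Gb/s")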

    1. Anonymous Coward

      Re: Why bother?

      "So, the central storage system running all flash is lame. A three tier storage system made up of 90% 7200rpm spindle and 10% high performance SAN spread across more servers and more drives is optimal."

      Have a word with people who've tried that; it's nothing new, and it never worked out too well in the past. Nearline (7200rpm) drives are almost useless in any non-sequential environment, and if you're relying on caching/tiering then neither is foolproof. On top of that, the differential between flash access speeds (sub-1ms) and such drives (20ms) is huge.

      Suddenly active data is on the wrong drive at the wrong time, things start queuing and bad things start to happen; it's essentially a feedback loop until something breaks and the hurt eventually stops. At which point you can re-architect the solution with someone who actually understands what they're doing.
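
      A toy FIFO-queue sketch of the feedback loop described above: the 1ms and 20ms service times echo the flash and 7200rpm figures in this comment, while the arrival rate and time window are illustrative assumptions only:

          # Toy single-drive FIFO model: the same request stream that a flash-class
          # device absorbs comfortably queues without bound once it lands on a
          # 20ms nearline drive. Arrival rate and window are assumptions.
          def final_backlog(service_ms, arrival_interval_ms=2.0, duration_ms=1000.0):
              """Requests arrive every arrival_interval_ms; each takes service_ms to serve.
              Returns how many requests are still waiting at the end of the window."""
              arrivals = int(duration_ms / arrival_interval_ms)   # 500 requests in 1 second
              capacity = int(duration_ms / service_ms)            # how many the drive can serve
              return max(0, arrivals - capacity)

          print(final_backlog(service_ms=1.0))    # flash-class latency: backlog 0
          print(final_backlog(service_ms=20.0))   # nearline-miss latency: backlog 450, and climbing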

    2. chrismevans

      Re: Why bother?

      For most (if not all) enterprises, all-flash is overkill today and most people are not using flash for all of their data.

      However, processor & DRAM speeds continue to increase so the gap between central processing and external storage continues to widen, as HDDs are not increasing their performance at all, and in fact are starting to slow down. The gap has to be filled by something; that something is flash. So although today we don't need all-flash, in 5-10 years we will need all-flash, complemented by even faster memory in the server.

      I think Bennett is wrong to assume that we don't need persistent storage in the server (again); instead, it's going to be about how it is implemented as applications evolve. Expect HDDs to eventually be used purely for archive and nothing else.

      1. jcrb

        Re: Why bother? Because there is a reason we took the storage out of the server in the first place.

        Persistent storage in the server isn't storage. At least not in the enterprise, because it can't be accessed if the server is down, unless it's replicated to multiple servers. In which case it becomes networked storage just like the storage in the external array. This isn't to say that there isn't a use for flash in servers; it just isn't storage, or if it is storage, it is slower, less space-efficient and harder to manage.

    3. jcrb

      Re: Why bother? Because you have it backwards

      All-flash makes far more sense when you have many distinct data sets in the storage, with different apps and access patterns, than it does when you have just one data set. And since even a single data set is actually multiple data sets with different access patterns, there is really no such thing as a single massive data set.

      With disks and a cache/tier-based system, your performance is at the mercy of the access patterns. In fact, it is tiering that works best when you only have one data set, because then the caching system might have some chance of guessing what needs to be moved to the top of the tiers *before* you need it.

      The performance difference between even the best tiered system and an all-flash array is huge. Sure, a tiered system with a 95% hit rate at 1ms access and a 5% miss rate to 20ms 7200rpm disk looks like it improves the average access to about 2ms, which doesn't sound much worse than 0.5ms to a flash array (assuming your array actually delivers sub-ms latency).

      But what is missed in that analysis is that it's not just the average latency that matters, but the variance and the size of the worst-case spikes, and those are what affect application-level performance.
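
      A quick worked version of that average, using only the figures quoted in this comment (95% hits at 1ms, 5% misses to 20ms disk, 0.5ms for the flash array); the percentile chosen to show the tail is an illustrative choice:

          # Average vs tail latency for the hit/miss mix quoted in the comment.
          hit_rate, hit_ms = 0.95, 1.0       # served from the fast tier
          miss_rate, miss_ms = 0.05, 20.0    # falls through to 7200rpm disk
          flash_ms = 0.5                     # all-flash array figure from the comment

          average_ms = hit_rate * hit_ms + miss_rate * miss_ms
          print(f"Tiered average latency: {average_ms:.2f} ms")   # 1.95 ms
          print(f"All-flash latency:      {flash_ms:.2f} ms")     # 0.50 ms

          # The average hides the tail: any percentile beyond the hit rate sees the
          # full 20ms miss penalty, which is what the application actually feels.
          p99_tiered = miss_ms if 0.99 > hit_rate else hit_ms
          print(f"Tiered p99 latency:     {p99_tiered:.2f} ms")   # 20.00 ms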
