Flash storage: Has the hype become reality?

Comment Is the flash storage business a hype-filled wonderland, or is flash-based technology making real inroads into IT? Flash arrays provide much faster access to data because of SSDs' lower latency compared to disk drives, but they are more expensive to buy. This can be justified by looking at potentially lower total cost of …

  1. Steven Jones

    "The main disadvantage of flash, its cost"

    No, it's the only disadvantage of flash. On every other factor, such as density, power consumption, environment, reliability, throughput and latency, flash wins out. Even the much-trumpeted issue of write endurance is very rarely a problem, and even then it comes down to cost, as it's easy enough to hot-swap storage devices. Indeed, replacement is much quicker on an SSD than on an HDD, where a rebuild can take half a day and cause notable performance degradation.
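
    The "half a day" rebuild claim can be sketched in rough numbers. The drive size and rebuild rates below are illustrative assumptions, not figures from the comment:

```python
# Back-of-envelope rebuild time after swapping a failed drive.
# Illustrative assumptions: a 4 TB drive, ~100 MB/s sustained rebuild
# rate for a busy HDD array, ~500 MB/s for a SATA SSD.

def rebuild_hours(capacity_bytes: float, rate_bytes_per_sec: float) -> float:
    """Hours to reconstruct a whole drive at a sustained rate."""
    return capacity_bytes / rate_bytes_per_sec / 3600

print(f"HDD: {rebuild_hours(4e12, 100e6):.1f} h")  # ~11 h: roughly half a day
print(f"SSD: {rebuild_hours(4e12, 500e6):.1f} h")  # a couple of hours
```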

    Flash is not a matter of "hype". The killer aspect of flash, which HDD can never come close to, is the massive reduction in I/O latency. Yes, you can cache the hell out of your databases, storage arrays and so on, but you eventually reach the point of diminishing returns even after you've thrown a few tens of GB of (very expensive) battery-backed cache into your enterprise array and stuffed your servers with RAM for DB caches.

    The other problem with the "cache your DB to death" approach is that you might well achieve your 99% cache hit rate (which I've seen), but starting your DB with a "cold" cache can take two or three hours, as you have to increase the user load gradually or the I/O system just saturates with (mostly) random I/O activity until the DB cache has stabilised.
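
    The diminishing-returns point is easy to show with the standard average-latency formula. The latency figures below are illustrative assumptions (roughly RAM-cache vs random HDD read), not numbers from the comment:

```python
# Average I/O latency as a function of cache hit rate: even at very high
# hit rates, the HDD miss penalty dominates the average.
# Assumed figures: ~0.1 ms for a cache hit, ~8 ms for a random HDD read.

def avg_latency_ms(hit_rate: float, hit_ms: float = 0.1, miss_ms: float = 8.0) -> float:
    return hit_rate * hit_ms + (1 - hit_rate) * miss_ms

for rate in (0.90, 0.99, 0.999):
    print(f"{rate:.1%} hit rate -> {avg_latency_ms(rate):.3f} ms average")
```

Going from 99% to 99.9% hits barely moves the average, which is why piling on more cache eventually stops paying off, while flash attacks the miss penalty itself.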

    Of course your I/O system might still struggle if it hasn't been sized properly for the IOPS rate required during a large DB startup, but at least SSD-based storage reduces that time dramatically. That's assuming, of course, your storage controllers have been designed for those IOPS rates (many were rather wanting in that area).
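
    A cold-cache warm-up bounded by random IOPS can be estimated the same way. The working-set size, page size and IOPS ceilings below are illustrative assumptions chosen to land in the "two or three hours" range mentioned above:

```python
# Cold DB cache warm-up: the working set is faulted in via (mostly)
# random reads, so warm-up time is bounded by the random-IOPS ceiling.
# Assumptions: 150 GB working set in 8 KB pages; ~2,000 random IOPS for
# a small HDD array vs ~200,000 for flash.

def warmup_hours(working_set_bytes: float, page_bytes: int, iops: float) -> float:
    pages = working_set_bytes / page_bytes
    return pages / iops / 3600

print(f"HDD array: {warmup_hours(150e9, 8192, 2_000):.1f} h")
print(f"Flash:     {warmup_hours(150e9, 8192, 200_000) * 60:.1f} min")
```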

    1. Anonymous Coward
      Anonymous Coward

      For real. The way to sell flash is to get people, IT executives, etc., to look at their cost per I/O for high-performance storage instead of the standard cost per TB. Who cares if the cost per TB is lower with disk if you are substantially overbuying capacity just to get enough arms to bring your storage latency down to acceptable levels?

      Flash is getting pretty cheap anyway, so it probably won't be an issue, at least for OLTP workloads, in the near future. There is really no reason not to have a high-performance DB or something of that nature on flash. Giant near-line drives still have an economic case for archive. You could see a situation where companies have only flash and ultra-cheap-per-TB online tape, with disk totally out of the picture: flash for anything that gets hit regularly, and tape for all the archive stuff that will probably never be needed again but that people don't want to delete... or maybe you go to some cloud archive service instead of tape.
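
      The $/IOPS vs $/TB argument can be put in numbers. All prices and per-drive performance figures below are illustrative assumptions, not vendor data:

```python
# For an IOPS-bound workload you size for spindles, not capacity, so
# compare $/IOPS rather than $/TB. All figures are illustrative.
from math import ceil

def drives_needed(required_iops: float, required_tb: float,
                  iops_per_drive: float, tb_per_drive: float) -> int:
    """Drives needed to satisfy BOTH the IOPS and capacity requirements."""
    return max(ceil(required_iops / iops_per_drive),
               ceil(required_tb / tb_per_drive))

REQ_IOPS, REQ_TB = 100_000, 20

hdds = drives_needed(REQ_IOPS, REQ_TB, 200, 1.2)      # 15k SAS: ~200 IOPS, 1.2 TB
ssds = drives_needed(REQ_IOPS, REQ_TB, 50_000, 1.92)  # SSD: ~50k IOPS, 1.92 TB

print(f"HDD: {hdds} drives ({hdds * 1.2:.0f} TB bought for a 20 TB need)")
print(f"SSD: {ssds} drives ({ssds * 1.92:.0f} TB)")
```

The disk array ends up hundreds of drives deep, buying capacity it will never use, which is exactly the "overbuying arms" effect described above.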

    2. KeithR

      "Flash is not a matter of "hype"."


  2. Mike Shepherd

    Yawn

    Are we supposed to care? It's just storage: we don't need to know how it works. If one type is cheaper next year, we'll buy that. If not, we'll go on buying the old stuff. We don't lie awake at night wondering exactly when that will happen.

    The new people won't hand out fivers to celebrate their win, nor will we shed tears as the others go out of business (or not).

    1. phuzz Silver badge

      Re: Yawn

      Price is only one of the points to consider about storage, but hey, if you don't think so, how about swapping the SSD (which I'm guessing is) in your computer right now for this old hard drive I have right here?

      Sure, it's way slower, and also physically larger, noisier and drains more power, but it is cheaper, and that's all you care about, right?

    2. Anonymous Coward
      Anonymous Coward

      Re: Yawn

      Why don't you send everything to a cloud service? Azure or AWS would likely do a much better job at a lower price while letting you continue not caring about infrastructure.

    3. KeithR

      Re: Yawn

      "Are we supposed to care? It's just storage: we don't need to know how it works. If one type is cheaper next year, we'll buy that. If not, we'll go on buying the old stuff. We don't lie awake at night wondering exactly when that will happen."

      I'm so happy that I don't have your pathetic aspirations and expectations...

  3. vishmulchand

    Chris, I agree with your assessment here on the reality of flash arrays. Customers are reporting the following outcomes: CAPEX savings, with predictable performance in the performance tier of storage, as they no longer need hundreds of spindles to reach the requisite performance; CAPEX savings from snapshot copies on flash arrays, which eliminate the need for full physical copies on separate arrays for test and dev; and OPEX savings not only on power and cooling but also from improved staff productivity (no more complex database-storage tuning required, for example). Lastly, because of the speed of flash, they report that they can now process significantly more data: analytic workloads complete faster and they can do more in the same amount of time.

    According to IDC's MarketScape Worldwide All-Flash Array 2015-2016 Vendor Assessment, AFAs are expected to generate $2.53 billion in revenue and grow at a compound annual growth rate (CAGR) of 21.6% to crest $5.5 billion in 2019. At that point, AFA revenue is expected to account for 60-70% of all external primary storage spend.
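
    The two figures quoted above are internally consistent, as a quick compound-growth check shows (taking 2015 as the base year for the four-year projection):

```python
# Sanity-check the quoted IDC figures: $2.53bn growing at a 21.6% CAGR
# over the four years from 2015 to 2019.

def project(base: float, cagr: float, years: int) -> float:
    return base * (1 + cagr) ** years

print(f"${project(2.53, 0.216, 4):.2f}bn")  # ~$5.53bn, matching the quoted $5.5bn
```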

    This data tells us that the adoption of flash technology in primary storage is in full swing and it’s become more affordable for general purpose workloads, not just those that demand the highest levels of performance.

  4. This post has been deleted by its author

  5. sysconfig

    Hype? Not really. Nice side effects on other workloads

    Of course manufacturers try to push their gear as much as they can. But, as others have said, flash prices keep falling, and new developments will only accelerate that effect, which is great for general workloads and, increasingly, for consumers too. I mean, unlike other hyped areas, hardware means actual innovation, not just bullshitting.

    For example, I just replaced my main work PC, which now sports a 512GB Samsung 950 Pro (NVMe on PCIe x4). It's a beast with north of 2GB/sec throughput! Applications no longer take any noticeable time to load... it's ridiculous how fast it is.

    Data is kept on a RAID-1 pair of 1TB Samsung 850 Pros (SSD, SATA3), which still shifts several hundred MB/sec. More than enough, if I'm honest! (I just couldn't resist trying NVMe.)
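
    Throughput claims like these are easy to sanity-check without extra tooling. A minimal sketch (note the caveat: this reads through the OS page cache, so a freshly written file reports cache speed rather than raw device speed; a tool like fio with direct I/O is the proper way to benchmark a drive):

```python
# Rough sequential-read throughput of a file, in MB/s.
import os
import tempfile
import time

def read_throughput_mb_s(path: str, block: int = 1 << 20) -> float:
    """Read the whole file in 1 MiB chunks and report MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    return size / (time.perf_counter() - start) / 1e6

# Create a 64 MB scratch file, measure, clean up.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(64 * 1024 * 1024))
    path = tmp.name
try:
    print(f"{read_throughput_mb_s(path):.0f} MB/s")
finally:
    os.unlink(path)
```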

    Sure, SSDs of any shape or form are still more expensive than your average 4TB spinning rust, but the massive performance is well worth it already, which in turn may save costs for bigger workloads where latency matters, as has been pointed out by others.

    Look back a few years (not very far, actually): it wasn't technically possible, and when it became possible, it wasn't affordable. I prefer hardware manufacturers pushing (and selling) their extreme gear, including the hype they try to create, over any software-based hype (cloud, Docker, DevOps/CI, I'm looking at you... it's just new labels on old ideas there).

  6. Anonymous Coward
    Anonymous Coward

    Are the DX figures right?

    Are these Fujitsu latency numbers correct? If so they are terrible by disk standards let alone flash ...

    "... with write latency of 88 ms and read latency of 180 ms."

    1. FJ-DX

      Re: Are the DX figures right?

      Obviously a typo: it must be µs, not ms. It's stated correctly in the data sheet:


  7. Anonymous Coward
    Anonymous Coward

    Yes. This article seems dated. Flash has already gone mainstream.

  8. This post has been deleted by its author

  9. PaulHavs

    Enrico Signoretti Sums it up perfectly

    Enrico Signoretti's summary is perfect, insofar as it represents the use cases and requirements for the transition to the all-flash datacenter...

    If I flip it around to a capability view, I see:

    1. All flash all the time

    2. Tiered flash / fast SAS / slow SAS

    3. Flash cache acceleration

    4. Flash cache + Tiering

    5. Converged Flash - Datasets on all flash + Datasets on HDD - WITHIN same array

    As far as I know, there is only one solution available for these five use cases with one architecture; one common set of native enterprise data services; one management framework; and so on...

    ... 3PAR StoreServ!

    Anybody disagree?

    Paul Haverfield, HPE Storage APJ CTO

    1. Anonymous Coward
      Anonymous Coward

      Re: Enrico Signoretti Sums it up perfectly

      I'd agree, Paul, if 3PAR had compression and could dedupe data blocks below 16k at the flash layer, but it can't. We ran a POC on a large P2V Citrix environment and saw zero dedupe. The HP answer was to use Double-Take to re-size the file system to 16k on all 700 VMs. One VM took over two hours.

      No thanks; we went for an AFA built from the ground up, not a retrofit job.
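
      The zero-dedupe result reported above is what you would expect from fixed-block deduplication when data is misaligned with the block grid. A toy illustration (this is not how 3PAR, or any specific array, implements dedupe internally):

```python
# Fixed-block dedupe only matches identical data at identical block
# offsets: shift the same payload by a few bytes and every block hash
# changes, giving zero dedupe. Purely illustrative.
import hashlib
import os

BLOCK = 16 * 1024  # assumed 16 KiB dedupe granularity

def unique_blocks(data: bytes) -> int:
    """Count distinct BLOCK-sized chunks by content hash."""
    return len({hashlib.sha256(data[i:i + BLOCK]).digest()
                for i in range(0, len(data), BLOCK)})

payload = b"".join(os.urandom(BLOCK) for _ in range(8))  # 128 KiB "image"

aligned = payload + payload                  # second copy block-aligned
shifted = payload + b"\x00" * 512 + payload  # second copy off by 512 bytes

print(unique_blocks(aligned))  # 8: the second copy dedupes away entirely
print(unique_blocks(shifted))  # 17: every block hash changed, zero dedupe
```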

      1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: Enrico Signoretti Sums it up perfectly

      Everybody disagrees... just go and see the market share figures yourself.

      And while at it... take your desperate advertising attempt somewhere else, please.

      1. gazthejourno (Written by Reg staff)

        Re: Re: Enrico Signoretti Sums it up perfectly

        Now, now, be nice.

  10. Boyan_StorPool

    Flash will take over from HDD and hybrid systems, given all its benefits, but it will take two to four years to get there. And of course it will not totally displace disk in all cases, as we still see tape every now and then.

    There is already considerable intelligence in the software stack, which increases endurance and performance well above the figures quoted by the manufacturer. This reduces cost in $/GB and $/IOPS, and at some point it will not make sense to run disk. This year alone, a considerable amount of new functionality will be added to the data management software layers, extending the capabilities of the raw devices. As 3D XPoint and NVMe add new tiers of devices and software starts managing them intelligently, there will be even more fine-grained ways to address specific use cases.
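
    The endurance point can be made concrete with the usual back-of-envelope model: lifetime depends on rated P/E cycles and the write amplification factor (WAF), which smarter software stacks push down. All figures below are illustrative assumptions:

```python
# Simple SSD endurance model: halving write amplification doubles the
# drive's useful life for the same host workload. Figures illustrative.

def lifetime_years(capacity_tb: float, pe_cycles: int,
                   host_writes_tb_per_day: float, waf: float) -> float:
    total_writes_tb = capacity_tb * pe_cycles
    return total_writes_tb / (host_writes_tb_per_day * waf) / 365

# A 1.92 TB drive rated for 3,000 P/E cycles, 2 TB of host writes/day:
print(f"WAF 3.0: {lifetime_years(1.92, 3000, 2, 3.0):.1f} years")
print(f"WAF 1.5: {lifetime_years(1.92, 3000, 2, 1.5):.1f} years")
```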
