The storage is alive? Flash lives longer than expected – report

The Tech Report has been running an ‘SSD Endurance Experiment’ utilising consumer SSDs to see how long they last and what their "real world" endurance really is. It seems that pretty much all of the drives are very good and last longer than their manufacturers state. It's a fairly unusual state of affairs – something in IT that does better than it states on the can.

  1. Simon Harris

    Spinning rust.

    Out of curiosity, how does this endurance compare with a traditional hard drive?

    1. Nigel 11

      Re: Spinning rust.

      That flash RAM fails after sufficient write activity is fundamental to the physics of how it operates. In contrast, a magnetic film does not wear out merely as a consequence of being written, however many times it is written to.

      On the other hand, the moving parts in a hard disk do wear out, usually in a slowly progressive manner. Also, any contamination inside the HDA can cause gradual degradation of the read head and magnetic surfaces (and sometimes much less gradual degradation – the catastrophic failure known as a head crash, akin to an aeroplane flying into a mountain rather than into grains of wind-blown sand).

      Anyway, my experience is that quite a few hard disks "never" fail (i.e. they are declared obsolete and junked while still working perfectly after five or ten years in service). Many fail gracefully: you get warning that they are deteriorating through their SMART statistics, and you can hot-replace them proactively if you are using mirroring or RAID, or just shut down and clone to a new disk with ddrescue. A good fraction of the rest are rescuable even after failing hard as far as an operating system is concerned, i.e. you can use ddrescue to copy them to a new hard drive with no loss of data after many retries, or with only a few sectors unreadable. Only a smallish percentage go from disk to brick "just like that", and a majority of those within their first month in service ("infant mortality").

      The controller of a flash drive must surely know how many pages have failed and been replaced from the pool of spares. So what's going on? Are SSD controllers not being honest with their SMART statistics (for example SMART 182, "Erase Fail Count")? Or did the testers simply write until failure, without monitoring the statistics to see whether impending failure was easy to spot? Or are there whole-chip failure modes with flash storage that make abrupt failure far more likely than with other VLSI chips such as hard disk controllers? (Well, there are 8 or 16 more VLSI chips in an SSD, so maybe 8 to 16 times the risk.)

      More research needed.

      1. Charles 9

        Re: Spinning rust.

        "The controller of a flash drive must surely know how many pages have failed and been replaced from the pool of spares. So what's going on? Are SSD controllers not being honest with their SMART statistics (for example with SMART 182, " Erase Fail count")? Or did the testers simply write until failed, without monitoring the statistics to see whether impending failure was easy to spot? Or are there whole-chip failure modes with flash storage, that make abrupt failure far more likely than with other VLSI chips such as hard disk controllers? (Well, there are 8 or 16 more VLSI chips in an SSD, so maybe 8 to 16 times the risk)."

        What's happening is that it's the controller that's failing first, rendering everything else moot.

  2. This post has been deleted by its author

  3. RobHib

    Time will ultimately tell.

    Decades ago, when I was at school, I recall daring kids to touch my charged capacitors. Some did, which instantly deterred others.

    I also remember the charge didn't last very long. Now, SSDs aren't quite the same but the theory of operation's not far off.

    SSDs are great devices, I use them all the time, but we're still testing their endurance. A few more years will tell. There's also the question of very long-term storage over decades. The message from similar technologies, EEPROMs etc., is mixed: I've some decades old and perfectly OK, whilst a few have carked it inexplicably (but the manufacturing tech is much older, of course).

    1. Trevor_Pott Gold badge

      Re: Time will ultimately tell.

      "SSDs great devices, I use them all the time, but we're still testing their endurance. A few more years will tell."

      In a few more years we will no longer be able to advance SSDs and will be forced to use new technologies. So your solution to the emergence of a technology is to wait a decade or more after everyone else starts using it in mainstream applications, and only once it has reached its absolute limit of advancement do you adopt it? Do you work for NASA designing probes or something? Do you still store your primary production data on mercury delay lines?

      Question: is ADSL an okay technology yet, or are you still just coming to terms with K56flex?

  4. Electron Shepherd

    Wasn't this the whole premise behind RAID (excluding RAID 0)? You buy cheap drives, expecting them to fail, but since the data can be rebuilt, it doesn't matter.

    Of course, the caveat is that "it doesn't matter, provided you can rebuild the array before enough drives fail to destroy the overall data integrity", which may be significant in some scenarios.

    Has anyone done any costings of buying a few high reliability eMLC drives vs buying lots of cheap "desktop class" drives, and building a RAID array with lots of hot spares?
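
    For anyone wanting to do the sums, here's a back-of-envelope sketch; every price, drive count and failure rate in it is a placeholder guess rather than a real quote, so plug in your own figures:

      # Back-of-envelope only: all figures below are placeholder assumptions,
      # not real quotes. The point is just the shape of the comparison.

      def total_drive_spend(price, data_drives, parity_drives, hot_spares, expected_failures):
          """Initial array + hot spares + drives you expect to replace over its life."""
          return price * (data_drives + parity_drives + hot_spares + expected_failures)

      # Hypothetical 5-year comparison for the same usable capacity
      emlc_cost  = total_drive_spend(price=1200, data_drives=6, parity_drives=2, hot_spares=1, expected_failures=0)
      cheap_cost = total_drive_spend(price=300,  data_drives=6, parity_drives=2, hot_spares=3, expected_failures=5)

      print(f"eMLC drives:    ${emlc_cost}")    # 1200 * 9  = $10800
      print(f"Desktop drives: ${cheap_cost}")   # 300  * 16 = $4800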

    1. This post has been deleted by its author

    2. This post has been deleted by its author

  5. frank ly

    Advance warning?

    I thought that SSDs were overprovisioned and automatically swapped out bad blocks to maintain their stated capacity. If so, isn't there a low-level check that can be done to see if an SSD is 'on the edge', or approaching it?

    1. John Robson Silver badge

      Re: Advance warning?

      They should be able to tell you how many blocks they've reallocated though.

      And an increase in that number is taken as an early warning...

    2. BlartVersenwaldIII

      Re: Advance warning?

      They do; as far as I'm aware from the drives I have, most SSDs made in the last two years keep track of SMART attributes for things like total bytes written, the number of reallocated blocks, the amount of "spare area" remaining, read errors, and so on. TR went into quite a lot of detail on the SMART monitoring over the course of the test.

      Here's the graph they made from the death of one of the drives:

      http://techreport.com/review/27062/the-ssd-endurance-experiment-only-two-remain-after-1-5pb

      As you can see, the "life left" counter still had plenty of slack left in it, but there was a steep change in reallocated sectors and read error rates shortly before failure. The graphs on the next page show SMART graphs for the surviving drives.

      Of course, almost no-one keeps a running graph of SMART stats (in fact, the number of people running a SMART monitor of any kind is still very low), but in my own experience SMART is more useful for SSDs than it is for platter-based drives. Now if only the counters were more standardised...
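
      For anyone who does want a running record, here's a rough sketch of logging a few SMART attributes to CSV via smartctl (assuming smartmontools is installed and the script runs as root; the watched attribute IDs are examples and vary by vendor):

        # Rough sketch: append selected SMART raw values to a CSV on each run
        # (cron it hourly or daily). Attribute IDs differ between vendors,
        # so adjust WATCHED for your particular drive.
        import csv, os, re, subprocess, sys, time

        DEVICE = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
        WATCHED = {"5", "177", "179", "181", "182", "241"}  # reallocations, wear, erase fails, total written

        # smartctl uses a bitmask exit status, so don't treat non-zero as fatal
        out = subprocess.run(["smartctl", "-A", DEVICE],
                             capture_output=True, text=True).stdout

        row = {"timestamp": int(time.time()), "device": DEVICE}
        for line in out.splitlines():
            m = re.match(r"\s*(\d+)\s+(\S+)\s+.*\s(\S+)\s*$", line)
            if m and m.group(1) in WATCHED:
                row[f"{m.group(1)}_{m.group(2)}"] = m.group(3)  # raw value column

        log = "smart_log.csv"
        new_file = not os.path.exists(log) or os.path.getsize(log) == 0
        with open(log, "a", newline="") as f:
            w = csv.DictWriter(f, fieldnames=sorted(row))
            if new_file:
                w.writeheader()
            w.writerow(row)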

      Incidentally, the most interesting aspect of this series was the Intel 335; once it reaches its lifetime write rating it goes into read-only mode, and on the very next power cycle it bricks itself.

  6. Will Godfrey Silver badge

    In order to stagger the RAID array's failures, wouldn't it make sense to use a mixture of drives of different makes and slightly different actual capacities?

    1. This post has been deleted by its author

  7. Nate Amsden

    HP posted this info

    HP's latest 3PAR SSDs all come with an unconditional 5 year warranty.

    Oct 9, 2014

    http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/Worried-about-flash-media-wear-out-It-s-never-a-problem-with-HP/ba-p/172690#.VQrfvUSzvns

    "The functional impact of the Adaptive Sparing feature is that it increases the flash media Drive Writes per Day metric (DWPD) substantially. For our largest and most cost-effective 1.92TB SSD media, it is increased such that an individual drive can sustain more than 8PB of writes within a 5-year timeframe before it wears out. To achieve 8PB of total writes in five years requires a sustained write rate over 50MB/sec for every second for five years."

    ("Adaptive Sparing" is a 3PAR feature)

    another post about cMLC in 3PAR:

    Nov 10, 2014

    http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/cMLC-SSDs-in-HP-3PAR-StoreServ-Embrace-with-confidence/ba-p/176624#.VQrfqESzvns

    1. MMull

      Re: HP posted this info

      HP's 3PAR uses SanDisk's SSDs: http://www.techopsguys.com/2014/06/09/3par-june-2014-the-7450-afa-keeps-getting-better/#comments

  8. Snowman

    "It seem that pretty much all of the drives are very good and last longer than their manufacturers state. It's a fairly unusual state of affairs – something in IT that does better than it states on the can"

    Interesting take. From what I had heard, manufacturers' warranties were often set by the failure rate, so statistically they only have to cover the outliers which died early. Then, after the covered period passes, the majority will die within a common range, with another small percentage carrying on far longer than expected. That they are still going well beyond their ratings is probably due to the relatively short time (for manufactured goods) that SSDs have been on the market, compounded by how often node shrinks come.

  9. Beridhren the Wise

    About 6 months ago we moved our production database (about 30TB) from disk to SSD. Tests showed that moving to SSD would give us a 30% improvement in I/O performance. We needed that improvement, so the solution we purchased not only uses an array of SSD drives, the entire array is mirrored to a backup array, with each array having a hot spare SSD. The device also uses redundant, hot-swappable controllers, with a third controller as a hot spare. Even the fans are redundant and hot swappable. Expensive, yes, but speed and availability were much more important than cost.

    What surprised us was that once we moved to SSD, not only are we getting the expected I/O boost, which is reducing the cost of processing each transaction (and we process about 6 million transactions per day, so that cost is a serious consideration), but the system uses much less power and requires less cooling than spinning disks, so through reduced operating expenses we expect the new solution to pay for itself in about 2 years.

    So, in short, it is my experience that, done correctly, SSD is a much better solution than spinning rust.

    1. Anonymous Coward
      Anonymous Coward

      >What surprised us was that once we moved to SSD ... but the system uses much less power and requires less cooling than spinning disk.

      Wow, that comment tickled my salesman-spouting-BS meter some. Doing even a rudimentary amount of research would have told you that was true; if this was such a big mission-critical project, it would be a near-total lack of due diligence if anyone was surprised by it.

    2. This post has been deleted by its author

  10. Computurd

    These SSDs were never powered off, not once. That is the flaw in this testing. Endurance is rated for data retention with the power off. Several of the guys who did the initial testing (which this testing is based on) had SSDs 'die' after one second of power-off once they had reached their warrantied writes. If you removed power from any of those SSDs, even halfway through that test, every single bit of data would be gone.

    1. Nightkiller

      Quite possible.

      According to the TechReport article, several manufacturers' drives automatically go into "Brick Mode" once their rated life is reached.

      1. Justin Clift

        Which manufacturers?

        It'd be useful to know who to avoid.

    2. Sandtitz Silver badge

  11. This post has been deleted by its author

  12. Charles 9

    Perhaps it should be noted that, since the most common mode of failure is "sudden catastrophic", the main point of failure is not the flash chips but the controller handling them. I guess at this low price point it would be too much to ask to install a backup or replaceable controller unit for the drive.

    So noted, in SSDs the controller tends to fail before the actual media. Kind of reminds me of a story of someone looking for a used piano bench and finding out they were hard to come by because pianos tend to outlast the benches, meaning many were scrapped and replaced altogether, reducing the supply.

    1. ntevanza

      Me too

      This is my limited experience of SSD failures. After a restart, it just stops identifying itself intelligibly to the host. We've all had days like that. Or maybe it wakes up, sees that it has done something dishonourable to your data, and commits seppuku.

  13. Joe User
    Holmes

    "most drives fail hard when they finally do fail, leaving your data inaccessible"

    In other words, don't forget to make regular backups.

  14. Anonymous Coward
    Anonymous Coward

    Crucial MX100 firmware update available

    Interesting, this just prompted me to check the Crucial website for software/firmware updates for my recent SSDs. Turns out there's a firmware update from about a week ago for MX100s:

    http://www.crucial.com/usa/en/support-ssd

    Nothing critical-sounding, but looks useful for better SMART stats and power/error handling.

  15. Alan Brown Silver badge

    Big point missed

    Other than the cheap drives, EVERY SINGLE one of the SSDs flagged that it was at the end of its design life a _long_ time before it actually failed.

    This test was based around the question "How long can we run them before they actually fail HARD?"

    The SMART data on the drives was returning "Lifetime expired" well before this point (about a year beforehand in the case of the 840 Pro), so you can't say they hadn't given adequate warning. It's arguable that the SMART data was too conservative.

    WRT "Flash has limitations" - well duh - so does magnetic media as others have pointed out. So far in IO-heavy operation the Intel X25E flash drives used as spool cache on our backup server have outlasted 3 sets of rotating media on the same machine (they've written at least 3 times as much as the 840pros got and are reporting 96% left)

    Phase-change media, memristors and other solid-state tech exist. They may or may not eclipse flash long-term (PCM and memristors are _much_ faster than flash), but in the meantime moving back to 40nm and going to 3D stacking has resulted in flash with greater lifespan and lower latency than the 840 Pro range (a 10-year warranty on the 850 Pro is nothing to sneeze at).

    You're a fool if you don't make regular backups, and you're a fool if you run your drives past the point where they tell you they're "expired" without making provision for impending doom.

  16. Anonymous Coward
    Anonymous Coward

    The real specs

    The standard against which the write endurance specs are gauged is described on pages 25-27 of this really excellent presentation: http://www.jedec.org/sites/default/files/Alvin_Cox%20%5BCompatibility%20Mode%5D_0.pdf

    The write endurance spec (TBW, usually) means you can write that many bytes to the drive and expect to read back its capacity, with (for client drives) an uncorrectable BER of 10**-15, after you have left it powered off for a year stored at 30°C. So a week powered off doesn't get you close to the spec.
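
    To put that 10**-15 in perspective, here's a quick back-of-envelope for one full read-back of a hypothetical 250GB client drive (the capacity is just an example):

      # What a 1e-15 uncorrectable bit error rate means for one full read-back
      # of a hypothetical 250 GB client drive (capacity picked as an example).
      capacity_bits   = 250e9 * 8   # 2e12 bits
      uber            = 1e-15       # JEDEC client-class figure
      expected_errors = capacity_bits * uber
      print(f"Expected uncorrectable bit errors per full read: {expected_errors:.3f}")
      # ~0.002, i.e. roughly a 1-in-500 chance of even one unreadable bit,
      # provided the drive is still within its TBW and retention limits.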

    Why does being powered off matter? The flash controller in the SSD continually monitors the ECC of valid data. When it sees a bit error, it corrects it and moves the data. As the device nears its final death, the flash controller spends all its time correcting ECC errors and moving data. I suspect the performance of these geriatric devices suffers immensely at the end.
