Backblaze report finds SSDs as reliable as HDDs

Backblaze has published the first SSD edition of its regular drive statistics report, which appears to show that flash drives are as reliable as spinning disks, although with surprising failure rates for some models. The cloud storage and backup provider publishes quarterly and annual Drive Stat reports, which focused …

  1. Fred Flintstone Gold badge

    Kudos to Backblaze

    This is IMHO about the best marketing: practical, pragmatic numbers from their business, made available for everyone.

    What a fantastic idea.

    1. Down not across

      Re: Kudos to Backblaze

      Seconded. I've found the provided data incredibly interesting over the years.

      Just a shame manufacturers don't use the SMART attributes in a uniform way, especially with SSDs, so many attributes are not directly comparable.

      I wonder if attribute 249 could be used to compare amounts written across vendors (and calculate how many TBW it adds up to) and then factor that into the failure calculations. I couldn't find the raw SSD data for download, but perhaps it is not there yet or I looked in the wrong place.
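      For what it's worth, pulling a numbered attribute out of `smartctl -A` text output is easy enough to script. A minimal Python sketch; note that attribute 249's meaning and units are vendor-specific (NAND writes in GiB on some drives), and the sample table below is made up for illustration:

```python
# Sample `smartctl -A` attribute table (values invented for illustration;
# check your vendor's documentation for what attribute 249 actually means).
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
241 Total_LBAs_Written      0x0032   100   100   000    Old_age   123456789
249 NAND_Writes_1GiB        0x0032   100   100   000    Old_age   10403
"""

def smart_raw_value(output: str, attr_id: int):
    """Return the RAW_VALUE of a numbered SMART attribute from
    `smartctl -A` text output, or None if the attribute is absent."""
    for line in output.splitlines():
        fields = line.split()
        # Attribute table rows start with the numeric attribute ID.
        if fields and fields[0].isdigit() and int(fields[0]) == attr_id:
            return int(fields[-1])  # RAW_VALUE is the last column
    return None

gib_written = smart_raw_value(SAMPLE, 249)
# If attribute 249 really is NAND writes in GiB units on this drive,
# a rough terabytes-written figure is:
tb_written = gib_written / 1024
```

      The cross-vendor comparison the comment asks about would still need a per-vendor mapping of which attribute means what, which is exactly the non-uniformity being complained about.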

    2. Anonymous Coward
      Anonymous Coward

      Re: Kudos to Backblaze

      Would be good to see some real world data on the SMR (Shingled Magnetic Recording) WD Reds by Backblaze, just to see how bad they really are.

      WD: how to destroy a trusted brand overnight, pretty much bait and switch. (And yes, you can buy WD Red Plus drives, but the damage has been done; there's no real way back from what these companies did.)

      I found the whole episode of surreptitiously replacing WD Red CMR drives with WD Red SMR drives utterly dishonest, and WD have lost out from it. Seagate CMR IronWolf SATA drives, which I wouldn't have bought in a month of Sundays previously, are now humming away in NAS devices and working well, so far.

      1. Anonymous Coward
        Anonymous Coward

        Re: Kudos to Backblaze

        Trusted brand?

        I call them Western Dataloss for a reason. Back when I was regularly replacing failed hard drives, at least 50% of the failed drives I saw were Western Dataloss drives. It finally got to where I'd recommend my clients preemptively replace any Western Dataloss drives that came in new equipment with something, anything else.

        They've made garbage drives for DECADES.

        Trusted to fail, I suppose.

  2. Nate Amsden

    interesting but not too useful

    They seem to be ignoring many of the bigger names in SSDs, whether it's Samsung, Intel, or SanDisk. Perhaps this is due to cost, as I know Backblaze goes for the best price it can get.

    I'm guessing they use a lot of Seagate SSDs because they got a special deal on them, since they already use a ton of Seagate drives. A quick search for SSD market share turned up a website claiming Seagate had just 0.3% of the SSD market, so obviously not a big player there; Samsung and SanDisk were by far the biggest.

    My personal track record over the past 8 years has been zero SSD failures across my personal systems, all of which run Samsung or Intel SSDs (one Intel-only SSD and a few HP OEM Intel SSDs). Also zero failures on the SSDs in my 3PAR arrays (the oldest SSD there is from Oct 2014, with 88% write lifetime left). My sample size is small though, not even 65 drives total. They have certainly far exceeded my expectations in any case. The bigger MLC SSDs on the 3PAR probably ran north of $20,000 each when they were new, though (not uncommon in enterprise storage).

    Personally, I don't touch SSDs on my own systems unless they're Intel or Samsung, just because of this good experience. SanDisk comes into play more in the enterprise, and I'd expect them to be used more commonly in storage arrays than bought directly by end users. I'm also not interested in touching QLC flash yet, anyway.

    The only other SSD I have owned was a Corsair, I think probably 10-11 years ago, and I was not too happy with that one. The Samsung 850 Pro was my first real "home" SSD, which according to my email archives I bought around Aug 2014.

  3. Graham Cobb

    Not necessarily "as reliable as HDDs"

    Great data from Backblaze, and thanks and kudos to them for publishing it. I already use their services and recommend others take a look at them.

    However, I think the article goes a bit far in saying "finds SSDs as reliable as HDDs". The report authors are careful to acknowledge that the two types of devices are used in very different applications within Backblaze. I think a more realistic title would have been "Backblaze report finds SSD system disks as reliable as HDD data disks".

    Looking forward to next year's report, with more data on these SSDs and maybe more opportunity to look at the differences between manufacturers and between technologies. I'm particularly interested in the accuracy of "lifetime writes" numbers and how accurately we can predict SSD failures.

    1. Crypto Monad

      Re: Not necessarily "as reliable as HDDs"

      I think a more realistic title would have been "Backblaze report finds SSDs as unreliable as HDDs"

  4. Will Godfrey Silver badge
    Facepalm

    FWIW

    I have 3 120GB SSDs that have been in fairly continuous domestic use for between 6 and 8 years. Two are Samsung, and the other is Intel. So far, I've had no loss of data on any of them.

    P.S.

    I probably shouldn't have said that!

    1. Anon

      Re: FWIW

      It's OK, they aren't OCZ Synapse SSDs. Great for the few months they worked... But that's all over and done with now.

    2. John Brown (no body) Silver badge

      Re: FWIW

      The problem isn't small amounts of data loss, and that's the big difference with SSDs. HDDs have a number of failure modes, most of which allow relatively easy data recovery, either by recovering what is still readable when there are bad sectors, or even by replacing the controller board with an identical working one.

      When SSDs fail, it's generally one of two failure modes. (1) So many memory cells have gone bad that all the spare capacity has been used, and the drive fails over to read-only mode. The good news is you can copy the data off; the bad news is that this seems to be a relatively rare failure type. (2) The other failure modes are catastrophic. The system can no longer even "see" the drive, or the drive isn't responding in a way the BIOS can identify even though it thinks there's "something" there, so it takes an age to complete the POST and then fails to boot. These catastrophic failures seem to be the larger percentage.

      On the other hand, I see a lot more failed motherboards than SSDs, and most of our clients are corporates who just stick a new SSD in, re-image it, and the user gets all their data access back when they log in. On the whole, I probably see fewer SSDs fail than we used to see with spinning rust. Then again, many more corporates are using remote data with synched user profiles, so maybe there are far fewer writes to an SSD over its normal life.

  5. Lorribot

    It would be interesting to see the same from HP/3PAR or Dell/EMC; they must have some really big numbers of disks. MS, Amazon, and Google must also be able to give some proper numbers across the many thousands of disks in their datacenters.

    Having had several SANs from both the above groups, with SATA, FATA, SAS, and NL-SAS drives of both SSD and HDD types, I can honestly say the SSDs lasted longer. The HDDs would always have around 1% fail after 6 months, then nothing of note up to 3 years, then failures would ramp up to around 10% and die off again until about 7 years, when the disks were pretty much toast (well, the ones that hadn't been replaced already). With the SSDs, nada; I think I had around 1 or 2 over the 7 years I was managing storage.

    I may have been lucky, but to be honest, in recent times we have had more spinning disks fail in our Data Domains than SSDs in our vSAN (Dell) or 3PAR systems, which probably see much more daily write and read activity and have about the same number of actual disks in them.

    The difference between rebuilding an array with large SSDs compared to similar sized HDDs is just not worth even discussing.

    Horses for courses and cost needs to be considered, but for primary storage why would you not go SSD?

    If you can lay your hands on some old SAS SSDs from the likes of HP or Dell, and get a second-hand SAS controller from your favorite online supplier of second-hand goods, they make a good local backup target or home-made NAS device and will last many years.

    I have a pair of 1TB HP drives as a local backup target; they were previously used as data drives in a server and reached me at 4 years old. They are hooked up to an HP P410 6Gb SAS card (£8 from eBay), which is auto-recognized by Windows 10/11.

    The HP Smart Array tool reports the drives as having 99.65% usage remaining, and "Estimated Life Remaining Based On Workload To Date" as "314538 day(s)", which is over eight centuries, so not worth worrying about. So why wouldn't you?
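    Converting that day count into something graspable is one line of arithmetic:

```python
days_remaining = 314_538
years_remaining = days_remaining / 365.25  # account for leap years
# roughly 861 years of estimated life at the current workload
```

    Estimates like this just extrapolate the wear rate to date, so a change in workload would change the number, but the margin here is enormous either way.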

  6. Kev99 Silver badge

    Just try to recover your data from an SSD that pukes. It ain't gonna happen; I know from experience. Spinning rust may not be the Prom Queen, but it works and lets you get your things back.

    1. Lorribot

      I have had more unrecoverable HDDs than SSDs. But maybe I have a bigger sample size than you.

      At an enterprise level I have seen a much higher failure rate on HDDs than SSDs, across something like 20,000+ HDDs and SSDs over the years, in both end-user devices and storage systems.

      Yes, losing data is a bit shit, but that is why you replicate (RAID 1), back up, or duplicate (OneDrive etc.) data or disks: to protect what is important, because shit will always happen.

    2. Will Godfrey Silver badge

      In a sense I'd prefer a total failure to a few odd bits being corrupted here and there. I keep my data files fairly regularly backed up, so the only thing I'd have to rebuild would be the OS, on another drive after I've scrapped the faulty one.

    3. doublelayer Silver badge

      I try not to have to recover. I've had HDDs that likewise failed catastrophically, and that wasn't any better, even though the constant clicking made it clear it wasn't worth bothering with the normal recovery methods. By the time a drive is failing, there's a good chance it's not going to work at all, so I don't trust that I can get things back at that point. If it's important enough, I have the disks set up in a RAID configuration and keep copies of their contents elsewhere.

    4. SImon Hobson Silver badge

      Unless there's a bad block in the middle of that one crucial data file, or it's one of those horrible Seagate drives that "just stops responding" when it hits certain types of error (so you can never recover most of the undamaged data), or it's just given up altogether, or it's crashed its heads and turned them into lathe cutting tools that remove the oxide layer instead of reading it, or ...

      All failures I've had with spinning rust.

  7. emfiliane

    I love that statistically, the Crucial drives have between a 1% and 10,930% chance of failing every year, based on the data they had at the time. The magic of little numbers!

  8. 89724102172714182892114I7551670349743096734346773478647892349863592355648544996312855148587659264921

    fscking hell, I'm sticking with Western Digital Blacks
