Big Blue's GPFS: The tech's fantastic. Shame about the product

IBM is a great technology company: so many of the technologies we take for granted can be traced back to Big Blue, and many of today's implementations are still poorer than the originals. And yet the firm is not the dominant force it once was; an organisational behemoth, riven with politics and fiefdoms doesn …


This topic is closed for new posts.
  1. Anonymous Coward

    GPFS versus ZFS

    When we, as an education-related department, needed new storage some years ago we were sick and tired of basic hardware RAID rebuilds failing due to bad blocks found only when one disk died and a rebuild was forced. Not only were there block errors, but finding out *what* file (if any) was now corrupted was an absolute pain.

    So when it came to tendering for new storage we had a requirement for file-level integrity checks which, at the time, really meant it was down to Sun's ZFS or IBM's GPFS file systems running on something (TBD by offering company).

    Sadly for us, being funded the way we are, GPFS was a no-go as it had an ongoing license fee irrespective of support, and that is something we could not take. Also we were bid with something like 5 days on-site time to get it running.

    Seriously? If it is that difficult for IBM's experts what hope in Hell do we have to manage it later on?

    So we went with Sun, and while ZFS works very well, the whole appliance thing they built to make it a system sucks donkey balls. Now it is Oracle, support is even poorer and much more expensive, we want out, and our replacement storage system is likely to be self-built.

    Yes, we won't have any SLA or someone to blame if it goes wrong, but on the other hand we will be able to do *something* rather than waste days of effort chasing up Oracle, etc, to see if they decide they can be bothered to actually do something to fix it.

    1. Justicesays

      Re: GPFS versus ZFS

      Not sure where you developed your requirements from...

      A raid 6 setup with industry standard raid scrubbing would also resolve your issues, and would mean you could have sourced from almost any vendor out there. My personal preference is Network Appliance for a SME scenario, not too expensive and pretty easy to set up and maintain.

      1. JLH

        Re: GPFS versus ZFS

        Regarding RAID scrubbing, only once in five years have I come across this problem - i.e. a disk failing, and a second disk on the same storage array flagging up a bad block.

        I was already running a regular RAID scrub every four weeks, and upped that time to every two weeks.

      2. Anonymous Coward

        Re: GPFS versus ZFS

        We suffered more than you might have expected because our old hardware lacked a scrub option. D'oh! Really?

        One issue with a lot of the RAID-6 systems was poor write speeds due to the read-modify-write way of operation; that is something that varies a lot between suppliers.
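The small-write penalty mentioned here can be made concrete. In the textbook read-modify-write scheme, a partial-stripe write must first read the old data and both parity blocks before writing the new versions back. A back-of-the-envelope sketch of the I/O counts (a generic model, not any particular vendor's implementation):

```python
def raid6_small_write_ios(blocks_updated: int) -> dict:
    """I/O operations for a partial-stripe RAID-6 write using
    read-modify-write: read old data + P + Q, write new data + P + Q."""
    reads = blocks_updated + 2   # old data blocks, old P, old Q
    writes = blocks_updated + 2  # new data blocks, new P, new Q
    return {"reads": reads, "writes": writes, "total": reads + writes}

# A single-block update costs six I/Os where a plain disk needs one:
print(raid6_small_write_ios(1))  # {'reads': 3, 'writes': 3, 'total': 6}
```

Vendors differ precisely in how aggressively they avoid this path, for example by coalescing small writes into full-stripe writes in cache.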

        The other thing is that it is not just reported bad blocks from the HDD, which is something RAID should deal with. You also get 'silent errors', where the HDD reports a good read but the data is not right. Less common, but it happens. See for example:

        The original CERN report (PDF file) can be found here:

        Also this study:

        Even if you are not interested in ZFS, that paper covers numerous things that can corrupt your data beyond the expected unreadable disk sectors of a dying HDD.
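Silent errors are exactly the failure mode that end-to-end checksumming filesystems such as ZFS are built to catch: a checksum stored apart from the data is verified on every read, so a "successful" read of wrong bytes is still detected. A toy sketch of the principle (illustrative only, nothing like ZFS's actual on-disk layout):

```python
import hashlib

class ChecksummedStore:
    """Toy block store that detects silent corruption the way a
    checksumming filesystem does: verify a stored checksum on every read."""
    def __init__(self):
        self.blocks = {}
        self.sums = {}  # checksums kept apart from the data blocks

    def write(self, key, data: bytes):
        self.blocks[key] = bytearray(data)
        self.sums[key] = hashlib.sha256(data).digest()

    def read(self, key) -> bytes:
        data = bytes(self.blocks[key])
        if hashlib.sha256(data).digest() != self.sums[key]:
            raise IOError(f"silent corruption detected in block {key}")
        return data

store = ChecksummedStore()
store.write("a", b"important data")
store.blocks["a"][0] ^= 0xFF  # bit-rot: the disk 'succeeds' but returns wrong bytes
try:
    store.read("a")
except IOError as e:
    print(e)  # silent corruption detected in block a
```

A plain RAID controller, by contrast, only recomputes parity when a drive actually reports a read failure, so a lying drive goes unnoticed until a scrub or a rebuild.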

        1. Justicesays

          Re: GPFS versus ZFS

          RAID 6 gives protection against bad data reads as well, as it has three methods of determining the correct data for a given block on an unfailed array, and can use the two that agree, assuming the specific implementation supports that. Even with just two data points (RAID 5, or RAID 6 with a single disk failed), if it's a transitory error you can repeat the reads until the two agree, mark the block(s) up for special attention by a disk maintenance daemon, and then carry on.
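The "two that agree" trick can be illustrated. Classic RAID-6 keeps two independent parities, P (a plain XOR) and Q (a weighted sum over the Galois field GF(2^8)); comparing their syndromes both detects a single silently corrupted block and locates it. A toy sketch of the arithmetic (illustrative only, not any vendor's on-disk layout):

```python
# Toy GF(2^8) arithmetic as used by RAID-6 Q parity.
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8), reduction polynomial x^8+x^4+x^3+x^2+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return r

def parities(data):
    """P = XOR of blocks, Q = sum of g^i * block_i over GF(2^8), g = 2."""
    p = q = 0
    g = 1
    for d in data:
        p ^= d
        q ^= gf_mul(g, d)
        g = gf_mul(g, 2)
    return p, q

def locate_and_fix(data, p, q):
    """Find and repair a single silently corrupted data byte.

    For an error e in block i: P syndrome = e, Q syndrome = g^i * e,
    so the index is the i where g^i * ps == qs."""
    ps, qs = parities(data)
    ps ^= p
    qs ^= q
    if ps == 0 and qs == 0:
        return data  # everything consistent
    g = 1
    for i in range(len(data)):
        if gf_mul(g, ps) == qs:
            fixed = list(data)
            fixed[i] ^= ps
            return fixed
        g = gf_mul(g, 2)
    raise ValueError("more than one block damaged")

data = [7, 42, 99, 13]
p, q = parities(data)
bad = [7, 42, 250, 13]            # block 2 silently corrupted
print(locate_and_fix(bad, p, q))  # [7, 42, 99, 13]
```

With only one parity (RAID 5) the same syndromes tell you *that* something is wrong but not *which* block, which is why the poster's fallback there is re-reading and hoping the error was transitory.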

          Some RAID 6 implementations (such as the Netapp one) have a smaller overhead than others: they can use a diagonal disk stripe, due to their RAID 4 style underlying layout, rather than the computationally intensive standard method (although RAID 4 has its own performance challenges).

          Given the size of current disks and the expected rebuild times, we are likely to need RAID-7 in the next few years (apparently) to maintain the expected data loss chances. At least with spinning rust. If we all move to SSDs then that may be less of an issue due to faster rebuild times.
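The "expected data loss chances" here come from simple arithmetic: a rebuild must read every sector of every surviving drive, and each bit read carries a small unrecoverable-error probability. A back-of-the-envelope sketch (the 1-in-10^15 bit error rate is a typical enterprise drive spec-sheet figure, assumed here):

```python
import math

def p_rebuild_read_failure(drive_tb: float, surviving_drives: int,
                           ber: float = 1e-15) -> float:
    """Chance of hitting at least one unrecoverable read error while
    reading every bit of every surviving drive during a rebuild."""
    bits = drive_tb * 1e12 * 8 * surviving_drives
    # 1 - (1 - ber)^bits, computed stably for tiny ber and huge bits
    return -math.expm1(bits * math.log1p(-ber))

# Rebuilding after one failure in a ten-drive group of 8 TB disks:
p = p_rebuild_read_failure(8, 9)
print(f"{p:.0%}")  # roughly 44% under these assumptions
```

As drives grow, that probability creeps towards certainty for a given parity level, which is the argument for a third parity (RAID-7 / triple parity), and equally why fast-rebuilding SSDs change the calculation.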

          1. Alan Brown Silver badge

            Re: GPFS versus ZFS

            "Given the size of current disks and the expected rebuild times, we are likely to need RAID-7 in the next few years (apparently) to maintain the expected data loss chances. "

            Already done - ZFS RaidZ3

            1. Matt Bryant Silver badge

              Re: Alan Brown Re: GPFS versus ZFS

              ".....Already done - ZFS RaidZ3." Yeah, Alan would have posted earlier, but he was waiting for his system to finish one of the many resilvering operations that ZFS causes with annoying regularity. Especially with RAIDZ3, which needs NINE disks (by OpenSlowaris's own recommendations), and how many home rigs have nine SATA ports?

              Back to the Land of Sunshine you go, Alan.

    2. Alan Brown Silver badge

      Re: GPFS versus ZFS

      TrueNAS gives you ZFS and support. If you can't afford that, OpenZFS is working on unifying the non-Oracle ZFS world.

      (Running ZFS here on Ubuntu)

  2. Roo

    "If IBM really wants the GSS to be a success, it needs a scaleable and supported NAS gateway in front of it: it needs to be simple to manage. It needs integration with the various virtualisation platforms and IBM needs to simplify the GPFS licence model … when I say simplify, I mean get rid of the client licence cost."

    I haven't had a chance to kick the tires of GPFS yet, but I got the impression it was mounted like a bog standard filesystem, so anything that uses the stock file system API can hook up to it. Running an NFS service and Samba on a host with a GPFS filesystem mounted doesn't seem hard to me. Are you able to provide a bit more detail on what's hard about it?

    1. Justicesays

      Scalable is the idea

      GPFS gets its performance and scalability from the number of nodes in the set up.

      IBM do of course have a scalable file server offering to go with GPFS. It's called SONAS, and is essentially a load-balanced NFS/Samba set-up fronting a bunch of GPFS nodes. I wouldn't say it is easy to set up either, and I imagine it also costs plenty; most things from IBM do.

      1. Ken 16 Silver badge

        Re: Scalable is the idea

        You're right, neither cheap nor easy to set up (I used it for a grid engine once) but superb results...meaning better than 90% of solutions really need.

        1. Anonymous Coward

          Re: Scalable is the idea

          When comparing ZFS and GPFS we did like the idea of GPFS scaling in performance and size on clustered computers, but other factors ultimately made us think again.

  3. GreyWolf

    IBM products, licensing, etc

    26 years at IBM, so I've seen this before. When you, the product manager, go to Legal before launch to get the licensing decided, your project can end up being assigned to the nasty, paranoid tight-ass, and you end up with an unworkable proposition in the marketplace. Even at behemoth companies like IBM, it often comes down to the attitudes of individuals.

    In my day, it was not unusual to need a very large number of Go/NoGo signoffs before being permitted to launch. In one case, 57 signoffs, any one of whom could kill the product after hundreds of thousands had been spent on developing it.

    1. Anonymous Coward

      Re: IBM products, licensing, etc

      You forgot to say "and then they sack all of the developers".

  4. David Bell 4

    IBM already make such products: the enterprise-class SONAS and mid-range IBM V7000 Unified. They are just clustered Linux/Samba NAS gateways running GPFS, see: Chapter 7.

    1. Anonymous Coward

      V7000 Unified

      The V7000 Unified has many shortcomings... here are ten...

      1. The V7000 Unified TSM client is limited (not full), it doesn't allow for backup sets etc.

      2. The number of snapshots is limited (can be limited to a couple per day depending on the rate of change of data), deletion of snapshots can cause performance issues

      3. Support is limited - anyone with any significant knowledge is based in Mainz, Germany, and you had better be a large client to get access to them

      4. NFS version 4 is not supported

      5. SMB 2.1 and SMB signing is not supported

      6. TPC reporting is constrained on the V7000 Unified (if you're after file information, rather than block)

      7. IBM have decimated their UK pre-sales engineering teams and are relying on re-sellers to provide client pre-sales support; this is not working well yet

      8. The product has suffered from data corruption and data loss issues

      9. Try and find a training course - IBM now rely on partners who never seem to be able to get enough people to run a course

      10. There is no SVC equivalent for files on the Unified, so migrations to it can be challenging

      1. Matt Bryant Silver badge

        Re: Anonymous Salesman Re: V7000 Unified

        "The V7000 Unified has many shortcomings... here are ten..."

        WOOT! WOOT! Elmer detected!

        Please don't post such blatant marketing FUD without a disclosure.

        1. Anonymous Coward

          Re: Anonymous Salesman V7000 Unified

          No disclosure required - is 1st hand experience not good enough ?

          Care to refute any of the above issues ?

          1. Matt Bryant Silver badge

            Re: Anonymous Salesman V7000 Unified

            ".... is 1st hand experience not good enough ?....." Indeed, I very much value the sharing of experience, so please do recount how you personally encountered the issues listed. Otherwise I might just have to assume you just reported a FUD list and actually don't have the experience you claim.

            "....Care to refute any of the above issues ?" Hell no, I don't get paid to write or refute FUD. Go ask IBM. I just use that experience thing you mentioned to allow me to spot it from a mile away.

  5. Peter Gathercole Silver badge

    GPFS is an old-school product. It's been around for a long time (I first heard about it as mmfs about 20 years ago), and as such it is configured like an old-school product.

    But I would say that it seriously benefits from not being set up by a point-and-click GUI. It is a very high performance filesystem, and really benefits from the correct analysis of the expected workload to size and place the vdisks and stripe the filesystems accordingly. It's just one of those systems that is traditionally deployed in high-cost, high function environments where the administrators are used to/prefer to work using a CLI. If it were to appear in more places, it may need to change, but then that is what I thought SONAS was supposed to provide.

    I have been working with GNR and the GPFS disk hospital for the last two years on a P7IH system, and now that the main bugs have been worked out (which were actually mostly in the control code for the P7IH Disk Enclosure, which provides 384 disks in 4U of rack space, although it is a wide and deep rack), it really works quite well, although like everything else in GPFS, it's CLI-based. But to my mind, that's not a problem. It is very different, though, and takes a bit of getting used to, and it could be integrated with AIX's error logging system and device configuration a bit better.

    1. Mostor Astrakan

      "But I would say that it seriously benefits from not being set up by a point-and-click GUI."

      Oh yes. Many things do. You can get an HACMP cluster running in roughly five minutes using the user friendly SmittyWizard. Any idiot can do it. Which leads one to the disadvantage of having something that any idiot can set up: You get a cluster (or in this case a high performance file system), that you are going to trust the weight of your Enterprise to... set up by idiots. Which is why the section of IT bods who are not idiots never go for the easy install option.

  6. Dapprman

    Miss running GPFS systems

    I'm another GPFS fan, however I also fear looking at the costs. I describe it as the sort of product where, if you have the requirements and the financial backers, it is worth it; if you're missing one of those, it's just too expensive.

    Back in ~1999/2000 (almost a decade before I started using it) I remember there were three tiers - a basic very limited free version, a cheap version with no resilience, and the full fat resilient version. I think the first two got dropped after people tried running setups with them and then complained that it was a useless system when they had a disk failure or a node went down/was taken down.

    BTW - with experience you can get it up and running rather quickly, it just depends on what additional complexities you want to introduce.

  7. David Casler

    General Parallel File System, not General Purpose File System

    BTW, GPFS stands for General Parallel File System, not General Purpose File System. It's IBM's supercomputer file system, with lots of other applications.

    1. Milo Tsukroff

      Re: General Parallel File System, not General Purpose File System

      Definitions: RAID stands for Redundant Array of Independent Disks - according to IBM. The "I" word actually is "Inexpensive" but IBM does not want to use that word. Draw your own conclusions. (I have personally met the author of the original paper that defined RAID.)

  8. Anonymous Coward

    Preaching to the choir

    GPFS is great. Couple it with IBM LTFS and you have the best/least costly archive storage platform around. Throw some Flash into that mix and you have a storage platform which will suit almost everyone's requirements at a fraction of the competitors' costs (people need a little high IOPS/low latency storage and a lot of high capacity/low cost storage). IBM needs to bundle it and make it easy to buy. They have been making strides with LTFS EE (GPFS combined with LTFS).

  9. GPFS Solution Architects

    GPFS Solution Architects

    The OP's comments about GPFS and GSS are generally spot-on. GPFS is primarily a storage tool, yet it's sold by IBM folks who don't have storage backgrounds and don't understand its competitive advantages (primarily when supporting complex global workflows, or multi-PB capacities, or compute-intensive workflows).

    One of the primary benefits of GPFS is a dramatic cost reduction (both CAPEX and OPEX) for customers using petabytes of Tier-1 disks. If you're buying Tier-1 disk, do the research - you'll be shocked to find out what's possible using a multi-tiered approach using a tape-based storage tier for archiving and includes integrated data-protection (no need to 'duplicate & replicate' for DR).

    As GPFS solution architects, we've made a living being that 'last mile' between the customer and IBM. It's ironic, but the fact that IBM is 'difficult to work with' has allowed us a place to be relevant.

    Also, GPFS has recently undergone a tremendous amount of development.

    For an easy, good read:

    John Aiken

    1. Valheru

      Re: GPFS Solution Architects

      I deployed a V7000 system for IBM and the IBM sales folks are indeed the biggest problem. They promise the world and do not understand what they are talking about.

      The V7000 had serious limitations and bugs when I was setting it up (Q1 2012) and as Peter Gathercole mentioned the GUI interface that tries to hide the complexity of GPFS just makes things worse.

      Add in the support issues John mentioned and it seems Re-Store have a sweet niche helping folks with a genuine need for GPFS.

