IBM storage crew: Why bury your BEST kit at the back of the larder?

El Reg storage man Chris Mellor’s pieces on IBM’s storage revenues here and here make for some interesting reading. Things are not looking great, with the exception of the XIV and Storwize products. I am not sure whether Mellor’s analysis is entirely correct, as it is hard to get any granularity from IBM. But his take on Big Blue …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    GPFS

    It's an IBM product, from the snappy name to the ample documentation and published references (not).

    I've only used it once, for a grid computing solution, and aside from the hardware footprint required it was fine. I did try to use it a second time, to introduce DB2 pureScale (think BlueRAC), but the storage and DBA guys had a fit at the thought.

  2. Richard Tobin

    Truffles

    Exquisite truffles are not found in jars. They must be eaten fresh.

    1. Anonymous Coward

      Re: Truffles

      And yet an El Reg journalist has nosed one out; what does that say about him?

  3. Anonymous Coward

    Open source GPFS?

    ...not a snowball's chance in hell.

    It's too useful/profitable/developed/tested. You'll hardly ever come across it unless you're in the HPC space, but if you work in the HPC space on IBM stuff you'll hardly ever find a cluster without it.

    The AC above makes a quip about there not being ample documentation. It's here: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.doc%2Fgpfsbooks.html

    As for references, many of the large GPFS customers are secretive.

    AC for obvious reasons.

    1. Anonymous Coward

      Re: Open source GPFS?

      Agreed: just because people don't allow their names to be used on IBM's website doesn't mean that people are not using GPFS. It is generally used in mega implementations with PBs of data. Having written that, I do agree with the general premise that IBM is under-utilizing GPFS/SONAS. They are scratching their heads about what to do in the NAS space, other than reselling NetApp, when they have a great filer technology in the portfolio. They need to bring GPFS down to a mid-range array ASAP, instead of just using it for a select few customers with mega environments.

  4. William Hinshaw

    Yet no mention of ZFS for big data?

    1. Alan Brown

      ZFS

      ZFS doesn't cluster. For that you need something to operate on top of ZFS (SAM-QFS springs to mind).

  5. Nate Amsden

    don't understand

    how XIV can be considered good. 180 disks, nearline only, and RAID 1 only. When I first saw it a few years ago it seemed like it had some promise, but IBM really doesn't seem to have done much with it since, other than add an SSD read cache. I mean, is that all they get out of a platform that has basically a dedicated CPU core for every two spindles on the system?

    Earlier versions of XIV were of course crippled by the 1Gbps ethernet back end.

    The only folks I can see really buying XIV are already big IBM shops. I know it's easy to manage, but it's crippled in so many other ways.

    XIV has some good SPC-2 throughput numbers but is soundly trounced by HP P9500 / HDS VSP on both performance and cost.

    The V7000 uses software RAID too (for cost reasons), and their RAID 5/6 performance is terrible. I was surprised to hear that; I would have thought IBM would have done better for what appears to be the technology they are positioning for the future. Maybe some next-gen system will fix that failing. Imagine using the V7000 real-time compression to save a bunch of space, only to lose that space because you're forced to use RAID 10 for performance.

    Other than that V7000 seems like a very decent flexible platform.

    1. Anonymous Coward

      Re: don't understand

      XIV isn't RAID 1. Blocks of data are mirrored, not whole disk drives (there is a rough sketch of this below). Yes, there is a capacity penalty of 100%, but that's the whole point of the high-capacity slow HDDs. If it were traditional RAID, it would need faster drives, pushing up the cost and reducing the capacity anyway.

      XIVs are a doddle to use, and rebuild times are tiny compared to traditional RAID. Performance has been decent since the back-end move to InfiniBand.

      As for software RAID, loads of people are using it and it's not just for cost reasons. Modern CPUs are barely stressed when dealing with RAID. Doing it in hardware just makes it more difficult to fix bugs or add features. The bottleneck on V7000 is the SAS interface and if that's an issue, scale out rather than up.
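
      A minimal Python sketch of the kind of distributed block mirroring described above. The 180-drive figure echoes the number quoted elsewhere in this thread; the partition count and the random placement are purely illustrative assumptions, not IBM's actual algorithm.

      import random
      from collections import Counter

      DRIVES = 180          # drives in the grid (figure quoted in this thread)
      PARTITIONS = 100_000  # logical partitions to place (illustrative)

      random.seed(42)

      # Place each partition's two copies on two different drives, chosen pseudo-randomly.
      placement = [tuple(random.sample(range(DRIVES), 2)) for _ in range(PARTITIONS)]

      # Simulate the failure of drive 0: every partition with a copy there must be
      # re-mirrored from its surviving copy, which lives on some other drive.
      failed = 0
      rebuild_sources = Counter(
          a if b == failed else b
          for a, b in placement
          if failed in (a, b)
      )

      print(f"partitions to rebuild: {sum(rebuild_sources.values())}")
      print(f"drives supplying rebuild reads: {len(rebuild_sources)} of {DRIVES - 1}")

      Because the copies of a failed drive's data are scattered across essentially every other drive, the rebuild reads (and the re-mirroring writes) are spread over the whole grid rather than funnelled onto a single spare, so no individual drive becomes the bottleneck.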

      1. Nate Amsden

        Re: don't understand

        If blocks of data are mirrored, that is RAID 1. Yes, I realize they do sub-disk RAID in a similar manner to 3PAR (though 3PAR is of course fully ASIC-accelerated).

        The CPUs in the V7000 aren't very powerful, hence the big performance impact. I wasn't referring to software RAID on the server side; this is software RAID on the array side. IBM engineers say they did it for cost reasons.

        If the CPUs were powerful then there wouldn't be a big performance hit.

      2. Anonymous Coward

        @Anon 16:25

        XIV had a cost advantage to go with its "high capacity slow HDDs" strategy a few years ago, before SSDs. It was competing against arrays using 15k rpm drives. Now it is competing against tiered storage using SSDs for hot blocks and high-capacity slow HDDs for everything else (at least for those who have done performance testing and found that a middle tier of 15k rpm drives is pointless versus spending that money on more SSDs). Today, XIV's cost advantage is gone.

        True, XIV has faster rebuilds, but it would not be a viable product any other way. If the rebuilds were slow you'd have a much longer window in which you'd be exposed to the risk of data loss from the failure of a second drive. In practice the risk is pretty small because the rebuild is so quick, but if you use double-parity RAID you don't care so much how long the rebuild takes, because you can tolerate a second drive failure during the rebuild; it takes a third failure before you lose data. If you want much faster rebuilds in a RAID environment, there's no reason the array firmware couldn't grab some of the SSD tier and use that as a temporary spare, because it can accept writes far more quickly than a hard drive. That should be pretty quick if you are using a fairly wide set like 14+2 (some rough numbers below), after which the data can be copied leisurely off the SSDs onto the replacement hard drive.

        The only place where XIV's faster rebuilds are a real advantage is when comparing with single parity RAID, but there's no reason to use that in an array that supports double parity unless it is less critical data where saving a bit on the storage makes sense.
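
        To put rough numbers on the SSD-as-temporary-spare idea, here is a back-of-envelope Python calculation. It assumes the write target is the bottleneck (the 15 surviving members of a 14+2 set can collectively supply reads faster than one spare can absorb them); the throughput figures are assumptions for illustration, not measurements of any particular array.

        # Back-of-envelope rebuild time for a 14+2 (double parity) set, comparing a
        # conventional HDD hot spare with a chunk of SSD used as a temporary spare.
        drive_tb       = 3.0   # capacity of the failed drive, TB
        hdd_write_mb_s = 120   # assumed sustained write rate of an HDD spare
        ssd_write_mb_s = 450   # assumed sustained write rate of the SSD region

        bytes_to_rebuild = drive_tb * 1e12

        def hours(rate_mb_s):
            return bytes_to_rebuild / (rate_mb_s * 1e6) / 3600

        print(f"rebuild onto HDD spare: {hours(hdd_write_mb_s):.1f} h")   # ~6.9 h
        print(f"rebuild onto SSD spare: {hours(ssd_write_mb_s):.1f} h")   # ~1.9 h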

    2. Anonymous Coward

      Re: don't understand

      It isn't RAID 1; it is an automated RAID 1+0 derivative, a "grid"-based architecture.

      "The only folks I can see really buying XIV are already big IBM shops."

      Not at all; I know of at least two major shops, about 4 PB each, that run XIV Gen3 and nothing else from IBM's systems group. There are some Goliath XIV implementations out there, 100-plus PBs. While it is true that you can technically only get 180 drives plus SSD per frame, you can manage as many frames as you want from the same interface, just as different "pools" of data. If you need to move data between frames on a regular basis for some reason, you can put SVC or another virtualisation layer in front of the XIVs. IBM is supposedly working on native frame federation right now, but in practice it isn't an issue anyway.

      "soundly trounced by HP P9500 / HDS VSP on both performance and cost."

      Performance, possibly, as XIV manages all data the exact same way in the grid, whereas the storage systems you mentioned can be sliced and diced any way you like... you could run them all-SSD. Cost: XIV will be much less costly per usable TB than any 3PAR or VSP array. They are just much less costly to build, with no ASICs, FPGAs, etc.

      IMO, the reasons people buy XIV are:

      - Nothing easier to manage on the market

      - Cost effective for the performance, all sw is included, no ASIC costs, etc

      - Solid and stable performance; maybe not the fastest on the market, but never going to cause performance problems. If you need more performance than XIV, you should not be going over a SAN in the first place.

      - Very cool snap functionality, snaps on snaps on snaps, etc with no dependencies or tree structures.

      - You can get tier-one performance off NL-SAS because XIV has a boatload of cache and that cache is distributed equally across the frame.

      1. Anonymous Coward

        Re: don't understand

        And rebuild times are the other major factor I didn't mention: you can rebuild an NL-SAS 1 TB drive in twenty-something minutes, assuming it is full, and a fully used 3 TB drive in about an hour, compared to days or weeks with traditional RAID striping.
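
        As a rough sanity check on those numbers, assume a 180-drive grid in which every surviving drive contributes a small slice of rebuild bandwidth; the per-drive throughput budget below is an assumption, deliberately kept low to show that even a heavily throttled rebuild completes in tens of minutes.

        # Distributed rebuild estimate for a mirrored grid (illustrative figures).
        drives         = 180
        used_tb        = 1.0   # used capacity on the failed drive
        per_drive_mb_s = 5     # assumed rebuild budget per surviving drive, kept
                               # small so host I/O is barely affected

        aggregate_mb_s = (drives - 1) * per_drive_mb_s   # the whole grid works in parallel
        seconds = used_tb * 1e12 / (aggregate_mb_s * 1e6)
        print(f"estimated rebuild time: {seconds / 60:.0f} minutes")     # ~19 minutes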

  6. Dyip Blog.OCF

    Go GPFS GO!

    You are right! It is one of IBM’s crown jewels, hidden away and locked in the deepest, darkest dungeon! IBM does have literally hundreds of thousands of customers, though, and we don’t always hear about them. Every SONAS or V7000U system that IBM has sold has GPFS in it, so unbeknown to the end user they are using GPFS!

    We work in the HPC space and, as one of the commenters here rightly points out, if you’re in this market you’ll hardly ever find a cluster without GPFS [it is not always sites with multiple petabytes of data either, but sites with complex storage environments that require strong data management].

    We have also taken GPFS to a number of customers outside of HPC, to dozens of private-sector organisations, for example WRN, RMS, IMD, Smoke and Mirrors and Landmark Solutions, plus many academic institutes such as the University of Edinburgh, Technium Pembrokeshire and the University of Westminster. Go GPFS Go! David Yip, OCF

  7. Jmoreno

    GPFS Storage Server

    A GPFS-based scalable storage solution, the GPFS Storage Server. Very cool! As an IBM partner we are seeing a lot of excitement around this. It also includes the GPFS Native RAID technology (I think I remember it from the PERCS project), which eliminates the storage controller. End-to-end data checksums are very important to our customers in large deployments.

    http://www-03.ibm.com/systems/x/hardware/largescale/gpfsstorage/index.html
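
    The end-to-end checksum idea is simple to illustrate. The sketch below is generic Python, not GPFS Native RAID's actual on-disk format: the checksum is computed where the data is produced and verified where it is consumed, so corruption introduced anywhere along the path is detected rather than silently returned.

    import hashlib
    import os

    def write_block(data: bytes):
        # Compute the checksum at the point the data is produced and keep it
        # alongside the block (a real system would store it with the stripe).
        return data, hashlib.sha256(data).hexdigest()

    def read_block(data: bytes, stored_checksum: str) -> bytes:
        # Verify at the point the data is consumed; corruption introduced in
        # between (controller, cable, media) is caught here.
        if hashlib.sha256(data).hexdigest() != stored_checksum:
            raise IOError("checksum mismatch: read the block from a redundant copy instead")
        return data

    block, csum = write_block(os.urandom(4096))
    read_block(block, csum)                          # clean read passes
    corrupted = bytes([block[0] ^ 0xFF]) + block[1:]
    try:
        read_block(corrupted, csum)                  # silent corruption is detected
    except IOError as err:
        print(err)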

  8. josh.krischer

    Martin,

    Your suggestion to replace the DS8000 with the V7000 is, by analogy, like installing motorcycle wheels on a BMW 7 Series.

    There is a huge difference in functionality, performance, scalability and availability between mid-range and high-end storage. A FICON interface alone doesn't turn a storage subsystem into mainframe storage.

    Some (not all) of the System z features:

    * High Performance FICON (zHPF) - zHPF is a high-performance data transfer protocol. It has four functions, and each function requires cooperation between the System z server and the storage system. Selected zHPF functions are: multitrack (allowing a single transport-mode operation to read or write more than two tracks' worth of data), extended distance, format writes, QSAM, BSAM, BPAM, and DB2 list prefetch.

    * Parallel Access Volumes (PAV) and HyperPAV are features which allow using multiple devices or aliases to address a single ECKD disk device.

    * Performance - I/O priorities: supports the "importance" and "achievement" information provided by z/OS Workload Manager

    * Performance - DB2: a specialized cache algorithm to optimize DB2 list prefetch operations

    * Performance - IMS: enhanced performance for the IMS write-ahead data set (WADS)

    * Performance - zDAC: supports an optimization to improve the performance of z/OS Discovery and AutoConfiguration (zDAC)

    * Volume Management: supports dynamic volume expansion for standard (thick) 3390 volumes; Extended Address Volumes (EAV) support 3390 volumes of up to 1 TB capacity

    * Multiple Readers for IBM System Storage z/OS Global Mirror: improved performance and fewer disruptions under heavy write load conditions, particularly in busy z/OS environments.

    * GDPS and HyperSwap support

    Since October 2007 IBM has accelerated its development rate, offering enhancements at the fastest pace in the industry. Examples of major enhancements include IBM’s introduction of RAID-6 in August '08, high-performance FICON for System z in October '08, full-disk encryption and a solid-state drive (SSD) option in February '09, and thin provisioning in July '09.

    In April 2010 IBM announced and delivered the IBM System Storage Easy Tier which automates data placement within the DS8000 subsystem. This includes the ability for the system to automatically and non-disruptively relocate data (at the extent level) across drive tiers, and the ability to manually relocate full volumes. This was the first sub-LUN automated data movement.

    IBM's System Storage DS8870 series is a stable, multiplatform, high-end storage system. IBM, which ten to fifteen years ago lost some of its high-end enterprise disk storage market share, has managed an impressive comeback.

    There are plenty of synergies in operation between System z mainframes and DS8870 subsystems, in particular in disaster recovery and business continuity, but also with DB2, etc.

    The operations GUI, ported from XIV, delivers some of the most user-friendly functionality in the industry.

    Full redundancy, non-disruptive upgrades and maintenance, hot-swappable components, pre-emptive soft error detection and online microcode changes ensure high availability and data integrity. The advanced remote data replication techniques enable any scheme of disaster recovery deployments.

    The DS8000 is a leader in SPC-1, SPC-2 (non-SSD) performance benchmarks.

  9. Martin Glassborow

    DS8000

    Oh, I know exactly what the DS8Ks can do; in a previous existence I worked on them and their ancestors, both ESS and VSS. In fact I have a soft spot for them, but they exist for one reason only these days, and that is generally to support a mainframe workload.

    Yet, architecturally, they are still pretty much a bog-standard dual-head array, and if IBM had wanted to, all of the features that you list could have been incorporated into SVC, had it started to do so about ten years ago.

    If EMC could move from Power to Intel, IBM could have done so as well; in fact, moving Enginuity from Power to Intel may arguably have been harder. The problem for IBM is that they have left it far too long to do so, and now they are saddled with multiple array families, one of which, although absolutely key to them, probably suffers from low sales because of its particular niche market.

    The reasons for the decision are partly the conservatism of the mainframe user base, but also, I suspect, the politics with which IBM's storage business has been riven over the past few years, with competing fiefdoms and executive sponsors.

  10. josh.krischer

    Power vs Intel

    1) A POWER7 cluster is more powerful than Intel

    2) Economy of scale

    3) Why buy from Intel if you can use your own technology?

    4) EMC use Intel because they don't have their own technology. They design the hardware to support Enginuity, generation after generation.

    Do you really think that a mid-range system, with limited processing power, can handle the same functionality as high-end storage? The DS8000 has more computing power (PowerPC and ASICs) in its front-end and back-end adapters than the SVC.

    SVC and the V7000 are great products, but still not comparable to high-end.

    IBM's problem in storage is not the products; it is marketing, confidence and sales attitude.

  11. Anonymous.Username

    The VTL gateways have GPFS inside.

    When I last did an OS upgrade on an IBM VTL gateway, I was intrigued to see that the product is based on GPFS. (Sorry, I don't recall the specific model number.) For years I've wanted to kick the tires on a GPFS solution, primarily to explore the HSM capabilities of GPFS and TSM on Linux.

    For a while there, every other storage product that IBM brought to market was a flop. How many people remember or run SAN FS, as compared to SVC?
