HPE bolts hi-capacity SSD support onto StoreServ

HPE is boosting all-flash 3PAR capacity with support for high-capacity SSDs, adding persistent storage for containers, and improving snapshots and file management services. At HPE Discover in Las Vegas, HPE announced StoreServ is getting: Hi-capacity SSD support - 7.68TB and 15.36TB 3D SSDs from, we understand, Samsung using …

  1. CheesyTheClown

    Purpose?

    Where does a product like this fit?

    While I think all flash is nifty, where's the value in arrays today?

    Distributed file systems far outscale and outperform arrays in every category. Even with custom ASICs, unless an array has 100Gb/s of physical access per server, it will be a gigantic bottleneck feeding data to the servers.

    Oracle and MS SQL have supported sharding for a long time. GlusterFS, ReFS and many others also have sharding now. Add data tiering for near-line on a hybrid array and HP's new product isn't just obsolete before even shipping, it's also far more expensive and less efficient than "hyper-converged".
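
    To make the sharding point concrete, here is a minimal sketch of hash-based sharding across server-local storage. The node names and shard-selection scheme are hypothetical illustrations, not any particular product's implementation.

    ```python
    # Minimal sketch of hash-based sharding: spread records across nodes
    # that each hold data on their own local drives. Node names are made up.
    import hashlib

    NODES = ["node-a", "node-b", "node-c", "node-d"]  # each with local NVMe

    def shard_for(key: str) -> str:
        """Map a record key to the node that owns it."""
        digest = hashlib.sha1(key.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    # Reads and writes go straight to the owning node's local drives, so
    # aggregate throughput scales with node count rather than with one
    # array's front-end bandwidth.
    print(shard_for("customer:42"))
    ```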

    A few NVMe drives on each server combined with a 160Gb/sec network will far outperform arrays with centralized storage.

    I am training a major British corporation this week on precisely this topic. Shared drives and centralized storage just can't compete with distributed data. The interconnect is too slow, and the lack of application support makes backup unintelligent. The disk latency is too high. The scalability is nearly non-existent.

    Better to spend money on good, solid, cheap, non-proprietary storage.

    Let me guess... the system is super-nifty 32Gb/s Fibre Channel? That's ridiculously slow.

    1. @FLASH

      Re: Purpose?

      What do you think it will cost to add a "few" NVMe drives to 100+ servers? For your info, HPE has a hyperconverged solution as well. 99% of applications out there don't need better response times than 0.6 ms.

    2. Anonymous Coward

      Re: Purpose?

      The low-end arrays (8200) have up to 192Gb/s, the 8400 up to 384Gb/s, and their high-end (20000 series) arrays have 2.56Tb/s of bandwidth.

    3. dikrek
      Boffin

      Re: Purpose?

      Ok Cheesy I'll bite.

      In almost every storage-related post (I've been keeping track) you mention these behemoth server-based systems and how fast they are. We get it. You don't like centralized storage and believe server-based is faster. To get this out of the way: properly designed server-based storage can be extremely fast - but not necessarily less expensive.

      However, enterprises care about more than speed. Far more.

      Out of curiosity, does Windows-based storage do things like live, automated SSD firmware updates?

      What about things like figuring out how to automatically stage firmware updates for different components including the server BIOS, HBAs, the OS itself, and other components?

      Or analytics figuring out what patches are safe to apply vs what bugs other customers with similar setups to yours have hit in the field?

      How about analytics figuring out what exact part of your infrastructure is broken? Based on advanced techniques like pattern matching of certain errors?

      Does server-based storage provide comprehensive protection from things like misplaced writes and torn pages? (Hint: checksums alone don't do the job).

      In addition, are you running NVMe over Fabrics? Because a fast Ethernet switch alone isn't enough to maintain low latencies. Or are you doing SMB3 RDMA? And what if my application won't work on SMB3? Maybe it's an AIX server needing crazy fast speeds?

      Are the servers perchance mirrored? (Since you need to be able to lose an entire server without issue). If mirrored, doesn't it follow that 50GB/s writes to the cluster will result in an extra 50GB/s of intra-cluster traffic? Isn't that wasteful?

      And if erasure coding is employed, how is latency kept in check? It's not the best mechanism for protecting NVMe drives at scale.
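
      To put rough numbers on the mirroring and erasure-coding questions above, here is a back-of-envelope sketch. The 50GB/s figure comes from the comment; the 2-way mirror and 8+2 erasure-coding layout are hypothetical examples, not any vendor's scheme.

      ```python
      # Back-of-envelope network write amplification for replication vs
      # erasure coding. The 8+2 stripe is an illustrative assumption, and
      # placement details and read-modify-write traffic are ignored.

      def mirror_traffic(client_writes_gbps: float, copies: int = 2) -> float:
          """Extra intra-cluster traffic needed to keep `copies` replicas."""
          return client_writes_gbps * (copies - 1)

      def erasure_traffic(client_writes_gbps: float, data: int = 8, parity: int = 2) -> float:
          """Extra traffic for parity in a data+parity stripe."""
          return client_writes_gbps * parity / data

      print(mirror_traffic(50.0))    # 50.0 GB/s extra for a 2-way mirror
      print(erasure_traffic(50.0))   # 12.5 GB/s extra for 8+2 erasure coding
      ```

      Erasure coding moves less data, but it trades that for encode work and wider stripes on every write, which is where the latency concern comes from.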

      Honestly curious to know the answers, maybe server-based really is that cool these days.

      Thx

      D (disclaimer: Nimble Storage employee)

  2. JohnMartin

    wherefore art thou, so low-cap SolidFire?

    -disclosure NetApp Employee-

    -disclosure Blatant Plug-

    -disclosure opinions are still my own even if I am a fanboy of my employers technology-

    Solidfire vs 3PAR is a bit of an apples to oranges comparison.

    They might both use flash and present LUNS, outside of that they are very different beasts from a consumption model point of view. 3PAR is built for old school "Mode-1" datacenter, and Solidfire is built for next generation datacenter (which incidentally is also where HCI plays). Having said that because Solidfire keeps all the metadata in memory, its closer to the way XtremeIO works which in its current incarnation gets about 40TB RAW in 6RU for about 150,000 IOPS. With 6 nodes in 6RU using the new Solidfire 19210, you'd get about 114TB RAW and 600,000 IOPS managed by the worlds best QOS implementation and automation framework, and scales to 100's of nodes ... much much higher than the 8 "bricks" (oh how aptly named) you're limited to with XtremeIO.

    If you're looking for a more comparable NetApp technology, then you'd need to look at ONTAP, because this release shows HP are desperately trying to cram as many ONTAP features into 3PAR as fast as they can. ONTAP 9, which we announced recently, already supports the 15TB drives and layers on high-speed compression, inline deduplication, and compaction (vs 3PAR with only coarse-grained dedupe that kills their performance), and you can get quotes for those high-density drives from NetApp today for delivery in the very near future. Furthermore, all the engineering foundations for future support of the even larger upcoming drive sizes are already there (e.g. RAID-TEC).

    According to the most recent IDC numbers, NetApp rocketed past IBM, Pure and HP in the all-flash race; this catch-up announcement from HP won't change that.

    1. Anonymous Coward

      Re: wherefore art thou, so low-cap SolidFire?

      What performance hit with dedupe? The only difference between dedupe on and off is that, on a dedupe hit, the 3PAR does a bit-for-bit comparison to make sure it isn't a hash collision, which requires a read and takes that I/O away from the systems accessing the array. If the NetApp doesn't verify it's not a collision, and so avoids this 'performance hit', then that is a reason not to use dedupe on a NetApp.
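
      For what it's worth, here is a minimal sketch of that verify-on-hash-match approach: a hash match alone isn't trusted, and the candidate block is read back and compared byte for byte before the write is deduplicated. This is an illustration of the idea only, not either vendor's actual code.

      ```python
      # Hash-based dedupe with a verifying byte comparison (illustrative only).
      import hashlib

      store = {}   # hash -> stored block; stands in for the backend media

      def write_block(block: bytes) -> str:
          h = hashlib.sha256(block).hexdigest()
          existing = store.get(h)                           # potential dedupe hit
          if existing is not None and existing == block:    # extra read + compare
              return "deduplicated"                         # collision ruled out
          store[h] = block   # new data (a real system handles true collisions separately)
          return "written"

      print(write_block(b"A" * 4096))   # written
      print(write_block(b"A" * 4096))   # deduplicated
      ```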

      3PAR does 'compaction' (thin provisioning) too, and compression is coming in the next release.

      Support for future SSDs is also available; like the NetApp ones, they need qualification.

      The 3PAR could already fit over 114TB in 6U before the 15TB drives became available. Now it can get ~360TB in 2U and over 1M IOPS.
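
      As a quick sanity check on the ~360TB-in-2U figure, assuming a fully populated 24-slot 2U SFF enclosure of 15.36TB SSDs (the slot count is an assumption for illustration, not a quoted spec):

      ```python
      # Raw capacity of a hypothetical 24-slot 2U shelf of 15.36TB SSDs.
      slots = 24
      drive_tb = 15.36
      print(f"{slots * drive_tb:.2f} TB raw")   # 368.64 TB raw, i.e. roughly 360TB in 2U
      ```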

      According to IDC, 3PAR is a leader, but SolidFire is a major player: https://redmondmag.com/articles/2016/05/19/idc-compares-flash-arrays.aspx

      1. Anonymous Coward

        Re: wherefore art thou, so low-cap SolidFire?

        Compaction is not thin provisioning

      2. JohnMartin

        Re: wherefore art thou, so low-cap SolidFire?

        I'll answer point by point ...

        Based on the feedback I've seen from the POCs NetApp has done against 3PAR, when HP turns on dedupe 3PAR doesn't perform anywhere close to the results customers expect.

        This has been explained to me as contention on the 3PAR interconnect imposed by the additional dedupe processing. There are a lot of restrictions around posting competitive performance numbers, but I can say that an AFF8080 consistently smokes 3PAR with dedupe turned on in competitive POCs, regardless of what the comparative spec sheets might indicate.

        I'd love to see a public benchmark showing 1 million IOPS with an actual write workload that has dedupe turned on.

        ONTAP does byte-level verification and is able to do that as a combination of both inline and post-process, which keeps latency at the sub-millisecond mark through write workloads measured in hundreds of thousands of IOPS or gigabytes per second of throughput.

        Compaction isn't thin provisioning; ONTAP has thin provisioning as well (a better implementation than 3PAR's, IMO). Compaction is something completely different: it's a technology that results in some spectacular savings, particularly for database workloads. Nobody else has it, or indeed anything remotely like it, though I think the approved term may be "inline data coalescing"; there will be some detailed briefings coming out soon.
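
        As a rough illustration of what "inline data coalescing" means in general terms (a simplification of the idea, not ONTAP's implementation): several logical writes smaller than the physical block size get packed into one physical block instead of each consuming a full block.

        ```python
        # Greedily pack small logical writes into 4KiB physical blocks
        # (illustrative simplification of inline data coalescing).
        PHYS_BLOCK = 4096

        def pack(writes: list) -> list:
            blocks, current = [], b""
            for w in writes:
                if current and len(current) + len(w) > PHYS_BLOCK:
                    blocks.append(current)   # current block is full, start a new one
                    current = b""
                current += w
            if current:
                blocks.append(current)
            return blocks

        small_writes = [b"x" * 500] * 20   # twenty 500-byte logical writes
        print(len(small_writes), "logical writes ->", len(pack(small_writes)), "physical blocks")
        # 20 logical writes -> 3 physical blocks instead of 20
        ```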

        Compression on 3PAR... I've been hearing about it for a few years now; I'll believe it when I see it.

        If HP are planning 36TB drives without some form of triple-drive-failure protection, then god help HP's customers; or maybe 3PAR will go back to mirroring, which kind of defeats the point of the bigger-capacity drives.
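
        The usual argument for triple parity on big media is rebuild time. A back-of-envelope sketch, with an assumed rebuild rate purely for illustration:

        ```python
        # How long a single large drive takes to rebuild at an assumed rate,
        # i.e. the window during which further failures can still occur.
        drive_tb = 15.36
        rebuild_mb_per_s = 250            # assumed effective rebuild throughput
        hours = drive_tb * 1e6 / rebuild_mb_per_s / 3600
        print(f"{hours:.1f} hours exposed to additional failures")   # ~17.1 hours
        ```

        Double the drive size and the exposure window roughly doubles too, which is where schemes that survive three concurrent failures come in.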

        Don't get me wrong: 3PAR is full of great technology and I respect it enormously, but outside of some promised raw-density benefits in this release, from an all-flash array perspective it is almost entirely outclassed by most of its competitors.

    2. PaulHavs
      Happy

      Re: wherefore art thou, so low-cap SolidFire?

      @JohnMartin,

      You do make some correct points there, John; however, it takes referencing two very differently architected products for you to make them. Yes, SolidFire has a good QoS implementation. Yes, ONTAP has features (file services) which 3PAR has built out.

      The overwhelming beauty of 3PAR, which customers are voting for (read: buying it), is its near-perfect balance of cost effectiveness, performance, and mature enterprise data services.

      SolidFire has some of these but not all.

      ONTAP / FAS has some of these but not all.

      Customers want to simplify, not complexify. I'd imagine the NetApp answer to this is a FAS gateway in front of a SolidFire backend, managed at some point in the future with "an extra pain of glass"?

      regards,

      Paul Haverfield

      Storage CTO, HPE APJ region

      1. JohnMartin

        Re: wherefore art thou, so low-cap SolidFire?

        <pointless vendor sniping>

        Hey Paul :-)

        AFF with ONTAP is more than sufficient to take on 3PAR all by itself. NetApp has moved beyond "one architecture to cater for all needs"; it's odd to see HP playing that song now. As for 3PAR vs ONTAP head to head, I'm always happy when we go into a POC against HP: our performance is better with storage-efficiency features turned on, price per effective TB with efficiencies turned on is better, and our data services are more mature and can be extended all the way through from an appliance -> VSA -> cloud. Indeed, if an HP, EMC, or HDS customer wants those data services, we can enable that via software through the FlexArray feature, as so many 3PAR customers have already found out.

        ONTAP is a great way of simplifying and consolidating your environment; you really should come around for a demo of the full management suite one day.

        < / pointless vendor sniping >

  3. Anonymous Coward

    Trying to keep up with an aging architecture.

    The HP 3PAR StoreServ needs to adopt these higher-density, unproven SSDs as early as possible to provide the capacity other AFAs can already deliver, as their arrays don't have any compression and can't dedupe below 16K. Not very efficient by today's standards.
