Fresh and fast little flashers from NetApp

NetApp has launched two new all-flash FAS arrays, won a top 3 SPC-1 storage benchmark result, and announced a new flash capacity guarantee programme. The all-flash FAS (AFF) A200 and A700s join the existing A300 and A700, which came into view in September. Place the A200 under the A300 on your mental NetApp AFF positioning map …

  1. Anonymous Coward


    OK, someone help me out. Doesn't AFF in cDOT have QoS, and doesn't cDOT have scale-out?

    If so, what was the purpose of blowing loadsa wonga on SolidFire in the first place, which seems to be the only thing the NetApp drones talk about when discussing SF? What about the E-Series? Does one company really need three options with three sets of management and three R&D budgets?

    Isn't the AFF just a tarted-up version of the standard FAS, and therefore not really a flash-designed system? Does this even matter?

    As a reseller of many technologies I'm curious about the direction of NetApp. I've struggled to sell NetApp lately: simpler, newer systems with better interfaces and smarter management are hurting my business. NetApp defends this by saying that complexity comes with a 'full suite', yet much of NetApp's older value for my customers has gone away, with smarter applications handling much of the heavy lifting.

    NetApp won't go away, that's for sure, but I will be leading with the alternative options that are constantly eating my lunch.

    1. bitpushr

      Re: Confused

      Clustered ONTAP (by definition scale-out) has had QoS since 8.2, including regular/hybrid arrays and AFF.

      In terms of your question, SolidFire and AFF do different things in different ways. SolidFire is SAN-only and was designed for service providers, in so far as it's QoS-first (including *minimum* limits) and everything is API-driven. It scales up to 100 nodes, as opposed to ONTAP's 24 (or 12 for SAN).

      There's some overlap, but they're different horses for different courses.

      Disclaimer: NetApp employee

      1. Anonymous Coward

        Re: Confused

        ".... different horses for different courses"

        NetApp went from a stable with a champion to breeding one-trick ponies....

        You may remember the sales slides showing one FAS doing everything a competitor needed three product lines for.

        Now it's the other way around: the all-flash startups have one product beating NetApp's three product lines. NetApp is now aligned to compete in the flash market, and NetApp's historical competitors (EMC, HDS...) have left it behind.

        So when NetApp competes against the small all-flash players, it likes to advertise its enterprise software features, despite never producing enterprise-grade products.

        NetApp tries so hard to come across as "Enterprise" that it's awkward to watch their salespeople trying to convince themselves of that lie.

        1. bitpushr

          Re: Confused

          Which flash players have a single product beating all of NetApp's product lines? If that were true, NetApp would have been out of business yesterday and all the other players would be posting profits. And yet...

        2. FDavids

          Re: Confused

          "The All Flash startups have one product beating NetApp's 3 product lines."

          Yes, that may explain why Pure dropped from #1 to #5 in flash market share (#4 if you combine EMC+Dell), why Nimble doesn't even register on the IDC flash chart, and why NetApp has moved from #5 to #2, very close to Dell+EMC and ahead of HP and IBM, on the latest IDC flash share chart. I can see how that can cause confusion. Which flash startups are beating them again? <sarcasm>

          "NetApp tries so hard to come across as 'Enterprise'"

          As far as enterprise goes: NetApp has more US Federal Government market share than all other storage vendors combined. Maybe I am wrong, but I bet the US government is running some fairly "enterprise" stuff. NetApp runs the Oracle database at CERN, home to some of the largest and most complex scientific instruments in the world. NetApp also holds the Guinness World Record for the largest SAP data warehouse, at 12.1PB, at SAP. Maybe these things aren't "Enterprise", not sure. <more sarcasm>

  2. SeymourHolz


    Your first question was a technology question. Then you switched to market questions.

    Why does the market need Burger King, McDonald's, Wendy's, and Hardee's? Why do each of these places have 10 different brand names for the same meat paired with the same bread? They aren't appreciably differentiated in any dimension.

    It is because buying decisions are never merely about facts, whether nutritional or technical. Buying decisions satisfy emotional needs too; that's why you hire salespeople with good teeth and a pleasant demeanor. There were relevant facts in the SolidFire case, which had a good book of business and an attractive price, but the acquisition decision was not based on technology factors. NTAP also wanted to satisfy an emotional need to 'look modern' by getting a young name on the board, which in turn satisfies the emotional needs of pundits like Gartner, El Reg, et al.

    Parochialism is not your friend. Just sell whatever has traction, wherever it has traction.

  3. bd911

    Enterprise Ready

    We're seeing a lot of interest from end users (including enterprises) in metro cluster deployments – DataCore can deliver a highly available, more performant solution, often for less than the cost of renewals.

  4. bd911


    Adam states that "we can't speculate on how DataCore got its results..." but in fact SPC-1 tests are peer-reviewed, so someone at NetApp has a pretty good idea how DataCore did it.

  5. Anonymous Coward

    "It's like comparing apples and oranges...", but we decided to show the results anyway because we made it to third place this time...

    1. Anonymous Coward

      Yep, NetApp !

  6. Anonymous Coward

    Why do people use SPC-1 for benchmarking AFAs? Overall IOPS are irrelevant for the workloads where people choose AFAs. What's important is latency.

    If you get X IOPS from a collection of hardware and software (let's call it "A"), then adding a second "A" ("2A" if you prefer) will give you 2X IOPS. Double again and you have 4X IOPS. It's easy to increase IOPS without actually improving latency, which is what actually matters.
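    A toy sketch of that scaling argument (all names and numbers invented for illustration): aggregate IOPS grows linearly as identical arrays are added, while the latency of any single request stays exactly where it was.

```python
# Hypothetical model of the commenter's point: stacking identical
# arrays multiplies throughput but leaves per-request latency untouched.

def cluster_iops(per_array_iops: int, arrays: int) -> int:
    """Aggregate IOPS scales linearly with the number of arrays."""
    return per_array_iops * arrays

def cluster_latency_ms(per_array_latency_ms: float, arrays: int) -> float:
    """A single request still hits one array, so latency is unchanged."""
    return per_array_latency_ms

# One array "A": 100k IOPS at 0.5 ms
base_iops, base_latency = 100_000, 0.5
for n in (1, 2, 4):
    print(f"{n}x A: {cluster_iops(base_iops, n):>7} IOPS, "
          f"{cluster_latency_ms(base_latency, n)} ms")
# 1x A:  100000 IOPS, 0.5 ms
# 2x A:  200000 IOPS, 0.5 ms
# 4x A:  400000 IOPS, 0.5 ms
```

    Which is why a headline IOPS number, on its own, says little about how a flash array will feel under a latency-sensitive workload.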

    All these SPC-1 benchmarks prove is that SPC-1 is a crude and unreliable measurement of performance.

    1. bitpushr

      If you have a benchmark that all (more or less) other storage vendors will agree to run, I'm sure NetApp would be all ears.

      1. Anonymous Coward

        The more relevant question is: how many people make decisions based on the SPC-1 results?

        Over the last 15 years I've not met a single person who bought because of it. It's a good benchmark for vendors to thump their chests and celebrate.

        SPC-1 has become rather irrelevant, especially with flash and newer NVM technologies coming up. It's even more irrelevant when normal operational activities aren't taken into account – things like snapshots, snapshot deletes, or node failure events, so people can ascertain their impact. You know... the things that happen in a data center daily.

        Btw... why didn't NetApp use the newer SPC-1 v3, but instead elected to post a result on the publishing deadline of the old version?
