Oracle pops cork as cut-price ZFS array creams NetApp rival

An Oracle ZFS 7420 storage array provides 40 per cent more performance than a NetApp FAS6240 at a $700,000 lower price point. That's according to the SPECsfs2008 benchmark, which tests how effective an array is at servicing NFS IO transactions. The chart shows a selection of vendors' SPECsfs2008 IOPS scores in the fewer-than-1 …


This topic is closed for new posts.
  1. Nate Amsden

    SPC-2 too

    Someone pointed out to me the new SPC-2 results for this Oracle array; at least from a price/performance perspective, the results are impressive.

    I'm sure part of the reason is that this Oracle 7420 is using 10-core CPUs, a total of eight of them, or 80 CPU cores in the cluster, vs NetApp having only 16 cores.

    Another big part, of course, is that they have six times the amount of memory+flash vs NetApp (on the SPECsfs test; the SPC-2 test had no flash).

    Take those combined, along with the cost, and it's pretty impressive indeed.
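The gap described above reduces to simple ratios; a trivial sketch using the figures as stated in this comment (not re-checked against the published SPECsfs disclosures):

```python
# Ratios of the hardware quoted above (figures as stated in the
# comment: eight 10-core CPUs vs 16 cores, and 6x the memory+flash).
oracle_cores = 8 * 10      # eight 10-core CPUs across the cluster
netapp_cores = 16
cache_ratio = 6            # memory+flash advantage, as claimed above

core_ratio = oracle_cores / netapp_cores
print(core_ratio, cache_ratio)   # 5.0 6 -> 5x the cores, 6x the cache
```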

    1. Kebabbert

      Re: SPC-2 too

      I think the cost difference is interesting. NetApp is expensive for similar performance. Why is that?

      1. Matt Bryant Silver badge

        Re: Re: SPC-2 too

        Because Oracle used 280 disks but presented only slightly over 36TB, which means they short-stroked the disks like crazy, whilst the FAS ran fatter and probably more expensive disks: 85.8TB exported off 288 disks, almost three times as much as the Oracle offering from almost the same number of disks. Nobody knows what other steps Oracle took to "optimise" their result, but I'm betting there were a few tweaks NetApp probably wouldn't advise. They also failed to include the clean-up costs for when your ZFS device falls over...

        1. Kebabbert

          Re: SPC-2 too

          I think Oracle used 300GB disks, while NetApp used 450GB disks. Oracle also used mirroring, effectively cutting storage in half.

          Anyway, it is interesting that you can buy several Oracle ZFS servers for the price of one, slower, NetApp server. Something is not priced correctly: either Oracle should be much more expensive, or NetApp should be cheaper. Or both.

  2. Anonymous Coward
    Anonymous Coward

    But can you admin the device?

    Our own experience of the previous Sun/Oracle ZFS array is that it delivers good performance at a good price, but it is possibly the most fragile system we have ever used. Even trivial tasks like configuring the network or failing over a clustered system can break it or necessitate an irksome reboot.

    And Oracle have been useless at fixing our bugs, even years later!

    Anyone else able to comment on their Oracle and/or NetApp "user experience" for comparison?

    1. Anonymous Coward
      Anonymous Coward

      Re: But can you admin the device?

      Well, I have several of these units in production; in fact, this is now our only storage vendor, having finally canned the old EMC gear. We run a mix of single-head and clustered configurations with cross-DC replication, blah blah, and I've had absolutely no problems in the last year.

      The network stuff isn't too bad. If you have a dual-head config, the only real gotcha is that if you want to administer the "non-active" head in the cluster, you need to sacrifice an additional network port. For example, if the admin interface is on nge0 on head 1 and nge1 on head 2, then you can't use nge1 on head 1 for anything else.

      The clustering and failover are rock solid for us. We even have Oracle DBs sitting on NFS on these units, and failover isn't noticeable to any end users; things purr along fine.

      The bang for buck is way up there, and they're very simple to administer once you get your head around some of the BUI quirks, and the fact that you can't mess around on the shell without risking the warranty our red overlords hand to us.

      Support for them is reasonable. You generally have to shout a bit to get through to the really "technical" teams (all of whom seem to be US-based, so if you do have issues you may be in the office a bit later than you want to be). I reckon that's the same for most vendors, though.

      Generally speaking, I don't have any plans to deploy anything other than these units for the next year at least.

  3. Anonymous Coward
    Anonymous Coward

    Oracle has not gotten any better. They've lost basically all of the original engineers on the ZFS project. There are still talented people, but they're too few to continue the work at the rate it needs to happen.

    NetApp, for the most part, *just works*. Everyone has bugs, and they may be a little slow with new features lately, but you don't have to worry about a basic config change taking the whole cluster down.

    1. Scott 29

      Enjoy your storage. Er, how about a database?

      NetApp docs and support are just so much better than Orrible's; gawd, I hate them.

      Oracle really sells a database, and it's okay by them if storage goes out of business. NetApp, on the other hand, sells storage; it's not okay with them if storage goes out of business.

      Want to use your appliance with <foo>? With NetApp there's a best-practices paper on how to use it with <foo>, while Oracle has nothing at all. They don't write the old Sun Blueprints any more, and they've pulled access to the old ones.

      The talent is now at OpenIndiana. Even the hardware isn't what it used to be: it's prettier, but not as innovative as the competition's. I've had a lot of new-hardware failures, both SPARC and x64, that I wouldn't have had years ago.

      1. dariusz

        Re: Enjoy your storage. Er, how about a database?

        Sorry to hear about your poor Oracle support experience. Most customers, once they use it, love it.

        They really sell a database? It's OK if storage goes out of business? LOL, you have zero idea what you're talking about. What about the hundreds of other apps, and engineered systems like Exadata and SuperCluster, where much of the IP is the storage! LOL. You obviously have never been to Oracle OpenWorld.

        Whitepapers/blueprints: we have many, and more are being written all the time.

        All the talent is at OpenIndiana? ROFL. Tell that to the hundreds of engineers working daily on improving our products and allowing us to achieve such high SPC-1, SPC-2 and other benchmark numbers. Oracle has invested more in hardware engineering than Sun could dream about. Sun was too busy building science projects and giving away its precious IP, versus trying to make money and a living like NetApp and EMC do. Thank goodness Oracle killed OpenSolaris; giving away your IP doesn't seem to pay the bills. If it did, NetApp would make ONTAP open source and EMC would do the same for their plethora of systems and OSes. How many people would buy a Data Domain if EMC gave away the code?

  4. Anonymous Coward
    Anonymous Coward

    And SPC-1 results

    Comparison (biased cuz it's done by Oracle) here:

  5. Anonymous Coward
    Anonymous Coward

    Possibly true, but...

    This may be true. NetApp is ridiculously expensive once you include all of the software functionality required. But Sun's ZFS arrays do not have anywhere near the features and functions of NetApp, so it is kind of an apples-and-oranges comparison: a big file server vs. a SAN.

    1. dariusz

      Re: Possibly true, but...

      The comparison doesn't include any of the expensive NetApp software other than NFS... Oracle's arrays, by the way, come with a very large stack of included software, such as the DTrace-based Analytics, which NetApp cannot touch, and NetApp cannot even support Hybrid Columnar Compression for the Oracle Database.

      1. GitMeMyShootinIrons

        Re: Possibly true, but...

        True, but I'd expect Oracle tin to support all Oracle software to a level exceeding a 3rd party. Unfortunately, most people use more than just Oracle software.

        Oracle support for 3rd-party stuff, on the other hand, is pretty poor. Its VMware integration alone is poor in comparison to NetApp's.

        If you want to host Oracle DBs etc, then Oracle tin is a great choice, enjoy! Likewise, if you want an easy-to-use iSCSI box for Windows servers, you could try EqualLogic, while for a more versatile platform I'd go NetApp. It all depends on what's the right tool for the job.

        As companies, I'd sooner deal with NetApp than Oracle - far less troublesome in my experience and this is important when something goes bang in the middle of the night.

        1. Anonymous Coward
          Anonymous Coward

          Re: Possibly true, but...

          True; as with their DB and products in general, Oracle will only begrudgingly accept VMware's, MS's, RH's and IBM's existence. They are certainly not going to optimize anything for them. Has anyone ever tried to figure out whether VMware is supported for Oracle's DB or application servers? Official response: "Sort of, but don't try it."

    2. Anonymous Coward
      Anonymous Coward

      Re: Possibly true, but...

      One reason why we found the ZFS appliance good value was that it included a lot of useful stuff for "free" (such as snapshots, NFS/CIFS/FTP access, replication, etc), and most of it was independent of the size of the stored data, which was most certainly not the case for NetApp (famous for usurious licensing extras).

      But balance that against our admin staff's wasted time looking after the system, and it might not be so good.

      Still, ZFS itself is probably the Grand Canine Gonads of file systems, and it is a crying shame that Apple dropped it, that Linux and Sun never agreed a licence to use it outside of FUSE, and that MS didn't graciously accept defeat and use it in place of NTFS.

  6. Gordon 11

    Like for Like?

    Am I missing something?

    Doesn't the NetApp have ~2.5 times as much disk space?

    So you'd have to multiply the Oracle cost by 2.5 to compare costs.

    Still cheaper, but now it's much closer.

    1. dariusz

      Re: Like for Like?

      Umm, not quite!

      It wouldn't be dollar-for-dollar for more disk space on the ZFSSA. Doubling the space would cost a whole $78k, not 2.5x the cost. That would still put it $620k cheaper than the 6240.

      Also, the point of this exercise is performance; NetApp needs more disks because they are less efficient.
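As a sanity check on that arithmetic, a quick sketch taking the article's ~$700k headline price gap and the $78k disk-doubling figure above at face value:

```python
# Rough check of the cost claim above, taking the article's ~$700k
# price gap and the quoted $78k disk-doubling figure at face value.
price_gap = 700_000     # article's headline FAS6240 vs ZFS 7420 difference
extra_disks = 78_000    # quoted cost to double the ZFSSA's disk space

print(price_gap - extra_disks)   # 622000 -> roughly the "$620k cheaper" claimed
```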

      1. Anonymous Coward
        Anonymous Coward

        Re: Like for Like?

        > Also the point of this exercise is performance, netapp needs more disk because they are less efficient.

        Are there other reasons for buying netapp? Is it a more robust or long-lasting design?

        For example, more disks mean fewer reads/writes per disk, which might allow the system to last longer before failure; or flash might need replacing twice as often. Although you may be able to get higher overall performance out of one architecture, another design might allow you to tune to improve performance of hot-spots which may be critical to your particular business. Or you may be looking at DB vs filer performance.

        I'm glad we've got some decent competition for NetApp but the devil is in the details and the detailed requirements can vary greatly between companies.

        I've no idea, but benchmarks are generally over-simplistic.

  7. thegreatsatan
    Thumb Down

    great if all you run is Oracle

    Keep this in mind: none of the Oracle ZFS systems support any of the VAAI primitives for VMware, or have any integration into vCenter, let alone Hyper-V. How many shops are not putting their virtualization layer on their SAN arrays?

    I've seen sales reps pushing these in tandem with the Oracle Database Appliance (due to the pathetic 4TB usable space on that rig). Honestly, there are better ZFS solutions available.

    Curious: wtf happened to Pillar?

    1. dariusz

      Re: great if all you run is Oracle

      First off, you're correct: today Oracle does not support VAAI, nor does it yet have vCenter integration. But it has many things to consider for a VMware environment.

      1. DTrace Analytics - the ZFSSA is able to drill down and show you live or historical detailed information about an individual virtual machine on VMware. Say I have 40 VMs running: tell me exactly how many IOPS each is doing. How many MB/s? What is the read/write ratio? What is the block size? What is the latency? I can answer these questions, and hundreds of others, with Analytics in mere seconds, for a single virtual machine. Try and do that with your other storage system. By the way, these analytics are included and run 24/7.

      2. Native 40Gb InfiniBand - ESXi supports native 40Gb InfiniBand. You can connect a couple of cables from your ESX servers to your ZFSSA and never run out of bandwidth. Furthermore, you could connect the whole stack to a Mellanox 4036 Grid Director and wouldn't need any other cables to your ESX servers but a few management cables.

      3. The ZFSSA is fully certified and works extremely well with VMware. You could also easily create PowerCLI scripts today to integrate with snapshots, clones and replication on the ZFSSA array. Any ESX admin worth anything would easily be able to accomplish this with PowerCLI, rather than paying a storage vendor $$$$$$$ for a GUI that does the same thing.

      Regarding Hyper-V: it is fully supported and on the Windows 2008 WHQL list. We also have a nice VSS hardware provider that plays well with Hyper-V and Data Protection Manager. Again, no expensive SAN software licence ($$$$$$$) required.

      It sounds like you really need a real demo of the ZFSSA so you can see its true capabilities.

      Oracle stopped giving away all the intellectual property that Sun gave away so thoughtlessly over a year ago. There are many lines of code and optimization that other ZFS storage vendors will simply never get. You don't see NetApp and EMC giving away any of their code. There are hundreds of engineers at Oracle working daily to make the ZFSSA one of the fastest, most stable enterprise storage arrays on the market.

  8. romx

    A "cooked" pic is a great tool to make the *right* impression

    It's quite handy to show a nicely cooked graph in the article. Why did you choose only Huawei-Symantec, if not to avoid showing NetApp on top with their 24-node cluster config?

    Just because it would be unbeatable, and 2.5x faster than Huawei? Hehe.

  9. Twit

    SSD outperforms disk shocker....

  10. King1Con

    RE: great if all you run is Oracle

    tgs> keep this in mind, none of the oracle ZFS systems support any of the VAAI primitives for VMWare or have any integration into...

    With ZFS offering dedup, it seems CRAZY not to run virtualization on top of ZFS!

    This is especially the case since the dedup'ed blocks are shared in memory, on disk, and in the read cache... the more you virtualize on top of ZFS, the higher the effective IOPS (compared with non-virtualized loads, since non-virtualized workloads have fewer opportunities for ZFS dedup to be used).

    1. Kebabbert

      Re: RE: great if all you run is Oracle

      Sadly, dedup on ZFS does not really work well. For instance, you need a lot of RAM, something like 1GB of RAM for each TB of disk. I don't know how NetApp handles dedup; are they more efficient?

      I think we can summarize it like this: this ZFS server from Oracle is a good step. For now, these ZFS servers are low-end to mid-range, and the high end is still out of reach, for instance the 140-node Isilon cluster.

      Thus, if you need low-end or mid-range servers, then ZFS will do. Later, ZFS will venture into the high-end segment, but that will take time. For now, there is a new sheriff in town, and his name is ZFS.
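The 1GB-per-TB rule of thumb mentioned above can be sanity-checked with a back-of-envelope estimate. A hedged Python sketch, assuming the commonly cited ~320 bytes per ZFS dedup-table (DDT) entry and one entry per unique block (actual usage varies with block size and dedup ratio):

```python
# Back-of-envelope ZFS dedup table (DDT) RAM estimate.
# Assumptions: ~320 bytes per DDT entry (a commonly cited figure),
# one entry per unique block; real-world usage varies.
DDT_ENTRY_BYTES = 320

def ddt_ram_gib(pool_tib: float, avg_block_kib: float) -> float:
    """Estimated RAM (GiB) needed to keep the whole DDT in core."""
    blocks = pool_tib * 2**40 / (avg_block_kib * 2**10)
    return blocks * DDT_ENTRY_BYTES / 2**30

# 1 TiB of 128 KiB records works out to ~2.5 GiB of DDT; smaller
# average block sizes push it far higher, which is why rules of
# thumb like "1GB RAM per TB" quickly turn optimistic.
print(ddt_ram_gib(1, 128))   # 2.5
```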

      1. Anonymous Coward
        Anonymous Coward

        Re: Kebabbert

        Using a lot of RAM is typical for ZFS, and is partly why it gives good I/O along with reliability. Our storage heads have 64GB each, so the 1GB/TB ratio is not such a big deal in this day and age.

        We don't use de-dupe as our data sets are not suited to it, which is a much bigger factor in how useful it is. Remember, you don't get something for nothing in life: de-dupe either slows things down and/or needs big in-memory hash tables to work, so you need to ask yourself whether the space saving (and read-cache hit improvement) is worthwhile.

        Lots of VMs? Probably yes...

  11. King1Con
    Thumb Up

    SSD, RAM, Disk, Dedup

    twit> SSD outperforms disk shocker....

    SSD was only used as cache on ZFS. Judging from the competition's costs, DRAM was probably used as cache in the other systems. Definitely a shocker that the DRAM/disk combination was so overcome by the SSD/DRAM/disk combination of ZFS.

    Kebabbert> dedup on ZFS... you need lot of RAM, something like 1GB RAM for each TB disk

    Don't forget your SSD cache. If there are dozens (or hundreds) of VMs running, there are huge savings in hard storage. Want disk efficiency? Add a little RAM; the world has had 64-bit systems for over a decade. Best practice is to separate the OS, app, and data: dedup the OS and app across the VMs, certainly the best bang for the buck.

  12. This post has been deleted by its author

  13. Matt Bryant Silver badge

    Another pointless vendor benchmark comparison.

    We have some old Texas Memory Systems RamSan devices and we sometimes get marketing emails from their UK partners. One that came in last month makes almost as big an oranges-vs-apples comparison as the Oracle one, except they compare the June 2011 SPC-1 result for the RamSan-630 with the Oracle ZFS 7420c result from November 2011 and claim it means the RamSan gives "nine times the value":

    "....our Ram-San 630 scored 400,503.26 IOPS at a cost of $1.05 per SPC-1 IOP! That's almost three times more performance than the Oracle system (137,066.20 IOPS) for almost a third of the price ($2.99 per SPC-1 IOP)!...."
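    The quoted numbers do multiply out roughly as the vendor claims; a quick sketch (figures copied verbatim from the email, and "value" assumed to mean simply the performance ratio times the price-per-IOP ratio):

```python
# Checking the vendor maths quoted above (figures as quoted; "value"
# assumed to be performance ratio x price-per-IOP ratio).
ramsan_iops, ramsan_cost = 400_503.26, 1.05    # SPC-1 IOPS and $/IOP
oracle_iops, oracle_cost = 137_066.20, 2.99

perf_ratio = ramsan_iops / oracle_iops     # ~2.9x the performance
price_ratio = oracle_cost / ramsan_cost    # ~2.8x cheaper per IOP
value = perf_ratio * price_ratio           # ~8.3, generously rounded to "nine"
print(round(perf_ratio, 2), round(price_ratio, 2), round(value, 1))
```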

    Beware of vendor benchmarks and comparisons!

