EMC Avamar gets jiggy wit Data Domain

EMC's 6.0 release of Avamar protects virtual machines faster and can use a boosted central Data Domain data store, as well as Avamar's own Data Store (now doubled in capacity). EMC says Avamar provides the first integration with Data Domain, implying that more is coming. In an ideal world users might wish to have a single …

COMMENTS

This topic is closed for new posts.
  1. thegreatsatan
    FAIL

    cha ching!

    "usable capacity" code for if you can reach our unreachable dedupe and compression rates for your data.

    1. Anonymous Coward
      Happy

      Don't knock it until you try it!

      We use DDs and we quite often see very good de-dupe rates on our daily backups due to the amount of duplicated crap the users want to keep! Currently storing 250TB of NetBackup OST backups in around 30TB of raw DD disk space.

      No one wants to tell the users to bin their unused and duplicated shite, so EMC are making a packet from IT depts like ours who don't need the hassle of nagging users and are happy to spend their way out of it!

      I don't knock it, keeps me in a job!

    2. J.T

      wrong

      No, that's actual physical space. With dedupe it's like 14PB.

      http://www.datadomain.com/pdf/DataDomain-DD800-Series-Datasheet.pdf

      Is that 14PB reachable? Depends. But at least have a tiny bit of a clue before you open your mouth and prove that you are, in fact, an idiot.

  2. Pahhh
    Stop

    @cha ching...

    ""usable capacity" code for if you can reach our unreachable dedupe and compression rates for your data."

    Don't think that statement is fair. If you expect the de-duplication performance to come from commonality of data between machines, you will be disappointed. You probably won't get much more than you would with compression (de-dupe is, after all, a compression system that works over very large data sets).

    Where you gain with deduplication is that you won't have redundant copies of exactly the same data you've backed up. So if you have 20 full backups, you won't end up requiring 20x the amount held on the source.

    This is Cofio's de-duplication calculator, but exactly the same principle applies to Avamar:

    http://www.cofio.com/Capacity-ROI/
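The point above — repeated full backups dedupe down to roughly one copy plus the daily changes — can be sketched in a toy simulation. This is an illustration only, not EMC's, Data Domain's, or Cofio's actual algorithm; the fixed chunk size and SHA-256 hashing are assumptions for the sketch (real systems typically use variable-size chunks of several KB):

```python
# Toy chunk-level dedupe across repeated full backups (illustrative only).
import hashlib

CHUNK = 4  # bytes per chunk; tiny for illustration (real systems use ~KB chunks)

def dedupe_stats(backups):
    """Return (logical_bytes, unique_bytes) over a list of byte strings."""
    seen = set()
    logical = unique = 0
    for data in backups:
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            logical += len(chunk)
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:   # only previously unseen chunks cost disk
                seen.add(digest)
                unique += len(chunk)
    return logical, unique

# 20 "full backups" of the same 1KB dataset, one byte changed per day
base = bytes(range(256)) * 4
backups = [base[:k] + b"X" + base[k + 1:] for k in range(20)]
logical, unique = dedupe_stats(backups)
print(f"logical {logical}B, stored {unique}B, ratio {logical / unique:.1f}x")
```

The high ratio comes almost entirely from the repeated fulls, not from commonality between unrelated data — which is the distinction being argued here.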

  3. Pahhh
    WTF?

    @Don't knock it until you try it!

    "We use DDs and we quite often see very good de-dupe rates on our daily backups due to the amount duplicated crap the users want to keep! Currently storing 250TB of NetBackup OST backups in around 30TB of raw DD disk space."

    I somehow don't believe that this saving has anything to do with duplicated crap. I think it's much more likely because you have 10 or more full backups that have been de-duplicated.

    Tell us: how much source data do you have? If you have 250TB of source data down to 30TB of backups, I will be impressed...

    1. Rigadon

      @Don't knock it...

      I would suggest that the main benefit of deduping your backup is not how much you can wang on it in one go, but the number of fulls you can retain.

      If you only want to keep one backup on disk then you may as well put it on JBOD. If you want to keep a month's worth, then you are going to have to think of something else.

