Big Blue beats off rivals to push out first LTO-6 tape drive

IBM has announced an LTO-6 tape drive, the first on the market. LTO-6 is the latest LTO Ultrium tape format, offering 6.25TB compressed capacity (2.5TB raw) and transferring data at 400MB/sec compressed (160MB/sec raw). The current LTO-5 generation offers 1.5TB raw capacity and a 140MB/sec transfer rate. The LTO-6 capacity increase comes …
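
As a quick sanity check, the quoted figures are internally consistent: both the capacity and speed numbers imply the same 2.5:1 compression ratio, while the raw capacity rises roughly 1.7x over LTO-5 and the raw speed only about 1.14x. A minimal check using only the numbers from the article:

# Sanity check of the LTO-6 figures quoted above. The 2.5:1 ratio is what the
# article's numbers imply, not a measured real-world compression rate.

lto6_raw_tb, lto6_comp_tb = 2.5, 6.25
lto6_raw_mbs, lto6_comp_mbs = 160, 400
lto5_raw_tb, lto5_raw_mbs = 1.5, 140

print(f"Implied compression ratio (capacity): {lto6_comp_tb / lto6_raw_tb:.2f}:1")    # 2.50:1
print(f"Implied compression ratio (speed):    {lto6_comp_mbs / lto6_raw_mbs:.2f}:1")  # 2.50:1
print(f"Raw capacity gain over LTO-5: {lto6_raw_tb / lto5_raw_tb:.2f}x")              # 1.67x
print(f"Raw speed gain over LTO-5:    {lto6_raw_mbs / lto5_raw_mbs:.2f}x")            # 1.14x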

COMMENTS

This topic is closed for new posts.
  1. Richard Boyce

    Scaling

    Ridiculous scaling for the bandwidth in the graph. They should've changed that by a factor of 10.

    1. Fuzz

      Re: Scaling

      Also, they should have graphed the rate of increase rather than the actual values, as that is what the graph is trying to show.

    2. Christian Berger

      It also looks...

      ...like it was done with a spreadsheet.

  2. Anonymous Coward

    Curious outsider

    I know nothing about large scale backups, but if the media were sufficiently cheap, wouldn't it be possible to use a write once/read many technology? That should make it possible to increase storage density, since the data does not have to be modifiable. In fact, switching away from magnetism could increase the storage density greatly.

    1. Lars Silver badge

      Re: Curious outsider

      Try punch tape.

      1. TRT Silver badge

        Re: punch tape

        An optical version might be feasible. A polymer ribbon about an inch wide... what's the surface area of a CD or DVD?

    2. Florence

      Re: Curious outsider

      This does not address your increased density/other types of technology query, but you can get WORM media for Ultrium drives.

    3. Anonymous Coward

      Re: Curious outsider

      You could at least triple the amount stored by adding thin optical layers on both sides of the tape, with the magnetic layer retained and still functioning. That could also provide a backwards compatibility option.

    4. Anonymous Coward

      @AC

      Re: Switching away from magnetism

      I too was a little surprised to see tape, of all media, still being developed. But I think it makes a lot of sense; not merely for (large) capacity but also for reliability.

      Optical is neat and all, but not always as reliable as it could be. For example: I have several 5.25" floppies dating from the C64 era and I can read all of 'em. I also have several older CDs from the 286/386 era and guess what? Some of them are already completely unusable.

      1. Anonymous Coward

        Re: @AC

        Agreed about optical. I have older CDs from the Pentium 4 era that are already unreadable despite being kept in a proper climate, in protective cases with very limited light exposure. CD/DVD burners are not reliable archive tools.

    5. Anonymous Coward

      Re: Curious outsider

      Most WORM technologies are either magnetic or optical. The magnetic ones wouldn't see an improvement in data density, because they're basically the same as existing tape technologies. Optical ones require composite layers of reflective and various other materials on a carrier (in this case, presumably tape); the issue here is that these layers are prone, over time, to delaminate, rendering the medium useless.

  3. Steven Jones

    keeping the beasts fed...

    As many people who've designed backup solutions can attest, the biggest problem with tapes is keeping them fed. That means avoiding saturating storage arrays, SAN links, servers and so on - quite a difficult balancing act. If you are using host-based backup software (including database backup tools), they can be stretched to their limits by just a couple of LTO drives. Once you drop below the streaming speed of the tape drives and get into shoe-shining mode, throughput drops dramatically. Keeping just two of these LTO-6 drives fed is tricky, even if you use multi-streaming from multiple sources (which introduces its own issues).
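
A rough sketch of the balancing act described here: the 160MB/sec figure is from the article, but the minimum streaming speed and the per-client rates below are illustrative assumptions, not vendor figures.

# Hypothetical feed-rate check for a small LTO-6 setup. MIN_STREAM_MBS and the
# client rates are placeholder assumptions; substitute your drive's and
# clients' real numbers.

LTO6_NATIVE_MBS = 160   # full-speed native write rate, per the article
MIN_STREAM_MBS = 50     # assumed lowest speed-matching step; check the drive spec

def per_drive_feed(client_rates_mbs, n_drives):
    """Average MB/s available to each drive from the given backup clients."""
    return sum(client_rates_mbs) / n_drives

# Example: six clients trickling 20 MB/s each into two LTO-6 drives.
feed = per_drive_feed([20] * 6, n_drives=2)
status = "keeps streaming" if feed >= MIN_STREAM_MBS else "risks shoe-shining"
print(f"{feed:.0f} MB/s per drive {status}; full speed would need {LTO6_NATIVE_MBS} MB/s.")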

    1. seven of five

      Re: keeping the beasts fed...

      Absolutely, keeping a dozen LTO-4 drives in two different libraries happy can easily overwhelm the SAN ISLs. As soon as your company reaches a sufficient size, the storage admins for the backup infrastructure become logistics experts.

      1. Anonymous Coward

        Re: keeping the beasts fed...

        This is why it's common to use a separate tape SAN and backup LAN. Modern servers can more often than not easily cope with running a backup load which would saturate the SAN/LAN, so physical separation is really the only way forward, unless you want to allow the backup window to become effective production downtime.

    2. Mr Atoz

      Re: keeping the beasts fed...

      Shoe-shining is a DLT issue, not LTO. LTO does speed matching by stepping down the streaming speed to alleviate this problem.

      1. Steven Jones

        Re: keeping the beasts fed...

        @Mr Atoz

        You're clearly not as familiar with LTO drives as you think. LTO does speed-match, but only down to a certain level. There is a minimum streaming speed for all LTO drives; drop below that and they will shoe-shine (although bigger tape buffer memory helps). It's also highly dependent on data compressibility, as the minimum speed matching applies at the raw, post-compression rate, so for highly compressible data the feed rate has to be considerably higher than for incompressible data.
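
To put numbers on the compressibility point: speed matching operates on the native (post-compression) rate, so the host must supply compressible data proportionally faster. The minimum rate below is an assumed placeholder, not a published LTO specification.

# Host feed rate needed to keep a drive above its minimum native streaming
# rate, as a function of data compressibility. MIN_NATIVE_MBS is an assumed
# placeholder; consult the actual drive specification.

MIN_NATIVE_MBS = 50  # assumed lowest native (post-compression) streaming rate

def required_host_feed_mbs(compression_ratio):
    """MB/s the host must deliver so the drive stays at its minimum native rate."""
    return MIN_NATIVE_MBS * compression_ratio

for ratio in (1.0, 2.0, 2.5):
    print(f"{ratio:.1f}:1 data -> feed at least {required_host_feed_mbs(ratio):.0f} MB/s")
# Incompressible data needs 50 MB/s from the host; data compressing at 2.5:1
# needs 125 MB/s just to stay above the shoe-shining threshold.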

        @AC

        Of course keeping a separate tape SAN helps, but if you are performing host-based backup, the server still has to pull all that data over the disk SAN as well. In a true 24-hour centre, this is in addition to "normal" workloads. When you've worked with SANs with many hundreds of servers, some in Oracle clusters working at GB/s levels, and daily backup and archive requirements in one data centre measured in petabytes, you rather get to be familiar with issues such as static load balancing (LTO tape drives do not do dynamic load balancing - at least not the ones I came across). Then there's the issue of legacy: most big shops will have a SAN constructed with drives, switches, links and servers of different generations and capabilities, and it's far from easy making the required changes whilst keeping up a 24-hour service.

        It's easy to do for a few servers; it's a whole different ball game in really big shops.

        1. Anonymous Coward

          Re: keeping the beasts fed...

          @Steven - It depends how you do it. The last major company I worked for had Prod/DR datacentres with very fast interlinks, and all data was replicated from prod to DR synchronously. The backups were taken at the DR site, whichever that was for the system in question. This meant that the bandwidth consumed from back-end array to tape drive was always remote from the primary application. We often used dedicated servers to mount filesystem snaps in order to further control which links would go "hot" and have them placed appropriately, usually directly off the main "hub" directors.

        2. Mr Atoz

          @Steven Jones Re: keeping the beasts fed...

          Perhaps it's you who is not familiar with the term "shoe shining", which was associated with older tape tech like DLT. Of course LTO drives will stop or slow down when ingest speed is slow and buffers get depleted. LTO was much better engineered to handle stop/start situations, specifically so that it does not continuously forward and reverse over the same area of media to try to write a small amount of data the way DLT did.

          1. Steven Jones

            Re: @Steven Jones keeping the beasts fed...

            If the data rate falls below the minimum matching rate, the drive will reverse the tape over the area already written in order to get up to sufficient speed to write again. Of course it is better engineered than earlier generations of streaming drives, and larger buffers help. However, whatever you call it - and shoe-shining will do - it still happens. Also, when you have drives that are designed for multiple hundreds of MB/s, buffers get used up very rapidly, and the tape rewind/write-forwards process takes a significant length of time. I'll repeat again: if you are using high-speed LTO drives, you need a very well designed backup infrastructure or you'll get a fraction of the specified throughput and stress the drives unnecessarily.
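
A back-of-envelope model of the buffer behaviour described here; apart from the 160MB/sec rated speed from the article, every parameter below is an assumption chosen purely for illustration.

# Toy model of what happens when a fast tape drive is fed below its native
# rate: the buffer drains, the drive back-hitches, waits for a refill, and
# repeats. Buffer size, ingest rate and reposition time are assumptions.

BUFFER_MB = 1024     # assumed drive buffer size
WRITE_MBS = 160      # native write rate while streaming, per the article
INGEST_MBS = 60      # assumed host feed rate, below the write rate
REPOSITION_S = 3.0   # assumed time for one stop / back-hitch / restart

write_s = BUFFER_MB / (WRITE_MBS - INGEST_MBS)                   # writing until the buffer empties
refill_s = (BUFFER_MB - INGEST_MBS * REPOSITION_S) / INGEST_MBS  # waiting for a full buffer again
cycle_s = write_s + REPOSITION_S + refill_s

print(f"Writing burst ~{write_s:.0f}s, then ~{REPOSITION_S + refill_s:.0f}s stalled.")
print(f"~{3600 / cycle_s:.0f} back-hitches per hour; write duty cycle {write_s / cycle_s:.0%}.")
print(f"Data reaching tape: {INGEST_MBS} MB/s against {WRITE_MBS} MB/s rated.")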

            As for those who use remote-site mirroring for DR and back up there (a design strategy which I often used), that's fine, but it's not an LTO-specific issue. You should still have multi-generational backups or you are prone to common-mode failures - that is, something that screws up your mirror, including a software bug, will still require recovery. There are many ways (array snapshotting, remote array mirrors, remote DB mirroring etc.) to ease this issue, but I'm of the old-fashioned sort that thinks you don't have a valid backup until you have verified multi-generational media as a last-resort recovery option.

  4. Anonymous Coward

    They are useful for archives

    Tape doesn't consume energy while it is parked in the library. Copies can also be unloaded and posted to one's favorite mine shaft.

  5. Anonymous Coward

    Tape is still very valuable as part of a backup and DR solution.

    Virtual tape (w/dedupe) is great for an intermediate step.

    Virtual tape shipping to a remote data centre, and then transfer to physical tape, is quite common in enterprise environments.

    I can see a good future for LTO in terms of backup/archival in large scale environments (Where the real money is anyway). SMB can't justify the cost of tape, and is better off with Dedupe and cloud storage.

    1. Levente Szileszky

      Huh?

      AC wrote it:

      "SMB can't justify the cost of tape, and is better off with Dedupe and cloud storage."

      Huh? It has nothing to do with SMB, it's about size and bandwidth. I'm happily using 1.5TB LTO-5 tapes for weekly backups, which we then take off site, 10-12 in two small boxes, after running full backups overnight at a rate of over 200GB/hr... how much would my bandwidth and cloud service bills cost if I wanted to sync 1-2TB overnight at that speed to any cloud service? We're talking about half a gigabit of bandwidth and 20TB/week of space...

      1. Anonymous Coward

        Re: Huh?

        Almost every small business I have dealt with is better off with dirvish (which is a hardlink tree, not really dedupe).

        It's not the amount of data, it's the change rate. Dirvish, using rsync and hard links, is very efficient and cost-effective. Most of the time I put the backup system at the owner's house.

        Consumer broadband is more than sufficient for this, and the initial sync is onsite, so it goes quickly.
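
For anyone unfamiliar with the approach, the hardlink-tree scheme described here boils down to rsync's --link-dest option: files unchanged since the previous snapshot are hard-linked rather than copied, so each run only stores the changes. A minimal sketch in that spirit (not dirvish itself; the host and paths are made-up examples):

# Minimal dirvish-style snapshot using rsync --link-dest: unchanged files
# become hard links against the previous snapshot, so they cost almost no
# extra space. The source host and vault path are hypothetical.
import datetime
import os
import subprocess

SOURCE = "owner@office-server:/srv/data/"  # hypothetical source to back up
VAULT = "/backups/office"                  # hypothetical local snapshot vault

os.makedirs(VAULT, exist_ok=True)
dest = os.path.join(VAULT, datetime.date.today().isoformat())
latest = os.path.join(VAULT, "latest")     # symlink to the newest snapshot

cmd = ["rsync", "-a", "--delete"]
if os.path.exists(latest):
    cmd.append(f"--link-dest={latest}")    # hard-link files unchanged since last run
cmd += [SOURCE, dest]
subprocess.run(cmd, check=True)

# Re-point "latest" at the snapshot just taken.
if os.path.lexists(latest):
    os.remove(latest)
os.symlink(dest, latest)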

        I have used tape in small business before (SAIT-1), and when we had a double drive failure in the RAID-5 it literally took two days to restore from tape.

        I love tape for archival and as a backup of last resort, but using tape as your primary backup greatly impacts your restoration time.

  6. Paul 139

    /s/80GB/80MB

    Typo, per title

  7. Levente Szileszky

    Tape is here to stay...

    ...anyone with large data can tell you, be it SMB or large enterprise - it's about the economies of scale: backup size and backup speed.

    My LTO-5 library runs at over 200GB/hr every night for ~10hrs; that equals almost half a gigabit in terms of internet pipes, plus the ~12 tapes' worth of space our weekly offsite backup adds up to, around 20TB, on a cloud service that could ingest at 200GB/hr... even if I switched to d2d2t for $cost_of_intermediary_SAN and ran it continuously during the day, it would still mean an over-200Mb connection and the same 20TB of cloud space able to take 200Mb/s...

    And the new "direct-to-AWS" fast-pipe options (private connection services to AWS) that several ISPs now offer will not change this significantly, because they still charge around 3 cents per GB in transfer fees (eg $600 per week for 20TB) on top of your AWS fees, and you still have to pay for your fast local loop/xconnect.

    In essence, unless you already have a very fat internet pipe, cloud backup makes no sense for large backups when a 2U Dell DL2200 running Simpana with a 24-slot 2U LTO-5 library can be picked up for somewhere around $12k if you know how to negotiate, an LTO-5 cartridge starts at $40, and you can just send the tapes to a vault in a box or a bag.
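
The arithmetic behind the "half a gigabit" and "$600 per week" figures holds up. A quick check using the numbers quoted in these comments (decimal units; the 3 cents/GB fee is the commenter's figure, not a current price list):

# Bandwidth and transfer-fee arithmetic from the comments above.
# Decimal units throughout (1 GB = 1000 MB, 1 TB = 1000 GB).

BACKUP_RATE_GB_PER_HR = 200
WEEKLY_OFFSITE_TB = 20
TRANSFER_FEE_PER_GB = 0.03  # commenter's quoted fee

mb_per_s = BACKUP_RATE_GB_PER_HR * 1000 / 3600   # ~55.6 MB/s sustained
gbit_per_s = mb_per_s * 8 / 1000                 # ~0.44 Gbit/s

print(f"200GB/hr sustained = {mb_per_s:.1f} MB/s = {gbit_per_s:.2f} Gbit/s")
print(f"Weekly transfer fee: {WEEKLY_OFFSITE_TB}TB x ${TRANSFER_FEE_PER_GB}/GB = "
      f"${WEEKLY_OFFSITE_TB * 1000 * TRANSFER_FEE_PER_GB:.0f}")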

  8. Roger Anderson

    GB > MB

    Surely the technology is getting worse. LTO-3 transfer rates were amazing...........

    Quote:

    LTO-6 - 2.5TB and 160MB/sec

    LTO-5 - 1.5TB and 140MB/sec

    LTO-4 - 800GB and 120MB/sec

    LTO-3 - 400GB and 80GB/sec

    1. Martin 37

      Re: GB > MB

      Even though the typo had already been pointed out, in your experience can you fill a complete tape in 5 seconds? Thought not.

      1. Roger Anderson

        Re: GB > MB

        Martin,

        1. The typo hadn't been mentioned when I posted my reply. The curse of doing so at the weekend, when it took 24 hours to get the post approved before it went live.

        2. I have never used a tape drive; I do, however, understand how they work. I also understand that it's highly improbable that the transfer rate would drop by an order of magnitude from one release to the next. It was an obvious typo.

        3. Look up the definition of sarcasm. It will help you understand point 2.

        Cheers,

        Roger

  9. ScottME

    Did I work it out right? It would take about a day and a half (32.5 hours) to completely fill one of these cartridges, writing raw data at full speed. Is that practical?

  10. Michael Shaw

    4 hours 20 min

    at full speed, 4 hours 20 min.

    But, the real question is, what SAN / backup server setup would you need to actually achieve that rate and still maintain service?
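
The 4 hours 20 minutes follows directly from the capacity and speed quoted in the article; a quick check (decimal units assumed):

# Time to fill one LTO-6 cartridge at full rated speed, using the article's
# figures and decimal units (1 TB = 1,000,000 MB).

raw_capacity_mb = 2.5e6  # 2.5 TB native
raw_speed_mbs = 160      # 160 MB/s native

seconds = raw_capacity_mb / raw_speed_mbs
hours, rem = divmod(seconds, 3600)
print(f"{int(hours)}h {int(rem // 60)}m to fill a cartridge at full speed")
# -> 4h 20m; the compressed figures (6.25 TB at 400 MB/s) give the same answer.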

    1. seven of five

      Re: 4 hours 20 min

      Either you have a proper server with dedicated hardware and sizing to handle the load (read: the business knows what its data is worth)

      or

      you first offload from the production machine to a disk buffer and then destage from the disks to a couple of tapes. We (mostly) do the latter, doing the backup via LAN, which nicely limits the throughput to ~60MB/s anyway, and then use the day to stream it all onto two separate libraries via dedicated infrastructure. The larger SAP systems are snapped. Yes, this layout could be improved, but for the money we spend it is quite good.

      There never is money for backup, they only want restore...

