Staging data through disk pools before writing it to tape is a technique that can help, but eventually the cost of copying the data multiple times before it lands on tape means you run out of day to complete the staging.
Many backup products, such as TSM, use a backup server with the tape libraries attached to it. The various systems in an environment send their data across the network to the backup server, which either puts the data into a disk pool for later de-staging to tape, or writes it directly to tape.
The problem is that, unless you have a really fast network, the network itself often becomes the bottleneck.
One solution to this is what is called LAN-free backup: the tape drives are zoned to the systems holding the data, and those systems write straight to tape, while the backup server just manages tape allocation and the metadata for these backups. Many backup systems support this technique. It's normal in large database environments to do it this way, with ordinary backups of OS files written over the network to a disk pool, while the volume backups go LAN-free. There is a trade-off to be aware of, though: backing up straight to tape ties up at least one tape drive per backup (more if you are doing a multi-stream backup for speed), which caps the number of concurrent backups at the number of tape drives. By contrast, you can have many network backups writing into a disk pool concurrently.
So the big data goes LAN-free direct to tape, and the small data gets written to disk, and then moved to tape at a more convenient time.
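As a concrete sketch of the disk-then-tape pattern, TSM's administrative interface lets you chain a disk pool to a tape pool, so migration de-stages data automatically once the disk pool passes a threshold. The names here (lto_class, lib1, TAPEPOOL, DISKPOOL, and the volume path) are purely illustrative, not from any real configuration:

```
/* Illustrative TSM server macro: all names below are made up.      */
/* A tape pool on an LTO device class, using up to 100 scratch tapes. */
define devclass lto_class devtype=lto library=lib1
define stgpool TAPEPOOL lto_class maxscratch=100

/* A disk pool that migrates to the tape pool when it passes 70%    */
/* full, draining back down to 30%.                                  */
define stgpool DISKPOOL disk nextstgpool=TAPEPOOL highmig=70 lowmig=30
define volume DISKPOOL /tsm/diskvol1.dsm formatsize=10240
```

With a setup along these lines, the small network backups land in DISKPOOL at whatever concurrency the disk can absorb, and the server moves them to TAPEPOOL on its own schedule.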
But there are exceptions. At an HPC site I worked at, running GPFS over the IBM Power 775 PERCS infrastructure, which had a very, very fast multi-plane optical network and a significant number of storage servers running GPFS, it turned out to be best to give the TSM server direct access to the GPFS filesystems and let it directly manage the backup of all of the large data from GPFS to tape.