I don't think...
...it'll be the complete end of tape, even with de-duplication.
With mainframes you have colossal VTS libraries, which currently pair a small disk cache with a large tape back end. Mainframe tape volumes are generally small, and yes, written at high frequency, but most with a short retention period. And if you look at how HSM has evolved on something like IBM's mainframe, the VTS as designed is very well suited to the OS.
If you move over to open systems and other platforms, the picture differs. With a good de-dupe rate, de-duplication can bring big savings. But what if you have a library with, say, 30 LTO4 drives in it and your space utilisation on the tapes isn't that great? You've got up to 30 hosts writing at 120MB/s = 3,600MB/s sustained, potentially for hours, yet the capacity the virtual tape disk cache actually needs doesn't justify the arm-to-storage ratio that throughput demands. You'd need hundreds of arms (see the rough sums below), even though many arrays are moving towards slow-ish SATA rather than something like 15k rpm FC disk. Then pile on replication of your tape/disk cache and you've got a disk array with a very intense read/write profile, albeit quite a sequential one.
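To put rough numbers on that arm problem, here's a back-of-envelope sketch in Python. The LTO4 rate is real; the effective per-spindle rates and the replication factor are my own assumptions, since heavily interleaved streams pull a disk well below its single-stream sequential figure:

    import math

    # Disk arms needed to soak up a 30-drive LTO4 virtual library.
    # LTO4's native rate is real; everything else here is assumed.
    DRIVES = 30
    DRIVE_MB_S = 120            # LTO4 native transfer rate, MB/s
    REPLICATION_FACTOR = 2      # remote copy roughly doubles array I/O

    # Assumed effective per-spindle rates once 30 streams interleave:
    EFFECTIVE_MB_S = {"15k rpm FC": 25, "SATA": 15}

    aggregate = DRIVES * DRIVE_MB_S * REPLICATION_FACTOR
    print(f"Aggregate array load: {aggregate} MB/s")

    for disk, mb_s in EFFECTIVE_MB_S.items():
        arms = math.ceil(aggregate / mb_s)
        print(f"{disk}: ~{arms} arms just to sustain the write load")

The exact figures don't matter much; the point is that throughput, not capacity, sets the spindle count.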
Yes, if you exploit good disk technologies you can replicate your storage and back it up completely separately from the host, but not on all operating systems or in all environments. And that sort of disk, software and automation is costly to implement.
Also you have to think of large backups with a long retention but a low frequency, like yearly multi-terabyte backups. You don't want to keep a 10TB save on a VTS when its read profile is near zero and the next iteration a year later has changed so much that de-duplication hardly claws back any savings. You want to farm that sort of thing out to tape, preferably tape at a remote site. A 10TB save could take up maybe 15-18TB within a 13-month cycle, or about 8 tapes with good compaction.
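As a rough illustration of where those figures come from (LTO4's 800GB native capacity is real; the change rate and compaction ratio are assumptions):

    import math

    SAVE_TB = 10
    CHANGE_RATE = 0.6        # assumed year-on-year data change
    NATIVE_TB = 0.8          # LTO4 native cartridge capacity, TB
    COMPACTION = 2.5         # assumed "good" compaction ratio

    # Two iterations overlap within a 13-month cycle; de-dupe only
    # saves the unchanged portion of the second copy:
    held_tb = SAVE_TB + SAVE_TB * CHANGE_RATE
    print(f"Data held over the cycle: ~{held_tb:.0f} TB")

    tapes = math.ceil(held_tb / (NATIVE_TB * COMPACTION))
    print(f"Cartridges at {COMPACTION}:1 compaction: {tapes}")

At 60% change that's 16TB and 8 cartridges, which is roughly where the 15-18TB and 8-tape figures sit.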
Virtual tape for certain platforms still needs some work on the performance of a library serving multiple hosts versus the storage/cache required to deliver that throughput, if you're looking at it to replace large, heavily used tape libraries, even where de-duplication is not in-line but a post-process afterthought.
I will be keeping a keen eye on how vendors deal with tape and its development, or whether they think storage arrays will become so cheap, yet perform just as well, and networking costs will fall far enough, that then maybe... just maybe, tape could become redundant.