Those are quite some claims!
I hope they are borne out in practice.
Primary Data has updated its storage silo-converging DataSphere product, and says it’s ready for mainstream enterprise use, not just test and dev. The new release can save millions of dollars, handle billions of files and accelerate Isilon and NetApp filers. The v1.2 product gives enterprise customers, Primary Data says, the …
Reminds me of a TV infomercial: we know you have several home-training appliances in the garage that didn't work... but this one really works. It provides an abs-layer between your determination and your efforts.
Why pin your knowledge workers and managers down on determining what you really need, when buying layers is so much more fun? :-)
I've been there and done this, using Acopia for NFS tiering of data. It's great until you run into problems with (a) the sheer number of files blowing out the cache/OS on the nodes, and (b) your backups becoming a horrible, horrible, horrible mess to implement.
Say you have directory A/b/c/d on node 1, but A/b/c/e on node 2. How do you do backups effectively? You can either use the backend storage's block-level replication, or you can use NDMP to dump quickly to tape. But God help you when it comes time to restore... finding that one file across multiple varying backends is not an easy case.
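To give a flavour of the grovelling, here's a minimal sketch of just *locating* the thing, assuming you've already parsed each backend's NDMP index into a flat text file of paths (the file names here are made up):

    import sys

    # Hypothetical index files, one per backend filer, one path per line.
    # (Names are illustrative; in practice these came out of the NDMP indexes.)
    INDEXES = {"node1": "node1_index.txt", "node2": "node2_index.txt"}

    def find(fragment):
        """Return (backend, path) pairs whose path contains the fragment."""
        hits = []
        for backend, index_file in INDEXES.items():
            with open(index_file) as f:
                for line in f:
                    if fragment in line:
                        hits.append((backend, line.strip()))
        return hits

    if __name__ == "__main__":
        for backend, path in find(sys.argv[1]):
            print(backend + ": " + path)

And that's the easy half. You still have to pull the right dump off the right tape once you know which backend the file actually lived on.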
And this link-to-cloud-storage business is a joke when they're talking terabytes, much less petabytes, of information.
This is completely focused on scratch volumes: spreading the load from rendering and big compute farms that generate heavy I/O on temp files. Once the data settles, it's obviously moved elsewhere for real backup and retention.
Then, once we kicked Acopia to the curb (don't get me wrong, it was magical when it worked well!) we moved to CommVault's integrated HSM/backup solution, which worked better... but still not great. God help you if the file index got corrupted, or if the link between CommVault and the NetApp that intercepted accesses to files migrated to tape or cheap secondary disk got delayed or just broke. Suddenly... crap took a while to restore by hand.
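If anyone ends up restoring by hand again: one crude heuristic I've leaned on (my own assumption, nothing to do with CommVault's actual tooling) is that HSM stubs usually report their full st_size but have almost no blocks allocated, so you can at least inventory what's been migrated before trusting the index:

    import os, sys

    # Heuristic: an HSM stub typically keeps its logical st_size but has
    # (almost) no blocks allocated on disk. Unix-only (needs st_blocks).
    def looks_stubbed(st, threshold=0.01):
        if st.st_size == 0:
            return False
        allocated = st.st_blocks * 512  # st_blocks counts 512-byte units
        return allocated < st.st_size * threshold

    for root, dirs, files in os.walk(sys.argv[1]):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or permission denied; skip it
            if looks_stubbed(st):
                print(path)

Crude, but it beats discovering the migrated files one failed open() at a time.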
I've seriously come to believe that this is truly a hard problem to solve. Now, if you can do what the people above are doing and target large scratch volumes, where people want to keep stuff around for a while but don't care as much whether backups get done (or can spend the money to tier off to cheap, cheap, cheap disk)... maybe it will work.
But once you have a requirement to go offsite with tape, you're screwed. Tape is still so cheap and understood and effective. Except for time-to-restore, which sucks.
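A back-of-envelope on just how much time-to-restore sucks (every number below is an illustrative guess, not a vendor spec):

    # All numbers are illustrative assumptions, not vendor specs.
    DATA_TB = 50        # size of the restore
    DRIVE_MBPS = 300    # sustained throughput per tape drive, MB/s
    DRIVES = 2          # drives streaming in parallel

    seconds = (DATA_TB * 1_000_000) / (DRIVE_MBPS * DRIVES)
    print("%.1f hours, before mounts, seeks and finding the tapes" % (seconds / 3600))

That works out to roughly a day of streaming, and that's the optimistic case where the drives never stop to seek and the right cartridges are already in the library.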
Too many companies believe that sending fulls offsite each night will magically give them DR when they only have one datacenter. Gah!!!
Sorry, had to get that off my chest. Cheers!