Fell At The First Hurdle
I'm sorry, but I didn't even understand the first sentence.
Can block-level access storage area network data be deduplicated? Data Domain thinks so, but nobody else does. The company also reckons deduping data on solid state drives will solve the SSD price problem. Deduplication is the removal of repeated data patterns in files to drastically shrink the space they occupy; it is used …
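For what it's worth, the basic mechanism behind "removal of repeated data patterns" can be sketched in a few lines: hash each block, keep one copy per unique hash, and rebuild from an index of hashes. This is a toy illustration of the general idea, not how Data Domain's engine actually works; block size and hash choice here are arbitrary assumptions.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep one copy per unique block.

    Returns (store, index): store maps a SHA-256 digest to the block's
    bytes; index is the ordered list of digests needed to rebuild data.
    """
    store = {}   # digest -> block bytes (unique copies only)
    index = []   # one digest per logical block, in order
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only the first copy is stored
        index.append(digest)
    return store, index

def rebuild(store, index) -> bytes:
    """Reassemble the original data from the digest index."""
    return b"".join(store[d] for d in index)
```

On repetitive data (three identical blocks out of four, say) the store holds a fraction of the input; on random data it saves nothing, which is why dedupe ratios vary so wildly by workload.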
When I see it.
If the current generation of de-duplication technology (NA, Diligent et al.) can do about 900MB/s on small-spindle (150GB FC) 15k drives with a *shed* load of cache at the front end of a disk array, how on earth are they expecting to outperform that with transactional data on SSD?
I mean the logistics of having an SSD array with multiple hosts on it and not having a bottleneck somewhere whilst having a very powerful de-dupe engine working in-line are just staggering.
Whilst it may make SSD more economical eventually, I bet you'd end up needing nearly as much cache for the array processors and the de-dupe engine as you have SSD storage.
I am also a bit confused as to why you state de-dupe is done at a file level for archival storage. I thought most companies were looking at fixed- or variable-length strings of data at the block level, ignoring file boundaries altogether. I could be downright wrong here, though.
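The variable-length approach the comment alludes to is usually done with content-defined chunking: a rolling hash over a sliding window declares a chunk boundary wherever the hash's low bits hit a target, so boundaries follow the content rather than fixed offsets and survive insertions upstream. Here's a minimal sketch assuming a simple polynomial rolling hash; the window, mask, and size limits are illustrative values, not anyone's product parameters.

```python
def chunk_boundaries(data: bytes, window: int = 16, mask: int = 0x3FF,
                     min_size: int = 256, max_size: int = 8192):
    """Content-defined chunking via a polynomial rolling hash.

    A chunk ends when the low bits of the rolling hash are all zero
    (roughly every mask+1 bytes on random data), subject to min/max
    chunk sizes so degenerate inputs can't produce tiny or huge chunks.
    """
    BASE, MOD = 257, 1 << 32
    pw = pow(BASE, window, MOD)  # BASE**window, for evicting the old byte
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD
        if i >= window:
            h = (h - data[i - window] * pw) % MOD  # slide the window
        size = i - start + 1
        if size >= max_size or (size >= min_size and (h & mask) == 0):
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks
```

Each chunk would then be hashed and looked up exactly as in fixed-block dedupe; the win is that an insert near the start of a file only changes the chunks it touches, because later boundaries resynchronise on content.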
It's a bit naive to say that deduped arrays will replace tape libraries, but Data Domain love saying it. A properly configured tape drive is faster than any disk; tapes can be cycled off site, hold more data than disks, and are cheaper. And a dedupe array means all your backup generations depend on one physical copy of the data. Safe enough for backing up web servers, say, but don't rely on it for your business-critical database/repository.