Yes and no. Dedupe and compression à la Pure could work on something like a big old array of SATA drives, but I think you underestimate the performance hit. It would essentially relegate the array to backup/archive duty only, and even then I question how effectively it could do a restore. As data gets deduped, its access pattern changes from sequential to random. Sure, you have the possibility of retrieving some of those blocks from cache, but unless the data is very highly deduped you won't make up for the metadata overhead and the random-access penalty. Depending on how compression is implemented there can sometimes be a performance bump, but more often than not it costs a little performance.

There are two real reasons dedupe and compression work so effectively on flash. First, flash has performance to burn. Until recently SSDs were smaller in capacity than their SAS counterparts, requiring more drives, and the drives themselves are an order of magnitude better, more so in a very random environment. You could give up the same number of IOPS you're using on SAS for backend processing and barely notice. Second, the random access pattern doesn't faze SSDs at all. Two reads from entirely different parts of an SSD to reconstruct a block? No problem. Two reads from different parts of a spinning disk? Have fun; seek-time penalties are going to kill performance.
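
For a rough sense of scale, here's a back-of-envelope sketch. All the figures are generic ballpark assumptions (7.2K SATA numbers, a modest SATA SSD, 4 KB dedupe blocks), not measurements of any particular array, but they show why a restore that dedupe has turned into random reads is so painful on spinning disk:

```python
# Rough estimate: time to restore 1 GiB whose blocks dedupe has scattered.
# Every figure below is an assumed ballpark, not a benchmark of a real drive.

GIB = 1024**3
BLOCK = 4096                      # assumed 4 KB dedupe block size
blocks = GIB // BLOCK             # ~262k random reads to rebuild 1 GiB

hdd_seq_bps   = 150e6             # 7.2K SATA sequential throughput (~150 MB/s)
hdd_rand_iops = 80                # ~8.5 ms seek + ~4.2 ms rotational latency
ssd_rand_iops = 50_000            # modest SSD random-read figure

print(f"HDD, sequential restore : {GIB / hdd_seq_bps:7.1f} s")      # ~7 s
print(f"HDD, random (deduped)   : {blocks / hdd_rand_iops:7.1f} s") # ~55 min
print(f"SSD, random (deduped)   : {blocks / ssd_rand_iops:7.1f} s") # ~5 s
```

Under those assumptions the same gigabyte that streams off the SATA drive in seconds takes the better part of an hour once every block needs its own seek, while the SSD barely notices the difference.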