It's fascinating watching people make pronouncements about what technology will look like in 10 years. Saying flash *will* be dead in the next 5 to 10 years, but that spinning rust will hang around for longer, is not a statement that holds up.
Sure, 2D floating gate NAND flash will be dead in a few years, replaced by charge trap NAND and/or 3D NAND, but it will still look like flash as far as you can tell. At some point 3D NAND gets replaced by ReRAM or some other technology, but it will still look like flash as far as you can tell. At some point ReRAM or whatnot gets replaced by yet another technology that hasn't even been chosen yet, because it's so far in the future, but it will probably still look like flash as far as you can tell.
Now as far as spinning rust goes, sure it will be around, but it's not going to look like the disk drive you are used to. HDDs are running into their own scaling issues, and once they go to shingled magnetic recording (SMR) they turn into something like a cross between flash and a file system. This is not the disk drive you are used to. If anything is going to be *dead* in the data center it's the hard drive: too slow, too unreliable, too much power consumption.
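To make the SMR point concrete, here is a toy model of a host-managed SMR zone. This is purely illustrative, with made-up class names and sizes rather than any real device or kernel interface, but it shows why SMR feels flash-like: writes within a zone must be strictly sequential, and rewriting anything means resetting the whole zone, much like a flash erase block.

```python
# Toy model of a host-managed SMR zone. Illustrative only, not a real
# device or kernel API. Each zone keeps a write pointer; writes must
# land exactly at the pointer, and there is no rewrite in place.

class Zone:
    def __init__(self, size_mb=256):
        self.size_mb = size_mb
        self.write_pointer = 0  # next writable offset within the zone

    def write(self, offset_mb, length_mb):
        # Host-managed SMR rejects anything but append-at-pointer writes,
        # which pushes hosts toward log-structured, flash-like placement.
        if offset_mb != self.write_pointer:
            raise IOError("non-sequential write within zone")
        if self.write_pointer + length_mb > self.size_mb:
            raise IOError("write past end of zone")
        self.write_pointer += length_mb

    def reset(self):
        # The zone is reset wholesale; live data must be copied out
        # first, i.e. garbage collection, just like a flash erase block.
        self.write_pointer = 0
```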
As far as flash pricing goes, when buying tier-1 storage you are buying performance, not GB; if $/GB were all that mattered no one would buy anything but tape, or maybe slow SATA drives. The reason people buy 15K SAS drives is because they understand they are buying performance, not bulk storage. And that is why people buy flash. Even for what some might call tier-2 storage used for VDI and other apps, people buy flash because disks are just too slow.
Enterprise storage is so much more than $/GB. Disk storage could be free, but that doesn't matter if it takes more space than you have, more power than you have, more cooling than you have, or can't provide the performance for the applications that you have, or, more importantly, the applications you wish you had if only your storage were fast enough to run them.
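To put rough numbers on that, here is a back-of-the-envelope $/GB versus $/IOPS comparison. Every price and IOPS figure below is an illustrative assumption, not a quote from any vendor, but the orders of magnitude are the point: the SATA drive wins on $/GB, while flash wins on $/IOPS by a couple of orders of magnitude.

```python
# Back-of-the-envelope $/GB versus $/IOPS. All prices and IOPS figures
# are illustrative assumptions, not real vendor numbers.

drives = {
    # name: (price_usd, capacity_gb, random_iops)
    "7.2K SATA HDD":  (250, 4000, 80),
    "15K SAS HDD":    (300, 600, 200),
    "Enterprise SSD": (1200, 800, 100000),
}

for name, (price, gb, iops) in drives.items():
    print(f"{name:15s} ${price / gb:6.3f}/GB  ${price / iops:8.4f}/IOPS")
```

Run it and the SSD comes out at roughly $0.01/IOPS against $3/IOPS for the SATA drive, which is exactly why tier-1 and VDI buyers stopped caring about the $/GB column.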
As far as tiering goes (no, I wasn't going to forget about tiering), while I could dispute the cost differentials of storage *systems* as opposed to just looking at component prices, I will just observe that claiming applications will never manage their own data placement is pretty clearly not going to hold up. Toss in app-aware/guided file systems and/or hypervisors, and that is definitely not a safe prediction.
And I'm sure such tiering systems will come in handy for moving your data between your performance SLC storage and your bulk MLC storage, and between your bulk MLC storage and your TLC archive/backup storage. Standard mechanisms will probably suffice to move your data to your long-term SATA archive storage.
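For illustration, here is a minimal sketch of what such a policy might look like, driven purely by access frequency. The tier names come from the paragraph above, but the thresholds and function are my own assumptions; a real system would also weigh capacity headroom, flash wear, and application SLAs.

```python
# Minimal sketch of access-frequency based tiering across flash types.
# Thresholds are assumptions for illustration only.

TIERS = ["SLC (performance)", "MLC (bulk)", "TLC (archive/backup)"]

def choose_tier(accesses_per_day):
    """Map a data object's access rate to the tier it should live on."""
    if accesses_per_day > 1000:
        return TIERS[0]  # hot data stays on fast SLC
    if accesses_per_day > 10:
        return TIERS[1]  # warm data moves to bulk MLC
    return TIERS[2]      # cold data drops to TLC archive

for rate in (5000, 50, 0):
    print(f"{rate:5d} accesses/day -> {choose_tier(rate)}")
```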
Don't believe that tiering will wind up being between memory types rather than between memory and disk? That's OK; people scoffed at the idea of disk-to-disk instead of disk-to-tape backups too.