"revolutionary technology advances"
They will happen.
Microsoft and University of California San Diego researchers have said flash has a bleak future because smaller and more densely packed circuits on the chips' silicon will make it too slow and unreliable. Enterprise flash cost/bit will stagnate and the cutting edge that is flash will become a blunted blade. The boffins …
Anobit are a NAND flash developer, specifically of the flash translation layers used in all flash disks to improve performance and to stop the storage cells dying prematurely. Their claim to fame was being able to get higher performance out of cheap, commodity flash chips, allowing these to be used in enterprise environments. (The corollary of this, and probably why Apple wanted them, is that their technology also allows a manufacturer to reliably use really cheap flash memory.)
Completely new technologies will be needed to address the brick wall in NAND flash performance, because the current limit is a consequence of the physics used to implement NAND memory: as you increase density (i.e. capacity), there comes a point where the cell is simply too small to hold onto its charge reliably.
This problem is inherent to NAND flash, and it cannot be solved with an Apple-style "revolution": re-packaging or re-configuring existing NAND designs, or just telling people it's still better. It will need a real technological revolution: the replacement of one basic technology with a new, better one.
Only back then it was magnetic disk technology that was going to hit the wall without revolutionary breakthroughs, and we were never supposed to get past it. At the time I think my Big Ass Drive was in the neighborhood of 200MB. I don't expect the SSD guys are any less inventive.
Even if the figures in the charts are accurate (looking into the future is hazy at best), the answer would be to put a hefty chunk of RAM in front as a buffer for front-end performance while the back end stores the data at a lower rate.
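To make that concrete, here is a minimal sketch of the "RAM in front, slower storage behind" idea, assuming a simple write-back buffer; none of the names or sizes correspond to a real controller, it's just the shape of the idea.

    from collections import OrderedDict

    class WriteBackBuffer:
        """Absorb bursts of writes in RAM, drain to slow storage at its own pace."""

        def __init__(self, backend_write, capacity_blocks):
            self.backend_write = backend_write   # the slow back end (e.g. dense flash)
            self.capacity_blocks = capacity_blocks
            self.dirty = OrderedDict()           # block_id -> data, oldest first

        def write(self, block_id, data):
            # Fast path: the write lands in RAM and returns immediately.
            self.dirty[block_id] = data
            self.dirty.move_to_end(block_id)
            if len(self.dirty) > self.capacity_blocks:
                self.flush_one()                 # spill the oldest block to the back end

        def flush_one(self):
            block_id, data = self.dirty.popitem(last=False)
            self.backend_write(block_id, data)

        def flush_all(self):
            while self.dirty:
                self.flush_one()

    # Example: front a pretend slow device with a 1024-block RAM buffer
    buf = WriteBackBuffer(backend_write=lambda blk, data: None, capacity_blocks=1024)
    buf.write(7, b"hello")
    buf.flush_all()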
I am with ravenviz and Tom 13. Technology has a way of advancing past problems. Sure, it will create more issues somewhere else, but that is the nature of the beast. Got to leave some things for the kids to figure out. ;-)
Last I checked there were three or four competing technologies waiting on the sidelines. With luck some of them will scale better.
And a nitpick: apparently "multi" means "two" now, with "TLC" for three levels. Can we please make up our minds and use, say, SLC/BLC/TLC instead? Or at least come up with a convincing TLA where the M stands for two in some obscure language or other?
IBM boffins have been working on storing data using individual atoms on solid media. Quantum storage boffins promise more storage, with an unbreakable security system (cannot access quantum data without changing it). It's coming - with a big fat profit margin for the lucky company that brings it to market.
"Patents are no longer needed. Everything possible has already been invented". "If man was meant to fly, he would have wings". And of course "I'm from the government, and I'm here to help you". Does mankind ever tire of being wrong ?
15Krpm disks have been around for over a decade and we are still only at capacities of 300GB on a 2.5" drive and 900GB on a 3.5" drive; you need at least a dozen 15Krpm drives to compete with a typical SSD on a 50/50 random read/write workload, and even then the disks just can't get the data off quickly enough.
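To put rough numbers on that, a back-of-envelope comparison; all of the IOPS figures below are assumptions typical of the hardware, not benchmarks.

    # How many 15K rpm spindles match one SSD on a 50/50 random read/write mix?
    HDD_15K_IOPS = 180        # random IOPS commonly credited to a 15K rpm drive
    SSD_READ_IOPS = 20_000    # assumed random read IOPS for a typical SSD
    SSD_WRITE_IOPS = 5_000    # assumed steady-state random write IOPS

    # 50/50 mix: average service time is the mean of read and write service times
    ssd_mixed_iops = 1 / (0.5 / SSD_READ_IOPS + 0.5 / SSD_WRITE_IOPS)
    drives_needed = ssd_mixed_iops / HDD_15K_IOPS

    print(f"SSD on a 50/50 mix: ~{ssd_mixed_iops:,.0f} IOPS")
    print(f"15K drives needed to keep up: ~{drives_needed:.0f}")

Even with fairly conservative SSD figures it comes out well past a dozen spindles.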
Phase Change Memory is my bet.
Why could they not have invested more in developing everlasting drives, or is it a "mythical everlasting lightbulb" marketing trick to ensure constant sales?
I use SSDs for thumbnail storage of images for a social networking site, and long term cached files, where read-writes are massively asymmetric. For this they are great.
However, I would like to use a RAID set of them for the databases, but with millions of writes per day I fear I would have to replace the drives every week even with wear levelling.
Therefore I stick with a large bank of trusty traditional drives where lifetime is measured in MTBF and not write cycles.
I guess they are OK, if you have a RAID array with a hopper feeder...
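To put a rough number on the "replace every week" fear above, a crude endurance estimate; every figure is an assumption to be swapped for your drive's real ratings and your actual write volume.

    # Crude SSD endurance estimate; all inputs are assumptions.
    capacity_gb = 200          # assumed drive capacity
    pe_cycles = 3_000          # assumed program/erase cycles for MLC NAND
    write_amplification = 3.0  # assumed controller write amplification
    daily_writes_gb = 500      # e.g. millions of small database writes per day

    endurance_gb = capacity_gb * pe_cycles / write_amplification
    lifetime_days = endurance_gb / daily_writes_gb

    print(f"Writable before wear-out: ~{endurance_gb / 1024:.0f} TB")
    print(f"Estimated lifetime: ~{lifetime_days:.0f} days (~{lifetime_days / 365:.1f} years)")

On those assumptions it comes out closer to a year than a week, but it is still write cycles, not MTBF, doing the accounting.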
ZFS's raidz (available for Solaris, FreeBSD) allows you to have masses of online storage on slow disks, and accelerate reading and writing by adding cache and log devices on SSDs. It gives you all the write speed of an SSD, all the time knowing that your data is backed up and an SSD can be removed or replaced without data loss or issues.
Read speed is a little more problematic. Actually that's BS: sequential read speed is excellent; it's random IOPS that suffers if the data is not already in the cache stored on the SSDs, and for significant loads you would want as much cache as your working set.
FWIW, on my home filer I have 6 'EcoGreen' 1.5TB drives - aka the slowest cheapest drives I could find - accelerated with one 60G SSD split in two, with 30G for cache and 30G for write log, all using onboard SATA. I get sequential read speeds of about 400MB/s (in cache) and 550MB/s (not in cache), and write speeds of 400MB/s (I've never managed to overflow the intent log).
I guess what my long-winded post is saying is that SSDs are genuinely useful, but I only see them as accelerators and cache for real storage.
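To show why you want the cache to cover the working set, a toy model of the read path; the IOPS figures are assumptions, not measurements of my pool.

    # Toy model: reads hit the SSD cache or fall through to the slow raidz disks.
    ssd_cache_iops = 30_000    # assumed random read IOPS of the cache SSD
    raidz_iops = 150           # assumed random read IOPS of the raidz vdev

    def effective_read_iops(hit_rate):
        """Blended random-read IOPS for a given cache hit rate (0.0 - 1.0)."""
        avg_service_time = hit_rate / ssd_cache_iops + (1 - hit_rate) / raidz_iops
        return 1 / avg_service_time

    for hit in (0.5, 0.9, 0.99):
        print(f"hit rate {hit:.0%}: ~{effective_read_iops(hit):,.0f} IOPS")

Once misses dominate, the spinning disks set the pace, which is exactly why the cache needs to be as big as the working set.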
We need a change in how file systems are implemented. Back in the old days, storage was an array of small sectors (ca. 512 bytes) that could be written largely independently of one another. File systems were designed around that idea.
It is no longer true at the physical level: your basic modern hard drive will re-write the whole track when modifying a sector, since the sectors are so close together that trying to rewrite just one would very likely slop into the next. And if you are running RAID it just gets worse, as you now should really be working in quanta of a RAID stripe. Flash is the same: you have rather large erase blocks that it would be ideal to write as a unit. However, we keep forcing the physical layer to pretend it can write each small sector independently of the others.
Worse yet, the overhead of fetching a small sector vs. grabbing a huge block of data is killing performance on things like SATA, SAS, and PCIe-connected storage. You can spend as much time on overhead as you do on the actual data.
What we need is to have file systems that are designed to work in arbitrarily large blocks - e.g. a single Flash erase block, or a track of a hard disk, or a stripe of an array - and deal with things like file packing to make good use of that. We need the OS to be smarter about grouping updates into blocks. We need the physical layers to CORRECTLY indicate their ideal block size, and to transfer those blocks efficiently. Move "flash translation layer" OFF the media, and into the OS, which has the information to optimally access the data.
Do that, and we can continue to see improvements in flash density (as you can remove much of the overhead of the FTL).
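As a rough illustration of the mismatch, assuming made-up but typical sizes (512-byte sectors, 2 MiB erase blocks):

    # How much the medium really writes for N small, random sector updates.
    SECTOR = 512                      # what the OS pretends it can write
    ERASE_BLOCK = 2 * 1024 * 1024     # what the flash actually erases and rewrites

    def bytes_physically_written(updates, grouped):
        if grouped:
            # The OS batches updates into whole erase blocks before writing.
            sectors_per_block = ERASE_BLOCK // SECTOR
            blocks = -(-updates // sectors_per_block)   # ceiling division
            return blocks * ERASE_BLOCK
        # Worst case: every sector update lands in a different erase block.
        return updates * ERASE_BLOCK

    n = 10_000
    print("scattered:", bytes_physically_written(n, grouped=False) // 2**20, "MiB")
    print("grouped:  ", bytes_physically_written(n, grouped=True) // 2**20, "MiB")

Roughly 20 GiB of physical writes versus 6 MiB, for under 5 MiB of actual data: that gap is what a block-aware filesystem (or a good FTL) exists to close.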
AFAIK most if not all unix-y filesystems come with some way to tune the block size, and ensuring block-boundary alignment isn't too hard either. As an example, the venerable Berkeley Fast File System comes with selectable block sizes and "fragments" to provide small-file (or large-file tail) packing. Read up on it if you don't know it; these are not Redmondian fragments of the needs-defragmenting kind. These fragments are a feature, not a bug. If that isn't what you were after, what then?
Besides, it's the erasing that's the problem, not the writing. And that can be alleviated, at least in part, by making filesystems tell the drive what they no longer need (TRIM) and lots of cleverness in the SSD controller. That controller also needs to take care of wear leveling, which is possibly even harder and requires some background shuffling of data anyway.
If anything, I'd say filesystems need to trust the controller more, not less, and stop optimising for now-outdated characteristics. Instead, simply treat the storage as one big sack of writable blocks. These days the block numbers have about as much relation to physical location as virtual addresses have to physical memory addresses. Even spinning disks patch up bad blocks from reserve areas, resulting in blocks getting shuffled way across the disk. And the OS doesn't know squat about what clever things the SSD manufacturer came up with this week.
Not unless the drive tells it, that is. So they sprout new interfaces to show and tell, meanwhile lying through their teeth over the old interfaces, because if they don't, all the old software up and gets all confuzzled. That is hardly an argument to move more low-level logic to the same easily confuzzled software, now is it?
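For the curious, a toy sketch of the point: the block numbers the OS sees are just names, and the controller is free to move the physical location underneath. This is nothing like a real FTL, just the shape of the idea with a very crude wear-levelling policy.

    # Toy logical-to-physical remapper with crude wear levelling.
    class ToyBlockDevice:
        def __init__(self, physical_blocks):
            self.mapping = {}                      # logical block -> physical block
            self.data = {}                         # physical block -> contents
            self.free = list(range(physical_blocks))
            self.erase_counts = [0] * physical_blocks

        def write(self, logical, contents):
            old = self.mapping.get(logical)
            # Pick the least-worn free block: the wear-levelling decision.
            new = min(self.free, key=lambda b: self.erase_counts[b])
            self.free.remove(new)
            self.mapping[logical] = new
            self.data[new] = contents
            self.erase_counts[new] += 1
            if old is not None:
                self.free.append(old)              # old copy "erased", back in the pool

    dev = ToyBlockDevice(physical_blocks=8)
    for _ in range(100):
        dev.write(logical=0, contents=b"x")        # hammer one logical block
    print("erase counts across physical blocks:", dev.erase_counts)

Hammering one logical block spreads the wear across all eight physical blocks, which is exactly the kind of shuffling the OS never sees unless the drive tells it.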
Rather than increasing density, couldn't you get more space out of an SSD just by allowing a larger physical size? Say put one in a 3.5" form factor instead of 2.whatever. Pretty sure I could fit three 2.5 ones in a 3.5 shape. I doubt you'd get an order of magnitude more space with the same tech, but doubling shouldn't be out of the question.
Newegg comes up with about a half dozen of these, so clearly someone's thought of it before. I'm not sure why it's not more common. Desktops aren't that dead yet, are they?
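A quick volume check, using the standard form-factor dimensions in millimetres (heights vary by model, so treat this as approximate):

    # Rough volume ratio of a 3.5" drive envelope to a 9.5 mm 2.5" drive.
    ff_35 = (146.0, 101.6, 26.1)   # typical 3.5" drive: length x width x height, mm
    ff_25 = (100.0, 69.85, 9.5)    # typical 9.5 mm 2.5" drive, mm

    def volume(dims):
        length, width, height = dims
        return length * width * height

    print(f"3.5in envelope vs 2.5in drive: ~{volume(ff_35) / volume(ff_25):.1f}x by volume")

By raw volume it's nearly 6x; connectors, mounting and airflow are what bring the practical number down to roughly three.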
So making flash denser (i.e. cheaper) also makes it slower. I fail to see the problem. Today we have the same thing, the slow cheap storage is called "hard drives". In the enterprise world we may end up with two tiers of flash, one that's dense, cheap and slow, and another that's less dense, less cheap but faster.
Look at the comparison to the world of enterprise storage before SSDs came onto the scene. We had two tiers of storage: large, cheap SATA drives that were slow (100-150 IOPS on a 1 or 2TB spindle) and small, expensive 15k rpm SCSI/FC drives that were "fast" by comparison (300-400 IOPS on a 300GB spindle). Basically the expensive stuff had 10x more IOPS per gigabyte.
You don't need much difference in performance between slow cheap flash and fast expensive flash to meet or exceed that 10x IOPS per gigabyte difference that everyone used to think was so big and easily worthy of tiering.
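Working those figures through (the SSD numbers are assumptions for illustration, the spindle numbers are the mid-points quoted above):

    # IOPS per gigabyte for the three kinds of device discussed above.
    tiers = {
        "7.2K SATA": {"iops": 125, "capacity_gb": 1000},
        "15K FC/SCSI": {"iops": 350, "capacity_gb": 300},
        "SSD (assumed)": {"iops": 20_000, "capacity_gb": 200},
    }

    for name, t in tiers.items():
        print(f"{name:14s} {t['iops'] / t['capacity_gb']:8.2f} IOPS/GB")

The old 15K-versus-SATA gap is about 10x; even a modest SSD sits nearly 100x beyond the 15K tier, which is why that middle tier starts to look redundant.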
I believe 15k rpm drives no longer have any role in enterprise storage; it makes more sense to have two tiers, flash and SATA. The SCSI/FC tier hardly performs better than the SATA drives while offering less capacity, and it's basically the same capacity as the flash drives while being orders of magnitude slower, without compensating for this by costing orders of magnitude less.
Perhaps we'll still have the three tiers of storage EMC salespeople keep trying to push on unsuspecting buyers (perhaps to unload their huge stock of now-useless 15k rpm drives). Except it'll be one tier of fast, expensive SSD, one tier of dense, cheap SSD (compensating for the lower lifetime via massive internal overprovisioning, the modern equivalent of short-stroking) and a third tier of 7200 rpm SATA for bulk data.
I came across rather an excellent article this morning that gives a slightly different view in reaction to the paper.
http://pcper.com/reviews/Editorial/NAND-Flash-Memory-Future-Not-So-Bleak-After-All
It makes for interesting reading: while it doesn't deny that there are limitations to NAND flash memory, it does challenge some of the assumptions in the paper and asks some questions about the motivations of the authors.
Having supported PCs since the 1980s, I was surprised that I have had to do OS rebuilds on BOTH my granddaughter's netbooks, which are fitted with SSDs. This was necessary because of apparent disc corruption. I'm not saying it WAS because of the SSDs' unreliability, but Dell replaced one of them under warranty in addition to my rebuild work.