Christians are up in arms.
Christian customers are up in arms over this new satanic wizardry and will refuse to buy technology that promotes satanism!
It's rumoured that Fusion-io has become a NAND alchemist and can turn low-cost MLC flash into faster and longer-lived SLC flash. But why, oh why, would it want to do it? SLC, or single-level cell, flash stores one binary bit per cell and is the fastest form of flash with the longest working life; think 10,000 raw program-erase …
Mr Bagley's speculations don't appear to be related to any statements from Fusion-io, so I'll assume they are just his guess. We also have no confirmation that the resulting product has the speed advantage of SLC.
So is it possible that someone has simply used the greater data capacity of MLC to implement some particularly powerful error correction, thereby extending the lifetime but at the cost of bringing the total capacity down to SLC levels?
For 2-bit-per-cell MLC there are four voltage levels, corresponding to the bit values 00, 01, 10 and 11.
If, for example, the voltage levels are in the same order, then the lowest voltage corresponds to 00 and the highest to 11. If the only values written to the array are 00 and 11 (the lowest and highest voltages), then only one bit is being stored per cell. When the cell starts to wear out, a read may return 01 or 10 instead of 00 or 11. This can warn the controller that the block is starting to wear out and that the data needs to be refreshed or copied to another block. Because there is a large margin between the valid states, the data is still readable as the cell degrades, without resorting to performance-degrading ECC.
All that a controller needs in order to use MLC memory as SLC is to know the correspondence between voltage levels and bit values (so that it only writes the highest and lowest values).
(If 01 is read then the data is 0, if 10 is read then the data is 1 in the above example.)
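The encode/decode rule described above can be sketched in a few lines. This is purely illustrative (the function names and level numbering are invented for the sketch, not anything from a real controller): level 0 stands for "00" and level 3 for "11", and the intermediate levels are treated as drifted copies of the extremes that flag wear.

```python
# Hypothetical sketch of the MLC-as-SLC trick described above.
# A 2-bit MLC cell has four voltage levels (0..3). In "SLC mode" only
# the extremes are programmed; intermediate levels indicate drift.

def slc_write(bit: int) -> int:
    """Program only the lowest or highest of the four MLC levels."""
    return 3 if bit else 0  # 0 -> level 0 ("00"), 1 -> level 3 ("11")

def slc_read(level: int):
    """Decode a read level back to one bit.

    Returns (bit, worn): levels 0/1 decode to 0, levels 2/3 decode to 1.
    An intermediate level (1 or 2) means the cell has drifted, so the
    controller should refresh or relocate the block.
    """
    bit = 1 if level >= 2 else 0
    worn = level in (1, 2)
    return bit, worn

# A fresh cell reads back exactly what was written:
assert slc_read(slc_write(1)) == (1, False)
# A cell that drifted from level 3 down to level 2 still decodes
# correctly, but flags the block for refresh:
assert slc_read(2) == (1, True)
```

The wide margin between the two programmed states is what makes the worn-but-still-correct decode possible without ECC.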
There is no alchemy involved here at all, and there is nothing secret about it (well, nothing secret within the industry). In fact, as far as I can tell it is part of the way most vendors operate TLC flash, and it wouldn't surprise me if many do it for MLC as well. But like I said, it's not a secret; how to do it is right there in the data sheets.
I should add that TLC and MLC don't "become" SLC; they can merely be used in an SLC-like manner. MLC as SLC isn't as good as actual SLC, and actually using TLC as SLC would be a waste of money, since TLC chips are rather more complicated than SLC chips. So none of this eliminates the market for three kinds of chips. We use the "SLC mode" to play games and tweak things, but if you actually want real SLC performance and endurance then you should buy SLC parts.
1) I think it broadens the flash market and keeps them on the front end of the margin differential.
2) It seems like this float and yesterday's ThinkEquity rumour were meant to slow the shorts.
I also have two questions for anybody:
1) Is the storage hypervisor that IBM is pushing based on VSL?
2) Don't SCSI Express and NVM Express represent a standards-based broadening of inputs closer to the server-side guys, and consequently help commoditize storage, deconstruct the mainframe, and spread control of VSL?
1. Storage hypervisor that IBM is pushing: are you referring to the SAN Volume Controller/Storwize V7000 platform? That's definitely not based on VSL; it predates Fusion-io by a few years and was never tied specifically to Flash technology.
2. SCSI Express and NVM Express are emerging standards that push PCIe-connected Flash (cards or drive modules) further down the path of commoditization by standardizing the operation of such PCIe devices as perceived by the host. Neither standard is specifically connected to VSL, though Fusion-io did show a proof-of-concept SCSI Express module at HP Discover last year.
Regarding the "deconstructing the mainframe" idea: both centralized and distributed storage have places in enterprise environments, and the pendulum has swung both ways. (That idea applies to compute power as well.)
One potentially very valid use case for this sort of thing is where data is known to have different access patterns, such as file metadata vs the file data itself. The former needs to be updated more frequently than the latter, so one could envision a mechanism that tags certain operations as being for "frequently modified" data and dynamically applies the single-bit-per-MLC-cell trick to that data, while retaining the multi-bit capability for the bulk data. This is the sort of thing we've done for years, but using different storage elements for the two varieties (e.g. Fe-RAM + SLC). The messiest part would be tagging the appropriate operations, but in some applications a simple approach, such as defining the first 5% (say) of the storage to be for metadata, can work well. Remember, with things like journalling file systems, a single update can trigger three distinct (sets of) writes: the data, the metadata, and the journal.
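The simple address-based tagging rule mentioned above can be sketched as follows. All names and figures here are invented for illustration (a real controller would work on flash blocks, not a Python function), assuming the first 5% of the device is reserved for frequently modified metadata:

```python
# Illustrative only: route writes to an "SLC-mode" or "MLC-mode" region
# by address, per the simple rule in the text (first 5% of the device
# holds frequently modified metadata such as journals).

TOTAL_BLOCKS = 100_000
METADATA_FRACTION = 0.05  # assumption: first 5% reserved for metadata

def target_mode(block_addr: int) -> str:
    """Pick the flash mode for a write based on its address."""
    if block_addr < TOTAL_BLOCKS * METADATA_FRACTION:
        return "slc-mode"   # endurance: one bit per MLC cell
    return "mlc-mode"       # capacity: full multi-bit storage

assert target_mode(0) == "slc-mode"       # journal/metadata region
assert target_mode(50_000) == "mlc-mode"  # bulk file data
```

The appeal of the address-based rule is that no per-operation tagging is needed: the file system's own layout does the classification.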
Another concept may be to offer users different resilience/capacity options in much the same way as RAID storage systems have tended to do: enable the user to configure one bunch of flash as "endurance" memory, and another bunch of the same stuff as "capacity" storage.
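The capacity cost of that RAID-like configuration choice is easy to quantify for 2-bit MLC: running a pool in one-bit mode halves its usable capacity. A minimal sketch, with invented names and illustrative figures:

```python
# Sketch of the endurance/capacity trade-off for pools of 2-bit MLC
# flash. Figures and function names are illustrative only.

def pool_capacity_gb(raw_gb: float, mode: str) -> float:
    """Usable capacity of a pool of 2-bit MLC flash in a given mode."""
    if mode == "endurance":   # one bit per cell -> half the raw capacity
        return raw_gb / 2
    if mode == "capacity":    # full two bits per cell
        return raw_gb
    raise ValueError(f"unknown mode: {mode}")

# Carving a 512 GB (raw) device into a 64 GB-raw endurance pool and a
# 448 GB-raw capacity pool yields 32 + 448 = 480 GB usable:
usable = pool_capacity_gb(64, "endurance") + pool_capacity_gb(448, "capacity")
assert usable == 480
```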
By the way, @Stuart Longland: the likelihood of anyone grouping three TLC cells into an 8+1 arrangement is vanishingly low. But grouping 176 of the three-bit things to create a 528-bit entity offering 512 bits of data + 16 bits of ECC protection (i.e. 64-byte entities), and multiples thereof, is, shall we say, rather more likely...
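The arithmetic behind that grouping checks out exactly:

```python
# 176 three-bit TLC cells give exactly the 528 bits needed for a
# 512-bit (64-byte) payload plus 16 bits of ECC.
cells, bits_per_cell = 176, 3
total_bits = cells * bits_per_cell
assert total_bits == 528
assert total_bits == 512 + 16  # 64-byte data word + 16 ECC bits
assert 512 // 8 == 64          # payload is 64 bytes
```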