The incredible shrinking NAND: I'm MEELLLLTING

NAND is heading to the graveyard, getting closer and closer with every geometry shrink and every added cell bit. Any replacement NV-RAM technology will require a controller software rip-and-replace, which could kill one-trick-pony flash array startups. NAND flash is non-volatile but expensive to make and both ways of making it …

COMMENTS

  1. Anonymous Coward

    Stupid farmer?

    While the author has no doubt covered the technology correctly, the idea that current NAND startups will all simply fail to adapt and go out backwards because they are incapable of doing anything else, reminds me of the fallacious arguments that the "Greens" constantly use.

    The idea that companies (or indeed humans) in general will sit idly by and starve to death because what we are "growing" today won't grow in the future is false. We will grow something else. The NAND companies are not peopled by morons. The moment a new tech comes along, they will likely be first out of the gate with products.

  2. FartingHippo

    640k

    "We can say with certainty that we will not see 4-bits per cell NAND and that we might not see – in fact probably won't – NAND process geometry going below 10nm, or even below 15nm. It's a dead-end game."

    Oh dear; that's a keeper.

    1. Charles 9

      Wrong comparison

      You're comparing a MINIMUM to a MAXIMUM. Furthermore, the 640K maximum you describe was an economic barrier: once cheaper memory came along, more memory became useful. The limitations of NAND, however, are more physical in nature. Packing cells in more densely makes them more volatile, as does reducing their size (hence the reduced working lives). As you get smaller, atomic and quantum inconsistencies come into play, and as subatomic particles are a fixed size, there's really nothing you can do about them.

    2. Charles Manning

      128-bit CPUs

      The physics is severely against you moving NAND to tighter geometries, particularly when coupled with more bits/cell.

      To get 1 bit per cell requires 2 voltage bands. That gives a decent amount of margin: a few electrons can leak or get trapped and the cell still reads back the correct bit value.

      2 bits per cell requires 4 voltage bands and 4 bits per cell requires 16 voltage bands. That means far fewer electrons need to leak for a state change.

      When you shrink the cells, each cell stores fewer electrons, and leakage and electron trapping become easier. That makes it easier for state changes to happen.

      Everything is conspiring against higher density flash.
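
      As a back-of-the-envelope illustration of that arithmetic, something like the Python below shows how the per-band margin collapses as you add bits and shrink cells (the stored electron counts are made-up round numbers, not device data; only the 2**bits relationship comes from the argument above):

      ```python
      # Margin between adjacent voltage bands, measured in electrons.
      def margin_per_band(stored_electrons: int, bits_per_cell: int) -> float:
          bands = 2 ** bits_per_cell       # 1 bit -> 2 bands, 4 bits -> 16 bands
          return stored_electrons / bands

      for electrons in (1000, 100):        # a smaller cell stores fewer electrons
          for bits in (1, 2, 3, 4):
              print(f"{electrons:4d} e-, {bits} bit(s)/cell: "
                    f"~{margin_per_band(electrons, bits):6.1f} e- per band")
      ```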

      There is a continuing market for small players, though. As all the big players move towards less robust flash, this opens up a small but growing market for higher-reliability SLC flash.

      Many people need this stuff for booting and for higher-reliability systems, and are prepared to pay a higher price for the more reliable flash.

  3. Anonymous Coward

    doomed, we are all doomed

    A couple of things spring to mind:

    Technologies only die if there is no longer a use for them or when they are replaced by something better that is available at the same price. The article mentions the "promise" of a handful of new memory technologies but gives no real insight into when they will be available and what their specs will be (capacity, $/GB, speed, lifetime, ...). Consequently, predictions about the imminent death of all NAND NV memory products (and the subsequent extinction of the companies that make them) seem somewhat premature.

    Will "whole software stacks" need to be rewritten? Really? The idea behind a software stack is that functionality is partitioned with the result that changes in one layer (say garbage collection or wear levelling) do not result in the need for a complete rewrite. Even if drivers, file systems and the like do need to be rewritten for the new technologies, why does the author seem to think that this will be impossibly costly or onerous? If it is hard to do, that fact would seem to me to prolong the life of the incumbent NAND NV control software rather than bring about its demise.

    If the author has figured all this out, why does he assume that investors and potential acquirers will not be able to do the same? It all sounds like basic due diligence to me.

  4. Brian Miller 1

    Economy of Scale?

    Hang on a minute??? Doesn't economy of scale come into it at all? It works like this, you see: making anything is limited by the resources allocated to its production. As the whole world now knows, SSDs may cost more to make than spinning magneto-drives, but that is in large part due to the incumbents in the HD game having huge manufacturing facilities that have already paid for themselves and need minimal change to move (slowly) to newer technologies.

    And people have shown that they are willing to pay more for better storage, so driving cost into the ground isn't really the be-all and end-all of flash, is it? Setting up more factories and taking an ever-increasing share of the spinning disk market is the future. Flash's "areal density" or equivalent is already reaching parity with spinning disks (and there's still room to cram more in by going to a 3.5" form factor). It may cost more, but people will still buy it, and THAT is what is important to BUSINESSES.

  5. Duncan Macdonald

    Flash DOES have minimum size limits

    Just like previous non-volatile memory systems such as ferrite core and plated wire, there is a minimum size limit for the cells in flash memory (in this case because smaller cells have unacceptably low write endurance).

    The competitor designs for NVRAM (e.g. phase change) do not hit their minimum size limits until well below the minimum cell size of usable flash memory. When their production cost ($/GB) drops below that of flash memory, the industry will move to the newer technology.
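
    The crossover logic is easy to sketch; the starting prices and decline rates below are placeholders, not industry figures:

    ```python
    # Year at which a challenger's $/GB (falling faster from a higher
    # start) drops below flash. All inputs here are hypothetical.
    def crossover_year(flash_cost: float, flash_decline: float,
                       new_cost: float, new_decline: float,
                       horizon: int = 20) -> int | None:
        for year in range(horizon + 1):
            if new_cost <= flash_cost:
                return year
            flash_cost *= 1 - flash_decline
            new_cost *= 1 - new_decline
        return None

    # e.g. flash at $0.50/GB falling 20%/yr vs PCM at $5/GB falling 45%/yr
    print(crossover_year(0.50, 0.20, 5.00, 0.45))  # -> 7 years, on these inputs
    ```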

    Users will still see SSDs with the same external interface (SATA 2), so at the user level the change will be invisible, apart from the longer lifetimes of the newer SSDs.

    As the newer technologies do not need wear leveling or write amplification minimisation, the complex flash controllers such as Sandforce will no longer be needed and much simpler controllers can be used.

    It is firms like Sandforce and Indilinx that will suffer a revenue hit from the new technology; most of the flash ecosystem will be unaffected.

  6. Nick Galloway

    Transitional technology

    So the practical limits of SSDs are defying Moore's law!?

    SSDs might be nice and fast, but I think I will persist with my spinning disks a little longer, until the solid state technology matures and the interim 'solutions' are rationalised.

  7. Anonymous Coward

    Is low endurance a problem?

    I foresee low-endurance chips being used only for archiving, as a DVD/Blu-ray replacement.

    The plan here is to release films on an SD-card-like device which can be rewritten if needed but is intended to be read-only.

    However, low endurance isn't a problem if the chips are arranged as, say, a 2TB SD-sized module to which you simply add data until it is "full", and which has built-in battery and wireless capabilities.

    Ideal for netbooks and phones: you could carry around your entire movie library on these things.

    1. Charles 9

      Re: Is low endurance a problem?

      From what I've read, multi-level-cell NAND flash is more prone to state-changing electron leakage. IOW, it's more prone to "bit rot", which can occur even while it's sitting idle. Dealing with that would require error-correcting circuitry, which would need more chip real estate.
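
      As a toy illustration of that real-estate cost, the textbook Hamming(7,4) code spends 3 parity bits per 4 data bits and can repair a single flipped bit. (Real controllers use far stronger codes such as BCH or LDPC; this is just the flavour of the idea.)

      ```python
      # Hamming(7,4): codeword layout is p1 p2 d1 p3 d2 d3 d4 (positions 1..7).
      def encode(d):                       # d = [d1, d2, d3, d4], bits 0/1
          p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1, 3, 5, 7
          p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2, 3, 6, 7
          p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4, 5, 6, 7
          return [p1, p2, d[0], p3, d[1], d[2], d[3]]

      def correct(c):                      # c = 7-bit codeword, fixed in place
          s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
          s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
          s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
          err = s1 + 2 * s2 + 4 * s3       # syndrome = 1-based error position
          if err:
              c[err - 1] ^= 1              # flip the rotted bit back
          return [c[2], c[4], c[5], c[6]]  # recover d1..d4

      word = encode([1, 0, 1, 1])
      word[4] ^= 1                         # simulate one leaked/flipped cell
      assert correct(word) == [1, 0, 1, 1]
      ```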

  8. Alan Brown

    Economics

    Larger flash cells are easier to make, can be fabbed on more lines and have more competition (not to mention far higher durability and speed than small flash cells).

    This will push development into a few areas.

    - alternatives to NAND for higher density

    - chip stacking (already being done anyway)

    - "cheap as chips" lower-capacity devices.

    The main disadvantage of using stacked chips is heat/power consumption. Outside of laptop and HPC environments that might be an acceptable tradeoff.

    I certainly wouldn't write off the startups. Most of them don't fab their own silicon, so if the technology changes they'll simply change the way they build things and keep going.

    I also wouldn't write off NAND. It's been available commercially for ~30 years and there's a lot more development that can be done yet. (One thing which springs to mind is a low-power DRAM SSD with supercaps and a flush-to-NAND routine for when the power goes off, to mitigate the durability issues. It doesn't matter how the technology works as long as it looks like NV storage to the operating system.)
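
    Something like this, in sketch form (the power-loss trigger and the NAND program call below are hypothetical stand-ins for platform-specific mechanisms):

    ```python
    # Serve all writes from DRAM; dump dirty blocks to NAND only when the
    # supercap signals that mains power has gone.
    ram_cache: dict[int, bytes] = {}        # LBA -> block, in volatile DRAM
    dirty: set[int] = set()
    nand_log: list[tuple[int, bytes]] = []  # stands in for real flash pages

    def write(lba: int, data: bytes) -> None:
        ram_cache[lba] = data               # fast path: no NAND wear at all
        dirty.add(lba)

    def nand_program(lba: int, data: bytes) -> None:
        nand_log.append((lba, data))        # hypothetical single page program

    def on_power_loss() -> None:
        # The supercap only has to hold things up for one sequential dump,
        # so NAND endurance is consumed only on rare power events.
        for lba in sorted(dirty):
            nand_program(lba, ram_cache[lba])
        dirty.clear()

    write(3, b"journal"); write(1, b"superblock")
    on_power_loss()
    assert [lba for lba, _ in nand_log] == [1, 3]
    ```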

  9. Mr Young

    Cool - the race continues!

    I want a mind connection to improve my human memory right now! Where is it?

  10. Anonymous Coward

    Good use for obsolete fabs

    If you have a leading edge fab today, rather than spending billions to upgrade it, maybe you keep it online and five years from now are producing flash at a geometry where it still makes sense. A fully depreciated fab could probably compete pretty well...

    This is a problem for the purveyors of all-flash arrays, or those claiming SANs will go away and be replaced by local flash storage on servers. But for the more reasonable solution, a mixture of flash for the heavily used data and old-fashioned spinning disks for the bulk data, having NAND technology at a standstill is not a showstopper.

    We've been hearing about "next generation" storage technologies since bubble memory in the early 80s. Nothing has ever displaced RAM and spinning hard disks except NAND, which, after replacing floppy drives, has grown up to find a comfortable spot between RAM and hard drives. So I'm not holding my breath on any of these technologies, particularly PCM, which has been overhyped with nothing to show for it for about a decade now.

  11. Marcel Kleine

    NAND life span

    NAND flash will clearly not have the same life span as magnetic disk has had so far (40+ years), but it will serve its purpose over the next 5-7 years. The cost per IOPS compared to magnetic disk is so much more favorable, and the demand for more IOPS will continue; there's no doubt about that.

    I suspect PCM & MRAM, and perhaps HAMR, have about that much time to grow to maturity. The interesting point, as Chris mentions in the article, is: will current NAND flash-based solutions have outlived their usefulness by then, and therefore make for an unattractive investment vehicle?

    I'd like to think that having a several-year head start in building purpose-built, high-throughput systems will allow them to adapt. Several of the CTOs of all-flash arrays I've spoken to are already brainstorming on how to incorporate PCM/MRAM as a replacement for NAND flash.

    Until there is a replacement, NAND flash is what we have to work with. And I suspect that period will be five years at least, but lacking a crystal ball, time will be the judge.

This topic is closed for new posts.