Micron's 232-layer NAND is a game changer for database workloads

Micron’s newly launched 232-layer TLC NAND modules could be a boon for data-intensive workloads like database operations and analytics. According to Gartner analyst Joe Unsworth, the underlying technology will dramatically reduce the cost of deploying ever larger and higher-performance workloads on SSD storage arrays — a win for …

  1. Zenubi

    Not so long ago

    "to pack up to 2TB of capacity into a package smaller than a postage stamp."

    Seems like not so long ago I was swapping out my Amiga 20mb drive for a 400mb IBM monster and wondering how I would ever use all that space.

    (It's been emotional)

    1. Tom 7 Silver badge

      Re: Not so long ago

I can remember sticking in a 3U-high 1.6GB drive, which was the first device I'd seen where it was less than £1 a meg - just. I think it sat on top of my tower for about five years. Now I've got 20 times the disk space in something the size of my little fingernail and about as thick! I've still got the rust drive - some of my backups on SSD have managed to fall through gaps in the filing cabinet, but then they are so small you have to put them in an envelope with a description of their contents to avoid several hours of feeding them in one at a time while forgetting what it was you were looking for in the first place!

    2. Yet Another Anonymous coward Silver badge

      Re: Not so long ago

We had Kodak (!) Unix on PCs because it allowed us to use massive (literally) 330MB MFM drives when DOS only allowed 32MB

    3. DS999 Silver badge

      Re: Not so long ago

Back in 1993 or so I remember installing a 5.25" 1 GB external drive that cost $2500 on an HP-UX workstation and being amazed that that much capacity fitted in such a small area for such a cheap price. I was also amazed at how insanely fast its 50 MHz PA-RISC processor was compared to anything on PCs at the time.

      If you had told me 1 TB would fit on a fingernail and cost 1/10th of that 30 years later I'm not sure what my response would have been, but somehow nothing in terms of storage technology has ever hit me like that 1 GB drive did.

      1. Yet Another Anonymous coward Silver badge

        Re: Not so long ago

When external 1GB SCSI drives dropped below £1000 we bought one for every SUN in the lab - now we would have all our data directly accessible and never have to bother with tapes again

      2. sniperpaddy

        Re: Not so long ago

I thought that moving from 1.44MB floppies to 100MB Zip drives was the dog's bollox. One wouldn't fit a shitty MP4 film now

        1. David Hicklin Bronze badge

          Re: Not so long ago

          > 100meg zip drives

          with probably a worse endurance than 3D NAND

  2. Duncan Macdonald Silver badge

    Is it worth having so many layers ?

    Each layer will require at least 2 passes through the lithography equipment (silicon deposit, lithography, insulating layer deposit, lithography) which means well over 400 passes through the lithography equipment. At what point does the increased chance of a processing failure outweigh the reduction in size of the final chips ?

Also, given the number of passes needed, there will be a very long delay from starting on a wafer to complete chips ready for sale - this implies that production cannot be ramped up or down quickly to take advantage of market requirements.

    1. Anonymous Coward
      Anonymous Coward

      Re: Is it worth having so many layers ?

      If they do their jobs properly the risk is reduced, through process characterisation and simulation.

      You can even simulate the lithography itself.

Reducing die size increases yield: smaller area means less chance that a given die is taken out by a random particle-contamination defect. It's a maths thing.

      1. Tom 7 Silver badge

        Re: Is it worth having so many layers ?

The maths thing means the number of layers is irrelevant - the chance of particle contamination will be a function of the area of 'work' done. The only real problem multiple layers will give is heat dissipation, but the plastic packaging is going to be more of a problem than a few hundred layers of silicon and oxide.
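The "maths thing" in the two comments above can be sketched with the classic Poisson yield model. The defect density figure below is made up for illustration - it is not a Micron number:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D * A).

    The expected defect count scales with die area, which is the point
    being made above: shrinking the die (e.g. by stacking more layers)
    raises the fraction of dies that escape a random defect."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical numbers: 0.1 defects/cm^2, comparing a 1 cm^2 die with
# a die halved to 0.5 cm^2 by going "high rise".
big = poisson_yield(0.1, 1.0)    # ~0.905
small = poisson_yield(0.1, 0.5)  # ~0.951
print(f"1.0 cm2 die yield: {big:.3f}, 0.5 cm2 die yield: {small:.3f}")
```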

      2. Anonymous Coward
        Anonymous Coward

        Re: Is it worth having so many layers ?

        Reducing die size has implications for NAND cell life - the smaller the cell size, the less robust the cell is (after all, storing data in a cell is done by effectively zapping a fairly specific charge into the cell that corresponds with the data you're encoding*).

        That's why we went 3D after TLC.

Storing more bits per cell in smaller and smaller cells is what has taken SLC flash from 100K P/E (program/erase) cycles to QLC being good for 300 P/E cycles by default. On the plus side, it's also reduced the cost by orders of magnitude - but you can see why we're going "high rise" on flash cells......

        *charge/voltage states get somewhat more hairy as the bits per cell increases:

        Single Level Cell (1 bit per cell) = 1 or 0, so full or not full - if it's 70% full, it's a 1 and the cell isn't in great shape.

Multi Level Cell (2 bits per cell) = 00, 01, 10, 11, so empty, 1/3 full, 2/3 full, or full - if it's 70% full, what does that correspond to - 10 or 11?

Quad Level Cell (4 bits per cell) = 0000, 0001, ... through 1111, i.e. 16 charge states to be discerned with certainty - what does 70% full relate to - the 11th or 12th charge state? Get it wrong and you'll give people duff data.....

        I can see why PLC is taking a while......
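The footnote's scaling argument can be sketched in a few lines. This is a toy model only - real controllers use per-level voltage thresholds, ECC and read-retry, not a simple nearest-level snap:

```python
def charge_states(bits_per_cell: int) -> int:
    """Number of distinct charge levels a cell must hold: 2^bits."""
    return 2 ** bits_per_cell

def read_state(fill_fraction: float, bits_per_cell: int) -> int:
    """Toy read: snap a measured fill fraction (0.0..1.0) to the
    nearest charge state. The margin between adjacent states shrinks
    as bits per cell grow, so small charge drift flips more bits."""
    levels = charge_states(bits_per_cell)
    return round(fill_fraction * (levels - 1))

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {charge_states(bits)} levels, "
          f"70% full reads as state {read_state(0.7, bits)}")
```

With SLC a 70%-full cell is unambiguously a 1; with QLC the same measurement lands between neighbouring states, which is the "duff data" hazard the footnote describes.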

        1. Yet Another Anonymous coward Silver badge

          Re: Is it worth having so many layers ?

          Interesting trade off though.

Home user: I don't trust this tiny card with all my photos, I want you to guarantee it will last 100 years.

Data center: I'm going to depreciate all this kit in 18 months. If you can make it 10% faster and it only lasts 2 years, I'll take it.

    2. Spazturtle Silver badge

      Re: Is it worth having so many layers ?

The whole point of multi-layer 3D NAND is that you don't have to make each layer one at a time - you only start etching once all the layers have been deposited.

      1. Nick Ryan Silver badge

        Re: Is it worth having so many layers ?

I'm not sure that's how it works - the layers are opaque and non-permeable, so how could you use radiation or chemicals to etch an obscured layer?

        1. Spazturtle Silver badge

          Re: Is it worth having so many layers ?

You do a High Aspect Ratio (HAR) etch through all the layers to create small holes, which you fill to create the channels; then you do another HAR etch to create slits that divide the cells into rows, and finally etch a staircase at the edge for the wiring.

          Image: i.imgur.com/6WDV56B.png

  3. druck Silver badge

    How thick?

    These chips are aimed at SSDs which have quite a few mm of thickness to mount the chips on, but how many layers can a NAND chip have before it is too thick to be used in a micro SD card?

    1. Mishak Silver badge

      Re: How thick?

      Have a look here - looks like layers are 4µm and it would take many more than 232 to use up the space.

      1. druck Silver badge
        Thumb Up

        Re: How thick?

        Thanks! So it looks like they could get a few more layers on yet.

    2. Alan Brown Silver badge

      Re: How thick?

The micro SD question is more relevant than people realise - micro SD is where most factory-failed SSD chips end up, with control circuitry mapping out the bad blocks

This keeps the price down. A "failed" device is still normally 96% OK

  4. Tubz Bronze badge

Hopefully - and I know we've all been saying this for a few years - SSD capacity will reach a price at which buying a slightly larger but cheaper HDD no longer makes sense, and all our home servers can retire the clunkers, drop a few watts, save some dosh, keep the national grid happy, and save some CO2, while banging on about how green we are.

    1. Alan Brown Silver badge

      It's been economically more sensible to buy SSDs for home use since Micron released the ION range a couple of years back.

They beat "NAS drive" endurance whilst using 12-15% of the power, and have a 5-year (vs 3-year) warranty

  5. Binraider Silver badge

While the spot price of spinning rust per GB is probably cheaper, TCO is influenced by system performance, users and reliability, which obviously makes the end-to-end decision slightly more complex for those incentivised to look at more than just their own bottom line.

Lost count of how many decisions get made by organisations operating with conflicting local objectives.

  6. david 12 Silver badge

    Bring back non-volatile mother-boards?

    NVMe is the PCIe standard designed to work as fast as SSDs. There is latency, but that's not the fault of the interface: it's a characteristic of the SSD.

    Memory has latency of the same nature, and it's not the 'fault' of the memory bus: like NVMe, memory bus latency is how the bus deals with source latency.

    If it was just a question of connecting MS cards to the memory bus, they would have done that rather than connecting to the PCIe bus, and we'd have non-volatile motherboards.

    1. Sandtitz Silver badge
      Boffin

      Re: Bring back non-volatile mother-boards?

      Persistent memory is already available: NVDIMM.

  7. Danny 2 Silver badge
    Joke

    Remember software engineers read here too

NAND is shorthand for NOT AND. The rules of logic you read about on computers, we built into those computers. I could explain De Morgan's laws to you, but I couldn't.
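For anyone who does want De Morgan explained: with only two inputs it can be checked exhaustively, and the same check shows why NAND alone is enough to build everything else:

```python
from itertools import product

def nand(a: bool, b: bool) -> bool:
    """The NAND gate: NOT (a AND b)."""
    return not (a and b)

# De Morgan: NOT (a AND b) == (NOT a) OR (NOT b), for every input pair.
for a, b in product([False, True], repeat=2):
    assert nand(a, b) == ((not a) or (not b))

# NAND is functionally complete: feeding a signal into both inputs gives NOT.
assert all(nand(x, x) == (not x) for x in (False, True))
print("De Morgan holds, and NAND(x, x) == NOT x")
```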

  8. Anonymous Coward
    Anonymous Coward

    >Databases rely heavily on consistent, low-latency access to data, making SSD storage a far better alternative to spinning rust. Very few database workloads are still run on hard drives for this very reason...

This is only kind of true. Anything scan-intensive may well still run on spinning rust, because it can be way more cost-effective per byte stored. At petabyte scale those 5-10x cost differences for SSDs over spinning rust really start to hurt. Also, as much as getting gigabytes per second of throughput per disk is a joy with SSDs, you ultimately end up constrained by the total aggregate controller throughput or your network, so overall per-host performance is basically the same no matter the underlying media.

  9. RichardEM

Back in 1987 I bought an 80 MB hard drive (it was 5.25") that fit under my Mac Plus, and everybody was asking me what I would do with all that space. Its cost was $1,500.00.

  10. Alan Brown Silver badge

    cost multipliers

    The jumping off point for SSD vs rotational media is normally around 4x the price

That's the point where power savings alone make switching worthwhile. At 5x the price it's still justifiable as long as the endurance exceeds 0.5 DWPD (which is about the endurance of large nearline media), but bear in mind an SSD doesn't incur wear penalties on reads (spinning media does). At 9x the price you need to trade off write speeds too

Realistically your SSDs will have a lifespan of 7-12 years vs 3-5 years for HDDs, so the savings are likely to be higher than is made out. It's a big chunk of change up front, but the vastly reduced opex rapidly adds up - and not just direct costs: not having to run the chillers as hard is a power (and cost) saving too
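The lifespan and power arguments above can be folded into a toy annualised-cost calculator. Every figure below (prices, wattages, lifespans, electricity rate, cooling overhead) is invented for illustration, not taken from the article or from any vendor:

```python
HOURS_PER_YEAR = 24 * 365

def annual_cost(price: float, lifespan_years: float, watts: float,
                kwh_price: float = 0.15, pue: float = 1.5) -> float:
    """Purchase price amortised over the drive's lifespan, plus the
    yearly electricity bill with a cooling-overhead (PUE) factor."""
    energy = watts * HOURS_PER_YEAR / 1000 * kwh_price * pue
    return price / lifespan_years + energy

# Hypothetical drives: an HDD at $300 lasting 4 years drawing 8 W,
# vs an SSD at 4x the price lasting 9 years drawing 1 W.
hdd = annual_cost(price=300, lifespan_years=4, watts=8)
ssd = annual_cost(price=1200, lifespan_years=9, watts=1)
print(f"HDD: ${hdd:.2f}/yr  SSD: ${ssd:.2f}/yr")
```

Plugging in your own purchase prices, measured wattages and electricity tariff shows where the break-even multiplier actually sits for a given deployment.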
