Besides the XPoint: Persistent memory tech is cool, but the price tag... OUCH

The prospects of XPoint and other persistent memory technologies becoming a standard part of servers' design is being held up because the darn stuff costs too much, an analyst has said. That's because it is made in small quantities so no economies of scale occur which would make it cheaper. Object Analysis analyst Jim Handy …

  1. John Smith 19

    Yes. That sounds about right.

    It's evolution

    If you can't take over another product's niche in the chain, you're pretty much fooked.

    1. FrankAlphaXII

      Re: Yes. That sounds about right.

      Unless there's some kind of niche, rather like the one RD-RAM had in the early 2000s before Rambus decided it was a great idea to sue everyone, you're pretty much correct. Samsung has a good reputation for quality with its SSDs, and is well positioned to exploit the lack of clarity and the expense involved with Intel's offering. If Z-SSD doesn't have the vendor lock-in problem, can be produced in volume at a low enough cost, and is as robust as Samsung's 3D NAND SSDs, it won't take much to relegate XPoint to a niche.

      I have a feeling XPoint is headed in the same direction as Itanium: really useful for certain things, but nowhere near the mainstream ubiquity of x86 (though admittedly most of that is now x86_64, which was originally AMD's design) that Intel is so desperate to repeat, for the corresponding megabucks.

  2. Anonymous Coward

    Bandwidth != Latency

    That graph seems to be using "bandwidth" as a proxy for latency, when they're not the same thing.

    For example, it shows disk in the "10-100MB/sec" range, and tape in the "1-10MB/sec" range. In reality a modern tape drive has higher bandwidth than a spinning disk (LTO6 160MB/sec, LTO7 300MB/sec, both uncompressed rates). The difference is whether you want to wait 10ms to retrieve your data, or 10 seconds.

    Still, makes for a satisfyingly linear picture.

    1. stephanh

      Re: Bandwidth != Latency

      Amen. Tape has bandwidth aplenty, just drive a truck full of the stuff somewhere. It's latency which costs $$$.

    2. Anonymous Coward

      Re: Bandwidth != Latency

      If I get my 30 MB file off an HDD within 1 second, that's 30 MB/s.

      If I wait for a tape drive to find the file for 9.9 seconds (latency) and then read it within 0.1 second, that's 30 MB/s.

      Bandwidth and latency are related.

      1. Pascal Monett

        9.9 seconds + 0.1 seconds = 10 seconds

        So your example turns out at 3MB/s.

        Yes, they are related.

        They are not interchangeable.

      2. eldakka

        Re: Bandwidth != Latency

        > If I get my 30 MB file off a HDD within 1 second, that's 30 MB/s

        > If I wait for a tape drive to find the file for 9.9 seconds (latency) and then read it within 0.1 second, that's 30 MB/s.

        > Bandwidth and latency are related.

        Do the same calculation again when you have to restore a 3 TB virtual server image.

        Or a raw movie file.

        That 10-second seek latency of tape is irrelevant when you are talking hundreds of GBs or TB+ of sequential data - e.g. a server backup, large scientific datasets, video, etc.

        Some data access requirements are latency sensitive but not especially bandwidth sensitive (e.g. loading your Word doc); others are more bandwidth sensitive and less so on latency.
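        The thread's point can be checked with a few lines of arithmetic: the effective rate is size divided by (seek latency plus streaming time), so latency dominates small transfers and bandwidth dominates large ones. A minimal sketch, using the thread's illustrative figures (10 s tape seek, LTO-7-class 300 MB/s streaming) rather than any vendor's measured numbers:

        ```python
        # Effective throughput = size / (seek latency + size / sustained bandwidth).
        # Figures below are illustrative assumptions from the discussion, not specs.

        def effective_mb_per_s(size_mb, latency_s, bandwidth_mb_s):
            """Average transfer rate once seek time is included."""
            return size_mb / (latency_s + size_mb / bandwidth_mb_s)

        # Small file: 30 MB off tape (10 s seek) - latency dominates
        small = effective_mb_per_s(30, 10.0, 300)

        # Large restore: 3 TB image off the same tape - bandwidth dominates
        large = effective_mb_per_s(3_000_000, 10.0, 300)

        print(f"30 MB file:  {small:.1f} MB/s")   # roughly 3 MB/s
        print(f"3 TB image: {large:.1f} MB/s")    # close to the full 300 MB/s
        ```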

  3. Anonymous Coward

    Persistent memory has been failing in the market for decades

    Starting with IBM's bubble memory in the early 80s, the field is littered with the corpses of pretenders to the holy grail of byte addressable persistent memory. That's probably why those of us who have been around long enough were highly skeptical about XPoint despite Intel's hype.

    Samsung's "Z-NAND" is just a fancy marketing name for slightly faster NAND. It isn't a new technology and it isn't byte addressable, so it is exempt from the curse. If Samsung is indeed working on some sort of new persistent memory as alluded to in the article, I will go on the record now as being HIGHLY skeptical. The track record for byte addressable persistent memory is so terrible that I have zero confidence they will succeed where many others have failed over the past 3+ decades. If they're lucky they'll find themselves a tiny little niche to carve out like MRAM did.

    1. Black Betty

      Re: Persistent memory has been failing in the market for decades

      Somebody has to succeed eventually.

      The question really is: why bloody servers? Between grid reliability and UPSes, persistent memory has limited utility there.

      On the other hand, portable devices seem like a match made in heaven, in terms of both utility and demand.

      1. Anonymous Coward

        Re: Persistent memory has been failing in the market for decades

        Maybe the price point problem is even worse for portables? Just guessing from ignorance.

      2. Anonymous Coward

        Re: Persistent memory has been failing in the market for decades

        > The question really is why bloody servers? Between grid reliability and UPS persistent memory has limited utility.

        That is very true.

        The gap may be for something that's not quite as fast as DRAM but much cheaper - it doesn't need to be non-volatile. Then having 1TB in-RAM databases would become much more feasible.

        Besides, you can always turn a volatile memory into a non-volatile one by adding a battery backup, or by dumping the contents periodically to persistent storage: witness laptop suspend and hibernate.

        Now, the gap in price per GB between SSD and DRAM is perhaps 10-15x, so this is still a pretty tight niche to fit into. Would I want a laptop with 16GB of full-speed DRAM and 64GB of slower secondary RAM, if I could instead have an extra 256GB SSD for the same price as the slower RAM? Probably not. For a database server? Possibly.

        The same applies to Xpoint. Given a choice between 128GB of Xpoint or 512GB of SSD, I think most people would take the SSD. It would only be very specific heavy transaction workloads which would benefit enough from the Xpoint to make the extra cost worthwhile.
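        The break-even in that 64 GB slower-RAM vs. 256 GB SSD trade is easy to work out. A sketch with hypothetical prices (only the 10-15x DRAM/SSD gap comes from the comment above; the $/GB figures are made up for illustration):

        ```python
        # If 64 GB of "slower secondary RAM" costs the same as a 256 GB SSD,
        # the slower RAM must land at 4x the SSD's price per GB.
        # All dollar figures are hypothetical; only the ~12x gap is from the thread.

        ssd_per_gb = 0.25                          # assumed $/GB for SSD
        dram_per_gb = ssd_per_gb * 12              # midpoint of the 10-15x gap
        slow_ram_per_gb = (256 * ssd_per_gb) / 64  # break-even price for the trade

        print(f"SSD:       ${ssd_per_gb:.2f}/GB")
        print(f"DRAM:      ${dram_per_gb:.2f}/GB")
        print(f"Slow RAM break-even: ${slow_ram_per_gb:.2f}/GB "
              f"({slow_ram_per_gb / dram_per_gb:.0%} of DRAM)")
        ```

        So the slower tier only wins if it can be built at roughly a third of DRAM's cost per GB, which is the "pretty tight niche" the comment describes.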

      3. eldakka

        Re: Persistent memory has been failing in the market for decades

        > Somebody has to succeed eventually.

        Why?

        As you pointed out, with grid power and UPSes, persistent memory (as a RAM replacement) may never make financial sense over non-persistent alternatives.

  4. Anonymous Coward

    Why haven't we got persistent memory, and why are we saddled with stuff that fails after a certain number of writes?

    The Man In The White Suit

  5. Colin Tree

    embedded

    They want to sell top dollar to the server market.

    Want to get economy of scale ?

    Putting Xpoint memory in embedded processors would simplify both hardware and software by replacing registers, EEPROM, flash and SRAM. Slower than SRAM, but embedded isn't chasing peak performance.

    One memory to rule them all.

  6. emv

    3D Xpoint is 5x slower than DRAM and 100x slower than SRAM... with 100x less endurance. Right now it appears to be ideal for a "superfast SSD" niche... which means it won't be cost effective, as mentioned above.

    Once it is in 5% of laptops or 5% of servers we can talk more... that will be a few years at least, according to Intel.
