Yes. That sounds about right.
If you can't take over another product's niche in the chain, you're pretty much fooked.
The prospect of XPoint and other persistent memory technologies becoming a standard part of server design is being held up because the darn stuff costs too much, an analyst has said. That's because it is made in small quantities, so there are no economies of scale to make it cheaper. Objective Analysis analyst Jim Handy …
Unless there's some kind of niche (kind of like RDRAM in the early 2000s, before Rambus decided it was a great idea to sue everyone), you're pretty much correct. Samsung has a good reputation for quality with their SSDs and is well positioned to exploit Intel's lack of clarity and the expense involved. If Z-SSD avoids the vendor lock-in problem, can be produced in volume at a low enough cost, and has the same robustness as their 3D NAND SSDs, it won't take much to relegate XPoint to a niche.
I have a feeling XPoint is headed the same direction as Itanium: really useful for certain things, but nowhere near the mainstream ubiquity of x86 (though admittedly most of it now is x86_64, which was originally AMD's design) that Intel is so desperate to repeat, for the corresponding megabucks.
That graph seems to be using "bandwidth" as a proxy for latency, when they're not the same thing.
For example, it shows disk in the "10-100MB/sec" range, and tape in the "1-10MB/sec" range. In reality a modern tape drive has higher bandwidth than a spinning disk (LTO6 160MB/sec, LTO7 300MB/sec, both uncompressed rates). The difference is whether you want to wait 10ms to retrieve your data, or 10 seconds.
Still, makes for a satisfyingly linear picture.
> If I get my 30 MB file off a HDD within 1 second, that's 30 MB/s
> If I wait for a tape drive to find the file for 9.9 seconds (latency) and then read it within 0.1 second, that's 3 MB/s effective.
> Bandwidth and latency are related.
Do the same calculation again when you have to restore a 3TB virtual server image.
Or have a raw movie file.
That 10-second seek latency of tape is irrelevant when you are talking hundreds of GBs or TB+ of sequential data - e.g. a server backup, large scientific data sets, video, etc.
Some data access requirements are latency sensitive but not especially bandwidth sensitive (e.g. loading your Word doc); others are bandwidth sensitive and much less latency sensitive.
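The arithmetic above can be sketched as a quick back-of-the-envelope script. The seek times and sustained rates are illustrative assumptions (roughly a commodity HDD versus an LTO7-class drive, per the numbers upthread), not measured figures:

```python
# Effective throughput = file size / (seek latency + file size / sustained rate).
# Assumed numbers: ~10 ms HDD seek at 150 MB/s sustained,
# ~10 s tape seek at 300 MB/s sustained (roughly LTO7-class).

def effective_mb_per_s(size_mb, seek_s, rate_mb_s):
    """Throughput as seen by the user, including the initial seek."""
    return size_mb / (seek_s + size_mb / rate_mb_s)

for size_mb in (30, 3_000_000):  # a 30 MB file vs a 3 TB server image
    hdd = effective_mb_per_s(size_mb, 0.010, 150)
    tape = effective_mb_per_s(size_mb, 10.0, 300)
    print(f"{size_mb:>9} MB: HDD {hdd:6.1f} MB/s, tape {tape:6.1f} MB/s")
```

With these assumptions the seek dominates for the small file (tape collapses to ~3 MB/s effective), but for the 3 TB restore the tape's higher sustained rate wins and the 10-second seek is noise.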
Starting with IBM's bubble memory in the early 80s, the field is littered with the corpses of pretenders to the holy grail of byte addressable persistent memory. That's probably why those of us who have been around long enough were highly skeptical about XPoint despite Intel's hype.
Samsung's "Z-NAND" is just a fancy marketing name for slightly faster NAND. It isn't a new technology and it isn't byte addressable, so it is exempt from the curse. If Samsung is indeed working on some sort of new persistent memory as alluded to in the article, I will go on the record now as being HIGHLY skeptical. The track record for byte addressable persistent memory is so terrible that I have zero confidence they will succeed where many others have failed over the past 3+ decades. If they're lucky they'll find themselves a tiny little niche to carve out like MRAM did.
Somebody has to succeed eventually.
The question really is: why bloody servers? Between grid reliability and UPSes, persistent memory has limited utility there.
On the other hand, portable devices seem like a match made in heaven, both utility-wise and demand-wise.
> The question really is: why bloody servers? Between grid reliability and UPSes, persistent memory has limited utility there.
That is very true.
The gap may be for something that's not quite as fast as DRAM but much cheaper - it doesn't need to be non-volatile. Then having 1TB in-RAM databases would become much more feasible.
Besides, you can always turn a volatile memory into a non-volatile one by adding a battery backup, or by dumping the contents periodically to persistent storage: witness laptop suspend and hibernate.
Now, the gap in price per GB between SSD and DRAM is perhaps 10-15x, so this is still a pretty tight niche to fit into. Would I want a laptop with 16GB of full-speed DRAM and 64GB of slower secondary RAM, if I could instead have an extra 256GB SSD for the same price as the slower RAM? Probably not. For a database server? Possibly.
The same applies to XPoint. Given a choice between 128GB of XPoint and 512GB of SSD, I think most people would take the SSD. Only very specific heavy transaction workloads would benefit enough from XPoint to make the extra cost worthwhile.
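That trade-off is just capacity at equal spend. A minimal sketch, using made-up per-GB prices chosen only to illustrate the ratios discussed above (a hypothetical 4x SSD-to-XPoint gap and a roughly 10-15x SSD-to-DRAM gap), not real market prices:

```python
# Assumed, illustrative $/GB figures -- NOT actual market prices.
price_per_gb = {"DRAM": 6.0, "XPoint": 2.0, "SSD": 0.5}

budget = 256.0  # dollars to spend on a single tier
for tech, per_gb in sorted(price_per_gb.items(), key=lambda kv: kv[1]):
    print(f"${budget:.0f} buys {budget / per_gb:6.0f} GB of {tech}")
# With these numbers the same spend yields 4x the SSD capacity
# versus XPoint: 512 GB of SSD against 128 GB of XPoint.
```

Whether 128GB of a faster tier beats 512GB of a slower one then depends entirely on whether the workload's hot set fits in the smaller capacity.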
They want to sell at top dollar to the server market.
Want economies of scale?
Putting XPoint memory in embedded processors would simplify both hardware and software by replacing registers, EEPROM, flash, and SRAM. Slower than SRAM, but embedded isn't chasing peak performance.
One memory to rule them all.
3D XPoint is 5x slower than DRAM and 100x slower than SRAM... with 100x less endurance. Right now it appears to be ideal for a "superfast SSD" niche... which means it won't be cost effective, as mentioned above.
Once it is in 5% of laptops or 5% of servers we can talk more... that will be a few years at least, according to Intel.