Re: 3D Xpoint as 'DRAM' & cache for NVME
> It looks like an ideal ram & cache replacement for 3 or 4 bit per cell nvme ssd.
Sure, but how many applications actually benefit from this additional layer of cache?
Most datasets are skewed: a small proportion of the data takes a very large share of the accesses ("hot") and the rest is rarely touched - that is why caching works in the first place. It also means each extra tier of cache gives diminishing returns. If you have (say) 64GB of RAM then your hottest data will be there already. If you add another tier with (say) 256GB of Xpoint, it only ever holds warm data. Your cache hit rate might go from, say, 80% to 85%, and since storage waits are only a fraction of total application time anyway, that translates into a very small improvement in overall application performance.
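To put some very rough numbers on it - everything below (latencies, hit rates, the 30% I/O fraction) is an assumption for illustration, not a measurement:

    # Ballpark latencies (assumed): DRAM ~0.1us, Optane-class Xpoint ~10us, NAND NVMe ~100us.
    DRAM_US, XPOINT_US, NAND_US = 0.1, 10.0, 100.0

    def avg_access_us(ram_hits, xpoint_hits):
        """Average access latency given per-tier hit fractions; whatever misses both tiers hits NAND."""
        return (ram_hits * DRAM_US
                + xpoint_hits * XPOINT_US
                + (1.0 - ram_hits - xpoint_hits) * NAND_US)

    before = avg_access_us(0.80, 0.00)  # 64GB RAM only          -> ~20.1us
    after  = avg_access_us(0.80, 0.05)  # plus 256GB Xpoint tier -> ~15.6us

    # Amdahl-style: assume the app only spends ~30% of its time waiting on storage.
    io_frac = 0.30
    speedup = 1.0 / ((1.0 - io_frac) + io_frac * (after / before))
    print(f"avg access {before:.1f}us -> {after:.1f}us, app speedup ~{speedup:.2f}x")

That prints roughly "20.1us -> 15.6us, app speedup ~1.07x" - a few percent end to end, for a whole extra tier of hardware.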
Furthermore, if this cache is fronting some slower primary storage, e.g. spinning disk, then you could add 256GB of NAND instead, at a fraction of the cost, and get an almost identical performance improvement.
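A similarly rough sketch (assumed figures again) shows why: once a miss goes to spinning disk, the misses dominate the average, and the latency of the cache device itself barely matters:

    # Assumed ballpark latencies: Xpoint ~10us, NAND ~100us, spinning disk ~8ms.
    XPOINT_US, NAND_US, DISK_US = 10.0, 100.0, 8000.0

    def avg_us(cache_hit, cache_us):
        """Average latency for accesses that reach the caching layer in front of the disk."""
        return cache_hit * cache_us + (1.0 - cache_hit) * DISK_US

    hit = 0.90  # assume a 256GB cache of either kind absorbs 90% of that traffic
    print(f"Xpoint cache: {avg_us(hit, XPOINT_US):.0f}us")  # ~809us
    print(f"NAND cache:   {avg_us(hit, NAND_US):.0f}us")    # ~890us
    # The disk misses dominate either way, so the far cheaper NAND cache
    # delivers nearly the same improvement.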
Another option might be to try to use Xpoint to reduce cost: cut your RAM to (say) 16GB whilst adding 64GB of Xpoint. The overall cost saving will be small, and performance will actually decrease: Xpoint is not as fast as RAM, you are pushing hotter data into a slower tier, and there will be more movement of hot data between the tiers. On top of that, you've increased the complexity and failure modes of your server.
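Sketching that trade with the same assumed figures:

    # Assumed figures as before: DRAM ~0.1us, Xpoint ~10us, NAND ~100us.
    DRAM_US, XPOINT_US, NAND_US = 0.1, 10.0, 100.0

    def avg_us(ram, xpoint):
        """Average access latency given hit fractions; the remainder hits NAND."""
        return ram * DRAM_US + xpoint * XPOINT_US + (1.0 - ram - xpoint) * NAND_US

    before = avg_us(0.80, 0.00)  # 64GB RAM, no Xpoint -> ~20.1us
    # With 16GB RAM the DRAM hit rate drops (say to 60%); 64GB of Xpoint
    # catches most of what fell out of RAM (say 22%), the rest hits NAND.
    after  = avg_us(0.60, 0.22)  #                     -> ~20.3us
    print(f"{before:.1f}us -> {after:.1f}us")
    # Roughly a wash on paper, and that is before counting the extra data
    # movement between tiers and the added complexity.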
So the only use case for Xpoint I can see is when your main storage tier is NAND, and NAND is not fast enough, *and* your dataset has so much hot data that it would be uneconomical to put in enough RAM to cache it effectively.
This to me sounds like a very niche area indeed.