Just bring back core memory
Persistent memory to replace DRAM, but it could take a decade
Persistent memories can, or soon will, match DRAM in speed, which could see DRAM eventually replaced in many applications if one of these technologies can scale up and bring costs down. In a recent webinar, the Compute, Memory, and Storage Initiative (CMSI) of the Storage Networking Industry Association (SNIA) …
COMMENTS
-
-
Tuesday 20th February 2024 19:52 GMT DS999
Its gonna be hard to supplant DRAM
If it costs several times more per bit, it'll never have a chance, because persistence is of little use to most people. DRAM manufacturing has been refined for decades; even if "in theory" some new memory type could compete price-wise, it won't have a chance, because you can't just spend tens of billions building fabs to make it on the assumption that, after 50 years, the tech world will be easily convinced to abandon DRAM.
-
Tuesday 20th February 2024 21:17 GMT katrinab
Re: Its gonna be hard to supplant DRAM
If you can execute straight off the "hard drive", surely that could work out faster than having to read it into RAM first?
Of course, that would mean you have to think about programming it in a completely different way, and you would still have to "open" the file if it is on the other end of a network link: the limitations of the speed of light mean that will always be slower than some sticks of memory sitting about 5cm away from the CPU.
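For anyone who wants numbers on that speed-of-light point, here's a back-of-the-envelope sketch (the distances are illustrative, and real links add serialization, switching, and protocol overhead on top of the light-time floor):

```python
# One-way light-travel time at vacuum speed of light; real signals in
# copper or fibre are slower still (~0.6-0.7c).
C = 299_792_458.0  # m/s

def one_way_light_time_ns(meters: float) -> float:
    return meters / C * 1e9

dimm_ns = one_way_light_time_ns(0.05)   # DIMM ~5 cm from the CPU
lan_ns = one_way_light_time_ns(100.0)   # machine 100 m down the hall

print(f"DIMM: {dimm_ns:.2f} ns, LAN: {lan_ns:.0f} ns")
```

Sub-nanosecond to the DIMM versus hundreds of nanoseconds across a building, before any protocol gets involved.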
-
-
Tuesday 20th February 2024 22:21 GMT doublelayer
Re: Its gonna be hard to supplant DRAM
It would be faster, but the same could be accomplished by building a faster SSD with the same stuff. Basically, it's Optane again. You could save a bit of time by changing the executable format you're using from one that loads itself into memory and does a bunch of initialization to one that has already initialized and just stores and loads that state, but the programs will still have to load any configurable state from somewhere persistent which will likely still involve loading files from some kind of storage and processing their contents. I'm not sure loading speed would be different enough for people to care if the price is much higher.
-
Wednesday 21st February 2024 02:13 GMT HuBo
Re: Its gonna be hard to supplant DRAM
I think that the revolutionary bit in this is to map all bytes of storage (including persistent mass storage) directly into the 64-bit linear address space of the computational device (rather than going through serdes approaches as found in SD/MMC, SAS, SCSI, etc ... where the storage device's interface is in your memory map, but data within the storage unit is in its own separate address space, outside of the system's linear map). That being said, DRAM is hardly fast enough today (memory access bottleneck) so I don't see persistent memory replacing it (but it is useful for replacing some mass-storage IMHO).
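The nearest thing to that today in commodity systems is memory-mapping a file, which at least puts storage bytes behind plain loads and stores rather than explicit I/O; a minimal Python sketch (the file name is invented for the example):

```python
import mmap
import os

path = "demo.bin"  # hypothetical file standing in for persistent storage
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

fd = os.open(path, os.O_RDWR)
try:
    # Map the file into the process's linear address space: bytes on
    # "storage" are now reached by ordinary loads and stores rather
    # than read()/write() round trips through a separate address space.
    with mmap.mmap(fd, 4096) as m:
        m[0:4] = b"data"       # a plain memory store...
        first = bytes(m[0:4])  # ...and a plain memory load
finally:
    os.close(fd)
os.remove(path)

print(first)
```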
-
Wednesday 21st February 2024 20:29 GMT Michael Wojcik
Re: Its gonna be hard to supplant DRAM
the revolutionary bit in this is to map all bytes of storage (including persistent mass storage) directly into the 64-bit linear address space of the computational device
The 1980s called and OS/400 would like its Single-Level Store idea back.
The fact is that there's nothing "revolutionary" about any of this. CXL? Yeah, we've had unified virtual-memory managers for decades. For that matter, battery-backed-up DRAM was a thing for a while, before it was largely supplanted by current NVDIMM technologies (because people don't like batteries).
Persistent-memory technologies will continue to improve, barring the collapse of civilization. That's a thing that happens. Whether they'll improve enough to supplant DRAM is an open question, because DRAM is already pretty fit for purpose and will also improve.
-
-
Sunday 3rd March 2024 18:11 GMT Jou (Mxyzptlk)
Re: Its gonna be hard to supplant DRAM
That is not the fault of the supercaps per se; I know many that have been in operation for more than ten years. The ZeusRAM simply used them wrong. Wrong dimensioning for the usage (voltage with no reserve) and too-fast charge/discharge are not taken well. A side effect of being able to store that much energy.
-
-
-
Wednesday 21st February 2024 21:47 GMT doublelayer
Re: Its gonna be hard to supplant DRAM
They can do that, but it won't change how mass storage is used. For example, when a file needs to be expanded, it's still going to be fragmented rather than treated as a big string in memory, and to manage that, they'll still need to track which areas of mass storage can be written to across programs. And we've reinvented the filesystem.
-
-
Wednesday 21st February 2024 02:33 GMT aerogems
Re: Its gonna be hard to supplant DRAM
That all sounds a lot like the JVM concept, or other language runtimes. The first time you load the runtime it's slow AF because you're having to load everything, but after that it should be a lot faster. Which also sounds a lot like the article some Reg hack did on old computers that were written in LISP and ran on specialized hardware. If, just sticking with Java as the example, you had an entire OS written in Java, and the hypervisor was basically a JVM, the boot times might be long, but after that you'd have all the runtime libs and everything loaded into memory already, so there wouldn't be a need to load specific bits from various DLLs.
-
Wednesday 21st February 2024 21:50 GMT doublelayer
Re: Its gonna be hard to supplant DRAM
That's not really what I intended. That's RAM caching, which is already available and works fine although if you don't have a lot of RAM, the caches will be rebuilt frequently. I was trying to talk about the initialization process of a typical program today, which involves copying its machine code into memory, then creating a lot of runtime data. Much of that data is loaded verbatim from files on disk, but some of it is calculated at startup. Theoretically, you could precalculate any static data and store that in the binary instead of computing it when the binary starts, which would decrease loading times a little. However, it's not going to reduce it too much and it is already an option.
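A rough sketch of that idea, using pickle as a stand-in for shipping already-initialized state in the binary (the table-building function is invented for illustration):

```python
import pickle

def build_tables():
    # Stand-in for expensive start-up computation a program would
    # normally redo on every launch.
    return {n: n * n for n in range(100_000)}

# "Build time": compute once and store the initialized state verbatim.
blob = pickle.dumps(build_tables())

# "Start time": load the precalculated state instead of recomputing it;
# any genuinely configurable state still has to come from storage.
tables = pickle.loads(blob)
print(tables[317])  # 100489
```

The load step is just a bulk copy plus deserialization, which is the "already initialized" executable format in miniature.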
-
-
Wednesday 21st February 2024 09:25 GMT katrinab
Re: Its gonna be hard to supplant DRAM
How long does it take to open a program? Depends on the program obviously, but usually it is a noticeable non-zero amount of time.
How long does it take to bring a minimised program to the foreground? Again, it depends on the program, but basically it is the amount of time it takes to redraw the screen, which is pretty much instant.
Those are the sorts of speed increases we could potentially see.
-
Sunday 3rd March 2024 18:00 GMT Alan Brown
Re: Its gonna be hard to supplant DRAM
"Basically, it's Optane again"
Yup. The problem isn't matching what exists NOW but matching what exists when you finally get the product to market - and beat it on price
Optane, reram, mram and others have been playing this catchup game since the 1980s.
That's not to say it's a hopeless endeavour, but the 2030s might be a tad optimistic, given that NAND can be made faster and higher-endurance by stepping back to larger cells, and while VNAND takes a lot of that cell-size pressure away, it paves the way for VDRAM too.
-
-
Wednesday 21st February 2024 06:59 GMT DS999
Re: Its gonna be hard to supplant DRAM
If you can execute straight off the "hard drive"
You're making the incorrect assumption that this will also supplant NAND. To do that, it has to be VASTLY cheaper than DRAM. Besides, modern SSDs can load at gigabytes per second; that's hardly a factor in execution time unless you are running programs that complete in a fraction of a second. It would have been a better argument in the hard-drive days, when rotational latency and fragmentation meant something large could take noticeable seconds, or in the case of really big applications minutes, to load.
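Rough numbers behind that point (the sizes and throughputs are illustrative, ignoring seek latency and queue depth):

```python
binary_size = 500 * 1024**2          # a large 500 MB program image
nvme_bps = 3.5 * 1024**3             # ~3.5 GB/s sequential NVMe read
hdd_bps = 150 * 1024**2              # ~150 MB/s spinning disk

nvme_load_s = binary_size / nvme_bps
hdd_load_s = binary_size / hdd_bps

# A fraction of a second from NVMe versus several seconds from disk,
# before rotational latency and fragmentation make the HDD case worse.
print(f"NVMe: {nvme_load_s:.2f} s, HDD: {hdd_load_s:.1f} s")
```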
-
-
Tuesday 20th February 2024 22:56 GMT Sandtitz
Re: Its gonna be hard to supplant DRAM
"because persistent memory is of little use to most people"
Intel tried to sell XPoint memory for a while in many configurations. The Optane NVMe drives had the lowest random-access latencies available, but most people were already happy enough with the move from HDD to SSD (and NVMe) that the difference wasn't worth the price difference.
The Optane NVDIMM modules are/were in a different class, with latency something like 1/100th that of NVMe drives. The sequential read speeds were only regular NVMe class (even if the memory bus allows way more), and write speeds were much lower.
Too bad those NVDIMMs worked only on Intel Xeon platforms, were expensive (duh!), and initially came only in small 16/32GB modules, with maybe three of them per CPU (or per system? getting hazy...), so not that big. I think the later NVDIMM generations allowed much bigger systems.
I'm sure there are applications that will greatly benefit from those super low latencies, but as you say, little use to most people.
-
Wednesday 21st February 2024 06:30 GMT Anonymous Coward
Re: Its gonna be hard to supplant DRAM
> the difference wasn't worth the price difference.
I suspect we'll find that neatly summarizes why many, if not most, new tech fails in the marketplace.
People are often dazzled by the new shiny, but end of the day it takes something truly special for them to write the proverbial Big Check. Otherwise, "better than last year's model" with an attractive price tag tends to win out.
-
Wednesday 21st February 2024 15:30 GMT Doctor Syntax
Re: Its gonna be hard to supplant DRAM
"People are often dazzled by the new shiny, but end of the day it takes something truly special for them to write the proverbial Big Check."
It depends. Down at our level, that's mostly true. At board level, big, shiny, and expensive golf days, etc. win out.
-
Thursday 22nd February 2024 01:27 GMT Anonymous Coward
Re: Its gonna be hard to supplant DRAM
Perhaps. But if writing the Big Check might eat into the Big Wheels' Big Bonus, then even new shiny will probably lose.
Even luxury golf trips and box seats and such tend to take a back seat when executive types are eyeing their bonuses. Considering the (appalling) size of some of them, this isn't very surprising.
-
-
-
-
-
Tuesday 20th February 2024 20:09 GMT I am David Jones
And security?
Isn’t persistent RAM a security threat?
AFAIK there is at least one attack which takes advantage of the little persistence that normal RAM has, so this would make the attack a lot easier (“shall we have a cup of tea first?”).
Unless they keep some volatile RAM and somehow manage to keep all the juicy stuff there.
-
Tuesday 20th February 2024 23:28 GMT Lee D
Re: And security?
All you would need to do is store a security key elsewhere (e.g. a TPM) and use it to encrypt memory data as it streams back and forth.
Otherwise removing a chip and putting it into another machine would, indeed, allow you to modify that RAM and then put it back in the original machine to reveal whatever access/information you wanted it to.
-
Tuesday 20th February 2024 23:47 GMT doublelayer
Re: And security?
The problem with that is that there isn't a great place to do that. If this hardware takes the place of your main RAM, it's not like decrypting disk data and caching it in part of RAM. You can decrypt it as it goes from RAM to cache, but you'll find yourself using a lot of processing time just encrypting and decrypting over and over again since your CPU cache is really small in comparison. If you had an encryption coprocessor doing that, maybe it would work a bit better, but we're getting to a point where I have to ask what benefit will justify the price of adding those things and taking the performance hit. I'm not sure what you get by going to that effort, and if the buyers aren't either, it won't sell.
-
-
Wednesday 21st February 2024 08:11 GMT Lee D
Re: And security?
The TPM is only ever big enough to store keys and process information with those keys. That's exactly what it's for.
It's not for "storing data"; it's for storing enough information that it can encrypt and decrypt your data, while being just tough enough to crack that it's not worth the effort to recover those keys.
You're using a TPM now, somewhere along the way. Almost every server or modern Windows client is using the TPM chip for things like BitLocker, hence the requirement of a TPM for versions of Windows 10, a stricter requirement for Windows 11, and even stricter ones for server versions and planned for the future.
Basically, if you wanted to encrypt RAM, you already have the tools in your machine, it's just a matter of joining them up so the memory controller portions encrypt/decrypt using the keys in the TPM "for free" without you having to use processor time to do it.
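As a toy model only (real hardware uses dedicated AES engines in the memory controller, and an XOR keystream is nothing like cryptographically sound), the shape of the idea looks like this, with all names invented:

```python
from itertools import cycle

# Toy model: a "memory controller" scrambles data on its way to
# persistent RAM with a key that never leaves the "TPM". Real memory
# encryption uses hardware AES, not XOR; this only shows where the
# key sits and where the transform happens.
TPM_KEY = bytes(range(1, 33))  # stand-in for a sealed TPM key

def controller_transform(data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, cycle(TPM_KEY)))

cache_line = b"secret session token............"   # plaintext in CPU cache
in_ram = controller_transform(cache_line)           # what lands in RAM
assert in_ram != cache_line                         # ciphertext at rest
assert controller_transform(in_ram) == cache_line   # decrypted on load
```

The point is the placement: plaintext exists only inside the CPU boundary, and pulling the chip out of the machine yields nothing without the key.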
-
Wednesday 21st February 2024 20:26 GMT 43300
Re: And security?
"You're using a TPM now, somewhere along the way. Almost every server or modern Windows client is using the TPM chip for things like BitLocker, hence the requirement of a TPM for versions of Windows 10, a stricter requirement for Windows 11, and even stricter ones for server versions and planned for the future."
The server versions actually have a lower requirement. They don't require a TPM generally (although will do if you want to use Bitlocker). Microsoft has also said that Server 2025 won't enforce this either, although they recommend a TPM.
-
-
-
-
-
-
Wednesday 21st February 2024 15:08 GMT Anonymous Coward
Snark aside, this is a good point and the reason I came to the comments on the article.
One of the big reasons to reboot a computer is to zero the RAM before loading everything freshly into it. In theory, it should have no impact, as well-behaved programs will only read or write their own memory and only at the correct locations. In the real world, however, it's easy to have a pointer to the wrong place (in C, for instance), so you're reading or writing data that either isn't what you think it is (bad) or isn't even for your program (really, really bad). On a fresh boot, at least you'd be reading zeros.
In Ye Olde Days, back when power was controlled with a SWITCH not a button, telling the computer to reboot didn't actually power down (and thus clear) the RAM. More than once I saw a bug persist across "reboots", where a proper power-off, wait 5 seconds, power-on would make it disappear.
-
Tuesday 20th February 2024 21:55 GMT Jou (Mxyzptlk)
Isn't MRAM wear resistant?
The article mentions a wear problem, which is, as far as I know, nonexistent for MRAM. Proven for more than five decades by its predecessors, known as "core memory".
And as far as Optane goes: that was an Intel management fuck-up. Why limit the RAM-module version to only the most expensive CPUs, for no good reason... If they'd opened it up to all Xeons it would have been used a lot, even the "small" 16 GB variants. Countless usages come to my mind. Those M.2 cards were a bit late too.
-
Wednesday 21st February 2024 02:40 GMT aerogems
Re: Isn't MRAM wear resistant?
Because it's Intel. They still think they can dictate the direction of the market the way they could in the heyday of roughly the 486 to the P4. The first cracks in that idea came when Intel was forced to adopt AMD's 64-bit extensions to x86; now, with the proliferation of ARM (and RISC-V to a lesser extent) and people wanting chips that use less electricity, x86 just can't compete. But there are still apparently plenty of execs at the company living in the 90s to early aughts, who keep trying to force their proprietary crap on the market as if Intel were still effectively the only game in town. Intel still refuses to let consumer chips use ECC RAM for... reasons.
-
-
Wednesday 21st February 2024 01:49 GMT Anonymous Coward
"Bring the popcorn" issues.
The OS ties itself in a knot, but memory is persistent. So BIOS mods are needed, at a minimum, to force-clear memory on boot.
Faster boot is promised. Won't happen. The memory will need to be checked, and with a decent hash, just reading memory to do that will take long enough that instant boot won't happen. Faster maybe, but not fast enough.
"Clear the browser cache" issues. Law enforcement/blackmailers/hackers wet dream. So encrypted memory is pretty much essential for this to be useful. We have that now, but it keeps being broken - so encryption that's actually secure rather than theater is also needed.
-
-
Wednesday 21st February 2024 14:02 GMT Jou (Mxyzptlk)
Check your BIOS advanced memory timings. There you can see the refresh cycles. With DDR4 somewhere between each 300 and 800 cycles. Divide the RAM frequency by that number, and you have the refresh cycles per second. Like @pmugabi said: Currently several thousand times per second. You can increase that number to get more performance, but I strongly recommend real ECC memory for such tests. Else you never know whether bits flew away until it is too late.
-
Wednesday 21st February 2024 18:55 GMT Justthefacts
Not quite right
The RAM refresh is done in sections. The overall refresh interval for DDR4 is 64ms, from when a particular row is refreshed to when it is refreshed again. We can ask a separate question: if we hooked some separate refresh and power circuitry to the DDR4, such that when the CPU was powered down the refresh still ran... well, the RAM itself takes maybe 3W when fully active, but the refresh is only maybe 0.5% duty cycle. I don't have exact figures, but it might consume only 15mW in refresh-only mode. That's still more than you can hold in a capacitor, but it's easily in the range of a small on-module Li-ion battery for a week or more, if one chose to do it.
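Putting numbers on that (the battery capacity is an assumed value; the 3W and 0.5% duty cycle are the figures above):

```python
active_power_w = 3.0    # DDR4 module, fully active (figure above)
refresh_duty = 0.005    # ~0.5% duty cycle in refresh-only mode
battery_wh = 3.0        # assumed capacity of a small on-module Li-ion

refresh_power_w = active_power_w * refresh_duty   # 15 mW
runtime_days = battery_wh / refresh_power_w / 24  # ~8.3 days

print(f"{refresh_power_w * 1000:.0f} mW, about {runtime_days:.1f} days")
```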
And that’s the thing: you don’t need any complex technology developments, just a bit of engineering on the PCB. The fact that nobody *does* choose to do it, says to me that “persistent memory” in and of itself is not really useful to anyone.
-
-
Thursday 22nd February 2024 09:45 GMT Justthefacts
Re: Not quite right
Ah, I see I’ve re-invented the wheel. Shouldn’t have surprised me, as it was an obvious development, and there are always *some* niches for the non-standard requirement.
I revise my comment to: in those niches where non-volatile RAM is considered a benefit, there’s an easy solution already out there which doesn’t require a big tech development. It’s a niche, so the costs are a bit more. But if the uses became more general, the product would have a larger manufacturing volume and the prices would drop. I still don’t see a niche for a *new* non-volatile technology. MRAM also already exists, and fills yet another niche perfectly adequately on the low-size end.
-
-
-
-
Thursday 22nd February 2024 16:54 GMT GBE
That's what DRAM is
"couldn't you just add some capacitors or something to hold the charge on the DRAM and thus preserve its state, at least for a short-time"
Capacitors that preserve state for a short time: that's exactly what DRAM already is.
You can increase the time it can preserve state by increasing the size of the capacitors. Right now, that time is 10s of milliseconds. Want to make it 10s of seconds? Multiply the size (and therefore cost) by 1000. Want to make it 10s of years?
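To first order, retention time scales linearly with cell capacitance (and hence area and cost), which is where those multipliers come from; a quick sketch with illustrative figures:

```python
base_retention_s = 0.05              # tens of milliseconds today
tens_of_seconds = 50.0
tens_of_years_s = 30 * 365 * 24 * 3600.0

# Assuming retention scales linearly with capacitor size (and cost):
scale_for_seconds = tens_of_seconds / base_retention_s
scale_for_years = tens_of_years_s / base_retention_s

print(f"{scale_for_seconds:.0f}x for seconds, {scale_for_years:.1e}x for years")
```

A thousandfold for seconds, ten-billion-fold for years, which is why nobody builds "archival DRAM".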
-
Wednesday 21st February 2024 06:20 GMT kuiash
ARRRGHHHH!
This is the sort of thing I read about in BYTE when I was a wee nipper, and I've wanted it ever since. JUST DO IT! I've been waiting for MRAM systems for decades now! From a software/systems perspective it's a total no-brainer. Fast? Non-volatile? OMG, sign me up... here's a kidney as advance payment (joke).
-
Wednesday 21st February 2024 14:12 GMT dakra
Replace DRAM? Think bigger! Replace SAN!
This will be a major big deal for large systems, especially with a petabyte or exabyte of register-addressable memory, some pooled and some shared within a cluster.
CXL will do for aggregated, shared, and pooled large memory what SAN did for storage.
SANlock semantics will provide for locking within shared memory.
CXL Switches will provide intersite memory replication for disaster recovery. That will be analogous to SAN switch-based intersite storage replication.
Programs could access shared files through virtual memory mapping semantics while others use the old I/O syntax, libraries, and system calls.
IBM:
* IBM could extend its lead in clustered shared memory with Parallel Sysplex and Geographically Dispersed Parallel Sysplex for files in memory. Some will imagine this as clustered System/38 Single Level Storage.
    Mainframe Coupling Facilities, VSAM, DB2, spooling, MQ, DiV, GRS, GDPS, etc. could be enhanced to exploit and support CXL persistent memory.
* IBM could ignore the possibilities and simply let the rest of the industry catch up to and then surpass what mainframe clustering can do.
-
Wednesday 21st February 2024 15:56 GMT Electronics'R'Us
Use cases
There are a lot of use cases, but the majority of them are not in general-purpose computing.
1. I have done a lot of avionics, and the start-up time requirements can be very stringent. Shortening that can be the difference between success and failure in getting the contract.
2. Edge / IoT. Many applications in this arena need to power down, and often only power back up on an event. Without needing to maintain power to the memory during power-down, power consumption is reduced. That can be a very important factor in the design of such things.
3. Where non-volatile memory is required (there are so many use cases here, such as 'cause of last shutdown'). In the past, one would use a Dallas Semiconductor (now Maxim, which is now part of Analog Devices) device that had a small battery. Those things cost a lot, but with the newer non-volatile devices, more mainstream approaches can be used.
Note that although these could use EEPROM / flash, the write times (and erase, for that matter) take quite a while. These newer devices operate at standard memory speed, so in an unexpected power-off event there will be time to do an orderly shutdown: write the fact of the power failure to a log file, and make sure the write enables are off before voltages drop below required levels. That prevents data corruption.
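A sketch of that 'cause of last shutdown' pattern, with the non-volatile region modeled as a small file and all names invented:

```python
import json
import os
import tempfile

# Toy model: the non-volatile region is modeled as a small file; on
# real hardware this would be a write to persistent memory from the
# power-fail interrupt handler, completed within the supply hold-up
# time, after which write enables are dropped.
NV_RECORD = os.path.join(tempfile.gettempdir(), "last_shutdown.json")

def on_power_fail():
    with open(NV_RECORD, "w") as f:
        json.dump({"cause": "power_fail"}, f)

on_power_fail()          # simulate the unexpected power-off event

# Next boot: read back why we went down.
with open(NV_RECORD) as f:
    record = json.load(f)
print(record["cause"])
```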
Will it make its way into mainstream computing? I have no idea.
There are plenty of places where this stuff is used.