A MAJOR breakthrough,
this product will be priced rather high for a long time due to unmet demand. Embedded into controllers, drives, phones, motherboards, graphics cards for HEDP... the mind boggles.
Next year, the world will start using products with an amazing new type of memory in them - or so say Intel and Micron, allied to produce the 3D XPoint chippery. This is, we're told, a radical resistive-RAM technology that is bit-addressable, non-volatile, and forms a new memory tier between DRAM and NAND. That means software …
Why would this be embedded in phones? Phones aren't limited by flash's speed or lifetime, so there is no reason to add something more expensive to get better performance on either metric.
Until it can be produced in large enough quantities for its price to drop to the point where it replaces flash entirely, I think its usefulness will mainly be to replace flash in applications where flash is either too slow or too short-lived - basically meaning enterprise SSDs will be replaced by these. Consumer laptops don't need this, nor do phones, wireless routers, USB sticks, the BIOS on your PC's motherboard, or many other places where flash is currently used.
Those products won't switch from flash until this Micron thing is cheaper - which may not take that long, since flash is running into density-scaling limitations that this technology won't face. They've worked around them with "3D" NAND, but the ability to add layers in that fashion is quite limited, so it won't help for long.
"Why would this be embedded in phones? Phones aren't limited by flash's speed or lifetime, so there is no reason to add something more expensive to get better performance on either metric."
I think you're facing the wrong direction. If I were building phones, I'd be considering this as a potential replacement for mobile DRAM:
Lower per GB cost
Nonvolatile could* mean less power usage
Lower speed should be OK, given how phone processors compare to server/PC processors.
*unless, of course, the power needed to change a value is significantly higher than mobile DRAM. That's one of the pieces suspiciously missing from the article.
It will only work as a replacement for DRAM if it is addressable in a similar manner. We'll have to learn more about it to see whether it is bit-addressable. If it is block-addressable like flash, the block size must be no larger than your L3 cache line size, or it can't replace DRAM.
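A rough way to see why block size matters (the 64-byte cache line and the example block sizes below are assumed figures, not from the article): with block addressing, every cache-line writeback from the CPU forces a read-modify-write of a whole block, so the write amplification is block size over line size.

```python
# Rough illustration: if the memory is block-addressable, each cache-line
# writeback forces a read-modify-write of an entire block. The 64-byte
# line and the block sizes below are assumptions for illustration.

CACHE_LINE = 64  # bytes, a typical CPU cache line size

def write_amplification(block_size, cache_line=CACHE_LINE):
    """Bytes physically written per cache line the CPU actually changed."""
    return block_size / cache_line

for block in (64, 512, 4096):
    print(f"{block:>5}-byte blocks -> {write_amplification(block):.0f}x amplification")
```

With flash-like 4 KB blocks the medium rewrites 64 bytes' worth of data 64 times over per line flushed, which is why block addressing rules it out as a DRAM stand-in.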
There's a lot of embedded systems which still use supercap/battery-backed SRAM, and a bit-addressable NVRAM would be a perfect replacement.
Even at higher chip cost, because ultracaps and batteries are very expensive.
Many of them don't use Flash due to the block-erase problem - lose power at the wrong moment while updating settings and the data is lost forever.
" A MAJOR breakthrough"
No... in fact this idea was conceived back in the 70s by Leon Chua at UC Berkeley. And in 2008 HP (Stanley Williams) produced memory elements that could shift resistance and hold their state, as reported in the journal Nature. They were called 'memristors'. I wish someone would report the research from the scientists and engineers rather than from the companies which make the money from these ideas.
As a side note, people are working on 'memcapacitors' and 'meminductors'.
El Reg regularly reports on research, and regularly research goes nowhere, or takes decades to reach the consumer. Much of the excited research announcements turn out to be impossible to turn into a product that can be manufactured reliably, at scale and at a sane cost (see all of the articles on new battery technologies over the last 15 years).
So, the companies that make the money aren't just taking ideas and pressing the magic 'sell one of these' button, but putting in immense product development effort to deliver them to end users.
This post has been deleted by its author
"I believe that's called a memristor." (scoff...) No. That's called RRAM, PCRAM, MRAM, or a dozen other restistive memory types that people have been working on for the last 40 years, some of which have been commercially available for a long time. Looks like Intel and Micron have stuck a warp drive on one of those, but we won't know the details until the product hits the shelves.
@lambda_beta: try reading the first sentence of the article before claiming it doesn't say this is different from HP's resistance-based, hopelessly over-running memory project based on Chua's research and Williams's laboratory device (memristor - not sure why you dropped the "r" in your second post).
Hopelessly over-running: HP in Oct 2011 announced commercial availability by April 2013; current HP estimate of commercial availability is 2018.
I wonder if this will allow apps to be run in situ, as it were - no more loading them into RAM, just add a pointer to the active running list. RAM would be the computer's scratch pad for stuff it is working on.
This could really have a dramatic effect on what we think of as the typical architecture of a PC* - or maybe we are too set in our ways - interesting times ahead if it takes off.
*I mean, CPU+RAM+STORAGE+I/O
What you're talking about is NVRAM.
The article talks of read time and writing algorithms, so this does not sound like NVRAM. It is still flash-like.
It is also not clear if the devices would support random access (like NOR flash) or page access (like NAND flash). I suspect it will be page access like NAND flash.
NAND flash takes a while to read because the read command has to wait until the read logic settles. This can take a reasonably long time with the long NAND chains. After that there is still the overhead of doing the actual data transfer.
Having been deeply embroiled with flash for the last 20+ years, it is interesting to see some competitors emerging.
Charles Manning wrote:
It is also not clear if the devices would support random access (like NOR flash) or page access (like NAND flash). I suspect it will be page access like NAND flash.
Did you not read the article? It clearly says the memory is bit addressable, so supports random access.
Intel/Micron Introduces Revolutionary 3D XPoint Technology: Memory And Storage Combined
There were some early signs that Intel and Micron were going to shake up both the memory and storage market when Intel introduced commands for persistent memory, CLWB and PCOMMIT, last November... Intel indicated the new memory would connect to the host system via the PCIe bus, which is yet another reason that Intel and Micron have been vocal proponents of NVMe. The NVMe protocol was designed from the ground up for non-volatile memory technologies, and not NAND in particular. Now it is apparent that Intel and Micron were laying the groundwork for something more as they developed the new protocol.
@Gordon 10 - Didn't make myself clear enough. The thought was that if you could load up your entire data lake persistent store into intel/micron hybrid memory, ala SAP HANA, you wouldn't need to replicate data into different storage locations to reduce storage contention as hadoop does.
I'm thinking this hybrid store could be a boon to big data analytics.
"The article says the new memory is faster and more durable. I didn't see anything saying it was more dense."
Actually, the article said they were planning on making it just as dense as a flash chip for the first generation, but that the technology would scale better because it doesn't have the overhead of transistors.
"An XPoint memory chip will store 128Gbits (16GB), meaning a 3D XPoint SSD will have the same capacity range as a current SSD using 128Gbit NAND chips that are 1 bit/cell, SLC, or single level cell NAND."
"Memory cells are accessed and written or read by varying the amount of voltage sent to a selector. This eliminates the need for transistors, we're told, increasing capacity while reducing cost."
I'd heard that memristor from HP/Hynix was a done deal, simply waiting for market conditions to be right. Never sell your best if you can sell your old product line for a while longer....
Whilst 1000 times as many write cycles is better than FLASH, it's not as good as memristor, which reportedly has no perceptible wear-life problem at all. This makes Intel's solution unsuitable as a DRAM replacement. HP's memristor is also faster than DRAM, making it desirable as a DRAM replacement.
If HP are ever going to do it, now's as good a time as any. Memristor can change computer architectures completely (by replacing both DRAM and SSD with just memristor), but it will take time for anyone to realise that they can make those changes and that the result would be better. Why bother with an SSD at the end of a PCIe bus when it could simply be an area of your CPU's RAM that just happens to be non-volatile?
In contrast, Intel are essentially offering a much faster SSD-type technology, but still with the embuggerance of having to manage wear life. Good, but not the total revolution that HP's memristor could bring. I wouldn't be surprised if that small-step, same-as-before-but-faster quality of Intel's idea means that it dominates the market.
If XPoint is going to be used as "cheaper and non-volatile" DRAM then it does have an endurance problem. This can be made manageable by memory controllers, but that's another step of added memory latency.
What we have now appears to be simply another tier for a large, fast and safe buffer of NAND writes (the article does mention NVMe on PCIe), which is great, but it does not exactly warrant abandoning research into DRAM replacement technologies.
For example, if such a "DRAM replacement" technology (e.g. memristor) had latencies two orders of magnitude lower than current DRAM (and apparently XPoint), it would enable an immense jump in CPU (and GPU) performance by dropping the requirement for cache (see The Platform).
I wish them luck with REX, but it isn't a new idea (functionally it looks a lot like the 2D array of T800s I had fun with in the early 90s). I'm not convinced the REX folks have done their homework on the various trade-offs involved. Case in point: driving fast edges down long wires (on- or off-chip) burns a lot of juice, and arrays of memory have long wires. That is why several layers of cache actually make sense today - and I believe will continue to make sense until folks work out how to drive signals down long wires with zero latency.
It sounds like you could do the same thing with the XPoint tech. It might be 50% slower than conventional RAM, but I'm pretty sure most of us wouldn't exceed the "40TB/day* for 5 years" limit that XPoint is supposed to have. Sure, your HPC cluster or HA server would probably want something faster and more durable, but the overall winner is likely to come down to which one is cheaper to produce. Memristors might get some slack for durability, but they're not going to be able to command a substantial price difference if you can replace the XPoint chips on failure for less than the memristor chips.
Another thing is that motherboard architecture is going to need reworking to get the available speed out of either kind of chip. We can minimize the latency if we put either technology into an existing RAM slot, but at those speeds, they would easily max out a PCIe x16 slot.
*40 TB/day ≈ 485 MB/s: A rather high bar to pass on average.
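Sanity-checking that conversion (assuming binary units, i.e. TiB and MiB, which is what reproduces the 485 figure):

```python
# Convert the claimed "40 TB/day" endurance budget into a sustained write
# rate. Binary units (TiB, MiB) are assumed here, since they reproduce the
# ~485 MB/s figure quoted above.
TIB = 1024 ** 4
MIB = 1024 ** 2
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_day = 40 * TIB
rate_mib_per_s = bytes_per_day / SECONDS_PER_DAY / MIB
print(f"40 TiB/day ≈ {rate_mib_per_s:.0f} MiB/s sustained")
```

Sustaining nearly half a gigabyte per second of writes around the clock for five years is indeed well beyond typical desktop workloads.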
Not sure why you're ac...
But to your point...
First, how close is this to Crossbar and their tech? (Cue the lawyers...)
But more to the point... yes you can start to see the radical shift in terms of Server and potentially PC build.
Imagine that they can get a greater density.
So how small could you build an 'SoC'-like chip with the power of an i7 or greater, 32GB DDR3+ memory, and then 4 of these or Crossbar's chips with a density of 1TB each? I would imagine it about the size of an old iPhone 4S, but not as thick. Now imagine an I/O power bus, and then build a backplane / chassis that has 20 of them in a 4U box. HPC? Maybe, but also a Hadoop/Spark/etc cluster in a box that can then still join with other boxes.
You still need RAM, but less worry about swap to these.
Things could get very interesting...
That is a crossbar lattice, but Crossbar, Inc. didn't invent crossbar lattices, which are ancient. The magic is in the physics, materials, and manufacturing process... but none of my google searches have revealed what physics Intel/Micron are actually using. All I've seen are rumors that it's PCM, based on performance criteria.
>I'd heard that memristor from HP/Hynix was a done deal
You know pump and dump is illegal right? (why the AC huh) If that was the case they probably wouldn't have hung their The Machine crew out to look like ass hats with nothing to show recently. Of course knowing HP dysfunction maybe they just really hate those guys.
>It might be 50% slower than conventional RAM
That is also the best case scenario speed wise i have heard for memristor tech as well.
"I'd heard that memristor from HP/Hynix was a done deal, simply waiting for market conditions to be right. Never sell your best if you can sell your old product line for a while longer...."
That has never worked well in the computing business; if that really is their master plan, their failboat has already started its one-way descent to the seabed.
"I'd heard that memristor from HP/Hynix was a done deal, simply waiting for market conditions to be right. Never sell your best if you can sell your old product line for a while longer...."
That's a fair strategy for evolutionary tech where the competition can choose to leapfrog you and go two steps ahead instead. Not so for revolutionary tech that can result in a paradigm shift, meaning your existing tech can be obsoleted cutting off your revenues. In the latter case, who dares wins since they gain the critical advantage of the first mover. If the market develops to be such that it can't support a lot of suppliers, you definitely don't want to be left behind.
Until the price drops and SSDs with this new tech are available at no more than 30% to 50% above current NAND flash SSDs, there will be no mass market for it.
The flawed NAND SSDs will keep selling, unfortunately.
I'm more than willing to pay a 50% premium for this technology. Since I like living on the bleeding edge, +400% is not something that I'll even blink at paying. If I can get a couple in memory stick form factor, I have 10 or more machines to stick them in to play with. Ooooh!
I also like new toys, and am willing to pay more to play.
However, in this case, the speed of the beast is such that the usual interfaces, including USB and SATA, are much too slow. It'd be a waste to put it in a memory stick with a USB interface. It would be like putting a Formula 1 car on a rutted country road.
To really get a feel for the speed, you'd need PCIe (gen 3, preferably, or Thunderbolt 3), that or NVMe.
Damn right. I don't care if people think that "consumer desktops don't need the speed" - I want that speed along with its reliability.
I have 2 120GB SSDs on my main rig right now - I am ready and willing to replace them with similar-sized SSDs based on this new tech in the blink of an eye.
I have a 4-slot NAS with 4 3TB disks in it, and I am positively drooling at the idea of replacing them with 3TB disks with this tech - of course, that's not going to happen next year, but it will.
Ah, technology is marvelous.
We have had fusion since the early 1950s (obviously).
Technically, we've had the benefits of fusion energy for a little longer than that.
I admit, I work in IT so can't vouch for this first-hand, but I'm told there's a large fusion reactor glowing yellow in the sky.
You know. Outside.
Every time I see the 'NEW BREAKTHROUGH' of whatever tech is popular at the moment, there is no mention of a date. It could offer a million times faster access and still be a lie. I wish the authors had stopped for just a minute and asked 'show me'.
In the 1950s I was promised flying cars. In the 1960s, flying rocket packs. ........ In 2015, I was promised faster memory!!
I recall a conversation about 25 years ago where a colleague and I agreed that flash memory was going to make spinning rust obsolete. It's happening now, but took a lot longer than we expected because rotating drives kept getting smaller, denser, cheaper and more reliable. The adoption curve for post-NAND technology may be similar, though I hope not.
Yes, I remember giant magneto resistive technology for the read/write heads, followed by colossal magneto resistive technology, then super-colossal. I lost track at that point, as the naming convention started reminding me too much of the size-grading of olives and shrimp.
It is called 3D memory because bits can be stacked. As shown in your drawing, it is presently 2 bits high. (This is what they meant by "two layers" - not that there are only two layers in the fabrication process.) Bit stacking can be done because the storage does not make use of the single-crystal silicon substrate. As long as they can access the various layers of word and bit lines, they can keep stacking the bits.
There ARE transistors on the chip. They are NOT in the memory cells, but will be around the perimeter to decode the address, read the signals from the bits and convert them into the appropriate output voltage, and send the appropriate signals to write bits. (Although the chip is random access, they will almost certainly read and write words and not single bits.)
The reason a resistive memory cell can be scaled down farther than present memories:
•DRAM storage depends on the area of the capacitor to store charge. If the area gets too small the amount of charge becomes harder to detect.
•Flash depends on charge of a floating gate. Again, as the gate gets smaller the amount of charge is limited and you have to move it closer to the channel of the transistor (thinner insulator) to modify the threshold voltage. If the gate is too small and the insulator too thin, small amounts of leakage can degrade the storage time.
•A variable memory material will have the same ratio of low to high resistance no matter how small the bit is, as long as the cross-sectional area and thickness of the storage volume keep the same ratio as it scales down. It is possible, depending on the resistance-change mechanism, to have a lower size limit. For example, phase-change memory could be limited by how large the bit must be to exhibit crystalline characteristics, since the surfaces of the volumetric bit will be influenced by the materials they are in contact with and will not be crystalline for some skin depth.
"A variable memory material will have the same ratio of Low to High resistance no matter how small the bit is as long as the cross sectional area and thickness of the storage volume keep the same ratio as it scales down."
Won't scaling it down a lot (and depending on the frequency) make the resistors act like capacitors in some cases?
Resistance = (resistivity of the material) × (length of the resistor in the direction of current flow) ÷ (cross-sectional area perpendicular to current flow). (I could write the equation properly, but I have no idea how to get the Greek letter rho for resistivity into my text.)
If you keep the same aspect ratio and scale all dimensions proportionally by a factor "s", the resistance will scale as 1/s (the length contributes s, the area contributes s²). So if everything is made half the size, the resistance will go up by a factor of 2.
But notice that, when you calculate the ratio of a given resistor in its high and low resistance states, the area and the length cancel out, leaving only the ratio of the two resistivities, which only depend on the materials parameters and not on the actual size.
As you get really small, the sense current will decrease for a given length and area. If you can then shorten the length (i.e. make the layer of resistance material thinner) you can bring the resistance back down and the sense current back up. Luckily, it is much easier to shrink a deposited thin-film thickness than it is to change the dimensions of the photolithographically patterned area, so there is a very long way to go before this is a limit.
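The two scaling claims above can be checked numerically (the resistivities and cell dimensions below are arbitrary placeholders, not real cell parameters):

```python
# Demonstrate the two claims above:
#  1. R = rho * L / A scales as 1/s when all dimensions shrink by a factor s.
#  2. The high/low resistance ratio is independent of cell size.
# Resistivity and dimension values are placeholders, not real cell parameters.

def resistance(rho, length, area):
    """Resistance of a uniform resistor: R = rho * L / A."""
    return rho * length / area

RHO_LOW, RHO_HIGH = 1.0, 1000.0   # assumed low/high-state resistivities
L0, A0 = 10e-9, 100e-18            # assumed cell length (m) and area (m^2)

base = resistance(RHO_LOW, L0, A0)
for s in (1.0, 0.5, 0.25):
    r_low = resistance(RHO_LOW, s * L0, s * s * A0)
    r_high = resistance(RHO_HIGH, s * L0, s * s * A0)
    print(f"s={s}: R_low grows by {r_low / base:.0f}x, "
          f"high/low ratio = {r_high / r_low:.0f}")
```

The high/low ratio stays at whatever the materials give you (1000 here) at every scale, while the absolute resistance grows as 1/s - exactly the behaviour described above.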
This post has been deleted by its author
Perhaps "XPoint or memristor?" is the wrong question.
What if a production-grade memristor exists (a big assumption, but for the sake of argument), it's super fast but costs the same as, or even double, DRAM?
Wouldn't that open the window for machines with the primary store being 'expensive' memristor and the secondary, 'cheaper' but relatively slower storage being XPoint? Lower power, *much* more speed to both primary and secondary storage, higher board density, no wear-levelling problems on the primary store and effectively none on the secondary ... basically a screamer of a memory system for a few dollars more (at production) or, as it's a 'premium' system, double the cost (for the user), making nice sales figures for everyone concerned.
Everyone wins apart from DRAM, NAND and spinning rust manufacturers.
You can say the same thing of 3D Flash. It always takes time for production to ramp up. Thing is, this new tech appears to be lagging 3D Flash by only a few months. If it really is everything it claims to be, it has the potential to strangle 3D Flash in the cradle, before it can really break out into the mainstream.
This probably won't do anything bad to 3D Flash. This technology is more expensive than NAND; you are not going to get economy of scale with imaginary terabyte-sized drives in this technology (unlike flash). What you might get instead is terabyte-sized drives made with 3D Flash and with insane write speeds/IOPS, thanks to a large (tens of GB) write buffer in XPoint.
Why do I get the feeling the biggest problem with this technology will be making enough of it?
Well, if there's one thing Intel are very good at, it's large-scale semiconductor manufacture. And once a fab plant's built, the #1 concern is to keep it busy.
So I don't have too many concerns in that area.
The perimeter circuitry will be standard CMOS and does not have to push state of the art FET design because it will occupy such a small percentage of the chip. That makes it easy to manufacture the electronics.
The manufacturing problem will involve whatever the exotic material is that they use for the resistors. Certain elements can, even in very small quantities, contaminate CMOS enough to cause problems like junction leakage. If the resistor material contains such elements they will have to use a dedicated manufacturing line to avoid contaminating other products. If the contamination is critical enough they will have to use a barrier layer to avoid contaminating the FETs out at the perimeter. Luckily, depending on how the material is deposited, it is probably at a fairly low temperature and will have a low diffusion constant.
There is little point in putting this new NVM technology on an SSD or any other I/O device. I/Os are too slow and too high latency to take advantage of this technology. Instead, it should be put on the memory bus and treated as slow memory. Intel will need to build a new memory bus, or Micron will need to make DIMMs with hybrid DRAM/NVM.
The 'revolution' that this technology enables will only be realized when the programming paradigm changes. Is there an OS out there that can handle non-uniform memory access efficiently? Is there a programming language out there that can begin to help engineers write code with multiple classes of memory, including NVM?
A LOT of stuff needs to be re-thought/rewritten. Better get started - big bucks await those who succeed.
Not really. NVMe and PCI(e) memory mapping provide a good transitional step. By memory-mapping the NVM, it can already be treated like slow memory under specific conditions (the 64-bit address space assures we shouldn't run out of mapping space anytime soon). It's just not done that way right now due to "habit". Once more devices see the NVM as less a drive and more a memory, then the OS logic will be in place and the hardware can take the next step of moving it from an I/O bus to a memory bus (preferably distinct from the DRAM bus, so they're not locked together). From there, it's just a matter of the OS supporting it after it's already used to the idea.
"Is there an OS out there that can handle non-uniform memory access efficiently? Is there a programming language out there that can begin to help engineers write code with multiple classes of memory, including NVM?"
It can be done currently with Linux (or Windows/Solaris) and any language that allows file mapping - mmap(2)[0]. The memory has to be mounted as a file system, similar to how RAM disks use /dev/shm.
mmap is the standard way to allocate large regions of memory on Linux, so support in C is natural. Java[1] supports memory-mapped files, and so on.
In short, the mechanism has been there for a very long time.
[0]: http://linux.die.net/man/2/mmap
[1]: http://docs.oracle.com/javase/7/docs/api/java/nio/channels/FileChannel.html#map(java.nio.channels.FileChannel.MapMode,%20long,%20long)
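As an illustration of the mechanism described above, Python's mmap module is a thin wrapper over the same mmap(2) call (an ordinary temporary file stands in here for an NVM-backed file system):

```python
# Sketch of the file-mapping mechanism discussed above, using Python's
# mmap module (a thin wrapper over mmap(2)). On a file system backed by
# NVM, these byte-level reads and writes would go straight to the
# persistent medium; here an ordinary temporary file stands in for it.
import mmap
import tempfile

with tempfile.TemporaryFile() as f:
    f.truncate(4096)                       # reserve one page of backing store
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"                  # byte-addressable write via the mapping
        print(m[0:5])                      # read back through the same mapping
```

This prints b'hello': no read()/write() calls, just loads and stores through the mapping, which is exactly the access model byte-addressable NVM invites.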
But it's only ten times denser than DRAM!
Actually, that's not a bad density - but it is significantly less dense than flash. So being less dense, and more expensive, per bit, it won't make flash memory totally obsolete. Although I like the fact that it will survive more write cycles.
It will be very useful, I agree.