Why Intel killed its Optane memory business

Intel CEO Pat Gelsinger has confirmed that Intel will quit its Optane business, ending its attempt to create and promote a tier of memory that is a little slower than RAM but has the virtues of persistence and high IOPS. The news should not, however, come as a surprise. The division has been on life support for some time …

  1. Old Used Programmer

    Sigh...

    And here I was hoping that Optane, with its high write endurance, would make its way into micro-SD cards. The specs looked pretty good for boot devices for Raspberry Pis.

    1. Flocke Kroes Silver badge

      Re: Sigh...

      The high IOPS look really good until you spot that the Pi4's SD card interface is limited to 50MB/s. Connect an SSD via USB3 and you will get 300MB/s - if you can keep the CPU cool. A Pi4 is plenty fast enough for email and web browsing. For more demanding tasks you pay for that low price with memory and IO bandwidth constraints.

      1. Old Used Programmer

        Re: Sigh...

        When I need either more than about 32GB of mass storage or am concerned about I/O access speed, I run my Pi4s with an attached SSD.

        For ones operating below those constraints, I'm using A1-class uSD cards. What I was hoping for from Optane (based on the early claims for it) was not the better speed, though that's not bad to get, but a greatly increased number of times you can write to any given cell. Simply put, SD cards wear out relatively quickly if they are actively used. Longer endurance in use as a root file system would be a desirable trait.

        1. Flocke Kroes Silver badge

          Re: Longer endurance

          I get there by over-specifying capacity. Optane is stuck in the usual new-tech catch-22: at its current volume it cannot compete on price with simply buying too much flash, and because it cannot compete with too much flash, it is stuck at low volume.

  2. DS999 Silver badge

    Squeezed from both ends

    There just wasn't a big enough niche between the cost per bit of DRAM and the performance of NAND, and both improved faster than Optane did, so the niche shrank over time.

    1. Crypto Monad Silver badge

      Re: Squeezed from both ends

      In other words: nobody was clamouring for a new type of memory which is slower than DRAM, and more expensive than SSD.

      1. John Brown (no body) Silver badge

        Re: Squeezed from both ends

        Optane, the new Rambus?

    2. druck Silver badge

      Re: Squeezed from both ends

      Plus Intel massively overhyped the technology before launch, and the initial product was a huge disappointment from which it never recovered.

  3. Numen
    Headmaster

    Optane wasn't the first, or even the second, try at adding a tier to the memory hierarchy. The IBM 360/91 (I think it was) had the memory channel in the late 1960s, and the IBM 3081 had expanded memory in the early 1980s. There weren't a lot of good use cases then either.

    Interesting technology, but Optane always seemed like a solution in search of a problem.

    CXL is a different solution, for a different set of problems. It should do better, but it's likely to be some time before it makes it to on-premises data centers.

  4. bazza Silver badge

    CXL...

    ... Sounds like a return to the days when one could buy a memory expansion card, plug it into an ISA slot, and get a whole extra megabyte for one's DOS PC. Heady stuff!

    It could get very confusing for some highly optimised apps if sometimes they're getting allocations out of fast onboard memory and sometimes they're given slow memory on an expansion board. For example, the FFTW library depends on profiling the machine to work out how best to execute an FFT, stores that optimum recipe, and that's what gets used at runtime. That's going to be rubbish if (in effect) the memory performance the app experiences is randomised by CXL...
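
    To make that concrete, here is a minimal sketch using the standard FFTW3 C API (the wisdom file name is just for illustration):

      /* FFTW tunes a plan by timing candidate strategies against the
         memory its buffers happen to live in, then caches the winning
         recipe as "wisdom" for later runs. */
      #include <fftw3.h>

      int main(void) {
          int n = 1024;
          fftw_complex *in  = fftw_alloc_complex(n);
          fftw_complex *out = fftw_alloc_complex(n);

          /* FFTW_MEASURE actually runs and times several algorithms now. */
          fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_MEASURE);
          fftw_export_wisdom_to_filename("fft.wisdom");

          /* A later run reloads that recipe and skips the measurement -
             but if its buffers then land in slower CXL-attached memory,
             the plan was tuned against the wrong tier. */
          fftw_execute(p);

          fftw_destroy_plan(p);
          fftw_free(in);
          fftw_free(out);
          return 0;
      }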

  5. Aitor 1

    Firesale

    Pretty surprised they did not sell the inventory at half or a third of list price...

    1. bazza Silver badge

      Re: Firesale

      Yes, disappointing to see yet more unproductive landfill...

      My guess is that when it gets to this point, the costs of running it are prohibitive. Every part sold from now on comes with ongoing costs of support, warranty, etc. Some of these overheads are not optional; a firesale does not necessarily mean that all the associated costs can be dispensed with. Cutting those costs short is a big part of getting out of the business.

      1. jmch Silver badge

        Re: Firesale

        Firstly, writing off the inventory is a budgetary exercise; it doesn't mean someone is going to pile the parts in a field and set fire to the lot. Secondly, it is perfectly possible to sell such parts as end-of-series or 'used' parts, expressly excluding any service or warranty. eBay/Amazon is full of such stuff. I think, from Intel's point of view, they would not want to dilute their brand by offering any product without support or warranty, so the parts will be quietly shipped to some third-party eBay vendor for a pittance.

        That, or some Intel BOFH is going to have a big bonfire in a field, no doubt to get rid of some other evidence, while making a killing on the 'destroyed' drives on the side!

        1. bazza Silver badge
          Pint

          Re: Firesale

          I think my guess has been trumped by some well thought out reasoning and knowledge!

  6. Detective Emil
    Paris Hilton

    Alternate history

    I've wondered whether, if Intel & Micron had widely licensed the technology & process early, rather than keeping them resolutely proprietary, the price would have come down and the performance would have gone up enough to make it more attractive. But Intel's good at hubris. Or maybe they touted it for licensing but found no takers. If that's the case, prospective licensees must now be patting themselves on the back.

  7. elwe

    Flawed design

    Optane was touted as persistent storage, but there were no RAID-like redundancy options. Thus it could not be treated as persistent storage; it could only be a faster cache in front of real persistent storage. Using it this way would require software changes, which in most cases would be more expensive than throwing RAM at the problem. So it was doomed.

    Had there been RAID options, and demonstrations/documentation on how to migrate the DIMMs to another system and recover the data, then it might have stood a chance. As it was, we took a look and wrote it off immediately.

    Of course, adding RAID would mean altering the memory controllers in the processors to implement it; they would no longer be able to write to it like DRAM. The simple doubling of writes/halving of capacity already supported by memory controllers would work up to a point, but as with RAID 1 you get the issue that you can tell the two copies disagree, but have no idea which of the two is wrong.
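
    To illustrate that last point, a toy C sketch (purely illustrative, nothing Optane-specific): two copies can only detect a mismatch, while three can out-vote the bad one.

      #include <stdbool.h>

      /* Mirroring (RAID 1): a disagreement is detectable, but there is
         no way to tell which of the two copies is the good one. */
      bool mirror_consistent(int a, int b) { return a == b; }

      /* Triple redundancy: majority voting recovers the good value,
         assuming at most one copy has gone bad. */
      int majority(int a, int b, int c) { return (a == b || a == c) ? a : b; }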

    1. TechnicalVault

      Re: Flawed design

      You're just talking about the DIMM form factor; what about the SSD form factor? There's nothing stopping you software-RAIDing that.

    2. prof_peter

      Re: Flawed design

      I would argue it's not the lack of RAID. In modern systems it doesn't matter if data is persistent and replicated on a single machine - it's not considered durable until it's backed up to another machine (preferably in a different rack, or even a different data center). If you're going to go over the PCIe bus to the NIC anyway, you don't save a lot by keeping storage on the memory bus.

  8. Korev Silver badge
    Pint

    Thank you, Mr Mann, for a nice, in-depth and well-researched article.

    As it's Friday -->

  9. elregidente

    M.2 110mm

    I would have killed to get one of these in my laptop.

    I wanted it so bad. Money no object - this stuff was the *best*. The sustained IO was phenomenal.

    The problem was that the M.2 form factor was 110mm long (the 22110 size).

    I always have a 13-inch laptop, and absolutely no 13-inch laptops take a 110mm drive.

  10. Doctor Syntax Silver badge

    Bubbles. Memristors. Optane.

    We see them come, flatter to deceive, and then go. It would be nice to be able to spot in advance the ones that are actually going to stick.

    1. that one in the corner Silver badge

      Bubbles

      did at least make it out into the wild (there was even some on an IBM PC card for a short while).

      Last heard of in a Hackaday article from 2020, in which some madman tries to make it live again:

      https://hackaday.com/2020/04/19/magnetic-bubble-memory-farewell-tour/

  11. Steve Channell
    Unhappy

    OS Software was the issue

    While SSDs can be scaled with storage devices, mirrored, and replicated, Optane can't.

    For Optane to succeed, OSes needed to provide kernel support like IBM's mainframe Hiperspace/Dataspace or Expanded Storage, but the use cases were too narrow and the performance too marginal.

    Intel's own tests showed a performance advantage only if RAM was constrained and the SSD did not have a non-volatile memory cache (battery-backed DRAM).

  12. Hiya

    Total arrogance and lack of understanding from Intel

    I worked in the Optane memory division for a while - what a shambles.

    Total arrogance and lack of understanding from Intel with their proposition - "Hey, use our PMem (those few of you who have an environment where you can!) with our latest CPUs (but not those cheaper, more efficient CPUs from those pesky competitors like AMD or Arm-based solutions)." The division was overrun with a herd of Intel lifers who couldn't sell a shovel to a gravedigger.

    Intel's head was so far up its arse that it failed to see the issues with product positioning - let alone its lack of mass-market applicability in real-life user environments.

    Not all bad though - I got some healthy contributions to my pension from Intel while I suffered their shit-show from the inside.

    Deserved to fail

  13. Snake Silver badge

    RE: "memory flopped as rivals offered faster and more open alternatives"

    https://forums.theregister.com/forum/all/2021/07/30/cma_intel_sk_hynix/#c_4306800

    You don't say??

  14. Anonymous Coward
    Anonymous Coward

    Category error

    "Well, there's actually an alternative to Optane memory on the horizon. It's called compute express link (CXL)"

    Hardly. Imagine that Optane is beer. Then CXL is a phrasebook that explains how to talk to the bar person to order beer.

    That is useful, but not the same thing.

  15. talk_is_cheap

    Intel trying to use it as a market 'advantage' did not help.

    By making it an Intel-only solution, the only customers who could even consider it were those certain that their long-term plans would be based around Intel CPUs - any uncertainty about your future CPU needs and an Optane-based memory layer within your system design became a non-starter.

  16. GraXXoR

    A bit sad...

    I bought one second-hand in Akihabara a few years ago and put Star Citizen on it, a game known for its horrendously long load times. That one game filled over a quarter of the drive, but holy smoking bits, Batman, was it fast. Load times were about a third of those on my PCIe SSD, despite much slower throughput on paper.

    It's a shame that this technology remained a proprietary niche rather than being licensed out to other companies.

  17. Henry Wertz 1 Gold badge

    Pricing

    I think pricing is what killed it. I looked into Optane once, and found:

    a) I needed expensive server hardware to put it in. It would not go into my ordinary system. (I assume the correct chipset and possibly BIOS support were needed.) I suppose Ubuntu probably has the OS support for Optane in place.

    b) Optane cost less than high-capacity ($$$), ECC ($$$) server memory ($$$), but more than the ordinary types of DIMMs I would have bought for my system. The pricing probably could have come down quite a bit and still given Intel a profit, but they were selling into a relatively high-margin market and the pricing reflected it.

  18. Henry Wertz 1 Gold badge

    numa

    "It could get very confusing for some highly optimised apps, if sometimes they're getting allocations out of fast on board memory, and sometimes they're given some slow memory on an expansion board. For example, the FFTW library depends on profiling the machine and how best to execute an FFT, stores that optimum recipe, and that's what gets used at runtime. That's going to be rubbish if (in effect) the memory performance the app experiences is randomised by CXL..."

    Just to point out, Linux actually has full support for NUMA (Non-Uniform Memory Access). I don't know what Windows will do, but Linux is fully prepared to be told that some RAM is far slower than the rest. And (in addition to some NUMA-related commands that can be used to force what goes where) it supports migrating tasks around in memory: on traditional NUMA systems it would migrate tasks into RAM that is faster relative to the CPU running the code.
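
    For instance, a program can place an allocation on a chosen node itself via libnuma (a minimal sketch, compiled with -lnuma; treating node 0 as the fast local DRAM is an assumption about the machine's layout):

      #include <numa.h>
      #include <stdio.h>

      int main(void) {
          if (numa_available() < 0) {
              fprintf(stderr, "no NUMA support on this system\n");
              return 1;
          }
          printf("highest NUMA node: %d\n", numa_max_node());

          /* Put a hot buffer explicitly on node 0 (assumed local DRAM)
             so the kernel never hands this allocation slower far memory. */
          size_t len = 1 << 20;
          void *buf = numa_alloc_onnode(len, 0);
          if (buf == NULL) return 1;

          /* ... work on buf ... */

          numa_free(buf, len);
          return 0;
      }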

    1. Steve Channell

      Re: numa

      Both Windows and Linux have NUMA support, but Optane was different in two respects: [1] it required an on-chip memory controller, so it was confined to high-end CPUs; [2] it really needed OS support for a third memory tier (e.g. cluster/OS state for restart sync) and a database/hypervisor checkpoint store.

  19. SteveH64

    So what's the impact on software houses like SAP and Oracle, and products like SQL Server, that developed versions of their offerings, e.g. HANA, to utilise Optane persistent memory?
