Sigh...
And here I was hoping that Optane, with its high write endurance, would make its way into micro-SD cards. The specs looked pretty good for boot devices for Raspberry Pis.
Intel CEO Pat Gelsinger has confirmed that Intel will quit its Optane business, ending its attempt to create and promote a tier of memory that was a little slower than RAM but had the virtues of persistence and high IOPS. The news should not, however, come as a surprise. The division has been on life support for some time …
The high IOPS look really good until you spot that the Pi4's SDHC interface is limited to 50MB/s. Connect an SSD via USB3 and you will get 300MB/s - if you can keep the CPU cool. A Pi4 is plenty fast enough for email and web browsing. For more demanding tasks you pay for that low price with memory and I/O bandwidth constraints.
When I need either more than about 32GB of mass storage, or I'm concerned about I/O access speed, I run my Pi4s with an attached SSD.
For ones operating below those constraints, I'm using A1-class uSD cards. What I was hoping for from Optane (based on the early claims for it) was not the better speed, though that's not bad to have, but a greatly increased number of times you can write to any given cell. Simply put, SD cards wear out relatively quickly if they are being actively used. Longer endurance in use as a root file system would be a desirable trait.
Optane wasn't the first, or even the second, try at adding a tier to the memory hierarchy. The IBM 360/91 (I think it was) had the memory channel in the late 1960s; the IBM 3081 had expanded memory in the early 1980s. Not a lot of good use cases then, either.
Interesting technology, but Optane always seemed like a technology solution in search of a problem.
CXL is a different solution, for a different set of problems. It should do better, but it's likely to be some time before it makes it to on-premises data centers.
... Sounds like a return to the days when one could buy a memory expansion card, plug it into an ISA slot, and get a whole extra megabyte for one's DOS PC. Heady stuff!
It could get very confusing for some highly optimised apps if sometimes they're getting allocations out of fast on-board memory and sometimes they're given slow memory on an expansion board. For example, the FFTW library profiles the machine to work out how best to execute an FFT, stores that optimum recipe, and that's what gets used at runtime. That's going to be rubbish if (in effect) the memory performance the app experiences is randomised by CXL...
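To make that concrete, here's a minimal sketch of FFTW's tuning step (assuming FFTW3 is installed; link with -lfftw3 -lm; the transform size and wisdom filename are just illustrative). The point is that the plan is benchmarked against whatever memory the buffers happen to be in at plan time, and the saved recipe is reused on later runs even if those runs are handed much slower memory:

    /* Minimal sketch, assuming FFTW3: FFTW_MEASURE actually times
       candidate algorithms on this machine and these buffers, then the
       winning recipe ("wisdom") is saved and reused on later runs --
       even if later allocations land on much slower memory. */
    #include <fftw3.h>

    int main(void)
    {
        const int n = 4096;
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

        /* Planning with FFTW_MEASURE benchmarks several strategies using
           the memory that 'in' and 'out' currently live in. */
        fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_MEASURE);

        for (int i = 0; i < n; i++) { in[i][0] = i; in[i][1] = 0.0; }
        fftw_execute(plan);

        /* Save the tuned recipe for later runs, which may be handed very
           different memory by the OS. */
        fftw_export_wisdom_to_filename("fftw.wisdom");

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }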
Yes, disappointing to see yet more unproductive landfill...
My guess is that when it gets to this point, the costs of running it are prohibitive. Every part sold from now on comes with the ongoing costs of support, warranty, and so on. Some of those overheads are not optional; a fire sale does not necessarily mean that all the associated costs can be dispensed with. Cutting those costs short is a big part of getting out of the business.
Firstly, writing them off the inventory is a budgetary exercise; it doesn't mean someone is going to pile them in a field and set fire to the lot. Secondly, it is perfectly possible to sell such parts as end-of-series or 'used' parts, expressly excluding any service or warranty - eBay and Amazon are full of such stuff. I think that, from Intel's point of view, they would not want to dilute their brand by offering any product without support or warranty, so the parts will be quietly shipped to some third-party eBay vendor for a pittance.
That, or some Intel BOFH is going to have a big bonfire in a field, no doubt to get rid of some other evidence, while making a killing on the 'destroyed' drives on the side!
I've wondered whether, if Intel and Micron had widely licensed the technology and process early, rather than keeping them resolutely proprietary, the price would have come down and the performance gone up enough to make it more attractive. But Intel's good at hubris. Or maybe they touted it for licensing but found no takers. If that's the case, the prospective licensees must now be patting themselves on the back.
Optane was touted as persistent storage, but there were no RAID-like redundancy options. Thus it could not be treated as persistent storage; it could only be a faster cache in front of real persistent storage. Using it that way would require software changes (of the sort sketched below), which would be more expensive than throwing RAM at the issue in most cases. So it was doomed.
Had there been RAID options, and demonstrations/documentation of how to migrate the DIMMs to another system and recover the data, it might have stood a chance. As it was, we took a look and wrote it off immediately.
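For a flavour of the software changes mentioned above, here's a minimal sketch using PMDK's libpmem, which was one of the intended ways for applications to program persistent memory directly (assumes the DIMMs are in App Direct mode behind a DAX-mounted filesystem; the /mnt/pmem path is purely illustrative; link with -lpmem):

    /* Minimal sketch, assuming PMDK's libpmem and a DAX-mounted
       filesystem at the illustrative path /mnt/pmem. */
    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Map a 4 KiB persistent file directly into the address space. */
        char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                                   PMEM_FILE_CREATE, 0666,
                                   &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Ordinary stores... */
        strcpy(addr, "hello, persistent world");

        /* ...but durability needs an explicit flush: exactly the kind of
           application change a plain RAM-plus-SSD setup never required. */
        if (is_pmem)
            pmem_persist(addr, mapped_len);
        else
            pmem_msync(addr, mapped_len);

        pmem_unmap(addr, mapped_len);
        return 0;
    }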
Of course, adding RAID would mean altering the memory controllers in the processors to implement it; they would no longer be able to write to it like DRAM. The simple mirroring already in the memory controllers - doubling the writes and halving the capacity - would work up to a point, but with RAID 1 you hit the problem that you can tell the two copies disagree, yet have no idea which of the two is wrong.
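A toy illustration of that last point (nothing Optane-specific, and the additive checksum just stands in for a real CRC): a bare two-way mirror detects a miscompare but carries no information about which copy is the corrupt one; something extra per copy is needed to arbitrate.

    /* Toy sketch: a plain mirror can only say "the copies differ";
       a per-copy checksum recorded at write time can say which is good. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simple additive checksum -- stands in for a real CRC. */
    static uint32_t checksum(const unsigned char *buf, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = sum * 31 + buf[i];
        return sum;
    }

    int main(void)
    {
        unsigned char copy_a[16] = "persistent data";
        unsigned char copy_b[16] = "persistent data";
        uint32_t sum_written = checksum(copy_a, sizeof copy_a); /* stored at write time */

        copy_b[3] ^= 0xFF;  /* simulate corruption in one copy */

        if (memcmp(copy_a, copy_b, sizeof copy_a) != 0) {
            puts("Mirror miscompare detected -- but which copy is bad?");
            /* With only the mirror we cannot tell; with the checksum we can. */
            const unsigned char *good =
                (checksum(copy_a, sizeof copy_a) == sum_written) ? copy_a : copy_b;
            printf("Checksum says the good copy holds: \"%s\"\n", (const char *)good);
        }
        return 0;
    }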
I would argue it's not the lack of RAID. In modern systems it doesn't matter whether data is persistent and replicated on a single system - it's not considered durable until it's backed up to another machine (preferably in a different rack, or even a different data center). If you're going to go over the PCIe bus to reach the NIC anyway, you don't save a lot by keeping storage on the memory bus.
While SSDs can be scaled out across storage devices, mirrored and replicated, Optane can't.
For Optane to succeed, OSes needed to provide kernel support along the lines of IBM's mainframe Hiperspace/Dataspace or Expanded Storage, but the use cases were too narrow and the performance too marginal.
Intel's own tests showed a performance advantage only when RAM was constrained and the SSD didn't have a non-volatile write cache (battery-backed DRAM).
Worked in the Optane memory division for a while - what a shambles
Total arrogance and a lack of understanding from Intel with their proposition: "Hey, use our PMem (those few of you with an environment where you can!) with our latest CPUs (but not those cheaper, more efficient CPUs from pesky competitors like AMD or Arm-based solutions)." The division was over-run with a herd of Intel lifers who couldn't sell a shovel to a grave digger.
Intel had its head so far up its arse that it failed to see the issues with the product positioning - let alone its lack of mass-market applicability in real-life user environments.
Not all bad though - got some healthy contributions into my pension from Intel while I suffered their shit-show from the inside.
Deserved to fail
By making it an Intel-only solution, the only customers who could even consider it were those certain that their long-term plans would be based around Intel CPUs - any uncertainty about your future CPU needs and an Optane-based memory layer in your system design became a non-starter.
I bought one second-hand in Akihabara a few years ago and put Star Citizen on it, which is known to have a horrendously long boot time. That one game filled over a quarter of the drive, but holy smoking bits, Batman, was it fast. Load times were about a third of my PCIe SSD's, despite much slower throughput on paper.
It's a shame that this technology remained a proprietary niche rather than being licensed out to other companies.
I think pricing is what killed it. I looked into Optane once, and found:
a) I needed expensive server hardware to put it in; it would not go into my ordinary system. (I assume the correct chipset, and possibly BIOS support, was needed.) I suppose Ubuntu probably has the OS support for Optane in place.
b) Optane cost less than high-capacity ($$$), ECC ($$$), server memory ($$$), but more than the ordinary DIMMs I would have bought for my system. The pricing probably could have come down quite a bit and still given Intel a profit, but they were selling it to a relatively high-margin market and the pricing reflected that.
"It could get very confusing for some highly optimised apps, if sometimes they're getting allocations out of fast on board memory, and sometimes they're given some slow memory on an expansion board. For example, the FFTW library depends on profiling the machine and how best to execute an FFT, stores that optimum recipe, and that's what gets used at runtime. That's going to be rubbish if (in effect) the memory performance the app experiences is randomised by CXL..."
Just to point out, Linux actually has full support for NUMA (Non-Uniform Memory Access). I don't know what Windows will do, but Linux is fully prepared to be told that some RAM is far slower than the rest. And (in addition to NUMA-related commands that can be used to force what goes where) it supports migrating things around in memory: on traditional NUMA systems it will move a task's pages into RAM that is faster relative to the CPU running the code.
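Applications can also be explicit about it. A minimal sketch using libnuma (assumes the libnuma development package is installed; link with -lnuma; pinning to node 0 is just an illustrative choice), so a latency-critical buffer never silently lands on a slower node:

    /* Minimal sketch, assuming libnuma: ask for memory on a specific
       NUMA node instead of taking whatever the default policy gives. */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA not available on this system\n");
            return EXIT_FAILURE;
        }

        printf("Highest NUMA node: %d\n", numa_max_node());

        /* Allocate 64 MiB explicitly on node 0 (typically local DRAM). */
        size_t len = 64UL * 1024 * 1024;
        void *buf = numa_alloc_onnode(len, 0);
        if (buf == NULL) {
            fprintf(stderr, "numa_alloc_onnode failed\n");
            return EXIT_FAILURE;
        }

        /* ... use buf for the hot data structure ... */

        numa_free(buf, len);
        return EXIT_SUCCESS;
    }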
Both Windows and Linux have NUMA support, but Optane was different in two respects: [1] it required memory-controller support on the CPU itself, so it was confined to high-end CPUs; [2] it really needed OS support for a third memory tier (e.g. cluster/OS state for restart sync, and a database/hypervisor checkpoint store).