Why the end of Optane is bad news for all IT

Intel is ending its Optane product line of persistent memory and that is more disastrous for the industry than is visible on the surface. The influence of ideas from the late 1960s and early 1970s is now so pervasive that almost nobody can imagine anything else, and the best ideas from the following generation are mostly …

  1. LosD

    Amazing... But also a bit stupid

    It's of course an amazing thing. But putting hardware on the market without a clear way towards using that hardware is rather dumb. You can't expect all abstractions to change immediately just because you have this REALLY COOL THING.

    It was always going to bomb.

    1. BOFH in Training

      Re: Amazing... But also a bit stupid

      Another big problem is that Intel restricted Optane to only their CPUs, and only a subset of them at that, by restricting it to certain chipsets.

      If they had allowed ARM, AMD, RISC, everyone to use it, and not played stupid restriction games, Optane might still be around.

      If you want to push out a totally new concept to the average person, you should be trying to push it to everyone, so maybe more people will find it useful.

      Play stupid restriction games, win a stupid prize.

      1. EricB123 Silver badge

        Re: Amazing... But also a bit stupid

        Betamax!

    2. Stuart Castle Silver badge

      Re: Amazing... But also a bit stupid

      True. It is an amazing technology, perhaps hampered by bad marketing.

      The problem is, the current idea of having a computer with primary and secondary storage and everything treated as files works. It's not perfect, but it works, sometimes well.

      Introducing something cool and amazing isn't going to persuade people to spend potentially a lot of money replacing and upgrading existing systems unless those systems don't work at all, or unless you can show there is a real benefit. Especially if that something requires a fundamental rethink of how the computer works. If your applications work well on whatever OS they use (be it Windows, Linux, macOS or whatever), how will they work with an OS that stores everything in what is effectively RAM? Do they need updating? If not, how good is any emulation offered?

      1. FIA Silver badge

        Re: Amazing... But also a bit stupid

        Yeah, I don't see what's wrong with a filing system.

        A filing system is just that, a system for filing.

        It doesn't have to be on a 'traditional' block device. Running a program would memory map the read only sections and provide pages in traditional RAM for variables and volatile data.

        A paging system that knows certain RAM is non-volatile could easily ensure idle tasks are 'swapped' to persistent memory. (Most OSs already have the concept of NUMA and memory locality.)
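
        As a rough sketch of what I mean (the path is invented, and this assumes a DAX-capable filesystem sitting on the persistent memory), the mapping half already exists in plain POSIX; stores to the mapped region persist in place without ever going through a block-device write path:

        /* Minimal sketch, not a full design: map a file that lives on a
         * DAX-mounted persistent-memory filesystem straight into the
         * process address space, so "loading" is just mapping. The path
         * and size are invented; on real pmem with MAP_SYNC the msync()
         * becomes little more than a CPU cache flush. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/mnt/pmem/progdata", O_RDWR | O_CREAT, 0600);
            if (fd < 0) { perror("open"); return 1; }
            if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

            /* Read-only code/resources could be mapped PROT_READ|PROT_EXEC;
             * here we map one small data page read-write. */
            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            strcpy(p, "state that survives a restart");
            msync(p, 4096, MS_SYNC);

            munmap(p, 4096);
            close(fd);
            return 0;
        }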

        It feels like we're trying to remove the wrong abstraction here.

        I do agree that Optane should've lived on though. It does seem the logical conclusion.

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Amazing... But also a bit stupid

          This seems to throw up two questions to me.

          1. If you have, say, a terabyte of non-volatile RAM, why do you need disks or paging at all?

          2. There was a time when RAMdisks made sense. When it was doable to have a meg of RAM in a computer whose OS used 5% of it, but you only had one floppy drive. That meant you could temporarily stick things in the RAMdrive and not need to switch disks to access it.

          But modern computers mostly don't have removable media at all. Why do you want to emulate a 1980s version of a disk in memory?

          No current OS organizes its RAM as a filesystem. Right now I am typing in a browser window. That browser has, no doubt, many allocated areas of RAM. They are not files. There is no directory.

          Why do you want to impose an emulated disk drive on a machine which doesn't need disk drives at all?

          I submit the answer is "because that's the design I am familiar with. It's the way I know."

          But personally I switched OS half a dozen times. Spectrum BASIC to VAX/VMS to CP/M to RISC OS to OS/2 to Windows 95 to Windows NT to Linux to Linux and Mac OS X. (Not counting all the many ones I used at work.)

          All of those had different abstractions. Some had filesystems; some didn't. Some had subdirectories; some didn't. Some had file types, some just had clunky three-letter file extensions.

          I am proposing another such switch. Why do you want to bring over one of the clunkiest bits of 1960s tech to this hypothetical new system?

          1. BOFH in Training

            Re: Amazing... But also a bit stupid

            Just a couple of thoughts on your above points.

            1) This seems similar to saying 640KB is enough for everyone. I only know that both my own and others' usage of computing resources has grown over time. And I don't know if even 1TB or 1PB or whatever amount of NVRAM (or just plain old RAM) will ever be enough. We can't predict the future, but I think it's safe to say that computing resource requirements will always grow.

            2) With the above said, it's always good to explore new concepts to see what works better or is more suitable. Not all new concepts will be found suitable, but that does not mean there will never be anything new again. And sometimes what was old is new again, and works better the 2nd or 3rd time round because of other improvements that have become available. The example of dumb terminals comes to mind, with better networking and other computing resources allowing more organisations to implement dumb terminals. And they don't have to look like a monochrome text-only screen anymore. People don't even realise that some of the systems they are using are basically souped-up dumb terminals.

          2. nijam Silver badge

            Re: Amazing... But also a bit stupid

            > 1. If you have, say, a terabyte of non-volatile RAM, why do you need disks or paging at all?

            You don't, but you still have to organise non-volatile memory somehow.

            Just saying "I don't need files, because ... RAM!" doesn't achieve that.

            The disk management model works. Optane didn't bring an alternative, better or worse.

            > No current OS organizes its RAM as a filesystem.

            ZFS kind of does (or perhaps, vice versa).

            > Why do you want to bring over one of the clunkiest bits of 1960s tech to this hypothetical new system?

            Unfortunately, your examples of systems without filesystems aren't a particularly good advertisement for the benefits of doing away with them.

            1. Liam Proven (Written by Reg staff) Silver badge

              Re: Amazing... But also a bit stupid

              That's fair enough. I am not an OS architect and I don't even play one on TV (or, more saliently, on stage).

              However, every multiprogramming OS already does it somehow.

              (It's a long-forgotten distinction in modern systems, but "multiprogramming" means an OS that can juggle multiple programs in memory, but only one at a time can run. Multitasking is an extension of this.)

              It's a solved problem. It may not be solved very _well_ but when I presented this idea at FOSDEM -- a link to the video is there in the article -- many people objected saying this was impossible or very hard. It isn't. It's been done.

              MS-DOS 5 with DOSShell did it. It too is forgotten with the rise of Windows, even Windows 3, but in DOS 5 with DOSShell, you could switch back to the DOSShell menu from a running app, and load a different app, then switch between them. They didn't run in the background; they were suspended. But they were suspended as live snapshots, with files loaded etc.

              If MS-DOS could do this, it's not rocket science.

              Filesystems are an incredibly useful, powerful abstraction that has enabled all kinds of amazing things... but they remain an abstraction over the primary/secondary storage split. If you only have primary storage, they can go away.

              When Windows NT came out, Unix types ridiculed it because you couldn't attach a terminal or run a remote X session, and by Unix standards, the original Windows shell, CMD, is very feeble.

              Citrix built a business adding remote-screen functionality back in.

              But you don't need it with NT, and it remains an obscure fringe function, mostly used for support.

              Unix is built around the notions of a powerful, programmable shell, and of multiple terminal sessions, which can be local or remote.

              Win NT didn't bother to implement these things at all, and yet, for 30 years, Microsoft has made hundreds of billions out of this OS that, by Unix perceptions and standards, is profoundly crippled.

              Apple built its multi-trillion-dollar business empire on MacOS, an OS that didn't have a command line at all, not even a crippled one, and had zero human-readable config files either.

              iOS has no way to save a file from one app and open it in another. It is deeply sandboxed and apps can't see one another.

              We all assume this stuff is necessary because it's in everything we know.

              I propose that the concept of a filesystem, of files, of disks, of secondary storage, is an unnecessary hangover of 1960s designs, and we can do away with it now.

              1. Roland6 Silver badge

                Re: Amazing... But also a bit stupid

                >I propose that the concept of a filesystem, of files, of disks, of secondary storage, is an unnecessary hangover of 1960s designs, and we can do away with it now.

                What is a file?

                A file, I suggest, is simply a wrapper around a data structure, turning it into a location-independent object - so it can be moved, and another application or system can look at it and select the right tools to unpack and manipulate the data structure within it. Optane doesn't remove the need to move objects between systems or even applications running on the same system (remember iOS allows files to be transferred to other applications, so the user can decide whether a 'book' is opened in iBooks or some other reader application).

                How those files are stored is an implementation matter. Write on iOS, for example, presents me with its view of the documents I've been working on; I, the user, don't know how these documents are mapped onto the storage (although the Files app does give a filesystem view of these files/objects). However, this does constrain usability. For example, a project will have files distributed across multiple applications; with a filesystem (or document management system) it is easy to group these files together into a project.

                What is secondary storage?

                I suggest it is simply a repository for files (with a DB being a particular branch of the file system). Putting encryption to one side, secondary storage is also largely machine-independent and transportable. The question is, if an Optane memory module were so configured (i.e. removable and readable in another system), would it be classified as secondary storage?

              2. Malcolm Weir

                Re: Amazing... But also a bit stupid

                Fascinating discussion, Liam!

                However... I submit you've missed two critical points:

                A huge proportion of contemporary data is not stored in files or filesystems.... it's stored in tables. Databases have replaced the concept of files/filesystems with entities that have multiple associations (i.e. relations). While there is digital data that is well-served by the concept of a file (e.g. movies), there is much much more (by quantity, if not volume) that is better served by a database. To borrow your observation about Unix: everything is a file in Unix but the method to pull data out of the file is undefined... by contrast, the Windows Registry is an entity with a method to access data, but the fact that it's stored in a file on a filesystem is entirely transparent, and could easily be implemented using large NV system memory.

                The second point involves scaling and distribution. The 1:1 mapping between a CPU and a storage device (which hasn't actually been 1:1 for decades, but we still think of it like that and it's usually "a small number of CPUs per storage device") is fine for limited applications where you can contain your whole application in a single box (as it were), but as soon as you get to global datacenters of the AWS/Google/Azure scale, you immediately see the benefit of processing nodes vs storage nodes. And since any given processing node may never previously have encountered the dataset stored on any given storage node, some kind of indexing and data management is necessary. Granted, the paradigm used might be, say, "table of contents" and "storage chunk" versus "directory" and "file", but the hierarchy and access methods are effectively equivalent regardless of terminology.

                There's also the "640K" problem: however much storage you think you need is less than the amount of storage that you could use if you had it! The abstraction of "hot data" / "warm data" (main memory/files) is extended by "cold data" (offline storage), and while it's trendy to pretend that offline storage is old fashioned, El Reg is full of accounts of what happens when you don't have it!

                Finally, a small observation: you claim modern computers don't have removable storage any more. This is blatantly false: USB storage sticks are very much alive and annoying sysadmins every day!

                1. Roland6 Silver badge

                  Re: Amazing... But also a bit stupid

                  >Finally, a small observation: you claim modern computers don't have removable storage any more. This is blatantly false: USB storage sticks are very much alive and annoying sysadmins every day!

                  I suggest network drives and cloud drives such as Dropbox, OneDrive etc. are also effectively removable storage.

              3. Anomalous Cowshed

                Re: Amazing... But also a bit stupid

                You propose this, and it sounds fine, but what do you propose to replace it with? How will data be organised? Accessed? Managed? Without an attractive, workable alternative system, it's hyperbole.

          3. jtaylor

            Re: Amazing... But also a bit stupid

            You present some fascinating ideas. What struck me the most is

            "because that's the design I am familiar with. It's the way I know." ...Spectrum BASIC to VAX/VMS to CP/M to RISC OS to OS/2 to Windows 95 to Windows NT to Linux...and Mac OS X.

            Yes, I think we do tend to solidify our ideas based on experience. In Domain/OS, libraries were loaded globally; you didn't link to a file, you just called an OS function. In VMS, "default (current) directory" was merely a variable, not a verified location in the filesystem. Those examples make it easier to imagine OS features as independent from a file system.

            1. If you have, say, a terabyte of non-volatile RAM, why do you need disks or paging at all?

            Horses for courses. For example, a database runs in memory. One could argue that's the only place it really exists. If you export data to use in another program (e.g. financial reports), that might be easier as a named file rather than a memory segment.

            No current OS organizes its RAM as a filesystem.

            Fair point. That has advantages and disadvantages. I hope to never see another Linux OOM Killer.

            I suggest that Object Storage is an example of an alternative to traditional filesystems, and might help imagine other ways to address data. I'd love to see a mail relay use Optane.

          4. FIA Silver badge

            Re: Amazing... But also a bit stupid

            1. If you have, say, a terabyte of non-volatile RAM, why do you need disks or paging at all?

            You need paging because it's currently how computers map memory; if you remove that, you have to solve the relocatable code problem and a few others.

            You don't need disks.

            Why do you want to impose an emulated disk drive on a machine which doesn't need disk drives at all?

            I don't.

            But I do still want a way to file and organise my data.

            A memory FS would have to work differently, but I don't see why you wouldn't want a system to allow you to organise your data. Sure, if you load a program from the filing system you'd just be executing the code where it is in memory (non-volatile data aside).

            The reason I mentioned paging is the volatile data. It wouldn't be a huge extension for a current OS to move little-used volatile data to persistent RAM. This would mean a shutdown would be 'copy the rest of the volatile data and turn off', and appropriate 'restarting from RAM' support in whatever firmware you had would give you the 'always on' computer.

            Unless you're proposing to fix another class of problems, programs will still need at least the concept of a running instance; you still need something to kill when it goes wrong. This means you will still need the abstraction of 'the thing needed to run a program' vs 'a running instance of that program'.

            No current OS organizes its RAM as a filesystem.

            And nor should they, but if all your data is in RAM there's still a requirement to mark bits of RAM as 'that picture from holiday last year', and maybe group bits of RAM as 'last year's holiday pictures'.

            Also, most modern OSs organise their RAM hierarchically, based on CPU locality for example (NUMA). So they do have the concept of different 'types' of RAM. This would be required to properly utilise persistent RAM, until persistent RAM can withstand the (basically) infinite re-write cycles of DRAM.
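
            (To illustrate that "different types of RAM" point: NUMA-aware placement is already an ordinary API call. A minimal sketch using libnuma, with the node number invented; a persistent-memory tier could hang off the same mechanism.)

            /* Minimal libnuma sketch (link with -lnuma). Node 1 is an
             * invented example standing in for "the node where the
             * persistent memory happens to be attached". */
            #include <numa.h>
            #include <stdio.h>

            int main(void)
            {
                if (numa_available() < 0) {
                    puts("no NUMA support on this machine");
                    return 0;
                }
                void *buf = numa_alloc_onnode(1 << 20, 1);  /* 1 MiB on node 1 */
                if (!buf) { puts("allocation failed"); return 1; }
                /* ... place cold or persistent data here ... */
                numa_free(buf, 1 << 20);
                return 0;
            }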

            Right now I am typing in a browser window. That browser has, no doubt, many allocated areas of RAM. They are not files. There is no directory.

            It also has many temporary files, and many resources it loads (icons, language translations). It still needs a way to locate and refer to these.

            Why do you want to impose an emulated disk drive on a machine which doesn't need disk drives at all?

            I don't, I want a system to file and organise data, but the lower abstraction wouldn't be talking to a disk drive, it would be using the persistent memory in the computer.

            I submit the answer is "because that's the design I am familiar with. It's the way I know."

            You may well be correct here; generally most people are blind to a new way of thinking until it's presented to them complete.

            But personally I switched OS half a dozen times. Spectrum BASIC to VAX/VMS to CP/M to RISC OS to OS/2 to Windows 95 to Windows NT to Linux to Linux and Mac OS X. (Not counting all the many ones I used at work.)

            All of those had different abstractions. Some had filesystems; some didn't. Some had subdirectories; some didn't. Some had file types, some just had clunky three-letter file extensions.

            They all had filesystems. My ZX81 could load a program by name; this requires at least some organisation on the tape. That's a simple filesystem. I've not used VAX/VMS or CP/M, but I have used the rest you list.

            RISC OS is a decent example. It has modules, which are basically extensions of the ROM modules in the BBC MOS, yet with ResourceFS in RISC OS 3, modules would still write entries to ResourceFS so they could access sections of their memory as if they were files. They didn't take up more memory and there's no 'HD emulation' (ResourceFS is literally told 'make this chunk of RMA appear as a file in your namespace'). But it does allow you to access resources easily, and also replace them.

          5. doublelayer Silver badge

            Re: Amazing... But also a bit stupid

            "If you have, say, a terabyte of non-volatile RAM, why do you need disks or paging at all?"

            Until you need more than a terabyte of stuff stored, you don't. One kind of storage is a lot like another, so a terabyte of optane will function just as well as (no, better than) a terabyte of SSD. It will also cost more. The question is whether you need the speed of optane enough to justify its price. Either way, you'll be using it for the same purposes, whether you use a memory byte-mapped model or a filesystem block-model approach to organizing it.

            "There was a time when RAMdisks made sense. When it was doable to have a meg of RAM in a computer whose OS used 5% of it, but you only had one floppy drive. That meant you could temporarily stick things in the RAMdrive and not need to switch disks to access it. Why do you want to emulate a 1980s version of a disk in memory?"

            Because the filesystem is the way for me to move data from one program to another. I can't take a downloaded audio file from the browser's memory and tell my audio editor to edit this. If it makes the file bigger, the browser will get very confused about why the contiguous block of bits either became noncontiguous or moved. I can't even find that chunk of memory by its address without pulling out a debugger and mapping between the OS's memory regions. We could remove the isolation of processes' memory areas to let this happen, but meanwhile, I can use the filesystem to obtain a set of bits and point something at that set to change it, using a filename I provided for that set. Ramdisks are sometimes useful to keep that useful operation in the fastest memory available to me.
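
            (Worth noting that even the "just share raw memory instead" route ends up needing a name in a filesystem-like namespace. A minimal POSIX sketch of the producing side, with the name and size invented:)

            /* One process publishes a block of memory; any other process
             * that knows the (invented) name "/holiday_audio" can map the
             * same bytes. Compile with -lrt on older glibc. */
            #include <fcntl.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>
            #include <unistd.h>

            int main(void)
            {
                int fd = shm_open("/holiday_audio", O_CREAT | O_RDWR, 0600);
                if (fd < 0) { perror("shm_open"); return 1; }
                if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

                char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                if (p == MAP_FAILED) { perror("mmap"); return 1; }

                strcpy(p, "bits another program can now find by name");

                /* A consumer would shm_open("/holiday_audio", O_RDWR, 0) and mmap() it. */
                munmap(p, 4096);
                close(fd);
                return 0;
            }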

            "No current OS organizes its RAM as a filesystem. Right now I am typing in a browser window. That browser has, no doubt, many allocated areas of RAM. They are not files. There is no directory."

            They are already mapped objects, but there is an organizational structure. There are objects holding other objects as members in a hierarchical layout. The members have names. The members themselves have members. A member can refer to its parent. That's a lot like a filesystem, but since the type of each thing is known and only the browser's code is operating on them, they don't all have to be bit streams.

            1. Liam Proven (Written by Reg staff) Silver badge

              Re: Amazing... But also a bit stupid

              There are already computers on sale where you can replace memory in a running system. This has been available in things like Tandem NonStop machines for decades.

              https://en.wikipedia.org/wiki/NonStop_(server_computers)

              There are ways to do this now.

              Fill up your PMEM? Shut down, stick in another couple of NV-DIMMs, turn back on.

              Yes, if a DIMM goes bad, you'll lose your data. Currently, if a disk goes bad, you lose your data. No change there.

              OTOH I have seen dozens of disks fail over my career. I think I have had 1 (one) DIMM go bad on me this century.

          6. Roland6 Silver badge

            Re: Amazing... But also a bit stupid

            >1. If you have, say, a terabyte of non-volatile RAM, why do you need disks or paging at all?

            This is just an argument about the benefits of large memory, which was being exploited decades back with in-memory DBs as the price of RAM fell and motherboards became capable of supporting large amounts of memory.

            For this to be an argument in favour of Optane, Optane would need to be available in, say, 1TB modules, but priced to compete with 8~16GB RAM modules, so system builders would actually fit 1TB and not 16GB+250GB SSD.

            As for the point about why you need disks - are you totally happy for your bank to only keep the details of your account in an Optane module? Which brings up another potential limitation of Optane - redundancy. Whilst I can easily replicate transactions between systems, allowing the sorts of duplication provided by SANs (or Tandem NonStop computers) would require some workarounds.

          7. Dominic Sweetman

            Re: Amazing... But also a bit stupid

            Files and file systems may well have been invented like the writer said. But...

            Now, a file is what represents all your work when the application's not running. This is useful because you can back up your file, restore it and your work comes back. You can email it to your friend you're working with, if she has a compatible app. It's the thing you keep in git (or other version control system).

            That is, there are lots of things you do with data other than look at it through the lens of a specific app.

            Apple iOS (at least for phones and tablets) has files, but pretends not to. And that means that every app has to have a share button, and backup systems are mysterious and out of my control. Fine for toy computing. I end up with a program which dumps my contact list into CSV and emails it to me. Bit of a kludge, but at least when Apple go bust I'll still have my contact list. But why make it hard?

            We'd need files regardless of the nature of the non-volatile medium needed to store them.

            And once you've invented them, they help you install programs and interface printers and ... well, more or less everything.

            Is there space for a form of memory which is non-volatile but fairly fast? I don't know. There's a gap between flash memory and DRAM with just enough power to keep it tickled; but it's not a really big gap...

          8. Anonymous Coward
            Anonymous Coward

            Re: Amazing... But also a bit stupid

            Possibly, just possibly, to provide an upgrade path for people with existing systems, so they could use this new technology without having to rewrite their whole stack?

            It doesn't seem like it would have taken too much effort to create a filesystem abstraction over a portion of the memory map to provide a "known" interface. But apparently they were more concerned about producing proprietary technology than actually moving the industry forward. Color me shocked.

            I've been in the product development side of the industry since 1979 and have seen far more promising projects killed because a marketing team couldn't figure out how to sell it, as opposed to those that have failed for technical reasons.

            But with everyone more concerned about short term profits than having any sort of long term vision no one seems to care much about "innovation" or "progress" anymore.

            1. doublelayer Silver badge

              Re: Amazing... But also a bit stupid

              "It doesn't seem like it would have taken too much effort to create a filesystem abstraction over a portion of the memory map to provide a "known" interface."

              It wouldn't have been a problem at all. We already have ramdisks, and since this one is nonvolatile, we just have to properly mark it so the disk comes back when it's booted. No problem with backward compatibility at all. That ended up being what Optane did in most cases: act as secondary storage.

              The problems arise when we're told that we should forget about having secondary storage or filesystems, but with no clear reason that they're no longer needed. Just as we have ways of using primary memory like secondary, we have ways of using secondary like primary. This includes swapfiles, but also transparent writing through to storage devices without swapping in (which for most devices has a bad speed result, but probably works pretty well for Optane if you're not in need of the fastest speeds available). Mapping Optane like a lot of RAM, then putting a disk on part of it, ends up working like mapping it as a disk, then using part of that as RAM. Both are supported by current operating systems with relatively little effort, and it seems most users didn't consider the benefits to be worth the increased cost of the hardware.

          9. midgepad

            Re: Amazing... But also a bit stupid

            1. For Petabytes of data, according to the article.

        2. Ian Johnston Silver badge

          Re: Amazing... But also a bit stupid

          A filing system is just that, a system for filing.

          One of the many stupidities of the original OLPC/Xo design was the deliberate omission of a structured file system of any sort. Everything was an application. This, of course, ignored the possibility that users might have lots of discrete pieces of work and rather like the idea of a way of organising them.

          I could not care less how my computer stores the collection of documents I have grouped together as "Insurance" and the collection of documents I have grouped together as "Musical scores", but it is extremely useful to me to have separate places to keep them.

          A flat storage system for the ... checks ... 166375 files I currently have under /home/ian would not be terribly practical.

    3. JoeCool Silver badge

      Re: Amazing... But also a bit stupid

      Sure, I can accept that Optane failed because it couldn't "find a use case", but that's not a given; that's the challenge of introducing new technology. Some make it, some don't. Some make it then blink out after a few years.

    4. John Smith 19 Gold badge
      Unhappy

      Hmm. So basically execute-in-place but with better implementation technology

      That was a thing with the raft of pocket PCs back in the '90s.

      The obvious usage is to define a "Workspace"

      But there is something inherently "nice" about a storage space that (in theory) has unlimited length.

      Even though there are actual (and hard) limits on file size under any known OS. More importantly, storage hardware is actually big enough to store such a file size (if you want it).

    5. Anonymous Coward
      Anonymous Coward

      Re: Amazing... But also a bit stupid

      There was a clear way to use that hardware. For NDA reasons I can't say much about it, but Intel partnered with a major vendor who was going to use the memory technology to build SSD array cards that held enormous amounts of data, were lightning fast, supported server virtualization. Due to internal corruption in the vendor's division for this (incompetence, lies, and IP theft), the company fired all the staff and took a writeoff. Had this not happened, there would have been amazing memory products out there. PS. It is sometimes a mistake to staff entirely with people from certain global cultures that combine mediocrity, dishonesty, and authoritarianism.

  2. Anonymous Coward
    Anonymous Coward

    Optane was a big idea but it wasn't a good one.

    Adding another layer of memory just introduced another layer of complexity - and a pretty massive one at that - for at best marginal value. All the while the inexorable progress of RAM, SSDs and distributed computing pinched out and then quickly obliterated the niche Optane was meant to occupy.

    Farewell Optane, you won't really be missed. You never really existed.

    1. Doctor Syntax Silver badge

      It probably was a good idea. It just wasn't useful enough. To make it useful it would have needed a big change in OS design. Without that the niche didn't really exist.

      1. Anonymous Coward
        Anonymous Coward

        >It probably was a good idea. It just wasn't useful enough.

        Labeling things falling into this category "not good ideas" is a hill I am happy to die on. After all, every idea has some merit.

        This one just didn't have very much in the real world.

        1. redpawn

          "a hill I am happy to die on", in anonymity

          1. yetanotheraoc Silver badge

            Need a flower icon

            Tomb of the unknown commenter.

            1. SCP

              Re: Need a flower icon

              Surely - "Tome of the unknown keyboard warrior."

      2. Roland6 Silver badge

        >To make it useful it would have needed a big change in OS design.

        Would it?

        Taking the idea of persistent RAM, i.e. RAM that holds its contents in the absence of power and which, as the author says, can simply be plugged into a motherboard's DRAM slots - the question has to be raised as to why it didn't find a home in Windows laptops.

        Given the sleep and hibernate states Windows supports, it would seem these could be readily supported in a slightly different and simpler way with Optane.

    2. Liam Proven (Written by Reg staff) Silver badge

      It's not another layer.

      Well, OK, it can be if you want, but the point of the article was that it can eliminate multiple entire layers of storage architecture, if embraced for what it can do.

      It is not some sort of faster SSD. That is like taking a jumbo jet and using it as a funny-shaped warehouse, and never flying the thing.

      With Optane it becomes affordable and fast to have all your storage as primary storage, and eliminate secondary storage altogether. Be that SSDs or HDDs or both.

      1. Malcolm Weir

        "It is not some sort of faster SSD. That is like taking a jumbo jet and using it as a funny-shaped warehouse, and never flying the thing."

        No, it is _also_ useful as some sort of faster SSD. It's like taking a jumbo jet and replacing it with a small pointy thing that could fly from New York to London in 2h52m59s... YOU may not be able to justify the cost/benefit analysis, but I can!

    3. Malcolm Weir

      Funny thing: throughout these discussions, the word "Optane" is being used as "non-volatile main memory".

      However, my use of Optane has been for stonkingly fast SSDs _with two orders of magnitude better endurance_.

      Stick my hot data on Optane SSDs, my warmish stuff on flash SSDs and/or HDDs, and my cool stuff on... something. Tape? AWS buckets? Don't really care by this stage... as long as I know how to get to it!

    4. aki009

      I noticed a bunch of people thumbed you down. Must be some sort of reflex not liking to hear what's really going on.

      To add to your post, I believe you left out one more reason for Optane always being destined to go the way of the Dodo bird: OS support.

      In order for Optane to make a splash, it'd have to neatly fit into something that takes advantage of it. When it comes to providing support for truly new hardware capabilities, Microsoft moves slower than molasses in January. Or if it tries to move faster, say at a snail's pace, it'll introduce such low quality code that monkeys hitting keys randomly could do better.

      Yes, other OS's are out there, but it seems those did not present a big enough business case to justify the continuing investment in Optane.

  3. Doctor Syntax Silver badge

    The thing about files is that they provide pieces of storage with a purpose.

    This collection of bytes is a cat picture, that one is a letter, and the one over there is a spreadsheet. I want to delete the cat picture and email the letter (or vice versa). Even as regards programs: this one is the desktop manager and is in use pretty well all the time. That one is dia, and I only use it every few weeks or even months when I need to edit a diagram. By allocating names to them we're able to manage them.

    Files are an essential part of managing data. Even without secondary storage we had files. They were boxes of cards or (probably) card images on tape.

    1. Anonymous Coward
      Anonymous Coward

      One thing may be data files...

      ... but application files could be handled differently.

      Optane could keep the OS and applications "in memory" as some old computers did with them in ROM. Remember when you turned on a Spectrum or Commodore? It didn't need to "load the OS from disk" - the OS was already there. Some had applications too. Their limitation was that it was ROM, so not changeable - Optane would still allow the OS and applications to be updated, and new ones loaded.

      Of course it becomes a different space and the old load/execute workflow is no longer useful. It becomes just a matter of mapping virtual addresses to memory addresses for execution. Because of that you probably also need security protections like those designed, again, forty years ago, which allowed Intel CPUs to have executable memory that was not readable by applications, and not writable. Again, ignored by OSes designed for older CPU designs.
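
      (A rough POSIX illustration of that write-xor-execute idea - nothing Optane-specific, and the page contents are just stand-in bytes; a real execute-in-place loader would map the program's code pages from the persistent memory itself:)

      /* Toy W^X sketch: a page starts writable (to be filled with "code",
       * e.g. copied or mapped from persistent memory), then is flipped to
       * read+execute and is never writable again. Nothing is executed here. */
      #define _DEFAULT_SOURCE
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>

      int main(void)
      {
          size_t len = 4096;
          unsigned char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (page == MAP_FAILED) { perror("mmap"); return 1; }

          memset(page, 0xC3, len);                  /* stand-in "code" bytes */

          /* Flip to read+execute: now executable, never writable again. */
          if (mprotect(page, len, PROT_READ | PROT_EXEC) != 0) {
              perror("mprotect");
              return 1;
          }

          munmap(page, len);
          return 0;
      }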

      I agree with the author that IT made a great leap backwards in the past twenty years. People started stubbornly to look back instead of forward. Unluckily, Unix was what was used in most academic institutions instead of more commercially oriented systems like VMS - so most people wanted that and are still stuck in the 1970s and its mantras, most of which are really outdated today.

      1. doublelayer Silver badge

        Re: One thing may be data files...

        We have that. Write a memory contents file to whatever storage device you like, which can be optane, and restore it. In the older computers, it happened that that file was the only thing on the ROM chip and so you didn't have to use a filesystem to find it, but you can do the same with this if you want: have a partition at a specified offset and find it that way. If you want to use this as direct memory, that works too. The functionality to do that is already available; it doesn't require optane, but can use it, to accomplish the goal.

        1. Anonymous Coward
          Anonymous Coward

          "Write a memory contents file to whatever storage device you like [...] and restore it"

          You're utterly missing the point. You don't need the load/execute/unload sequence. The application is already in memory and stays there - you just need to map it into the virtual address space - a step not needed in older PCs where there were no virtual addresses.

          1. doublelayer Silver badge

            Re: "Write a memory contents file to whatever storage device you like [...] and restore it"

            No, that's exactly what I said. Starting an application from disk requires initialization. You could take an image of it after initialization and store that, so next time you return to the program, it has an exact copy of its running memory state. You don't need optane to do that. All optane does is that, instead of copying from RAM to SSD, you worked in optane and left it there. There's a speed boost doing it that way, but it's not guaranteed to be useful.
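
            (A toy version of the "image the state after initialization" idea, with the path and the "state" invented; with persistent RAM the image is simply the live memory rather than a file you write out:)

            /* First run: do the (pretend) expensive setup and save an image.
             * Later runs: read the image back and skip the setup. */
            #include <stdio.h>

            struct app_state {
                int    initialized;
                double expensive_table[1024];   /* stand-in for costly init work */
            };

            int main(void)
            {
                struct app_state st;
                FILE *img = fopen("/tmp/app_state.img", "rb");

                if (img && fread(&st, sizeof st, 1, img) == 1 && st.initialized) {
                    fclose(img);
                    puts("restored initialized state from image");
                } else {
                    if (img) fclose(img);
                    for (int i = 0; i < 1024; i++)       /* the "expensive" part */
                        st.expensive_table[i] = i * 0.5;
                    st.initialized = 1;
                    img = fopen("/tmp/app_state.img", "wb");
                    if (img) { fwrite(&st, sizeof st, 1, img); fclose(img); }
                    puts("initialized from scratch and saved an image");
                }
                return 0;
            }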

            In addition, unless you eliminated your RAM entirely and always worked from optane, you'd still have to occasionally copy the stuff from RAM over to it. Again, that's faster than copying down to disk, but is a necessary step if you want to freeze programs so they can be restored from persistent storage. Eliminating RAM would make a lot of things worse, and the article didn't suggest doing that, so your OS would end up looking very similar.

          2. Roland6 Silver badge

            Re: "Write a memory contents file to whatever storage device you like [...] and restore it"

            >The application is already in memory and stays there

            For this to happen, it needs to be loaded.

            In the case of ROM/EEPROM this is done outside of the runtime environment, in the development environment.

            A similar effect can be achieved by structuring memory so that third-party boards could have their ROMs located in the PC's memory space, with the system detecting these ROMs and executing code from them. However, this would break the Optane memory concept. Obviously, the model can get more sophisticated and so become like the USB interface, so that ROMs can be dynamically added and removed at runtime.

      2. Doctor Syntax Silver badge

        Re: One thing may be data files...

        Boot a Linux system and it probes for peripherals. I assume that Windows does too, and has done since Plug and Pray came into use. If it finds something, it loads the driver. If you initially boot your persistent memory device, use it, switch off and later restart having plugged in or unplugged something, how does it cope with that? If persistent memory means, for example, that it can be just switched off and on again, it will continue without making the necessary driver adjustments.

        I suppose you could have all the drivers loaded all the time, but then there are probably many times more drivers than need to be there, and the kernel needs to manage all of those. What worked on a Spectrum doesn't necessarily play well with modern hardware.

        To make use of a very different memory model requires a different approach to software; I think we're all agreed on that. The question is whose responsibility it is to develop that. Intel seems to have put the hardware together on the basis of "Build it and they will come." They didn't. I'm sure that Intel will have contributed drivers to enable Optane to work as a conventional filesystem device or as a cache, but it doesn't appear that they have done the development work for the novel approach that would be needed to make it a persistent memory device.

        When I bought my current laptop I had the option of configuring it with an Optane or an SSD. I looked at that, thought I'd vaguely heard of Optane but didn't really know what it was I was being offered or how I might be able to use it, and it seemed expensive for the size. So I plumped for a 2TB hard drive for /home & /srv and an SSD big enough to hold the rest at least a couple of times over.

        Optane didn't have the price per GB to make it a mainstream storage device, nor the software support to make it a mainstream persistent memory device. The former was probably an insuperable problem. The latter might have been a possibility, but it would have needed at least a proof-of-concept OS to take advantage of it and enough time for that to be built on.

        1. John Brown (no body) Silver badge

          Re: Have you tried switching it off and on again?

          "If persistent memory means, for example, it can be just switched off and on again it will continue without making the necessary driver adjustments."

          Probably the HellDesk didn't want it as it would put many of them out of work.

          Have you switched it off and on again? Whatdymean, the BSOD just came straight back up?

          Windows, and many apps, still suffer from memory leaks. How do you deal with them when everything is always "running", even after a power cycle? New ideas, and especially radically different paradigms in hardware, are severely restricted by most of the world being locked into Windows. If Windows "breaks", doesn't support the new hardware properly or simply can't be changed to support it without breaking other stuff, then it's only ever going to be niche. After all, Windows can barely support existing changes and updates without constantly breaking stuff that is already well understood :-)

          1. Anonymous Coward
            Anonymous Coward

            "Windows, and many apps, still suffer from memory leaks."

            What would be in non-volatile memory is the application code - the usual RAM would be used for volatile data. You would be able to shut down the system and restart it "clean" - it just means that when it starts, the code is already in memory; no need to load it again from disk. Volatile RAM would be initialized again.

            Anyway, many of these answers just show how IT is becoming a conservative sector where people are utterly afraid of change and would like to keep on living in the "old golden age" - while bashing Windows because it wasn't designed in the 1960s and has a GUI by default....

            1. Roland6 Silver badge

              Re: "Windows, and many apps, still suffer from memory leaks."

              >What would be in non-volatile memory is the application code - the usual RAM would be used for volatile data.

              Just like iOS and Android...

              I think whilst the author has some ideas they are missing the fundamental:

              algorithms + data structures = programs

              A file system is just a particular type of data structure for the storing and access of Blobs that has been mapped onto a specific hardware model. A system built around Optane will be using a "file system" data structure to organise the memory and enable access. Remember, whereas you or I can look into a box of random stuff and pull out individual objects, computers can't do that yet, so your applications and files/Blobs will be held within a data structure; objects within the Optane memory that aren't in the file system/data structure will be invisible to the system.
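
              (A toy sketch of that point, with all the names invented: strip a "file system" to its essence and it is just a naming data structure over blobs that happen to live in memory, persistent or not; anything not reachable from the root by name is invisible.)

              #include <stdio.h>
              #include <string.h>

              struct blob {
                  const void *data;   /* lives directly in (persistent) memory */
                  size_t      len;
              };

              struct entry {
                  const char   *name;       /* "holiday-2021", "IMG_0042", ... */
                  struct blob  *blob;       /* NULL means this entry is a "directory" */
                  struct entry *children;   /* first child */
                  struct entry *next;       /* next sibling */
              };

              static struct entry *lookup(struct entry *dir, const char *name)
              {
                  for (struct entry *e = dir ? dir->children : NULL; e; e = e->next)
                      if (strcmp(e->name, name) == 0)
                          return e;
                  return NULL;
              }

              int main(void)
              {
                  struct blob  pic   = { "...jpeg bytes...", 16 };
                  struct entry img   = { "IMG_0042", &pic, NULL, NULL };
                  struct entry album = { "holiday-2021", NULL, &img, NULL };
                  struct entry root  = { "/", NULL, &album, NULL };

                  struct entry *e = lookup(lookup(&root, "holiday-2021"), "IMG_0042");
                  if (e && e->blob)
                      printf("found %s, %zu bytes\n", e->name, e->blob->len);
                  return 0;
              }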

              Now, with everything held in memory and with high-speed interconnects, the concept of files/Blobs begins to change and you end up with things like Linda spaces.

            2. doublelayer Silver badge

              Re: "Windows, and many apps, still suffer from memory leaks."

              "What would be in non-volatile memory is the application code - the usual RAM would be used for volatile data. You would be able to shutdown the system and restart it "clean" - it just means when it starts the code is already in memory, no need to load it again from disk. Volatile RAM would be initialized again."

              That's not new and you don't need optane to do it. The concept of "cache this thing so you don't have to build it again" is well-known. So is "store the cached thing on a faster kind of memory so it's easier to get at it". Windows already shuts down by storing a memory image on the disk. What optane does to change this is provide a way to address the image as if it was RAM instead of requiring it to be copied to a disk. It speeds up what is already available and well understood.

              Of course, the first thing that would happen when booting an image like this is that the OS would copy the most important things off of the optane image into normal RAM again because normal RAM is faster and the OS wants its frequently-used cached objects in fast memory. Optane can, in this case, become a mixture between disk and memory: you page to optane instead of disk because it's faster, you cache nonvolatile objects in optane instead of disk also because it's faster, and you store more objects in optane instead of memory because it's bigger. All sensible actions that can benefit from a suitable memory type, but actions you can do on an SSD if you have no optane at the cost of a slight increase in latency.

              1. Roland6 Silver badge

                Re: "Windows, and many apps, still suffer from memory leaks."

                From your description, it seems the best use for Optane would be as the storage area for page files and temp files - namely, the Unix swap drive, which Windows didn't support for reasons that were reasonable back in 1990, and sort of supports with ReadyBoost and ReadyDrive.

                I suspect a potential use case would be in cloud servers where you want to maximise CPU utilisation and thus would want to keep loads close to the CPU. But then we now have PCIe 5.0 NVMe drives...

                I also suspect, given how recent generations of Intel processors are susceptible to side-channel attacks, that Optane - developed concurrently with these processors - is also wide open to such attacks.

          2. ScissorHands

            Re: Have you tried switching it off and on again?

            This is rich, considering the article is explaining how everything being a file (a POSIX concept which the main author of NT railed against on the record) prevented Optane from being really useful because it HAD to be a file and HAD to be handled as a filesystem (secondary storage).

            1. Roland6 Silver badge

              Re: Have you tried switching it off and on again?

              > a POSIX concept which the main author of NT railed against on the record

              I've not been able to find a copy of the case that was being put and what they were proposing instead; but note there were performance issues with POSIX I/O, in part because of the "everything is a file" philosophy.

          3. This is my handle

            Re: Have you tried switching it off and on again?

            I seem to recall having this problem with OS/2. Windows at the time got a pretty clean slate when you restarted (or, as often as not in those days, it restarted itself) but OS/2 seemed to come back in the same sorry state it was in when you rebooted it -- replete with the cached keystrokes & mouse clicks you feverishly created trying to get it to respond.

        2. Anonymous Coward
          Anonymous Coward

          "how does it cope with that?"

          Loading an OS from disk and running it, or mapping it from memory and running it - it will still bootstrap and perform initial configuration (if it was shut down). It's just that if most of what is needed is already in memory, it will be far faster. New drivers, if needed, may still be loaded from storage (or the network) and written to memory, and no-longer-needed ones moved to storage or deleted.

          "What worked on a Spectrum doesn't necessarily play well with modern hardware"

          That is true for Unix as well - what was good design in the 1970s is not necessarily a good design fifty years later on far more powerful and different hardware.

      3. nijam Silver badge

        Re: One thing may be data files...

        > ... so most people wanted that and are still stuck in the 1970s ...

        Having used both VMS and Unix in the 1980s, my overriding recollection is that most VMS features that distinguished it from Unix were arbitrary and quite complex restrictions, imposed only because DEC software designers were stuck in the 1970s.

      4. Anonymous Coward
        Anonymous Coward

        Re: One thing may be data files...

        Yes, OS in ROM was the standard for microcomputers; some allowed programming languages and utilities to be added by plugging an additional ROM or EPROM physically into the main circuit board. The PC BIOS is still stored in the same way and allows for BIOS upgrades. So Intel's offering was hardly novel.

        As to getting rid of filing systems, this is less of a benefit than you suggest, since modern FS load is fast. It is all very well having non-volatile code in memory that needs only to be called for it to be executed, but if it can also be written to directly as part of RAM then malware only needs to get around memory protection systems to infect a machine. Whilst most AV do RAM scans, not allowing an infection to gain a beachhead in RAM is preferable to trying to remove it afterwards, as malware can protect itself by having multiple copies of itself that revert any changes to the version in non-volatile RAM. The EPROM idea works fine; the OS in non-volatile RAM comes with all the same problems as a malware-infected BIOS ROM, i.e. swap it out or throw away the motherboard.

    2. Michael Wojcik Silver badge

      The thing about files is that they provide pieces of storage with a purpose.

      Exactly. The story of information technology, since long before there were mechanical computing machines of any sort, much less digital ones, is a story of partitioning and organizing information. Written language was invented specifically to label bullae and represent their contents. "Files" are contemporaneous with the earliest efforts to store and manipulate data.

      Works like Yates' Control Through Communication illustrate the evolution of this process. If POSIX filesystems bear a certain resemblance to earlier technologies such as pigeonholes, flat filing, and vertical filing, that's because those systems did the job.

      Personally, I'm far from convinced of the benefits of non-volatile RAM, variations of which we've had for ages. Keeping everything in memory is a whopping great increase in the attack surface, for one thing. Having multi-level storage and control over my working set is just fine with me, thanks anyway.

      OS/400 famously had its "single-level store", where all objects were mapped into a large virtual address space. Obviously the implementation had to make use of virtual memory and paging because the hardware couldn't physically support all the data the system had access to, but it implemented just the sort of "everything's an address" metaphor that Liam is asking for. It was OK. It was not revolutionary.

      There are other ways of organizing user information which make more sense for particular use cases, like Sugar's journal mechanism for children learning to use computing technology. The filesystem metaphor isn't necessarily optimal for every use case. But I'm struggling to think of a use case where single-level storage really conveys any significant benefit.

  4. elsergiovolador Silver badge

    Insane

    The power of Optane, that nobody seems to be talking about, was the ability to access small files with an order of magnitude lower latency than regular SSDs.

    Tasks that were I/O-bound because they required access to a large number of small files were on another level of performance.

    A laptop with an average CPU but equipped with Optane memory (I am talking about full Optane storage, not a regular SSD with an Optane cache) could smoke a beefy desktop workstation (provided it didn't run Optane too ;-) ).

    1. Tom7

      Re: Insane

      In a way, I think Optane was a good idea poorly timed.

      Ten years ago we all had spinning disks in our laptops, and it was transformative to replace the spinning disk with an SSD five years or so ago. Workloads had been disk-bound for decades while everything else on the system got orders of magnitude faster; suddenly, storage caught up several orders of magnitude. For most people, most of the time, their systems are now fast enough for their needs. Most people now look at their laptop and see how much slicker it is than five or seven years ago; the idea that storage could improve by another order of magnitude just doesn't hold that much attraction. If we'd had another ten years to get used to SSDs, we might be feeling the limits a bit more and faster storage would be more attractive.

      To interact a bit with the author's ideas, they write this as though we could have jumped straight back to a 1960s paradigm because Optane appeared. Never mind that back then software amounted to hundreds of bytes and running a programme was expected to take hours or days; the idea of having more than one programme running at once simply didn't make sense to people then. Attacking the filesystem as an abstraction for managing storage is all very well, but unless your software is going to go back to being a single process of a few hundred bytes, you have to have *some* sort of abstraction for managing it. No-one really seems to have done any work towards figuring out what that abstraction could be. Saying you just install an application into primary memory and run it from there, where it maintains its state forever, is all very well; how does that work if you want to run two copies of the same piece of software? If your answer is to separate data from code and have multiple copies of the data, how do you tell your computer to run a new one or pick up an old one? There is a new category of thing that is persistent process memory; how do you identify and refer to that thing? How does that model even work for something like a compiler, where you feed it a file and it produces another file as output? Is persistent state even useful there? If not, how does the abstraction work?

    2. Anonymous Coward
      Anonymous Coward

      Re: Insane

      >The power of Optane, that nobody seems to be talking about, was the ability to access small files with magnitude lower latency than regular SSDs.

      Probably because this is a surprisingly small niche. If you're in this scenario with enough files to slow down an SSD you're already into unusual territory, and most often you'd be better served by compacting the files than you would introducing the magic go-faster layer Intel wanted you to buy. Which is exactly what happens in most of these circumstances these days.

      Desktops already boot in seconds. We're deep into diminishing returns territory, which is why client space never adopted Optane.

      Server land is far better served by the compaction route, which brings additional benefits in terms of improving compression efficiency and far superior TCO.

      1. elsergiovolador Silver badge

        Re: Insane

        What comes to mind is developers and their node_modules directory. It was blazing fast.

        Things like running tests or compiling were much, much quicker, probably only comparable to having one's project directory mounted on a RAM disk.

        But imagine running tests and they'd finish before you open ElReg - you would feel like you'd been robbed of your procrastination allowance.

        1. Michael Wojcik Silver badge

          Re: Insane

          Shrug. Most of my test suites are network-bound. And when I'm writing software, I'm thinking-bound. If compiling is taking up a significant amount of my productive time, I'm Doing It Wrong.

          My point, of course, is that even this use case is limited. Some development might be sufficiently I/O-bound that it becomes a killer app for Optane, but apparently not enough of it to matter to the people making the purchasing decisions.

      2. skwdenyer

        Re: Insane

        Optane would have revolutionised the workstation space in the 1990s. At a time when we were experimenting with writing code into FPGAs to get orders-of-magnitude speed gains, running with persistent primary storage could have been a phenomenal additional tool.

        Ironically the place that Optane might score in the modern world could be in things like phones - modern devices just don't boot fast enough - they're still booting devices, not instant-on appliances. In fact, a whole class of embedded devices could operate this way and, by doing so, be powered-up just when needed and then put immediately back into a zero-power state.

        1. TRT

          Re: Insane

          I'm thinking a pairing with a RISC OS...

      3. Alan Brown Silver badge

        Re: Insane

        The reason I took optane in a few servers was simple: durability

        I was (and am) pounding the snot out of that space (backup spool space) and managed to kill SSDs, whilst spindles simply had too much latency and slowed everything down (dragged everything down to 180 IOPS when I needed at least 5000).

        Not having Optane means that I'm having to size the replacements with an eye to ensuring the drives won't go toes-up under the write+erase workloads expected over the next 5 years. All those extra layers aren't compensating for lower overall endurance when undertaking this kind of activity.

        1. TRT

          Re: Insane

          Dang! Of course. I had a project about 10 years ago now that required terabytes of fast storage. At the time I solved it with a 16-drive 2.5" 10,000rpm RAID array with cache. But Optane would have done the trick far better. Didn't think to revisit that project.

        2. elsergiovolador Silver badge

          Re: Insane

          Funny that, my first Optane drive died after two weeks and Intel was making a fuss about replacing it.

    3. rcxb Silver badge

      Re: Insane

      the ability to access small files with magnitude lower latency than regular SSDs.

      File systems already use un-allocated RAM space as cache. So you're talking about a very specialized case of lots of access to very small files (that can't be converted into larger files, like fields in a database) and also so huge a number of these files that there isn't enough RAM to cache them for higher performance access.

      1. elsergiovolador Silver badge

        Re: Insane

        Yes they do, but it takes time to get into cache if latency is high enough.

        It's pretty standard to have 100k - 1m small files in a Node project.

    4. Roland6 Silver badge

      Re: Insane

      >The power of Optane, which nobody seems to be talking about, was the ability to access small files with an order of magnitude lower latency than regular SSDs.

      SSDs have evolved and continue to evolve; is Optane competitive with NVMe PCIe 4.0 and PCIe 5.0 SSDs?

      Also, a nice benefit of SSDs is that they can be transplanted from one system to another with the contents fully readable. Optane can only achieve this if it too is organised as a disk drive...

  5. Pete 2 Silver badge

    One foot in the past

    > But Intel made it work, produced this stuff, put it on the market… and not enough people were interested

    For all its supposed innovation and speed of new products (some of which actually work), the world of IT is actually quite conservative. It only likes change if that takes it further in the direction it is already going.

    So the 8086 architecture was extended, embiggened and sped-up. But even a 5GHz i9 processor boots itself in 16-bit real mode. You might even (I haven't tried) get it to run code from the 1970s.

    I would expect that it is impossible for hardware to make the break, for the same reasons there is still COBOL being written today. The cost of radical change is just too high.

    1. Alan Brown Silver badge

      Re: One foot in the past

      We all know what happened when Intel tried to make the world move on from x86

      Having said that, a lot of that had to do with the replacement being more lipstick than pig

      What finally forced change was a (long overdue) power efficiency drive.

      One of the questions I always ponder as a "what if" is how the actual RISC core of Intel or AMD systems would fare if available natively instead of via the x86 emulation layers

      1. Liam Proven (Written by Reg staff) Silver badge

        Re: One foot in the past

        Depends which attempt, really.

        iAPX 432 bombed right from the start.

        Itanium they sold, to some poor suckers, and it limped along, bleeding out all the time, for nearly 2 decades.

        But in between were the i860 and i960, which did great and sold millions of units. The i860 was codenamed "N-Ten", and that is where the "NT" in Windows NT comes from.

        The i860 was a good product and a success but Intel got scared and refused to commit to it. The i960 was a crippled version so it wouldn't compete with the cash cow.

      2. Roland6 Silver badge

        Re: One foot in the past

        > is how the actual RISC core of Intel or AMD systems would fare if available natively

        I seem to remember that during the 1990s "Intel - ARM inside" was a humorous marketing tag, highlighting that Intel licensed ARM technology for their CPUs...

  6. vtcodger Silver badge

    Drums

    Apropos of not much

    In the late 1950s, before there were disks, there were (a few) magnetic drums. The idea was to mount read/write heads for a row of bits -- typically a CPU word width of them plus a parity bit -- then rapidly rotate a magnetic drum under them. Expensive. But quite reliable. Quite fast if the drum happened to be near the area one wanted to access. Sometimes, if you were very clever, you could make sure that happened and interleave drum access with computing. And they were easy to program -- feed the hardware a drum address, a buffer address and tell it whether to read or write. None of that moving heads, waiting, then waiting some more for the proper sector to appear that disks demanded... when the disks worked at all, which with the earliest units wasn't as often as one might like.

    How big were they? It's been quite a few decades, but my memory says the USAF AN/FSQ-7 computers had quite a few of them -- each with 4096 32-bit words. So, 16KB. Primarily, they were used to stage programs into memory in meticulously handcrafted pieces.

    1. WolfFan

      Re: Drums

      And sometimes the drums would, errm, stick. Allegedly there were multiple examples of US Navy A-6 aircraft making attacks above Hanoi and having the drums on their computers stick, so the Bomb/Nav guy would kick the computer to try to unstick the drum while the pilot lined up his approach. And, of course, while the North Vietnamese expressed their true joy at having the USN pop over for a visit. Unsticking the drum by kicking it gave a whole new meaning to booting the computer up.

      1. ICL1900-G3 Silver badge

        Re: Drums

        Some ICL George 3/4 installations had a High Speed Drum with, if I remember correctly, half a meg of 24-bit words; it was used for paging.

        1. Anonymous Coward
          Anonymous Coward

          Re: Drums

          Multijob on ICL4/75 sometimes had drums for RIRO of dynamic beads.

    2. Anonymous Coward
      Anonymous Coward

      Re: Drums

      On ACE there were mercury delay line storage units to complement the very small amount of CPU memory. The card programming was in time units - and a trick with calculations was to pluck values from the delay line part way through the "rotation".

    3. Liam Proven (Written by Reg staff) Silver badge

      Re: Drums

      You noticed that I put in a link to the Story of Mel, right?

  7. Nate Amsden

    as cheap?

    "Optane kit is as big and as cheap as disk drives."

    Seems to be way off. A quick search indicates 128GB in 2019 was about $695 and 512GB was $7000.

    If Optane was as cheap as drives it would have sold a lot more and Intel wouldn't be killing it. Augmenting a few hundred GB in a system obviously won't revolutionize storage in the way the article implies. If the cost had been low enough, all the storage could have been replaced and moved to the "new" model of accessing storage.

    1. Loyal Commenter Silver badge

      Re: as cheap?

      It seems it has three problems that killed it:

      1) It wasn't cheap, as you say. It might be cheap compared to fast read/write memory, but it certainly wasn't compared to slow (but getting faster) SSDs.

      2) As the article mentions, it isn't infinitely re-writeable (no, neither is flash memory, I know). If it had this advantage, it could really have been a viable killer of the SSD market.

      3) It's a solution looking for a problem. Most people are content for stuff in memory to be in memory (I have 32GB in my desktop, and that's more than enough for practical purposes), and stuff on SSDs to be on SSDs and take maybe a couple of seconds to load a big file. Or even on spinning rust, and take several times longer. The average usage pattern of a desktop PC is not to be constantly loading and saving large volumes of data, it is to load an application, do some stuff, then possibly save your work and do something else. Even that "save your work" paradigm is shifting, as more and more stuff is done in a browser, so that pattern becomes, "open a browser, open a bookmark, do some stuff". Optane doesn't naturally fit into any of these things.

      1. DS999 Silver badge

        Re: as cheap?

        The biggest problem is that it only had a niche between RAM (where it was slower but cheaper per bit) and SSDs (where it was faster but more expensive per bit)

        The problem was that SSDs improved much more quickly, so they narrowed the gap in speed while the price disparity became worse, and the niche narrowed.

        There wasn't room in the market for it: there were simply too few people willing to pay for it, and without sufficient revenue further R&D was not justified.

        1. John Brown (no body) Silver badge

          Re: as cheap?

          Back when Optane first began to appear in some computers, our service manager sent all us field engineers special instructions on what to do when dealing with an Optane-fitted PC or laptop. I forget what the info was now, something about doing a proper shutdown or something, because otherwise pulling it would irretrievably lose everything. The main reason my memory is so hazy is because a) my memory isn't as persistent as Optane and b) I never, ever saw one out in the field :-)

        2. Loyal Commenter Silver badge

          Re: as cheap?

          I could see a niche for it on a database or file server, as a cache for frequently read (but rarely written) records or files. I have to wonder whether the incremental speed increase it would give here would make a difference for all but the most performance critical applications though. The spooks may have loved it.

        3. nijam Silver badge

          Re: as cheap?

          There was another problem, also rather serious: namely, that the speed of Optane dropped significantly as it neared production quantities. It seemed to turn from a wonderful new invention full of promise into a new thing full only of wonderful promises.

          Maybe not an emperor without clothes, but perhaps one with only a posing pouch.

      2. Alan Brown Silver badge

        Re: as cheap?

        "it isn't infinitely re-writeable"

        Compared to even SLC flash: "it may as well be, based on real world experience"

        My use case _is_ "constantly loading and saving (and erasing) large volumes of data", but it's a niche operation for most purposes (until someone runs rm -rf / and you need your data back).

    2. Anonymous Coward
      Anonymous Coward

      Re: as cheap?

      >Seems to be way off

      Waaaay off. On a storage basis Optane was 2-3 orders of magnitude more expensive than HDDs, and 1-2 more than SSDs. It was priced to compete with RAM, not storage: against RAM it initially cost about two-thirds as much per unit of capacity and offered much, much more capacity per stick.

      But of course massively slower in latency and throughput terms and with a painful TCO penalty because of the limited lifetimes.

    3. Alan Brown Silver badge

      Re: as cheap?

      "Seems to be way off. A quick search indicates 128GB in 2019 was about $695 and 512GB was $7000."

      I paid £2k for 800GB in 2014 (HHHL) and more recently only slightly more for 1800GB (U.2 2.5")

      I'd say you found one of the more expensive resellers

      1. Roland6 Silver badge

        Re: as cheap?

        >I paid £2k for 800GB in 2014 (HHHL) and more recently only slightly more for 1800GB (U.2 2.5")

        Trouble is, to be included in mass-market end-user devices, i.e. laptops and workstations, those prices are way too high. To stand any chance in this market and achieve the penetration and effect the author desires would require pricing of sub-£200 for 2TB...

    4. Liam Proven (Written by Reg staff) Silver badge

      Re: as cheap?

      Yes, OK, I will own up to that. It was poorly phrased.

      "Cheaper than RAM, but in the size range of SSDs, with performance in between the two," would have been a better way to express it.

  8. Binraider Silver badge

    I'm sure in the right application Optane had a place. The marketing people couldn't explain why I needed it, and I haven't found a use for it.

    I'm not wholly convinced established organisational practices are compatible with non-volatile RAM. Working copy of a document? Put it on a shared drive. Volatile copy actually being processed? Put it in local RAM.

    A non-volatile, non-shared resource's purpose is somewhat unclear beyond trying to accelerate boot times. Chucking "RAM" up as a shared resource to a server cluster might have uses. But then you're limited by the network interconnect, so there's no particular advantage to having your storage on the RAM bus.

    What does booting from Optane do that booting from a solid-state disk over a decent interface doesn't do? The RAM-bus speed advantage over PCI Express, I suppose. Even PCIe 3.0 has enormous bandwidth, of course.

    Security concerns for persistent RAM are also a thing. The idea that RAM could be a place where malware could reside, persistent through even a complete power cycle is slightly disturbing for some applications.

    So, yes, well done for Intel for trying new ideas, but they need to be able to explain why someone might want one for it to stick.

    1. Hiya

      "The marketing people couldn't explain why I needed it, and I haven't found a use for it"

      That's because Intel marketing isn't about justifying the relevance of any of their products/solutions to the market - it's just about telling the market that because it's from Intel they need it, and producing big posters with lots of buzzwords shouting at you.

      I sat (dozed) through countless internal Intel presentations (gotta show all those PowerPoint slides!) which made it very clear that Intel makes products/solutions because it can, rather than understanding what the market actually needs and developing relevant solutions.

      But hey whilst there's still time for the lifers to accumulate their super-sized benefits why stop a good thing!

      Oh wait....

    2. Loyal Commenter Silver badge

      Security concerns for persistent RAM are also a thing. The idea that RAM could be a place where malware could reside, persistent through even a complete power cycle is slightly disturbing for some applications.

      Of course, everything must be encrypted by default as well, in order to have any sort of security. Otherwise, what's to stop someone unplugging that persistent RAM from your computer and plugging it into theirs, and getting at all your juicy in-memory data? Encrypting it properly is also probably non-trivial (security is hard). Where are your keys stored? On a TPM module? Is that glued into your motherboard?

      1. TeeCee Gold badge
        Meh

        If a miscreant has physical access to a machine for long enough to take it apart and remove / add bits, your security is fucked anyway, regardless of what was or was not encrypted at the time.

        HINT: In the case you outline, it's far simpler, quicker and less likely to be detected to just add a monitor to take a copy of everything while it's decrypted for use and send it to you.

        1. Anonymous Coward
          Anonymous Coward

          This is a narrow view. Memory volatility protects against a whole host of threat vectors. Someone "bringing a monitor and plugging in and browsing through by hand" doesn't really make the top half of the league table, not least because said miscreant would still be unable to log in.

          If your memory is persistent and unencrypted, people could simply smash and grab the sticks, for example [without having to faff about with cryogenic cooling]. Or you could do clever timing tricks to figure out what was last in the block that's just been assigned to you. Or the miscreant in your equipment disposal chain can hoover up the final state of the machine they're meant to be disposing of. Or an accidental overflow in another application can expose something from days/weeks/months ago that the stick hasn't wiped.

          And so on, and so forth. Volatility of the memory space is near-fundamental to the way we do security in the real world. In that vein, the proposals put forth by the article's author - where the memory space is expanded to cover the whole set transparently - would mean users having to explicitly manage working items as volatile/non-volatile, and that's a recipe for disaster. It's hard enough getting people not to handle password fields as Strings.

          1. Loyal Commenter Silver badge

            If your memory is persistent and unencrypted, people could simply smash and grab the sticks, for example [without having to faff about with cryogenic cooling].

            Even with current DIMMs, the contents are non-volatile enough to do this without cryogenic cooling, or any cooling at all in most cases, if you are quick enough. There was an article about it here a while back; I can't be bothered to go and find the link again. Your quickest way of finding it may be to scroll through my past posts...

            (edit: if you're that bothered, it's halfway down the third page, at the time of posting this)

    3. Doctor Syntax Silver badge

      "trying to accelerate boot times"

      I suspect the time loading from SSD into RAM is now a good bit shorter than the time taken to probe the hardware, find what's there and initialise it.

      1. Korev Silver badge
        Childcatcher

        Not to mention every single application known to man deciding to fire up a daemon/process and then update itself and/or spam you with alerts...

  9. TimMaher Silver badge
    Windows

    Re:- “Drums”

    Some decades ago, a mate of mine told me a tale of when he worked at NPL in Teddington.

    They had a very heavy, vertically mounted storage “disk”, much like a UPS centrifuge wheel (anyone remember them?). Apparently it had to rotate during the day, something to do with the Earth's movement.

    Anyway, one day it came loose, spun off the axle, zoomed across the computer suite (remember them?), bashed through a door, down the steps and into Teddington park.

    At least that’s what he told me.

    1. Jim Mitchell
      Alert

      Re: “Drums”

      Many serious flywheel applications have the spinning bit in a hole in the ground, so that if things go terribly wrong the damage is minimized and nothing besides the installer's career goes rolling out the door and into the local park.

    2. Doctor Syntax Silver badge

      Re: “Drums”

      If it was anything like the fixed disk installed in the QUB mainframe back about 1970, the axis would have to be aligned with the Earth's, IOW it had to be set up to allow for the site's latitude. If it wasn't aligned, precession would have ruined its bearings.

    3. david 12 Silver badge

      Re: “Drums”

      One of my friends was on the bridge of a ship when the power went off. They didn't think anything of it: the power went off frequently, then came back on again. But it stayed off. And stayed off. And then someone was frantically trying to lock the gyroscopic compass, and then everyone was standing on their chairs while the gyroscope, which had tipped and torn through its housing, was whizzing around on the floor.

      My ex-navy dad was beyond disgusted that the merchant marine could casually let the power go off and not immediately lock the gyroscope.

  10. Joe Dietz

    Solution seeking problem

    Like many cool things... if the first step is 'change everything': you've failed.

    Successful technology improves upon previous generations or layers in a way that acknowledges the technical history and keeps the old stuff working. See also: Windows backwards compatibility (yes, yes - they seem to have lost their way here a bit... but it built an empire for sure). Unsuccessful technology asks you to first change everything and do something a new way. See also: IPv6.

    The latter _can_ work... but it has to be pretty damn compelling.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Solution seeking problem

      And yet, the microcomputer changed everything...

      And then, mass-market integrated micros...

      And then, 16-bit micros...

      And with them, GUIs.

      And then 32-bit micros. And then proper pre-emptive-multitasking OSes.

      All of them changed everything, mostly with little or no backwards compatibility.

      But now, we have VMs. We can encapsulate the old stuff, run it in nice sealed boxes, and lose nothing.

  11. Numen

    But it's not new

    The concept of Optane as a second layer of memory has been tried before, and it hasn't been successful any of the times it has been reinvented. It had only niche appeal, for a number of reasons.

    And remember - it was cleared every time you rebooted. No storing stuff across boots. That could be a security issue, and you might not reboot the same OS and application right away, say if there was a failure in the node. You could end up with obsolete data you'd have to clear anyway.

    It's both the idea and the implementation that have to work. Not the case here.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: But it's not new

      I have several objections.

      [1] When was this tried before?

      [2] I am not proposing a second layer. In fact, I am proposing removing an entire layer.

      [3] It need not be cleared every boot. That's a minor implementation issue.

      [4] Even a machine with PMEM can be rebooted. State can be discarded and the system re-initialised. It's persistent, not WORM.

  12. Lorribot

    Intel are the problem

    There are a number of things here.

    It was developed by Intel, and Intel have not got a clue how to talk to people, so they dumbed Optane down and sold it as a hard disk replacement (this is the first time I have seen a proper explanation of what was intended). They over-promised (I remember claims of 1000x faster than current disks). They develop stuff so slowly (discrete graphics cards, anyone?) that they get bypassed by more mature technology by the time they get it to market: NAND just got faster and cheaper, so they ended up with an SSD that had no USP. The DIMM stuff didn't turn up any time soon either, it was almost as expensive as RAM (which keeps dropping in price), and the only ones who could afford it were enterprises, who didn't see the benefit of a persistent middle tier of storage that complicated things, since servers are always on. And Intel were the only ones selling it.

    If they could have been at least price-competitive with, or at least 5x faster than, NAND then they might have been able to make a case for it. Or they could have licensed it out to HP on the cheap to develop systems to support it, say alongside SAP or some of those big Oak Ridge supercomputers - no wait, they screwed that relationship with Itanium, another good idea looking for a problem to solve.

    1. This post has been deleted by its author

    2. JohnSheeran

      Re: Intel are the problem

      Ironically, Intel effectively killed the better alternative to Optane when it vowed not to support the memristor. While the memristor may have faced an uphill battle to be put in place sooner, it would have revolutionized computing (and still probably will) by blending primary and secondary storage into the same thing. It had a path toward migrating away from the current model without completely abandoning it, but it required companies like Intel to at least consider it as a viable technology. They didn't, and opted for Optane instead because they controlled it - and now look at them.

  13. Detective Emil

    Licensing

    From Wikipedia:

    JVC, which designed the VHS technology, licensed it to any manufacturer that was interested. The manufacturers then competed against each other for sales, resulting in lower prices to the consumer. Sony was the only manufacturer of Betamax initially, and so was not pressured to reduce prices. Only in the early 1980s did Sony decide to license Betamax to other manufacturers, such as Toshiba and Sanyo.

    The only source you're able to buy those Optane DIMMs from is Intel or (formerly) close bedfellow Micron, and the only processors you have ever been able to use them with are made by Intel.

  14. Howard Sway Silver badge

    No current mainstream OS that only has primary storage, no secondary storage

    Not true. IBM's i series (or OS/400 as it was) has exactly this. OK, define "mainstream" however you like, but it's in use in lots of industries and still runs lots of large businesses. I worked in a couple of places where it was in use, and generally worked with it from its Unix command line, but that is just a shell that works upon the underlying object-based storage system, as is every other type of user program. Basically everything that is created is an object, and persists until it is explicitly removed. Whether this is in memory or on disk is completely under the control of the operating system.

    Its native command line is horrendous, but you can work directly on every object with it, regardless of where that object actually is in storage at any point in time. I would have hated this idea to be transferred to any sort of personal computer, but the architecture was definitely interesting and it's well worth a look if you've never encountered it.

    1. nijam Silver badge

      Re: No current mainstream OS that only has primary storage, no secondary storage

      > Basically everything that is created is an object, and persists until it is explicitly removed.

      So, a file then, in exactly the same sense that Unix says "everything is a file". By which they mean "there is a uniform, consistent way of accessing it, whatever it is."

  15. gnasher729 Silver badge

    So my understanding is that you want to treat it as RAM. Probably your normal RAM becomes L4 cache and you could run your software unchanged. Except you throw your database out and keep your data in RAM. And Word doesn’t save documents anywhere but keeps them in RAM.

    Until your app crashes. Then you have to restart it, and it needs to be able to restart cleanly with all the documents and the database still in place. Not really that difficult. Could be made to work.

    But then… what price?

  16. rcxb Silver badge

    Optane was not necessary

    You never needed Optane memory for that use case. In the old days, you could just design-in a battery backup system, and use RAM as your persistent storage.

    These days, you can just go out and buy NVDIMMs off the shelf. It's a use case fully supported by common server boards. You don't hear about it because there just isn't a killer app where a non-trivial number of workloads see real benefits from it.

    * https://en.wikipedia.org/wiki/NVDIMM

    * https://www.dell.com/support/manuals/en-us/poweredge-r740/nvdimm-n_ug_pub/introduction

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Optane was not necessary

      Optane was one architecture used for NV-DIMMs, you know.

      It's also much much MUCH bigger than DRAM based ones, which have zero size advantage over conventional volatile ones.

  17. vekkq

    The article says there are only two types of memory, but IMO it's sort of ignoring that processors themselves have multi-layered caches and registers, each faster than the last. It's a granular pyramid, with each layer a compromise between performance and price.

    1. TeeCee Gold badge

      Also worth mentioning that the IBM System/38 had single-level storage in 1977.

      A 4TB(1) memory map with everything in it for direct access: RAM, floppies(!), disks and tape.

      (1) More than enough back then. Max memory was 32MB and I did see one with a whole 7GB of disk...

  18. david 12 Silver badge

    Database

    Our small database system sits behind a battery-backed RAID controller, which is limited by the speed of the drive it is connected to, which in turn is limited by the fact that a transaction is not complete until it has actually been written to persistent storage. No amount of caching can speed up that final step. Even Intel's partner, who triggered this step by giving up on Optane, admits that their new large, fast SSD technology is no match for memory storage "because of the latency of the disk interface".

    Optane failed because of the inertia of the legacy industries (which I totally understand).
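
    To make that final step concrete, here is a minimal sketch of a durable commit in plain C (the file name and record are invented for illustration): the transaction cannot be acknowledged until fsync() returns, and it is precisely that wait on persistent media which battery-backed caches and persistent memory try to shorten.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Append one record and only report success once it is durable. */
        static int commit_record(const char *path, const void *rec, size_t len)
        {
            int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
            if (fd < 0) return -1;

            if (write(fd, rec, len) != (ssize_t)len) { close(fd); return -1; }

            /* The transaction is not complete until this returns: the data
               must actually have reached persistent storage.               */
            if (fsync(fd) != 0) { close(fd); return -1; }

            close(fd);
            return 0;
        }

        int main(void)
        {
            const char msg[] = "txn 42: debit 10.00\n";
            if (commit_record("journal.log", msg, strlen(msg)) == 0)
                puts("committed");
            return 0;
        }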

    1. rcxb Silver badge

      Re: Database

      No amount of caching can speed up that final step.

      Sure it can. Your RAID controller with the battery backup just needs to lie to your database, telling it the write was completed to disk the moment it went into the cache.

      Somewhere along the way you have to decide that a certain storage method is reliable enough, and that could be battery-backed RAM (cache) just as easily as the SSDs it's connected to.

      You need to decide your trade-off. Others might decide that writing to a single RAID array isn't reliable enough, and force the database to wait until the write has been replicated to a second, remote storage array.

  19. Anonymous Coward
    Anonymous Coward

    You can't really complain about 1970s OS design...

    There may be multiple possible designs for an OS, but ultimately the concept of a file is probably going to emerge. Something you can create, then later refer to, look at its contents and modify?

    Everything being a file basically means everything works like CRUD (create, read, update, delete)... Saying that we need something different is a bit like saying "How do we formulate something other than Newton's Laws?" You can do it... ...but it just ends up being the same thing looking at it from a different direction.

    In any case, Optane wasn't a fantastic new idea. It was a standard storage system that was far too expensive compared to an SSD... ... and a lot of marketing...

    1. Anonymous Coward
      Anonymous Coward

      "How do we formulate something other than Newton's Laws?"

      Relativity?

      1. Anonymous Coward
        Anonymous Coward

        Re: "How do we formulate something other than Newton's Laws?"

        I don’t think that Battery backed up RAM could be described as a breakthrough on par with the Theory of Relativity!

        I don’t think it’s a breakthrough TBH…

        1. Anonymous Coward
          Anonymous Coward

          Re: "How do we formulate something other than Newton's Laws?"

          Maybe not - but if someone hadn't started to look at reality from a different perspective we would have neither Relativity nor Quantum Mechanics, and people would still be wondering why Mercury doesn't obey Newton's Laws, or why atoms behave strangely.

          Anyway, Optane is different from battery-backed RAM - and if we never start to look at PC and OS designs from a different perspective, but believe "what our fathers and grandfathers did was good and anointed", we risk going nowhere. There's too much risk aversion right now.

      2. nijam Silver badge

        Re: "How do we formulate something other than Newton's Laws?"

        > Relativity?

        Except under extreme conditions (i.e. ones rarely encountered in everyday human perception), it gives essentially the same answer as Newton's laws. In other words, relativity offers backwards compatibility (at least within the domain of applicability of Newton's laws).

    2. Dave 126 Silver badge

      Re: You can't really complain about 1970s OS design...

      > Saying that we need something different is a bit like saying "How do we formulate something other than Newton's Laws?" You can do it... ...but it just ends up being the same thing looking at it from a different direction.

      It's a question of scale - I'm good with Newton if the application is playing snooker or landing on the moon. If you want to design microchips or GPS satellites, however, you'll find Newton insufficient. The quantumness of things makes itself known, as do relativistic effects if you get really good at measuring things.

      I don't know the tasks that tomorrow's computers will be put to.

      I don't know where their bottlenecks will be, or the cost / benefit equations used to optimise bang for the buck.

      We humans like the concept of files. But for how long will we be the architects?

      Do we biological life forms use 'files' in our heads? No, we don't.

      Do we expect an iteratively evolved system with selection pressures for efficiency and contingency to be clearly labelled? No, we don't. Biology is messy.

  20. Batlow

    Single Level Store implemented in AS/400

    Very interesting article, and I agree with a lot of it. I would mention that a single-level store, along the lines you describe, has already been implemented in one commercially significant system: the AS/400 (aka "IBM i"). OS/400 treats RAM and disk space as one continuous memory address space. To read data from a disk, the operating system just branches to the right address; the application programmer does not need to call any explicit I/O functions. There is no file system as such, everything is an "object" and endures across IPLs.

    I'm a bit conflicted about AS/400s. Unfortunately they are mostly found in extremely tedious commercial applications like payroll and warehouse inventory. And it's hard to get low-level spelunking tools if you're not an IBM field engineer. On the other hand, they are rather intriguing technology. Original architect Frank Soltis decided back in the 80s that spinning disks would be replaced with some form of electronic storage. So he designed the operating system so that everything was storage. The fact that some storage was RAM, and some was SSDs, was a mere hardware detail, transparent to user applications.

    1. captain veg Silver badge

      Re: Single Level Store implemented in AS/400

      The PICK system that I started my career on was much the same. No distinction between primary and secondary storage, just as many 4KB frames as would fit on the disk. Under the hood it was demand-paging, of course, but neither the apps nor even the system software knew about that.

      As it happens, the hardware had SRAM too, so you could switch the machine off and restart back into the same state.

      -A.

    2. NickHolland

      Re: Single Level Store implemented in AS/400

      general rule on the Internet: don't read the comments. Big exception: The Register's reader comments.

      And you, kind person, have just given me more insight into OS/400 than a few years of swapping tapes and blindly entering commands ever did (VARY ...) (and into PICK, too -- though I never sat in front of a PICK system)

      Really puts Optane into perspective... there was stuff ready to use it, but you weren't going to run Windows, Doom, or other cool games on it. Locked into Intel, it wasn't going to work, because if you redesign everything, you may want to change the basic processor, and that's been tried and failed before...

      Now I kinda want a job doing OS/400 stuff. Kinda. Doubt it is as much fun as Unix...

  21. MarkMLl

    Yes, /but/...

    This is something that Liam and I have been sparring over for the last ten years or so.

    The first thing I'd say is that on Linux - in fact I'd hazard on any modern Unix - everything /isn't/ a file: network interfaces aren't files, sockets aren't files, USB devices aren't files... and even where some device or OS interface /does/ exist as a name in the filespace, it very often needs a specialist API which controls it via IOCTLs.

    Second, if we do magically decide that we can do without secondary storage and have everything inside the application program(s) like a 1980s home computer or like a PDA how do we organise it and ensure that it will scale?

    I can sympathise with Liam's uneasiness at the idea of having data which isn't immediately accessible to the CPU. However, what is the alternative? There really does have to be some sort of organisation, even for something which has a single-level address space, and if we assume that the leading contenders are environments like Lisp or Smalltalk we have to ask: how is internal data organised, and in particular how is any form of security implemented?

    The original Smalltalk books (Goldberg et al.) casually remarked on cases where system owners were free to change low-level details. However the early non-PARC implementors were quick to point out that such things made systems virtually unmanageable since there was absolutely no way that a program (some species of object bundle) could make any assumptions about what already existed on the system.

    To the best of my knowledge, there is no persistent single-level environment where every object has an owner and well-defined access rights. Hence there is no way of saying "this stuff belongs to the user and can be read by nobody else", "this stuff belongs to the underlying environment system and can only be updated by its maintainers", and "this stuff is the intellectual property of some middleware bundle released as open-source, and if modified it can no longer be passed off as The Real Thing".

    As such, I have reluctantly decided that the idea of file-less flat systems is a non-starter.

    1. DuncanLarge

      Re: Yes, /but/...

      > network interfaces aren't files

      Yes they are.

      > sockets aren't files

      Yes they are.

      > USB devices aren't files

      They most certainly are.

      Just because you may not have these files visible in the tree doesn't mean they are not files.

      There are some things that are unfortunately not given file handles. But that "rot" was implemented in Unix a long time ago, and the creators of Unix saw it and made sure they resolved all those issues in Unix's replacement, Plan 9, where literally EVERYTHING is a file, including your network.

  22. yetanotheraoc Silver badge

    Why would it be so good to not have files?

    "If files did not exist, it would be necessary to invent them."

    From the article: "To get to the heart of this, let's step back for a long moment and ask, /what is the primary function of a computer file/?" And, the article immediately gets into *storage*, as if file and storage are inextricably linked. Not so. The primary function of a computer file is a collection of bits, with a beginning, an end, and a known sequence (which can be represented by a hash or signature). That's it. The storage angle from the article is a rather Unix-y view of a file as a collection of bits *on-disk*. Once we realize that the Optane memory also has files, it's a short step to seeing a need to sometimes persist a file to a disk. Having persisted to disk, then we would sometimes need to load from disk, and now we can see that Optane is indeed revolutionary, but in a different sense: "to everything there is a season".

    I took a look at the comments to the linked Phantom OS article. All of them could legitimately be copy/pasted here.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Why would it be so good to not have files?

      No, I disagree with you.

      If I declared an array in an arbitrary programming language, presto, there is, quote,

      > a collection of bits, with a beginning, an end, and a known sequence (which can be represented by a hash or signature)

      But no programming language makes that inherently a file. It's not a file; files are found by name, or by some other unique identifier, and the OS or language must make a request of another device to retrieve that sequence from secondary storage into primary (if it will fit) or request a specific part of it (if it won't fit).

      The concept of an allocated block of storage with known structure and contents is not in any way tied to the medium that it's on.

      What defines a file is not its size or its structure. What defines a file is that it's not in primary storage, but secondary (or tertiary or quaternary, even), and it must be _opened_ or _retrieved_.

      Whereas with a single-level storage model, there is no retrieval. It's all in memory all the time. Every word of every structure is always right there, directly accessible without any retrieval or search process.
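
      To make the distinction concrete, a small C sketch (the names are invented for illustration): the two-level version has to ask for the bytes by name and copy them into memory; in a single-level model the structure never left memory, so "finding" it is just following a pointer.

          #include <stdio.h>

          struct record { int id; double value; };

          /* Two-level world: the data lives elsewhere and must be retrieved. */
          struct record load_record(const char *name)
          {
              struct record r = {0};
              FILE *f = fopen(name, "rb");    /* find it by name...            */
              if (f) {
                  fread(&r, sizeof r, 1, f);  /* ...and copy it into memory    */
                  fclose(f);
              }
              return r;
          }

          /* Single-level world: the record is already in (persistent) memory;
             "opening" it is just taking its address. */
          struct record *find_record(struct record *heap, size_t index)
          {
              return &heap[index];
          }

          int main(void)
          {
              static struct record persistent_heap[4] = { {1, 3.14}, {2, 2.72} };
              struct record *r = find_record(persistent_heap, 1);
              printf("record %d = %g\n", r->id, r->value);
              return 0;
          }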

    2. Roland6 Silver badge

      Re: Why would it be so good to not have files?

      > The primary function of a computer file is a collection of bits, with a beginning, an end, and a known sequence (which can be represented by a hash or signature). That's it.

      Not quite: whilst that describes the contents of a file, it does not really describe its purpose, which the developer of Phantom OS notes: "But a file in Phantom is simply an object whose state is persisted."

      In other comments I've noted that a file is an object that is both application- and machine-independent and transportable/accessible to other machines and applications. I think these attributes differentiate a file from other data structures, including those used to represent the contents of the file to enable in-memory manipulation.

  23. anderlan

    TLDR Cheap read only section of RAM?

    Sounds like Intel needed to have rethought the kernel itself with patterns that could use this. Nothing wins an argument (or market) like working code. Working code using gorgeous massive textures in a game or gorgeous massive universe-encompassing models in machine learning. I could see both benefitting.

    Heck, as for the latter application, the idea of a model is that if it's mature [i.e. in production] it seldom changes -- that's great for this cheap, write-limited RAM. When you are a baby, your network prunes itself like crazy (writing, destructively no less!); then, as your model of reality solidifies, it becomes less dynamic.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: TLDR Cheap read only section of RAM?

      [Author here]

      Yes, I agree, if I read you correctly.

      While HP got all excited about whole new machine architectures before its tech was ready to scale to large size and mass production, Intel needed to throw money at some small-scale OS R&D to provide a proof of concept of how this tech could be used.

      HP's "the Machine" tried to come up with something new and take it straight to market.

      Remember that Linux was a toy when it was first introduced. So was Minix before it. Arguably, so was UNIX™ itself in the early years.

      Stuff takes time to mature 'til it's ready to deploy.

      Unikernels are a thing. A dedicated file-server OS was once a massive product. Dedicated router OSes were once a thing.

      It's OK to start small, with tech demos, and build interest from there. Changing industries doesn't happen in just a year or two.

  24. Anonymous Coward
    Anonymous Coward

    Not the idea, the implementation

    Yes, block-addressable secondary storage was always a compromise between speed and cost. For a long time it has been an acceptable one and we have architected around it.

    Yes, the new technology promised something really radically different: One, it's byte-addressable so you can read and write it like memory. And two, it's cheap compared to RAM. That means you can have a massive directly addressable linear memory space that you could even program to without having to think about it if your use case isn't too concerned about what's in fast vs slow memory.

    And that's even without the persistence.

    Add in the persistence and you have blurred or even erased the artificial distinction between "memory" and "storage". A lot of things we have built in the last few decades would, I think, have been designed very differently with this kind of architecture available. Relational databases or queues come to mind.
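
    As a toy illustration of that point, here is a hedged C sketch of a queue whose state lives directly in a byte-addressable region (the names are invented, and real persistent-memory code would also need cache-line flushes and fences for crash consistency): no write() calls, no serialisation format, just loads and stores.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Queue header and slots laid out directly in a byte-addressable region. */
        struct pqueue {
            uint64_t head, tail, capacity;
            int64_t  slots[];          /* entries follow the header in place */
        };

        static int pqueue_push(struct pqueue *q, int64_t v)
        {
            if (q->tail - q->head == q->capacity) return -1;   /* full */
            q->slots[q->tail % q->capacity] = v;               /* a plain store */
            q->tail++;             /* real pmem code would flush and fence here */
            return 0;
        }

        int main(void)
        {
            /* Stand-in for a mapped persistent region; with persistent memory the
               same pointer would come from mmap() of a DAX file, not malloc(). */
            size_t cap = 8;
            struct pqueue *q = calloc(1, sizeof *q + cap * sizeof(int64_t));
            q->capacity = cap;

            pqueue_push(q, 42);
            printf("head=%llu tail=%llu\n",
                   (unsigned long long)q->head, (unsigned long long)q->tail);
            free(q);
            return 0;
        }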

    The problem is not the concept; the problem is the way Intel built it (too proprietary), and took it to market (too narrowly restricted to Intel-only systems). As always, Intel's first priority with any project is "does this sell more CPUs?", and that led it to handicap Optane in a way that meant it would never be mainstream.

    1. Alex 72
      Linux

      Re: Not the idea, the implementation

      Yes, this could have won in the marketplace if Intel had opened it to all CPUs, including ARM, and had worked with UNIX/Linux vendors, OSS projects and Microsoft to build OS variants that used it, the way Google and Apple do before a hardware launch. If, as many people here believe, there was benefit to be had, demonstrating it at launch and opening it up to all potential customers could have built demand (it would have brought competition, but Intel would have had first-mover advantage and the segment would still be there now). Oh well, I guess it's a lot to expect Intel to manage their own IP carefully and to look out for the long-term health of their shareholders and the industry, when it is easier to put everything new in an Intel walled garden like Apple's, but with none of the benefits, because reasons.

  25. This post has been deleted by its author

  26. Roland6 Silver badge

    Still no file systems, though.

    There was a file system - it was alluded to in the article:

    To get it into the memory, it was read off paper: punch cards, or paper tape.

    These were stored in a physical filing system - hence the filing cabinet icon that GUI systems used for the file system.

    I think the author is overlooking the vast increase in computer performance and cost reduction in online storage that has happened over the decades.

    In the mid-1980s, 5.25-inch 10MB HDDs were relatively new; prior to these you could have had 1MB disk cartridges, yet 525MB QIC tapes were relatively cheap and plentiful.

  27. nautica Silver badge
    Boffin

    Absolutely superb article in all respects.

    1. Liam Proven (Written by Reg staff) Silver badge

      Gosh. Thank you very much!

    2. David G from Visalia

      Agreed. I had high hopes for Optane for exactly the reasons explained by the author.

    3. NickHolland

      Agreed -- I've heard a bit of noise about Optane in the past, lots of hand-wavium and "will solve all our problems" (and I've been in this industry long enough to know the ONLY problem reliably solved in IT is excess money in budgets), but I never had any idea what the point was.

      And now I know. Cool idea, completely at odds with our existing systems and thought processes, but... it had potential for totally new products. Kinda unfortunate that in the PC world we have been stuck on "faster, bigger versions of the 1981 IBM PC", and the OSes of note on this platform are based on 1970s ideas.

      Great article. Too bad more people didn't run across something of this caliber of explanation years ago.

  28. Roland6 Silver badge

    "It (Optane) was the biggest step forward since the minicomputer. But we blew it."

    Not so sure about that; ICL's CAFS was perhaps a bigger step forward in memory access. Optane is just full-throttle persistent memory.

  29. Crypto Monad Silver badge

    Misses the point

    "few in the industry realized just how radical Optane was. And so it bombed"

    No. The reality is: nobody wanted to buy it, and hence it bombed.

    IMO, the fundamental premise of the article is mistaken. Optane was never suitable for primary storage, for the simple reason that even DRAM is already a massive system bottleneck. Whilst CPUs have increased in speed by 3-4 orders of magnitude over the last few decades, DRAM has increased by maybe 1 order of magnitude. As a result, any access to DRAM can result in hundreds of cycles of CPU stall. In current systems there need to be three levels of cache between the CPU and the DRAM for it to function tolerably at all.

    Replacing your DRAM with Optane, making it another order of magnitude slower again, would make this far worse.

    The only way it *might* have worked is to use Optane as a further tier of caching between CPU and DRAM. But that requires reworking your applications and operating systems, copying data back and forth between Optane and DRAM as it gets hotter or colder - for at best marginal benefits.

    That copying is pretty much like swapping. Optane would have been a good location for your swap file. But if you're having to swap out of DRAM, your performance is already suffering badly, and Optane would just make it suffer slightly less. Similarly, Optane could have been used for the page buffer to cache data fetched from SSD - but if your data is a small percentage hot and the rest cold, then the hot is already cached in DRAM anyway.

    In short, it was an expensive solution looking for a problem. If it could have been made as cheap as SSD, then it would have won because of its higher speed and endurance. If it could have been made as fast as DRAM, then it would have won through lower cost-per-bit (and maybe some use could have been found for the persistence too). Neither of these was true. It was just another type of secondary storage but considerably more expensive than SSD, which the market considers "good enough" in that role.

  30. This is my handle

    If I could get a display that did that....

    ... I'd get a few large ones and bring up beautiful and interesting images on them which would persist when I unplugged my laptop. This would replace wall art. Yes, I know there's e-paper, but until recently it was not colorful or sufficiently hi-res.

  31. Anonymous Coward
    Anonymous Coward

    Is Apple possibly tip-toeing in this direction with Unified Memory? It may not be file system level, but does seem to eliminate the divide and potential bottleneck between discrete GPU cards and CPU motherboards with their own memory systems.

    1. Roland6 Silver badge

      I think Apple might be taking a slightly different and more affordable approach that is more compatible with existing technology.

      Remember, Unified Memory is about on-SoC memory shared between the CPU and GPU, not motherboard memory.

      In this configuration you could regard the fast on-SoC memory as a type of cache fronting (slightly) slower motherboard RAM. Introduce fast and relatively cheap PCIe 5.0 NVMe drives and a good virtual memory implementation, and for an end-user device you could probably remove the expensive motherboard RAM and go straight to secondary storage - most of the time users probably wouldn't notice a performance difference.

      Obviously, you would need to make a judgement call on the life expectancy of the NVMe memory; I suspect that if it is greater than 4 years of typical usage (i.e. comfortably longer than the 2-year warranty) then it is good enough.

  32. Grunchy Silver badge

    Faster cheaper more durable

    If Optane had only advantages over flash then it should have been more competitive, but it seems that wasn’t the case?

    For example, I hadn’t heard anything about Optane for at least 10 years. I thought it sounded pretty good at the time, but then never heard about it in the slightest ever since!

  33. Henry Wertz 1 Gold badge

    Too costly

    It was quite simply too costly -- it only worked with the highest-cost Xeon processors and server motherboards, and while it cost less than the DIMMs recommended for those servers, it was still costly enough that it cost more than getting some aftermarket DIMMs for the same systems.

    Maybe Windows would not have supported it properly, but Linux has several subsystems that could conceivably make good use of Optane. That said, the price for Optane appeared to be so high that it would have cost less to buy SSDs and preventatively replace them every so often if you're worried about writes wearing them out.

  34. birkett83

    You know mmap is a thing right?

    Linux (like all modern systems) uses virtual memory and provides the mmap() system call. If a program wants to access a file without going through a bunch of slow read and write system calls, it calls mmap() to map the file into its address space, and after that it can access the file as if it were any other memory location. When using a traditional block device, the kernel transparently loads parts of the file (pages) into RAM and flushes any changes back to disk. Linux does the same thing for executable files when you run them. When using Optane, the kernel developers could do the dumb thing and shuffle data back and forth between Optane and RAM the same as for any other block device, but those guys are smart and experienced, so they probably just set the page table entries to point directly to the physical address of the file in the Optane storage. I'm no kernel developer, I can barely wrangle a C compiler, so if I can think of this idea I'm going to go ahead and assume they already did it.

    TL;DR: the "everything is a file" architecture isn't causing any kind of performance penalty; programs can already access data stored on Optane memory in the exact same way they access data in RAM. We don't need to redesign the entire OS to use Optane to its full potential. I'm gonna go ahead and conclude this isn't why Optane died.
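
    For anyone who hasn't met the pattern, a minimal sketch of that mmap() usage in plain C (the path and size are invented for illustration): on a DAX-capable filesystem backed by persistent memory, the kernel can map these pages straight onto the device rather than staging them through the page cache.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            /* Hypothetical file on a DAX-mounted filesystem, pre-sized to 4KB. */
            int fd = open("/mnt/pmem/data.bin", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            size_t len = 4096;
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            /* From here on, the file contents are ordinary memory. */
            memcpy(p, "hello, persistent world", 24);

            msync(p, len, MS_SYNC);    /* flush changes back to the medium */
            munmap(p, len);
            close(fd);
            return 0;
        }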

    1. technovelist

      Re: You know mmap is a thing right?

      Yes, you can memory map the Optane DIMM storage into memory. In fact that's the only way to get direct access without time penalties.

      The good news is that both Linux and Windows do this very well. Even if you are running a Hyper-V Linux virtual machine you get direct access to the underlying storage.

  35. midgepad

    MUMPS/M and globals

    I know little of this stuff, but I gather MUMPS, and then M, regard the whole collection of data - a medical record system for instance - as a global.

    And it is mapped onto an area of hard drive.

    MUMPS has been around for a long time, but would perhaps prefer Optane to spinning rust already?

  36. technovelist

    A valid use case for Optane DIMMs

    First of all, a lot of people here are confusing the Optane SSD, which of course has an SSD interface, with Optane DIMMs, which were the truly revolutionary advance.

    The SSD was very fast for an SSD, around 5-10 microseconds per access, but obviously that is several orders of magnitude slower than RAM and can be accessed only in pages of 4k.

    The Optane DIMM was a different animal entirely, even though the underlying medium was the same phase-change storage in both cases.

    Optane DIMMs have read access times on the order of 150 nanoseconds, not too much slower than DRAM if we are talking about enormous amounts of data where the page tables have to be traversed to figure out the physical address.

    The use case I'm referring to is a gigantic hash table. I'm currently working on building a trillion record hash table occupying 8 TB of Optane DIMM storage, with the ability to read millions of random records a second, all on one machine. Of course it is also persistent, so you can build it once and process it with whatever programs you want later.

    This isn't possible with any other commercial technology.
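
    For the curious, a heavily simplified sketch of that kind of layout in C (the names are invented, and the real thing would need deletion, resizing, flushing and fencing): the table is one huge array of fixed-size slots living in a mapped persistent region, and a lookup is one hash plus a probe straight out of that region.

        #include <stdint.h>

        /* Fixed-size slot in a mapped persistent region (e.g. a DAX file). */
        struct entry {
            uint64_t key;              /* 0 = empty slot */
            uint64_t value;
        };

        /* SplitMix64-style mixer used as the hash function. */
        static uint64_t hash64(uint64_t x)
        {
            x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
            x ^= x >> 27; x *= 0x94d049bb133111ebULL;
            return x ^ (x >> 31);
        }

        /* Linear-probing lookup: one hash, then loads straight from the region. */
        static struct entry *lookup(struct entry *table, uint64_t nslots, uint64_t key)
        {
            uint64_t i = hash64(key) % nslots;
            while (table[i].key != 0) {
                if (table[i].key == key) return &table[i];
                i = (i + 1) % nslots;
            }
            return NULL;
        }

        int main(void)
        {
            /* Small in-RAM stand-in; the real table would be an ~8TB mapping. */
            static struct entry table[1024];
            uint64_t k = 12345;
            table[hash64(k) % 1024] = (struct entry){ k, 99 };
            struct entry *e = lookup(table, 1024, k);
            return (e && e->value == 99) ? 0 : 1;
        }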
