Starting over: Rebooting the OS stack for fun and profit

Non-volatile RAM is making a comeback, but the deep assumptions of 1970s OS design will stop us making effective use of it. There are other ways to do things. This article is an abridged version of Liam's 2021 FOSDEM talk on novel OSes for modern hardware. For a summary of his 2024 presentation, see this series.

  1. Graham Lee

    Other smalltalks/lisps

    In addition to Squeak, there's another popular, Free Software implementation of Smalltalk, Pharo (pharo.org), which has a lot more focus on higher-level development tools. On the LISP side, a lot of GNU components use the Scheme dialect GUILE to control a GNU OS at the lowest levels: the GNU Shepherd daemon manager and the GNU GUIX package manager. Another interesting component is GNU MES (a small LISP interpreter and a small C compiler that can bootstrap each other, for bringing up a whole system from a free software core).

    1. _andrew

      Re: Other smalltalks/lisps

      On the other smalltalks/lisps, one might want to consider Racket (racket-lang.org), a Scheme. It used to be built on C underpinnings, but it has recently been restumped on top of a native Scheme compiler/JIT/interpreter, so it's Scheme all the way down now. It also comes with an object system and its own IDE/GUI, and can do neat things like use graphics inside the REPL. It also has a neat mechanism to re-skin the surface syntax in a variety of ways, which is how it supports both backwards compatibility with older Schemes and other languages like Datalog. There's an Algol, apparently.

    2. cdegroot

      Re: Other smalltalks/lisps

      Pharo is mostly a fork/cleanup of Squeak; they share a VM and other important bits. Either would work, and so would any Lisp dialect, although SBCL has some advantages (you can use it as a systems programming language, emitting straight machine code, so you can stay much closer to the metal than with a lot of other Lisps and any Smalltalk).

      The biggest trick will probably be designing the thing to indeed seamlessly map the right sorts of memory onto the right locations. It's an interesting idea. And I do think that the current OSes should go away; none of them are very good.

      But also: don't embark on such a project before reading Worse is Better :-) It is exactly the reason we're stuck with crappy old operating systems and incomplete/weak programming languages.

    3. jake Silver badge

      Re: Other smalltalks/lisps

      I hacked on a LISP Machine for nearly a year when I was at SAIL, but finally I saw the futility and went back to hacking BSD on VAX. (Actually, a small cluster of vaxen, which DEC helpfully donated.)

      The only place I ever used the LISP experience in the wild was with the bastardized AutoLISP, starting about 10 years later and continuing to this day.

  2. Ian Johnston Silver badge

    Our main memory can be persistent, so who needs disk drives any more?

    How does the cost of NVRAM compare with the cost of an SSD (or even with spinning rust) and how is that likely to change? The rather old ThinkCentre on which I am writing this is currently using 2G out of 8G of RAM and approximately 1TB out of 2TB of disk.

    And I really can't see the link between NVRAM and the author's interesting and traditional stamping ground of LISP, Smalltalk, Oberon and so on. Would a new hardware paradigm not deserve more than a rehash of a different '70s OS than Unix?

    1. John Robson Silver badge

      I think the point is not to reinvent everything at the same time - that's far too much work.

      If you can make useful (toy) persistent memory devices, then at some point the hardware will reach the stage where "normal" computers start to look odd (I'd argue that they already sort of do in the era of the smartphone).

    2. cdegroot

      You _use_ a lot of RAM and disk, but do you _need_ it? I mean, I ran a graphical desktop GUI on *nix systems in the late '80s/early '90s. I'm 100% sure that a designed-from-scratch OS would be an order of magnitude smaller than the "cobble some random crap together and pray it works" style of OSes we use today.

      1. _andrew

        I like the 90s-vintage QNX demo disk: a single 1.44M floppy that booted into a full multi-tasking Unix clone with (very nice) GUI. Didn't have many spurious device drivers on there, but it was a demo, after all.

        Speaking of lots of RAM: the RAM in my current desktop (64G) is larger than any hard disk that you could buy in the 80s. The RAM (1 to 4M) in the diskless Unix workstations that I used in the 80s-90s is smaller than the cache on just about any processor that you can buy today.

        So you very likely could run a system entirely out of optane or similar, and rely on cache locality to cut down on your wear rates. I think that you'd want some DRAM though anyway: there are things like the GPU frame buffer, network and disk buffers and so on, that are caches anyway, and have no need to persist across reboots.

        As has been mentioned before: Android (and much modern software) manages the appearance of persistent storage quite adequately, and it does it through application-level toolkits and patterns that encourage deliberate persisting of "user state" to various database mechanisms, because the application execution model does not guarantee that anything will be preserved between one user interaction and the next. It isn't an execute-main()-until-exit model, with all of the attendant persistence of state.
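
        A generic sketch of that pattern in Python - not Android's actual API, just the shape of it, with made-up names - rebuild from whatever was persisted at start, write user state back out at every pause, and assume the process can be killed in between:

            import json
            import os

            STATE_FILE = "draft_state.json"   # hypothetical backing store

            class DraftEditor:
                def __init__(self) -> None:
                    # "on create": rebuild from whatever was persisted last time, if anything
                    if os.path.exists(STATE_FILE):
                        with open(STATE_FILE) as f:
                            self.text = json.load(f)["text"]
                    else:
                        self.text = ""

                def type(self, more: str) -> None:
                    self.text += more

                def on_pause(self) -> None:
                    # called whenever the user switches away; the process may be killed next
                    with open(STATE_FILE, "w") as f:
                        json.dump({"text": self.text}, f)

            editor = DraftEditor()
            editor.type("Dear Sir, ")
            editor.on_pause()   # after this, killing the process loses nothing the user typed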

      2. Michael Wojcik Silver badge

        That depends on your definition of "need".

        On my development laptop for work, by far the biggest virtual-memory pigs are browsers. I run two (one for sites that play nicely, the other for crap sites, web SPAs, and multimedia), and between the two of them they're using most of the RAM.

        Second and third place are SQL Server and Outlook. Yes, an email client manages to use nearly as much memory as the RDBMS. Well, Teams probably beats it because of the many Teams processes, each hogging resources. But it's safe to say that eliminating all the Microsoft "productivity" apps — all of Awful365 and Teams — would free up a chunk of resources that's roughly browser-sized.

        The browsers would be using much less memory if I didn't let them load images and other media. Then there are the bulked-up heaps caused by garbage-collecting Javascript and poorly written Javascript code running on Every Damn Webpage. Most of the rest of it is likely caching, and could be reduced by flushing to disk (though not in Liam-world, where there is no such distinction :).

        So at least in my experience on end-user systems, it's not the OS which is aggressively using system resources (though of course there are OS-side offenders as well). The biggest problem is applications, and users' appetite for multimedia is a big contributor.

    3. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > How does the cost of NVRAM compare with the cost of an SSD (or even with spinning rust) and how is that likely to change?

      It's futile to speculate on unreleased tech, but my colleague Dan Robinson has written about PMEM kit before the article I linked to.

      But looking at the one thing that made it to market, Optane was (much) cheaper than RAM and both (much) faster and (much) longer-lived than SSD. This is a pretty good deal for a new tech.

      > And I really can't see the link

      My point was that current mainstream OSes -- various Unixes and the spawn of OS/2 and VMS -- are very heavily disk- and file-oriented, to the point where it's difficult to do anything else.

      I wrote the SUSE documentation on SLE's PMEM support, and the way Linux turned fast NV memory on the memory bus into something a Unix could understand was... to emulate a disk drive with it, format it with a filesystem, and pretend it was a big fast disk.

      This is not SUSE's fault. It was good at handling new, high-end, cutting-edge stuff. But it's tragic that this is the way a "modern" OS can handle such technology: by making it 1970s tech, but faster.
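
      To make that concrete: with a DAX-mounted filesystem, application code still goes through open(), a file size and a pathname to reach bytes that are sitting on the memory bus. A minimal Python sketch (the /mnt/pmem path and file name are just assumptions for illustration, not anything from the SLE docs):

        import mmap
        import os

        PATH = "/mnt/pmem/counter.bin"   # hypothetical DAX-mounted location
        SIZE = 4096

        fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
        os.ftruncate(fd, SIZE)           # the filesystem still wants a "file" of some size

        buf = mmap.mmap(fd, SIZE)        # from here on it behaves like ordinary memory
        count = int.from_bytes(buf[0:8], "little") + 1
        buf[0:8] = count.to_bytes(8, "little")
        buf.flush()                      # ask the kernel to make the update durable
        buf.close()
        os.close(fd)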

      > deserve more than a rehash of a different 70's OS

      My over-arching point here is that there is _always_ someone with a Brilliant New Idea™ which will be ready for deployment Real Soon Now™ and will fix this. 99% of them don't survive.

      The real value is in looking for decades-old tech which _has_ survived even though it never went mainstream. If everyone except a few hardcore users ignored something, then maybe there is some special virtue in it, which is why it survived.

      Much 21st century tech R&D is iterative, nibbling at the corners of modern problems.

      In the 1970s and 1980s, people were much bolder, and unconstrained by rulebooks, they tried wild stuff. Some of that has hung on in there.

      Nobody uses Simula-67 now but its OOPS model informed C++. Nobody uses Algol, but all procedural imperative languages inherit from it. Ditto Snobol, but from that grew Perl and Python and things. Even Awk and Sed are relatively obscure now.

      But some of these old tools, such as Oberon and Lisp, continued in use despite never going big, rather than enjoying a brief flowering and then fading out of sight.

      For me that makes them worthy of special attention.

      1. John Robson Silver badge

        "Even Awk and Sed are relatively obscure now."

        Really - am I the new breed of COBOL programmer?

        That's... probably inevitable to be honest.

      2. Crypto Monad Silver badge

        > Optane was (much) cheaper than RAM and both (much) faster and (much) longer-lived than SSD

        Optane was (significantly) slower than RAM and (significantly) more expensive than SSD. That's why it failed.

        In current architectures, DRAM is already around 3 orders of magnitude slower than what the CPU can deal with - hence the need for 3 layers of cache between RAM and the CPU. Optane would only have exacerbated that problem.

        If you want a radical architectural rethink, then how about smaller, loosely coupled cores? Imagine, instead of having 16MiB of shared L3 cache, your CPU had 256 processors, each with 64KiB of local SRAM. Like a load of Commodore 64s on a chip. They could be running LISP or Smalltalk or whatever you like. If there are more than 256 things going on at once, then the entire CPU state can be paged out to DRAM, using bulk page mode transfers. Since each block of internal RAM is dedicated to a single CPU, all Spectre-like cache timing attacks are eliminated.

        Of course, this involves writing applications in a completely different way, being unable to depend on a single virtual address space accessible by all processors, and with more explicit message passing. Like Smalltalk does. Or perhaps Occam, also originally designed for lots of small CPUs talking to each other.
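
        The programming model itself isn't exotic. A toy Python sketch of "private memory plus explicit messages", with queues standing in for the on-chip links (this models the style, obviously not a real 256-core part):

            from multiprocessing import Process, Queue

            def core(core_id: int, inbox: Queue, outbox: Queue) -> None:
                local = {}                    # stands in for the core's private 64KiB of SRAM
                while True:
                    msg = inbox.get()         # blocking receive: the only way data arrives
                    if msg is None:           # conventional shutdown message
                        break
                    key, value = msg
                    local[key] = value        # private state; no other core can see or time it
                    outbox.put((core_id, key, value))

            if __name__ == "__main__":
                to_core, from_core = Queue(), Queue()
                p = Process(target=core, args=(0, to_core, from_core))
                p.start()
                to_core.put(("greeting", "hello"))
                print(from_core.get())        # (0, 'greeting', 'hello')
                to_core.put(None)
                p.join()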

  3. Kevin Johnston

    RAM/PMEM

    The final part of this took me back to a Z80 based home computer I had, a Memotech. If you bought the CP/M upgrade with the 3.5" floppy drive then at startup it would copy the disc down to RAM and run everything from there. This is only really different to the RAM/PMEM idea in terms of hardware, not concept...albeit that there was no practical way to copy back up without dragging out the shutdown process

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: RAM/PMEM

      [Author here]

      > at startup it would copy the disc down to RAM and run everything from there

      Never had one of those, but that's what CP/M Plus did on Amstrad PCWs. It configured (RAM-64kB) as an M: drive, copied the boot files to that, and then remained usable on a single-disk system.

      Maybe they were inspired by Memotech.

      Personally, the main way that Memotech inspired me was via its appearance in _Weird Science_... and TBH it wasn't the dominant feature of the film.

  4. abend0c4 Silver badge

    Technology moves in circles

    Computer design has always been about making the most of the available technology and making the compromises appropriate for the application at hand.

    There's always been a hierarchy of storage because there's always been an engineering trade-off between speed and capacity. That's why we currently have on-die registers and various levels of cache increasingly decoupled from the CPU, and then working storage, and then permanent storage. It's also why the word length of computer systems has grown gradually: some of the earliest vacuum-tube computers used serial logic (one bit at a time) to minimise components. Scaling that up has depended not only on increasing integration of components, but on manufacturing techniques that allow wide bus interconnects to run reliably at speed.

    We've had persistent working storage before - magnetic core memory and drum memory, for example - but its persistence was rarely more than a convenience; assuming your previous program hadn't trashed the lower part of core, you could restart the machine at the next power-on without having to toggle the bootloader back into memory with the operator switches.

    And we've had computers that didn't habitually shut down or restart on errors or for upgrades. Decades ago programmable telephone exchanges ran continuously and could upgrade their software in place while keeping existing calls in progress: if something is sufficiently exigent, it can be done.

    That's not to say that there have not been systems that created the architectural fiction of a single-level store in which everything is persistent. IBM's System/38 is an example: it had a protection system based on capabilities, and each "address" was a handle to a chunk of storage which could be operated on apparently directly, with the changes transparently persisted. This, however, is not how it worked in reality. Programs compiled to an abstract "machine interface" which ultimately translated into hardware operations in a much more conventional CPU with the aid of two separate levels of microcode.

    And that, finally, is the point. All computer architectures are something of a fiction (including the instruction sets), but a consistent fiction is a necessary basis for stable and sustainable software; we don't want our investment undermined by incompatible changes in the underlying assumptions. As technology changes, we can hopefully tell a better story, but it's important it's not too bound to a moment in time: the point about abstractions is that their usefulness depends on their longevity. And in the end, we can't defy physics: the electrons that travel furthest will be the last to arrive.

    1. Doctor Syntax Silver badge

      Re: Technology moves in circles

      "there's always been an engineering trade-off between speed and capacity"

      Between speed and price, and it's price - including the price in terms of power consumption - that determines capacity. Can it achieve the speed of DRAM at the price of DRAM, or close enough to lower the overall cost of the system?

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: Technology moves in circles

      [Author here]

      > Computer design has always been about making the most of the available technology and making the compromises appropriate for the application at hand.

      I think that's a false generalisation. It doesn't apply to things like the work at Xerox PARC, or to Prof Wirth's work at ETH.

      There is more to computing than the lowest-common-denominator mass-market stuff.

      There used to be amazing stuff coming out of the labs, but the trouble is that if the money-men couldn't see a way to make a buck from it, they decried it as toys.

      Linux is what saved Unix from such a fate. Unix went proprietary on proprietary chips, and could never compete with the mass market COTS kit. Linux put Unix natively on COTS x86 kit and thereby created a revolution.

      Well, maybe it's time to put a Xerox-like OS on COTS kit using FOSS in the same way. Not even necessarily on x86. Maybe on ARM.

      Teaching kids with a bloated mess like Linux is only going to appeal to a very specialised niche sort of kid. The sort of kid who easily assimilates and internalises vile 1970s BS such as case-sensitive filesystems, Vim, and write-only languages such as C.

      It's horrid. It's ghastly. But we're all used to it and take it as a given. It's not.

      A 10K line OS that can run Logo and Smalltalk and Dylan could maybe inspire rather more kids, and they'll invent the real future.

      1. Peter Gathercole Silver badge

        Re: case-sensitive filesystems and other thoughts

        I fail to understand the thinking that case-sensitive filesystems (really, you mean case-sensitive file and directory naming in filesystems) are really a bad thing.

        The only thing I can think of is that the DOS/WINDOWS world (and extending back into VMS, RSX and the mainframe world) is so ingrained in this thinking that anything else is unthinkable.

        But case sensitive file naming is not difficult to understand, and when you extend this to other written languages, it's absolutely essential that you can understand more than case insensitive English. Filesystems need UTF-8 support in modern computing, and if you have the wealth of characters available, surely it's less than intuitive to make "a" equivalent to "A" just for English. It becomes an arbitrary mapping that users of other character sets will regard as a weird and unusual quirk.

        I have said before that the past English-language dominance in computing, particularly US-flavour English, is a terrible arrogance, almost a cultural imperialism, of the English-speaking world. Yes, that world pretty much invented modern computing, but we need to get past that.

        I admit that I find it hard to cope with Kanji or Simplified Chinese file names on CDs sourced in the Far East, and there are huge numbers of other languages that are not based on the Roman alphabet, and when (not if) non-Roman characters become a significant part of the World Wide Web, life will become much different (we may need a translation service for filenames!)

        In a world of usable computing devices, where not everyone understands this technical debt or even English as a language, things have to become more diverse. If you want the world to settle on a lingua franca, there is no reason to think that it will be English over, say, Simplified Chinese.

        On the subject of a change in the OS paradigm with regard to storage, you have to extend your thinking to an even larger name-space that includes what we currently regard as network attached storage. There is no reason, if you are designing a new operating model, to limit your thinking to just local storage. I can envisage a world where there are few to no barriers to the way storage is presented to a user. Of course, you then run into problems with permissions and ownership of data. This cannot just be handled by visibility, so it is important that you retain the concept of identity in the OS, although maybe not the multi-user model that we have at the moment (although there are good things as well as bad in that).

        I did comment on this a long time ago when Optane and Memristor were in the news (I think it was on an HP "The Machine" article) but I can't find the post at the moment.

        One of the problems with a 'flat' access model that you may end up with using persistent storage is that you need to have some form of index to find data. This is effectively what a 'filesystem' is. In most current cases it's a non-binary tree index of the data stored in the files. I know I'm getting old, and have not embraced the cloud, but I find cloud object storage difficult to use, because there is no obvious structure imposed on the data objects by the storage mechanism. You still need some form of index/database to find the data, and making that more opaque may not hinder applications designed for this storage method, but it increases the complexity of a system rather than making it simpler.

        Current filesystem design does provide a solution to that problem, and may not be the hindrance you think it is. It may persist (possibly just as an extra index) well into the age of the 'flat' storage model.

        1. Michael Wojcik Silver badge

          Re: case-sensitive filesystems and other thoughts

          And for every "filesystem names should be case-insensitive", there's the obvious retort: "case insensitivity" is meaningless or poorly defined in some languages, and people who want to write in those languages might be displeased to have to learn weird Latin-1 rules for "oh, these distinct characters are treated the same way".
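
          For anyone wondering what "poorly defined" looks like in practice, here is a quick illustration using nothing but Python's locale-independent default Unicode case mappings (no filesystem involved):

            print("ß".upper())                        # 'SS'  - one character becomes two
            print("SS".lower())                       # 'ss'  - and doesn't round-trip back to 'ß'
            print("ß".casefold() == "ss".casefold())  # True  - full case folding is yet another rule
            print(len("İ".lower()))                   # 2     - dotted capital I lowercases to two code points
            # And none of this covers Turkish, where plain 'I'/'i' map differently per locale,
            # a rule no single filesystem-wide folding table can get right.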

          One of the problems with a 'flat' access model that you may end up with using persistent storage is that you need to have some form of index to find data. This is effectively what a 'filesystem' is. In most current cases it's a non-binary tree index of the data stored in the files.

          Yes, and the affordances of other structures and their drawbacks are not well-understood for this use case. I've seen proposals for using relations to identify objects in a flat store, for example, and whenever someone suggests one, others immediately bring up problematic use cases.

          Hierarchical filesystem naming was adopted in part because hierarchical filing turned out to be a really successful way of organizing information back around the turn of the twentieth century, and has continued to be pretty successful. Humans understand it, and our mental models of it are generally pretty accurate. (See Yates, Control Through Communication, for a discussion of the evolution of filing technology in the nineteenth and early twentieth centuries.)

          Sometimes alternatives seem to work pretty well for particular use cases. Sugar, the OS (really a shell on top of Linux, of course, because what isn't) for the OLPC, organized the user's products in a chronological "journal", IIRC. That's a clever idea that sort of works for the OLPC use case.

          The Single-Level Store and original non-hierarchical filesystem in OS/400 organized objects by various criteria, such as object type and name, but still found it necessary to have "libraries" and an MVS-style file-type object that was internally divided into "members". (There were other types of files that didn't have members, again somewhat like different types of MVS datasets.) You'd find an object by refining a query, in effect, if you didn't already know how to specify it. It worked, and continues to work for all those extant IBM i installations, but sometimes it worked better than others.

      2. ianbetteridge

        Re: Technology moves in circles

        "There is more to computing than the lowest-common-denominator mass-market stuff."

        But mass market stuff becomes mass market because it meets mass market needs. Oberon etc didn't, because they weren't designed to – they were meant for a different purpose.

        I'm left wondering what the problem is that you think all this will solve. It's interesting – and as a person-who-likes-tech I'm interested – but what fundamental challenge is it solving?

  5. Anonymous Coward

    Costs and Benefits......

    Interesting article!!

    But the people using today's technology will want to know quite a bit more:

    (1) What is the cost of scrapping today's stuff and replacing it all?

    (2) What is the cost of moving the USER things I do today over to doing the same USER things tomorrow?

    Note: not just new hardware and software......people need to be re-trained as well!!

    (3) And finally, once we have assessed the costs, do the benefits exceed the costs, say calculated over the likely lifetime of the NEW stuff (five years?)????

    The Apple Lisa is the poster child for this argument -- a fabulous piece of kit -- but too expensive compared with the IBM PC and clones. (See items #1, #2 and #3 above.)

    Oh...a personal note: it's taken me years to achieve even basic skills in dBASE, C and Python3......and you want me to learn Lisp or Smalltalk right now? I might take a look....but then again.....

    1. Doctor Syntax Silver badge

      Re: Costs and Benefits......

      Wasn't Lisa's real problem that it didn't get the Jobs fairy dust* sprinkled on it? Macs were also pretty expensive for a good deal less functionality.

      * Decisions about relative pricing might have been a fairy dust ingredient.

      1. that one in the corner Silver badge

        Re: Costs and Benefits......

        The Lisa required far more hardware than the Mac, and definitely way more than the early IBM PC, which helped push its price way up there. Who on Earth would ever want a computer that could take 2 MB of RAM and a hard drive? One floppy and up to 512kB was enough to get the work done all around the office.

        And a soft power-off function? Pah, the BRS is cheaper and all you need!

        (The Lisa GUI was a lot nicer than the IBM PC, but I'd been spoiled - the PERQ had a bigger monitor)

        1. Doctor Syntax Silver badge

          Re: Costs and Benefits......

          AIRI the first Mac was little more than a toy. It required a few upgrades to become useful. Jobs had fought the developers to design it to a price. He also did his best to make it non-upgradeable. Somebody sneaked in the ability to solder in a link to enable it to take higher-capacity memory without having to buy the bigger version next year or whenever it was. I wonder to what extent Jobs planned it as a bait and switch.

          1. Liam Proven (Written by Reg staff) Silver badge

            Re: Costs and Benefits......

            [Author here]

            > AIRI the first Mac was little more than a toy.

            That was the point of my retrospective:

            https://www.theregister.com/2024/01/29/mac_at_40_real_significance/

            But the clever bit was making a toy that showed the potential for just $2.5K.

            Very few people would pay $10K for something useful, but some will pay 1/4 of that for something fun and interesting.

    2. Julz

      Well

      It could have been worse. You might have majored in Javascript. Python is just wrong, C is a horrible set of compromises, and dBase?

      You mention cost a lot. I would posit that the cost of using inappropriate languages and systems is far higher than that of the odd training course (do they still exist?) and a bit of time to get your head around some syntax or concept. Look around at the mess that is the current insecure, bloated, bug-ridden, untested, etc. etc. set of systems we have to use. All built on top of built-to-the-lowest-cost hardware. The risk and the real cost have been passed on to us, the users. It's our data that gets stolen, it's our photographs that get 'lost', it's our time that gets wasted, it's our infrastructure that gets hacked, it's our cars that won't work without an internet connection, it's...

      What we have now is so, so much less than it might have been if the cost of making the poor decisions had been truly taken into account and not just the profit of whichever tech company was selling its wares.

      /end rant

      1. Anonymous Coward

        Re: Well

        @Julz

        Yes.....you are absolutely right about all of that. And "cost" is an interesting concept.

        If all the (correctly identified) defects that you list are such a problem, then why is nobody measuring the cost?

        If all these costs have been "passed on to us, the users", then why is there not a huge outcry about exploitation?

        The answer is "benefits"!! The benefits associated with the things which cause your list of defects....the benefits are perceived to outweigh the costs!!

        So there are two obstacles in the way of fixing the problems which you (correctly) identify:

        (1) Publicise plausible data about the cost of these problems....and....

        (2) Persuade the public that the "perceived benefits" are much less than the costs in item #1

        Good luck!!!!

        1. Julz

          Re: Well

          It's not the public that make the decisions on which CPU architecture to use, which language is better for which task, which development environment to use, whether to develop and support your own code or trust some crap off the internet, which OS brings the best blend of features and risks or any other of these types of system decisions. The public are innocent bystanders in a drive by shooting.

          We choose the gun (and the car) and it's usually the cheapest the company can get away with.

          1. An_Old_Dog Silver badge

            Re: Well

            Despite the low-retail-price advantages, I have purchased neither a Raven Arms P-25 pistol, nor a Yugo automobile. I do care about quality. I would love to have an Asus EeePC netbook and some spare batteries for it, again.

      2. Doctor Syntax Silver badge

        Re: Well

        "the cost of making the poor decisions"

        The poor decisions are usually made for the sake of operational convenience rather than security. Let's not bother with the inconvenience of a separate privileged user ID with a separate password, let's not enforce complex passwords, let's not fence off different bits of the network, etc.

        The hardware and languages will support doing better, but better costs some user effort and a few seconds of time here and there, and time is money...

    3. Greybearded old scrote Silver badge

      Re: Costs and Benefits......

      I think you missed the part where Liam said not to replace the existing systems, but to play with new toys until they become good enough. Just as with the Linux kernel.

    4. JoeCool Silver badge

      Re: Costs and Benefits......

      Did iPhones and Android devices and iPads require "scrapping today's stuff and replacing it all" or did they find a new way for people to use compute ?

    5. Liam Proven (Written by Reg staff) Silver badge

      Re: Costs and Benefits......

      [Author here]

      > (1) What is the cost of scrapping today's stuff and replacing it all?

      Who cares?

      The real advances came from hairy lunatics in labs, not from bean counters. To hell with them.

    6. vtcodger Silver badge

      Re: Costs and Benefits......

      Not only were Lisas expensive. They were also incredibly slow. The summary I heard from others was "Cute but useless". When I got a chance to try one, I had to agree.

  6. bolangi

    Eumel/Elan, developed by Jochen Liedtke of L4 fame

    Eumel was the OS, Elan the language.

    "The dream is a machine where you can just pull the plug and when you plug it back in, it picks up exactly where it left off."

    Eumel was close to that. Paging to the backing store (disk) was managed by the OS. There was no C compiler. No concept of writing a file to disk. Concepts Liedtke explored in Eumel led to the L3 OS and the L4 family of microkernels, one of the latter being used in billions of cell phones.

    1. OffTropics

      Re: Eumel/Elan, developed by Jochen Liedtke of L4 fame

      Maybe RISC OS can also do the trick, and https://riscoscloverleaf.com/ sell desktop machines endowed with British glory.

      1. Liam Proven (Written by Reg staff) Silver badge

        Re: Eumel/Elan, developed by Jochen Liedtke of L4 fame

        [Author here]

        I love RISC OS but honestly I don't think it has much of a future.

        If it could be extended to be SMP-capable, 64-bit, pre-emptive, etc. it could be useful again -- but it might not be RISC OS any more.

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: Eumel/Elan, developed by Jochen Liedtke of L4 fame

      [Author here]

      > Eumel was the OS, Elan the language.

      Thanks for this! I will look into it.

    3. DexterWard

      Re: Eumel/Elan, developed by Jochen Liedtke of L4 fame

      A machine which resumes where it left off is fine until some hideous software malfunction renders it inoperable. I recall this being quite easy to do by accident on Smalltalk machines by redefining a method that you shouldn’t have. You still need some way to reload the OS and your software in a working state after something goes tits up. Where does that data come from? Backing store of some kind. And you need a big red button to allow you to reload it.

  7. Neil Barnes Silver badge

    In the absence of files...

    How do you differentiate between the zillions of pages of deathless prose you have composed, and scratch notes that can be deleted? How can one generate a file with one program and open it with another? (Something I do very regularly). Perhaps I'm missing something obvious?

    (as a side note, I observe that a number of music applications seem to have given up on the idea of, for example, selecting a music album and playing it from start to finish. Instead, there is just one big pool of tracks, which one curates with playlists. I find this a pain - surely, the producer of the album has already curated it for you? (And I ignore the extension of this to not owning any music/not having any on your local storage))

    1. Steve Graham

      Re: In the absence of files...

      "How can one generate a file with one program and open it with another?"

      This was exactly what Android and iOS initially tried to prevent. "Files? I see no files, just apps and their data." Obviously, that was nonsense.

    2. Ian Johnston Silver badge

      Re: In the absence of files...

      How do you differentiate between the zillions of pages of deathless prose you have composed, and scratch notes that can be deleted?

      As I recall, one of the many, many failings of the original OLPC machines was that they had no directory structure or, effectively, concept of files. Instead it was all tasks, sorted in reverse date order. As with everything else in the project it all seemed to derive from a very privileged and patronising American view of what children in the developing world should want to do.

    3. the spectacularly refined chap

      Re: In the absence of files...

      How do you differentiate between the zillions of pages of deathless prose you have composed, and scratch notes that can be deleted? How can one generate a file with one program and open it with another?

      Very easily as it happens, I would suggest trying one of the systems instead of dismissing something out of hand because it is different to what you are used to. Documents still have labels and can usually be tagged, often multiple times which the file and directory model doesn't really accommodate - e.g. does that record of the Project Alpha budget belong with the Project Alpha stuff or the financial stuff - why, it's both.

      Indeed with the true OO platforms the very concept of an "app" frequently disappears; new software allows you to manipulate a new document type or enhances the capability to deal with an existing type. On the Newton it was sometimes surprising just how far the built-in Notes app could be extended with custom "stationery": at the simplest level these could be simple data acquisition forms, but it was frequently extended to full DB-like operations, and yes, images and audio too.

      1. AndrueC Silver badge

        Re: In the absence of files...

        Documents still have labels and can usually be tagged, often multiple times which the file and directory model doesn't really accommodate - e.g. does that record of the Project Alpha budget belong with the Project Alpha stuff or the financial stuff - why, it's both.

        Both Linux and Windows NT support the concept of links. It's a bit poorly presented in Windows, but it exists, so both can make a file or directory appear in multiple places in the tree, assuming the file system supports it - and for both OSes the most common file systems in use do support it.

        Having pointed that out however I don't disagree with the idea of doing away with 'files and directories'. Or at least hide them away as belonging to a particular application. If another application wants that data it has to go through the owning application.

        So instead of having a 'Word document' you have a Word application and other applications can query Word about the data it owns (which is basically DCOM, OLE et al, only we hope better). How Word chooses to store its data would be application specific, but I think that the concept of files and folders is a bit too well known and useful to be dispensed with, so most applications would use an OS-provided API for that. With this fancy new memory it's possible that API could be more streamlined to make the bytes appear to be in RAM (i.e. no need to load the data into memory to act on it). But here's the thing about that - Windows and Linux already offer this; it's called a memory-mapped file.

        None of this seems revolutionary to me. Under the covers some optimisations might be possible (i.e. memory-mapped files wouldn't need paging), but for most programmers it would make little difference, and if it's largely all possible anyway, why isn't it more popular? Is it because of the hacks required to do this with current hardware design, as the author might argue, or just that in the battle of survival of the fittest, files and directories won out?

        An interesting discussion for sure.

        1. Anonymous Coward

          Re: In the absence of files...

          And of course, having been managing files/folders for "a long time", now we are being pushed to ditch (at least the folders) in favour of metadata. So we toss all our files into Sharepoint and label them with metadata so we can select the file(s) we want from a massive list later. Nice idea, but the overhead of a bloated monster that eats resources like it's some sort of competition makes it ... a bit painful at times. So in some ways, there is movement away from "folders and files" for managing information.

          1. Richard 12 Silver badge

            Re: In the absence of files...

            Removing the concept of "files and folders" tends to break the user's brain.

            Take gmail, for example.

            Gmail does not have folders, instead it has labels.

            But nobody uses it that way. Almost everyone treats the labels as folders - and gets confused when an email thread has multiple labels.

            This is probably because there is no physical analog. Folders are a physical box to put things in. I put the thing in box A, it's still in that box later. I move it to box B, it's not in box A anymore.

            As a software engineer I'm perfectly happy with the concepts of pointers and references, but most people are not. In the industry I develop for we have entire training courses about referenced data, and yet there are still many users who simply don't understand.

            And a fair few software engineers who struggle, too.

            1. that one in the corner Silver badge

              Re: In the absence of files...

              > This is probably because there is no physical analog.

              There is - a directory! Or a set of directories.

              You can use the directory of local trades, or the directory at Companies House, or the local phone directory of every subscriber, or the directory of local council contractors, or the directory of members of the plumbers & gasfitters union, or the directory of Corgi[1] registrations - all of which can lead you to Joe Bloggs, of Bloggs and Sons (gas) Ltd, who fitted the new mayoral kitchenette.

              Which is why computers actually have directories full of files, not "folders", which is just a weird UI choice made by someone who created a GUI and it unfortunately stuck (a great shame, as we definitely still had lots of examples of real, paper, directories in use when that silly choice was made).

              [1] old name, but far more memorable than whatever it is now

              1. Doctor Syntax Silver badge

                Re: In the absence of files...

                Whether you call something a directory or a folder it's still an abstraction of a place in which to store other things, some of which may also be files or folders depending on the vocabulary you choose to use. You're confusing the abstraction with its implementation.

                1. that one in the corner Silver badge

                  Re: In the absence of files...

                  > Whether you call something a directory or a folder it's still an abstraction of a place in which to store other things, some of which may also be files or folders depending on the vocabulary you choose to use. You're confusing the abstraction with its implementation.

                  No - a directory is *not* "a place to store something" - it is an indirection, a reference to where the thing is stored.

                  Joe Bloggs is not "stored" in any of the paper directories - finding a reference to Joe then points to his physical location (or to another reference that you can then follow to finally reach him physically - e.g. a 'phone number).

                  >> Folders are a physical box to put things in. I put the thing in box A, it's still in that box later. I move it to box B, it's not in box A anymore.

                  That is why adding or removing a reference to Joe does not change the way he is stored, nor does it change any of the other references.

                  Similarly[1] a file system's directory entry is *not* the file itself, it is a reference to where the file can be found. You will often "move" a file from one directory to another, but that is conflating two directory operations into a single command: add the new dir entry, then delete the old dir entry. At no point[1] does the file itself change its location during this move.

                  The semantics of a "folder" and a "directory" are very different, but everyone continuing to use the UI term "folder" is clearly confusing/limiting people's thinking.

                  [1] ignoring later optimisations that were added to the file systems, like storing the contents of a tiny file inside its directory node - which then causes all sorts of fun when you create another link to reference that data
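
                  To spell out the two-operations point, here is what a "move" looks like on a POSIX filesystem, as a tiny Python sketch with made-up paths: only the references change, the data never goes anywhere.

                    import os

                    os.makedirs("dir_a", exist_ok=True)
                    os.makedirs("dir_b", exist_ok=True)

                    with open("dir_a/joe.txt", "w") as f:
                        f.write("Joe Bloggs, Bloggs and Sons (gas) Ltd\n")

                    os.link("dir_a/joe.txt", "dir_b/joe.txt")    # add the new directory entry...
                    print(os.stat("dir_b/joe.txt").st_nlink)     # 2 - one file, two references
                    os.unlink("dir_a/joe.txt")                   # ...then drop the old one: that's a "move"
                    print(open("dir_b/joe.txt").read(), end="")  # the file's data never moved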

            2. jake Silver badge

              Re: In the absence of files...

              "Take gmail, for example."

              Thanks for the offer, but I'd really rather not.

            3. Liam Proven (Written by Reg staff) Silver badge

              Re: In the absence of files...

              [Author here]

              > Almost everyone treats the labels as folders - and gets confused when an email thread has multiple labels.

              As an olde pharte myself, I can believe this. It's how I use them.

              But I am told, or have read, from multiple sources, that for folks a generation or so younger than me the concept of hierarchical folders is non-obvious, and they prefer categorisation by labels.

              https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z

              https://www.pcgamer.com/students-dont-know-what-files-and-folders-are-professors-say/

              Witness some of the comments here:

              https://www.reddit.com/r/PleX/comments/16q9oz2/do_you_subfolder/

              1. Julz

                Re: In the absence of files...

                Take your pick on abstraction. VME had libraries where each library only contained one type of thing, usually some sort of file but it could be other sorts of objects. The library contained the information the OS needed to do things with that sort of object. Oh, and all of the access controls etc. No one bothered where they actually were stored (they certainly weren't all in the same physical location, or even logical location) unless you were like me and tasked with performance.

              2. that one in the corner Silver badge

                Re: In the absence of files...

                > they prefer categorisation by labels.

                > Witness some of the comments here

                What I notice, particularly in the Reddit comments example, is that they appear to be largely relying on labels that have been attached by a third party to widely-disseminated data (in that particular case, TV and Film, using things like the age ratings).

                Other sources of data can also add labels (aka metadata) on behalf of the user: a photo on a mobile 'phone can[1] be GPS tagged, then looked up on an online map to add the venue name, and the shot sent to Facebook to have face recognition applied, tagging all of your friends; whilst my boring old camera will write a UTC timestamp and then anything else is left to me. So my photos are all stored in subdirs by date, with manual dir names added to give a brief description (venue, event, whatever is meaningful to me), if for no other reason than that is consistent over *all* my photos, no matter how old (including transfers from film), and I'm used to it by now.

                Similarly, I have a load of MP3s that don't contain any metadata 'cos when they were ripped from the CD we were just glad to get an audio file. So now all my music is sorted using a directory structure that suits my needs, I'm happy with it - and don't really care whether or not the files that make up the latest album that I bought, directly in digital form, have any extra metadata in them.

                Perhaps a major difference between the beardies and the youngsters is that they were never faced with doing all of the organising and labelling by hand, so never *needed* to learn any other organisational skills, such as creating their own hierarchic taxonomy of subdirectories, just out of necessity?[1][2]

                [1] and, if the scenario given above is realistic, the youngsters would be less wary of just throwing *all* the photos at places like Facebook?

                [2] although I do admit to also using a wiki full of notes (which can be searched by text, have category tags attached to pages, ...) where I also keep references to specific locations in the massive archive dir tree - but the amount of material that has been "tagged" in that way is miniscule, mainly 'cos the effort to do that is far too high - and it would lose the metametadata that those items are *so* super-special that they are worth the extra time to add them to the notes.

          2. hh121

            Re: In the absence of files...

            At the risk of approaching a rabbit hole, the reason for that sharepoint metadata-rather-than-folders thing is more because sharepoint doesn't treat the parent folders as a searchable attribute of the file. So the chances of a file called 'Jan2024.doc' in the folder 'board papers/fy24/europe' being found using a search for any of those terms would be iffy at best. Painful experience would rate it as unlikely.

            That and the 400 char limit on URL length which is quite easy to hit if you drag a file share into a sharepoint library.

            Of course getting people to enter metadata on new content (useful and valid you hope), or parsing it from existing content in bulk are both significant hurdles.

        2. Ian Johnston Silver badge

          Re: In the absence of files...

          Both Linux and Windows NT support the concept of links.

          OS/2 - or perhaps Presentation Manager - was good at this. The same file could be represented by different objects simultaneously. As a corollary, the name of an object did not have to be the same as the name of the underlying file, so you could rename a file without changing how it (i.e. its object representation) appeared in a folder. All very freaky and I suspect most users didn't explore these areas much.

        3. that one in the corner Silver badge

          Re: In the absence of files...

          > If another application wants that data it has to go through the owning application.

          So do I *have* to have Word installed, as it is described as the "owner" of the file that you sent me? Or can Libre Office be used? What if I have both installed, because I'm supporting Users who each choose a different program?

          Is that SVG file "owned" by Inkscape or by the web browser from which I used "save link"? What about the SVG file I created using Inkscape but want to view in the browser? What about when I uninstall Inkscape? What about when I just open the SVG in a plain text editor to do a quick search and replace of an element?

          Is that JPEG owned by Irfanview, my favoured quick'n'dirty viewer, or by the Sony program that copied it from the camera? What about the JPEG copied across from the Olympus DSLR? Or the images from my mate's Canon? Maybe they are owned by Lightroom whilst I'm in an editing frame of mind, trying to correct the colour balance? Or back to Irfanview as all the editing required is a simple rotate and crop?

          1. Neil Barnes Silver badge

            Re: In the absence of files...

            What happens to a 'file' which has no owner? You've deleted the Olympus application, and the Sony program, and Irfanview, and Lightroom; nothing remains on your computer that owns those pictures. So now what?

            Or you're building a new system, on which you might like to have things similar to your existing system. So you copy your data across, but how do you associate it with any new applications? File types? Magic numbers? Other metadata?

            In answer to The Spectacularly Refined Chap upthread, I'm not trying to dismiss out of hand the new and unfamiliar. I'm trying to understand how it might work for me... but I am reminded of the observation that 'if you can't open your data with something other than the original application that created it, you don't own it. You are hostage to the original software'. We all know how well a 'standard' format for word processors worked out... requiring a file type to be owned by a particular mediating program, and that all access to that data be through that program, well, that worries me. It's too similar.

          2. AndrueC Silver badge

            Re: In the absence of files...

            So do I *have* to have Word installed, as it is described as the "owner" of the file that you sent me? Or can Libre Office be used? What if I have both installed, because I'm supporting Users who each choose a different program?

            'Owning application' in this context only means 'the application responsible for managing that collection of bytes on this system'. If someone sends you a file, you open a compatible application and import it - or, as now, when you try to open the file in your mail client the operating system suggests a compatible application.

            Is that SVG file "owned" by Inkscape or by the web browser from which I used "save link"? What about the SVG file I created using Inkscape but want to view in the browser?

            The file is owned by Inkspace, the link is owned by the OS and would contain sufficient information to identify the original data and its owner. The browser would talk to Inkscape to render the SVG (and register its use of it) or offer to adopt a clone of the data for you. If you modify data linked to by or from another application you get prompted whether to notify the other application or just make a copy and update that.

            If you uninstall an application:

            * Any application that has registered and not released a link to data will be notified so that it can decide how to handle it (clone its own copy or warn the user that it won't be able to render the data moving forward).

            * You are asked if you want to keep orphaned data and if so it remains on your storage along with whatever metadata identifies the original application.

            There might have to be an ability for the OS to provide access to 'orphaned' storage so that applications can search it and take over ownership of the data in some fashion. Although an application ought to be able to mark data as 'for me only' for security reasons as below:

            Although the above seems complex it's better than the current system where if you uninstall an application you can lose access to data without realising it. At least here there is something that is responsible for and tracking each data object. This also improves security because it prevents applications from opening data objects willy-nilly. An attempt to open secret_db.sql with - say - a hex editor will be impossible because you can't navigate to it. The only way to access that database is to ask the owning application and that will (we hope) prompt the user for confirmation before allowing it.
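
            If it helps, here is a toy Python sketch of the register/notify/orphan behaviour described above; every name in it is hypothetical and it is only meant to make the idea concrete:

              class DataObject:
                  def __init__(self, name: str, owner: str) -> None:
                      self.name = name
                      self.owner: str | None = owner
                      self.watchers: set[str] = set()    # apps that registered a link to this object

              class ObjectStore:
                  def __init__(self) -> None:
                      self.objects: dict[str, DataObject] = {}

                  def create(self, name: str, owner: str) -> None:
                      self.objects[name] = DataObject(name, owner)

                  def register_link(self, name: str, app: str) -> None:
                      self.objects[name].watchers.add(app)

                  def uninstall(self, app: str, keep_orphans: bool = True) -> None:
                      for obj in list(self.objects.values()):
                          if obj.owner != app:
                              continue
                          for watcher in obj.watchers:   # notify everyone still holding a link
                              print(f"notify {watcher}: {obj.name} is losing its owner")
                          if keep_orphans:
                              obj.owner = None           # orphaned, kept on storage awaiting adoption
                          else:
                              del self.objects[obj.name]

              store = ObjectStore()
              store.create("secret_db.sql", owner="WordLikeApp")
              store.register_link("secret_db.sql", app="ReportTool")
              store.uninstall("WordLikeApp")             # ReportTool is notified; the data is orphaned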

            1. that one in the corner Silver badge

              Re: In the absence of files...

              From the first message, I was having a hard time understanding just what problem this is trying to solve.

              Now you have given two use-cases, neither of which seem to warrant such a complicated scheme:

              1) " if you uninstall an application you can lose access to data without realising it."

              Is that something that happens a lot and is a real killer, so much so it needs this complexity? The file is still there, you can see it is still there; presumably you recall what application you uninstalled and can just re-install it? Or just look at the name and type of the file and install anything that claims to be able to read it (if you have no idea what was uninstalled, this may require a bit of searching, or just asking your friendly IT guy). But is it something that occurs often? Indeed, if you are as totally application-oriented as this whole idea suggests, your thoughts would more likely be "Ah, I need this data that I usually access via SuperApp, oh dear, SuperApp is not installed, let's just install it then".

              2) an idea that apparently "improves security because it prevents applications from opening data objects willy-nilly"

              > An attempt to open secret_db.sql with - say - a hex editor will be impossible because you can't navigate to it.

              Security for whom? We already have (unless you are suggesting throwing them away) mechanisms for providing security for the data based on the access rights granted to the User (many such mechanisms, of varying strengths); e.g. if your User does not have read-access to a directory containing secret_db.sql then you can't even see it exists, let alone navigate to it.

              If an application is able to prevent data being opened then the only thing you are going to protect is the implementation details of that application's data files! That is adding a lot of complexity to the User's system to add a feature that is doing absolutely nothing for the User.

              I'd also add that those two ideas can clash, badly. If you have uninstalled an application that can read a particular file - or never installed one in the first place - then methods to decide what type the file is include running the 'file' command and having a look inside with a hex dump. If the file is prevented from being opened because the correct application is missing and you can't identify the file type (no, file extensions are not always reliable) then - ooops. You really *have* lost access to that data!

              > or as now when you try to open the file in your mail client the operating system suggests a compatible application

              Once you have made your choice from the list of all the compatible applications that the OS has suggested (your OS only suggests one?), that isn't making the selected application "responsible" for the data; the email client just wants to let you view the data it can't render. In fact, if it is anything like my email client, the attachment as received is still in the email store, and all you have is a temp file: if you want to keep that as a separate file, save it somewhere sensible now, as otherwise it'll get cleaned up along with the rest of the temp files at some arbitrary time after you've stopped viewing it.

              > you open a compatible application and import it

              Ugh, I absolutely loathe applications that "import" your data: either they just mean they'll open it for you (in which case, just say "open"), or they are making a note of its location for future reference (in which case, just say "noted"), *or* they are moving the file, renaming it, trying to take it over completely and prevent you from ever daring to make use of it in any other program. I keep as far away from that kind of program as I possibly can. For example, having tried it, twice, I'll manage all my ebooks (of which I have many - thanks to Humble Bundle) in a simple dir tree rather than *ever* install Calibre again (unless they stop "importing" and just start "taking note")!

              If that "import" behaviour is the sort of thing that you like then I can see why I'm having a hard time with this concept.

              > The browser would talk to Inkscape to render the SVG

              Doing things like providing a rendering service for SVG to any program that wants it is a reasonable thing to do - and, as you mentioned, there are and have been ways of doing that (OLE being one such attempt). But they are totally divorced from any idea of "being responsible" for the data.

              Clearly I am not understanding what this concept is supposed to provide to the User and how it warrants such complexity - and, as far as I can see, such ambiguity over "what application is responsible for the data" and the potential for duplicating data with clones for no good reason. Ah well. If it is a good idea that I'm just not understanding then I'm sure it'll appear somewhere and, seeing it in action, the penny will drop.

        4. Mike007 Bronze badge

          Re: In the absence of files...

          IIRC Windows NTFS had support for symlinks, but the OS didn't expose it and they were effectively useless because Explorer would do things like a recursive delete... So they introduced a deliberately broken version of symlinks at the OS layer with junctions, rather than fix Explorer.

          1. AndrueC Silver badge
            Meh

            Re: In the absence of files...

            Yeah, Explorer has a number of deficiencies with respect to the primary file system it encounters:

            * Doesn't report on named streams.

            * Doesn't support paths longer than 260 characters.

            * Doesn't allow creation of links although it can show them in a different colour and follow them.

        5. Jaguart

          Re: In the absence of files...

          Isn’t that the wrong way around?

          For me a doc is a collection of information and that’s more important than the app used to create it.

          It’s the Apps that should disappear and the files/documents should become king. A doc with sections of text, markdown, html, rtf, mind maps, images, sound, 3d vectors - and interactions that select an appropriate actor - viewer, editor, printer, explorer etc would work for me.

          And multiple external orthogonal hierarchies using classified tags should replace the single monolithic directory hierarchy. Indexing by my personal arbitrary choices - as well as creation/modification, content, title, location etc. depending on how I want to find my needles. In UTF-8, of course.

    4. that one in the corner Silver badge

      Re: In the absence of files...

      > I observe that a number of music applications seem to have given up on the idea of, for example, selecting a music album and playing it from start to finish

      And applications that take your carefully arranged files, one directory per album, some of the (newer) files even containing all the MP3 metadata (so even the files know which album they belong to) and carefully ignore all of it.

      Our car radio does precisely this - it'll find all the files, put them in alphabetical order and then play them. So the Christmas albums USB memory stick becomes unbearable - we get every. single. version. of. "Come All Ye Faithful" one. after. another. And end the trip with "We Three Times 11 Kings".

      Even worse, trying to play one story from a USB stick with a collection of Blake's 7 audio plays? All the "01 - Title Music" together, then all the "02 - Credits" and so on. OTOH if you find the plots a little simplistic then you'll probably enjoy trying to follow every single one of them at the same time!

      > surely, the producer of the album has already curated it for you

      Yes, and, as with the audiobooks, made sure that the Concept of the album is made clear by the order of the tracks.

      Clearly, this is all our fault for not finding out what format of playlist this player can use (including whether it allows dir names within the playlists) and taking the time to write a program to create them all, because that is something everybody can do.

      /rant

      PS

      This is relevant to the bigger picture, as it highlights the point: ok, you want us to label our data - but does everything that insists on reading those labels use the same format as everything else? Or do we have to keep on adding fields when trying to move from one application to the next?

      1. Neil Barnes Silver badge

        Re: In the absence of files...

        This, one hundred percent. And add to that, a player in the car which will accept an external music store - either bluetooth from a phone, or from a USB memory stick - but which offers a very tedious way of moving between directories, starting at the top of the tree every time, and will only play the contents of one directory in serial order. Or random order, if that's your choice (which on a long journey it often is; I like to be surprised) but only of one directory. Neither parent nor child directories are included.

        I finally located a phone application which will allow both playing a complete album and playing tracks at random, and can stream music from there to the car bluetooth. It would still prefer me to make a playlist... sigh, life's too short.

  8. An_Old_Dog Silver badge

    Hit-and-Miss

    1. Falsehood #2: Primary store is small, fast but volatile: when you turn the computer off, its contents are lost. Secondary store is big, but slower and persistent.

    The author has forgotten about magnetic core memory, which was primary storage, and would retain its contents if you shut the computer down properly.

    2. Put some PMEM in your computer's DIMM slots, and most of the core primary/secondary distinction is lost. All the computer's storage is now directly accessible: it's right there on the processor memory bus. There's no "loading" data from "drives" into memory or "saving" any more.

    The author has forgotten economics: PMEM is far-unlikely to become cheaper than spinning rust. Does he expect people to buy sufficient PMEM to replace all the terabytes' worth of disc drives they have?

    3. The other thing is that modern FOSS Unix isn't much fun any more. It's too complicated. A mere human can no longer completely understand the whole stack.

    True, and the same can be said of the underlying (modern) CPU.

    4. (Regarding Apple's System [1..9.2]): No config files, either – the OS kept config in a database which the user never even saw.

    That "database" was a per-app file called the "Preferences" file, and common help desk advice to Mac users of malfunctioning apps of the time was, "Try deleting the Preferences file and restarting the app."

    5. Our main memory can be persistent, so who needs disk drives any more?

    Answer: anyone who can't afford terabytes of PMEM, which probably could not be "directly addressed" by a current X86 CPU.

    6. Whereas what Lisp and Smalltalk workstations show us is the opposite: that if you choose a powerful enough language to build your software, you only need one.

    One size does not fit all. Let's talk about "version control" and LISP and Smalltalk. Oh, they don't have that, so we're back to BASIC-style "SAVE PRG001.LSP", "SAVE PRG002.LSP", etc.

    7. You may have a great, fancy, dynamically typed scripting language, but if it's implemented in a language that is vulnerable to buffer overflows and stack smashing and memory-allocation errors, then so is the fancy language on top.

    That would be assembly language, which has all those vulnerabilities. One can implement an interpreted language, but the interpreter itself has those potential vulnerabilities. With a CPU whose microcode one could alter oneself, one could "create" a CPU which directly supported the Uber-language the author is searching for. But X86 microcode is (a) not documented outside of Intel and AMD, and (b) not something one could alter oneself -- microcode packages are "signed" in some cryptographic way to prevent non-Intel/AMD-provided microcode from being loaded.

    8. When you shut down, the running OS image, a mixture of program code and program state, was snapshotted to disk.

    When you turned the machine back on, it didn't boot. It reloaded to exactly where you were when you shut down, with all your programs and windows and data back where they were.

    Yes, everything is re-loaded as-was, including a potentially-broken state. Where do you go from there? Is there a "RESET" button which goes back to a known good state? Even so, I don't want the state of my data (pictures, documents, music, videos) tied to the state of my operating system! If I have to reset my OS (in this scheme), I will lose data! If I want to make back-ups, I will need external storage devices (disks or tape, as all-flash media is insufficiently cost-effective). Disk drives and/or tape drives which this author-envisioned dream machine will not have.

    1. Paul Crawford Silver badge

      Re: Hit-and-Miss

      The cost of PMEM is not such a critical aspect as one might presume: it becomes just a tier in a seamless pool of storage, essentially a cache that happens to be non-volatile, so you don't have the issue of journaling, etc., where a crash or power-off leaves storage incomplete.

      However, given the crappy state of all software (and absolutely no sign of that improving) the issue of how to recover from the inevitable crashes is really, REALLY, serious. It needs seamless and low-overhead snap-shots of the OS, apps and data, so you have a sporting chance at the not-a-boot prompt to select how you want to recover if anything is amiss following a crash (or yet another borked update).

      1. that one in the corner Silver badge

        Re: Hit-and-Miss

        > The cost of PMEM is not such a critical aspect as one might presume: it becomes just a tier in a seamless pool of storage

        We can already do this without the PMEM: you just[1] have to memory map the entire raw disk, without worrying about creating a file system first. You can even leave all that to hardware and not bother the (programmer's model of the) CPU at all, giving you a transparent memory model from L1 associative cache down to spinning rust. Which I'm pretty sure a few applications have done, albeit ending up laying down a data-structure that you could easily just call a task-specific file system (e.g. a database taking over a raw partition instead of using an OS provided file system).

        Which is fine and dandy, until you want to use more storage than the CPU can address (ok, we hit that immediately for 16-bit CPU addressing, very quickly for 32-bit; but whilst 64-bit made a mockery of the 48-bit LBA hard drives and their feeble-by-comparison hundred-odd petabyte limit, by then we were rather stuck in our ways).

        But once you do have your memory-mapped drive *or* your PMEM, to protect against software (and some hardware) crashes you apply the same methods to data structures in all of memory as we do now to those on the hard drive: i.e. journalling, atomic writes to switch a data structure from one consistent representation to a newly copied-and-modified-then-verified-consistent representation, and so forth - a toy sketch of that copy-then-commit pattern follows below. Which means a *lot* of discipline from programmers (not everyone actually writes database programs all day, so they aren't very used to thinking about doing everything as read-modify-commit transactions) and preferably compiler/language runtime support (which sadly isn't, as far as I know, fundamentally built into Smalltalk or its ilk, as per the suggested languages in the article; anyway, how do you automatically decide when a totally arbitrary data structure is in a new and consistent state?).

        [1] he says, glossing over the issue of unexpected power loss, but keep on reading.
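
        To make that concrete, here is a toy sketch - mine, not anything from the article - of that copy-modify-then-commit discipline, using an ordinary memory-mapped file as a stand-in for PMEM. The file name store.bin and the struct layout are invented for the example, and a real system would also have to worry about hardware write ordering, not just msync():

        /* Two slots hold alternate versions of a record; one byte says which
         * slot is the committed one.  Crash before the final flush and you
         * still have the old, consistent version. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        struct record { uint64_t counter; char note[56]; };

        struct store {
            uint8_t active;              /* index of the committed slot: 0 or 1 */
            struct record slot[2];
        };

        int main(void)
        {
            int fd = open("store.bin", O_RDWR | O_CREAT, 0644);
            if (fd < 0 || ftruncate(fd, sizeof(struct store)) < 0) return 1;

            struct store *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
            if (s == MAP_FAILED) return 1;

            /* 1. Build the new version in the *inactive* slot. */
            uint8_t next = 1 - s->active;
            s->slot[next] = s->slot[s->active];              /* copy   */
            s->slot[next].counter += 1;                      /* modify */
            snprintf(s->slot[next].note, sizeof s->slot[next].note,
                     "update %llu", (unsigned long long)s->slot[next].counter);

            /* 2. Make the new version durable before pointing at it. */
            msync(s, sizeof *s, MS_SYNC);

            /* 3. Commit: flip the 'active' byte, then flush again. */
            s->active = next;
            msync(s, sizeof *s, MS_SYNC);

            printf("committed version %llu\n",
                   (unsigned long long)s->slot[s->active].counter);
            munmap(s, sizeof *s);
            close(fd);
            return 0;
        }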

    2. that one in the corner Silver badge

      Re: Hit-and-Miss

      > Let's talk about "version control" and LISP and Smalltalk. Oh, they don't have that, so we're back to BASIC-style "SAVE PRG001.LSP", "SAVE PRG002.LSP", etc.

      Absolutely. And forget about diff'ing the saved images to get a meaningful view of what has changed from one version to the next.

      And, for that matter, wanting to run different images alongside each other, for the sake of comparison: at that point, you are into loading VMs and selecting the images to run, which is a sudden jump in complexity (especially in Bygone Days).

      As much as I had wanted to like Smalltalk, since reading the dedicated Byte issue, that was one of the ideas that I really was never comfortable with, so never really got to grips with "the proper Smalltalk experience" enough to complete a product in it - not helped by being asked how to remove all the debugging and object examination features to turn it into something that was deemed shippable![1].

      (FWIW All my LISP work was done the normal way - editing the source in the same editor used for Pascal, roff etc; and I got a couple of the Squeak books - and so many versions downloaded - hoping to "really get into it this time" but, well, not so far, despite being one of those C++ programmers who still really likes using OOP)

      [1] There probably was a way to do that but it didn't seem to be documented at the time - maybe we hadn't bought the full "commercial-grade package"? - and, unlike nowadays, you couldn't Google and find a YouTube video entitled 21. Tutorial. How to distribute a Smalltalk application.

      1. timrowledge

        Re: Hit-and-Miss

        Smalltalk can do version control perfectly well, thank you very much. For code centric uses, there is Monticello, or changesets, or Tonel/git, or Pundles and so forth. For more info centric needs people have made assorted solutions ranging from simple to expansive (gemstone, for example).

        I mean, seriously, if it couldn’t, how could so many important ideas have been originated in it?

        1. An_Old_Dog Silver badge

          Re: Hit-and-Miss [Stone Knives and Bear Skins]

          I mean, seriously, if it couldn’t, how could so many important ideas have been originated in it?

          "I am endeavoring to construct a mnemonic memory circuit using stone knives and bearskins." -- Spock, in "The City on the Edge of Forever".

          There are a lot of virtual Spocks out there who have created a lot of computer advances using those virtual stone knives and bear skins.

        2. that one in the corner Silver badge

          Re: Hit-and-Miss

          > I mean, seriously, if it couldn’t, how could so many important ideas have been originated in it?

          Well, version control is all about letting you make a complete mess of things and then have some hope that you can recover from it.

          You don't *need* a VCS to create an amazing piece of software and within that originate many marvellous concepts.

          It is just a *lot* safer and less nerve-wracking to do that with the VCS, especially if there is potentially more than one person working on a section of code[1]. DVCS extends that, making it easier (understatement) to spread your team around; it still isn't *necessary*[2], it just helps maintain the sanity of those involved.

          [1] The first commercially shipped, shrink-wrap software I worked on was written by a team of half a dozen and we had nothing that could be called "version control" - we each worked on dual-floppy PCs and it was only *just* possible to compile and link the entire program on the single hard drive equipped PC that we all shared. The closest we got to version control was someone running grandfather-father-son backups. We were careful. And delivered. /macho

          [2] Barring things like management refusing to allow the longer timescales due to posting floppy discs around the globe - those sorts of things control whether you'll get the resources to write the software, not whether it is ever possible to write it under those conditions.

    3. Greybearded old scrote Silver badge

      Re: Hit-and-Miss

      "Let's talk about "version control" and LISP and Smalltalk."

      Needs to be in the OS layer. VAX/VMS was versioned at the file system. Trouble was, with disks being so small we had to keep deleting the old versions.

      1. billbo914

        Re: Hit-and-Miss

        I don't think VAX/VMS versioning is even close to modern version control (at least for software development). No way of keeping track of changes in multiple files and how they relate. No way of keeping track of why changes were made. No merging support so one can have people working simultaneously. etc., etc., etc.

        1. Ian Johnston Silver badge

          Re: Hit-and-Miss

          That's application level detail. VAX/VMS let you have different files with the same name in a clear order.

          1. that one in the corner Silver badge

            Re: Hit-and-Miss

            > That's application level detail

            Hmm, nope. It is the difference between tacking version numbers onto files (which you can do manually, VMS just did it automatically) and "version control" as per a "version control system" or VCS. There are "application level" features built on top, some of which are so common that we associate them with the concept of a VCS, even though they are totally generic facilities: most importantly, diff and merge (which you do to anything, with varying usefulness, not just the contents of a VCS).

            Ignoring the use of compacted storage (e.g. just saving the results of the diff) - which is a pretty big thing to ignore, but never mind - version numbers do not a VCS make.

            Even the crudest VCSs, which operated on the single file level, such as SCCS and RCS, provided more than VMS versions when you saved the next revision - basically, more meta data was added. Some of that VMS retained - eg, the timestamps on each version.

            But nowadays, we tend to shudder at the thought of going back to RCS and file-by-file versioning. The use of CVS (and later) features, such as tracking renames and grouping changes to multiple items as a single atomic change, is beyond what VMS would do for you.

            Perhaps cheekily, let us not forget that automatically saving FRED:001, FRED:002, FRED:003 etc does not even provide the level of meaning we attach to "saving the copy with a new version number", such as FRED:1.2.0, FRED:1.2.1 and FRED:2.0.0 (yup, the third copy was a major change in the file).

        2. Greybearded old scrote Silver badge

          Re: Hit-and-Miss

          It's not close, no. We can do better now, but that was a simple example of the concept of having versioning in the OS.

          As I understand it, IPFS versions through content hashing in a similar way to VCS software.

      2. Ian Johnston Silver badge

        Re: Hit-and-Miss

        VAX/VMS was versioned at the file system.

        Twenty five years after I lost my last VAX account I still really miss machine::drive:[directory.subdirectory.subsubdirectory]filename.extension;version

      3. Peter Gathercole Silver badge

        Re: Hit-and-Miss

        Versioning in VMS (and RSX-11 and RSTS) was a feature of the Files-11 filesystem, with enough in the OS to allow you to manage it. It was just a file generational thing, not any form of change control on the data in the files.

    4. IvyKing

      Re: Hit-and-Miss

      I'm glad I'm not the only one who remembers magnetic core storage. I've even been exposed to a machine that used a drum for main memory; archive storage was either punched cards or mag tape. Not old enough to have experienced the original DRAM - which was implemented in a special CRT.

      On a slightly different topic, the OS intended for the first RISC machine, the CDC-6600, was called Sipros Ascent, which was supposed to make it easy to embed assembly code into Fortran (the CDC 6000 and 7000 series machines were designed for Fortran). Development was overdue and over budget, so a bootleg OS called SCOPE was used instead and became the most common. The 6000 and 7000 series did have an interesting memory hierarchy, starting with the 8 general-purpose 60-bit registers, standard core, extended core, disk, tape/cards.

      1. An_Old_Dog Silver badge

        Re: Hit-and-Miss

        Our uni ran SCOPE for a while on our 6600 ("Cyber 73"), then switched over to KRONOS when it became available. Thing is, I don't see why you'd want to directly embed assembler inside your FORTRAN program, since you don't know which machine instructions had preceded your assembler code. As-was, it was easy enough to have your FORTRAN program call a function or subroutine you'd written in assembler ("COMPASS"), assembled, and linked in.

        I think the more interesting aspects of the 6600 series were that the OS ran on one of the peripheral processor units, that the CPU was essentially a slave processor, and that the multiple PPUs used a single set of PPU execution hardware, with a set of registers for each "virtual" PPU. "Hyperthreading" was invented in 1964!

        1. IvyKing

          Re: Hit-and-Miss

          Yeah, the PPU's on the CDC 6000/7000 series machines were quite a neat design. Having the control portion of the OS reside on a PPU greatly enhanced security as there was no way that user code could overwrite the OS. Another advantage of the Fortran model was the lack of a heap. I did do some assembly language programming for subroutines to be called from "RUN" Fortran, used CLDR instead of LGO as part of the job card, and I still have my copy of Ralph Grishman's "Assembly Language Programming for the Control Data 6000 Series" book. The CDC 6000 ISA is one of the cleanest that I've come across.

          During my first quarter at the big U, the comp center had two 6400's, with one running SCOPE as modified by the CS department and the other running KRONOS. The real big users of computer power could submit jobs to the 7600 on the hill or a collection of 7600's about 45 miles away run by the only guy Seymour Cray paid attention to. Unfortunately the 6400 running Kronos was sold, but the CS department was given a PDP11 to play with. They started off running three different OS's on it, two from DEC and one from some outfit in New Jersey - after their hacking of SCOPE they decided it might be fun to try their luck with the code from NJ.

          As for NVM, the one class that I got an A+ in was taught by some guy named Leon Chua...

          1. An_Old_Dog Silver badge

            Re: Hit-and-Miss [CDC Big Iron]

            Was CLDR part of KRONOS? I only used LGO. The Blue Book was wonderful. We also had a CDC 3300, originally running SCOPE. Some grad students wrote an alternative OS, called "OS-3", which was eventually so popular at uni that the C.C. chucked SCOPE, and ran OS-3 24/7. I'd love to find a 3300 emulator and a copy of OS-3.

            1. IvyKing

              Re: Hit-and-Miss [CDC Big Iron]

              I know CLDR was part of SCOPE or at least CalidoSCOPE (Cal Improved Design Of SCOPE), but the KRONOS machine had been gone three months when I started the Assembly Language class. I do remember warnings about making sure the "IDENT" card followed immediately after an END statement card, otherwise SCOPE would get confused. On a related note, a while back I downloaded the COMPASS code for SNOBOL. I still like the way setting registers A1 - A5 triggered a read to the corresponding X register and A6 and A7 triggered a write from the corresponding X registers, along with A0 and X0 being used for ECS reads and writes. On a 6600, the ECS could transfer a 60 bit word every 100 nsec, though my fiber internet connection does 940Mbps up and 930Mbps down.

              It was impressive watching the 6400 handle dozens of TTY connections, two card readers with one being a CDC 405, four line printers and at least a couple of tape drives. The CC had an Extended Core Storage cabinet that was originally shared between the SCOPE machine and the Kronos machine. Ten years later, the CC was running networked VAX's running an updated version of the OS the CS department wrote for the PDP-11, though they had to kludge a terminal driver system that would handle the variety of terminals that had been used with the 6400 as well as an SDS machine.

              I remember the CDC La Jolla facility having a CDC 3200 in 1971 (along with a bunch of CDC1700s), and the Smithsonian Museum near Dulles has a 3800. The 3000 series machines were basically the CDC 1604 implemented with silicon planar transistors as opposed to the original germanium junction transistors, though the 3200 was to the 3600 as the 6200 was to the 6600.

              1. An_Old_Dog Silver badge
                Windows

                Re: Hit-and-Miss [CDC Big Iron]

                Our setup had a couple of local customizations. One was that all the terminal lines fed into a DEC PDP-8e, which ran custom code to act as a front-end processor/concentrator, and which itself was interfaced via high-speed links to both the 3300 and the 6600. When you switched your terminal (typically, a Teletype ASR-33) to "LINE" (aka "ONLINE"), you were initially talking to the FEP, which understood a limited set of commands. One was TRAFFIC, which would print out how many interactive users were connected to the 3300 and to the 6600, what the load averages were, and so on. This way, you could decide whether it was worth going on-line or not. Control-A connected you to the 3300; Control-Q connected you to the 6600. The 6600's interactive response time would get really slow with more than 80 or so simultaneous online users. The highest I saw it was 92 users, at the end of a term.

                Another was that the "Last Job Out" number was displayed on some wall-mounted, high-speed, blue-phosphor CDC-supplied CRT terminals-sans-keyboards: one in the terminal room (filled mostly with ASR-33s in carpet-padded carrels to reduce noise), one in the keypunch room, one in the common room (tables, chairs, and vending machines), and one on a table inside the "fishbowl", positioned so the display could be easily seen from out in the hallway, looking into the fishbowl.

                Peripherals were I-don't-know-how-many-or-what-types-of-disks, 7-track tape drives for the 3300, 9-track tape drives for the 6600, a flatbed plotter (first a Calcomp, then an electrostatic Varian), a card reader or two, a card punch, one line printer for the 3300, two line printers for the 6600, a computer-output-to-microfilm device, a bunch of dial-up modem lines, and a laser printer when those first became available. Somewhere in there we got some ECS. No RJE stuff, no Plato stuff, and no tie-lines to other mainframes. Our CC charged for cards read, cards punched, lines printed, CPU seconds used, permanent-storage file blocks, and terminal connect time. I was a scrimpy bastard, so I generally keypunched my programs (blank cards were free, as was time on the keypunch machines - mostly 029s with a couple of 026s), ran a night-shift job to COPYSBF the cards to the line printer, picked up the listing in the AM, then manually checked it for typos, etc. I'd punch "fix" cards, integrate them into my deck, then submit another night job to actually run the program. If it wasn't right, I'd break down, copy the cards to a permanent file, and do editing/testing on-line during the day (most-expensive rates).

                I know all that old stuff was god-awful expensive and energy-hungry, but dammit, it felt like computing; it felt wonderful! There's a great piece of free (for non-commercial uses) software called "DesktopCYBER" for Linux (and MS-Windows, too). I've got mine set up as a Cyber 865 running NOS 2.8.7 Level 871 with Plato. It runs nicely-speedily on a Core i7, and noticeably-less-speedily on a Core i5.

                Icon for old-tech dude, muttering about how ugly the current X86 instruction set and hardware are. "FUCK YOUR WRITE-ONLY, MACHINE-SPECIFIC REGISTERS!" (takes a another swig)

    5. Liam Proven (Written by Reg staff) Silver badge

      Re: Hit-and-Miss

      [Author here]

      > The author has forgotten about magnetic core memory

      I haven't, but I think the main difference is that there was no choice then. By the time HDDs became common, core store was obsolete. This is a thing that went away around the time the distinction I talked about appeared.

      > The author has forgotten economics: PMEM is far-unlikely to become cheaper than spinning rust.

      Au contraire. The thing about economics is that it sometimes catches up and overtakes you, and then it can look like it's behind when it's actually far ahead.

      It's not that HDDs have stopped being bigger and cheaper than solid-state: they still are, but it doesn't matter any more. It no longer matters for most people.

      When terabyte-class SSDs are well under $100 then most ordinary computers no longer need the dozens of TB offered by spinning media. Solid-state is enough.

      Most people don't need 10+ TB of very slow, very fragile storage. That is slowly becoming datacentre-only kit.

      > That "database" was a per-app file called the "Preferences" file, and common help desk advice to Mac users of malfunctioning apps of the time was, "Try deleting the Preferences file and restarting the app."

      Are you sure you're not mixing up early OS X stuff with Classic MacOS?

      > Answer: anyone who can't afford terabytes of PMEM, which probably could not be "directly addressed" by a current X86 CPU.

      Cheaper than RAM, faster and longer-lived than Flash. What more do you need?

      > One size does not fit all. Let's talk about "version control" and LISP and Smalltalk. Oh, they don't have that, so we're back to BASIC-style "SAVE PRG001.LSP", "SAVE PRG002.LSP", etc.

      I suspect that something akin to VMS's file versioning is enough for most people. I suspect that for most of its users, Git is a sledgehammer and their needs are a mere peanut. Certainly it was in all the professional uses I've put it to.

      Git's power is literally a joke:

      https://xkcd.com/1597/

      > That would be assembly language, which has all those vulnerabilities.

      But who writes that by hand any more?

      It's not about the theoretical edge cases but the real world ones of actual usage.

      > When you turned the machine back on, it didn't boot. It reloaded to exactly where you were when you shut down, with all your programs and windows and data back where they were.

      Again: versioned snapshots are consumer tech now. Snap does it, Flatpak does it, Btrfs does it in openSUSE and Garuda and siduction.

      This is a solved problem.

      > Is there a "RESET" button which goes back to a known good state?

      No. You automate that away. You partition system storage from user storage. You delta-track changes to remote servers.

      1. An_Old_Dog Silver badge

        Re: Hit-and-Miss

        (1) >> The author has forgotten about magnetic core memory

        > I haven't, but I think the main difference is that there was no choice then. By the time HDDs became common, core store was obsolete.

        Let's be precise, here. "By the time HDDs became common", I presume you meant, "common on PCs." And by, "core store was obsolete," you meant, "not used in PCs." Please correct me if this is wrong.

        (2)>> The author has forgotten economics: PMEM is far-unlikely to become cheaper than spinning rust.

        > Au contraire. The thing about economics is that it sometimes catches up and overtakes you, and then it can look like it's behind when it's actually far ahead.

        This seems like hand-waving. That said, there's a chicken-and-egg aspect here. When large numbers of people decide they want Product X in volume, China steps up to the plate and we have £3.15 Tamagotchis, or whatever. But that presupposes that they can be made cheaply enough that the manufacturers still make a sufficient profit, despite the low retail price. Retail Optane cost vs spinning rust cost on a per-terabyte basis?

        Oh, and there's this: "As announced in Intel's Q2 2022 earnings, after careful consideration, Intel plans to cease future development of our Optane products. While we believe Optane is a superb technology, it has become impractical to deliver products at the necessary scale as a single-source supplier of Optane technology."

        https://www.intel.com/content/www/us/en/support/articles/000091826/memory-and-storage.html

        > It's not that HDDs have stopped being bigger and cheaper than solid-state: they still are, but it doesn't matter any more. It no longer matters for most people.

        Why do you think it "no longer matters for most people"? Do people suddenly have less data to store?

        > When terabyte-class SSDs are well under $100 then most ordinary computers no longer need the dozens of TB offered by spinning media. Solid-state is enough.

        Again, why do you think people suddenly have less data to store on their PCs?

        > Most people don't need 10+ TB of very slow, very fragile storage. That is slowly becoming datacentre-only kit.

        Unless you're literally kicking your PC around the room, hard drives are sufficiently-physically rugged (dropped laptops are a separate issue).

        Also, you've ignored another aspect, one which flash and, presumably (yes, I'm assuming regarding PMEM), PMEM have, and hard drives do not: Sudden Death Syndrome.

        When things are humming merrily along on my PC, and I sense a delay when I write out a file (document), I get a creeping feeling along my spine that that hard drive is going to soon die. To verify, I exit the editor and run a program which looks at that drive's S.M.A.R.T. statistics. If I see a high "re-allocated blocks" value, it's time to immediately back up that entire drive, replace it, and restore my data. The delay I wrote of earlier was the drive re-trying a block, failing, re-trying, etc., then giving up and re-allocating that block to a spare one.

        Flash drives just die. No warnings, no second chances. As for PMEM, how would we know? Because it looks "just like" main memory, there's no S.M.A.R.T. interface, or equivalent. Having something like S.M.A.R.T. is important for PMEM users, because, despite wear-levelling, PMEMs do wear out.

        (4) >> That "database" was a per-app file called the "Preferences" file, and common help desk advice to Mac users of malfunctioning apps of the time was, "Try deleting the Preferences file and restarting the app."

        > Are you sure you're not mixing up early OS X stuff with Classic MacOS?

        I am sure. Apple's Classic OS had preference files. See here: https://discussions.apple.com/thread/348971

        (5) >Our main memory can be persistent, so who needs disk drives any more?

        >> Answer: anyone who can't afford terabytes of PMEM, which probably could not be "directly addressed" by a current X86 CPU. (This was incorrect; with 48-bit effective virtual addresses on AMD-64 architecture CPUs, one can address 256 TB)

        > Cheaper than RAM, faster and longer-lived than Flash. What more do you need?

        See item (2) above.

        (7) >You may have a great, fancy, dynamically typed scripting language, but if it's implemented in a language that is vulnerable to buffer overflows and stack smashing and memory-allocation errors, then so is the fancy language on top.

        >> That would be assembly language, which has all those vulnerabilities.

        > But who writes that by hand any more? It's not about the theoretical edge cases but the real world ones of actual usage.

        I wasn't talking about a programmer who writes an app or utility in assembly language (shout-out to Steve Gibson). I was talking about the fact that the fancy language's interpreter, or virtual machine, or whatever, is compiled down to assembly-language, or perhaps directly to machine code.

        The vulnerability is at the bottom of the software stack, and the fanciness in the language above does not fix the cracks in the foundation below. Perhaps you are suggesting some new CPU type which directly-executes LISP, Smalltalk, Pick, or whatever. If so, why would you expect that the CPU-designers would produce a flawless, vulnerability-free product? Based on electronics development history, that seems highly-unlikely.

        (8) > When you turned the machine back on, it didn't boot. It reloaded to exactly where you were when you shut down, with all your programs and windows and data back where they were.

        > Again: versioned snapshots are consumer tech now. Snap does it, Flatpak does it, Btrfs does it in openSUSE and Garuda and siduction.

        > This is a solved problem.

        That still doesn't help. How does the system know it needs to revert (one or more) apps to a previously-taken snapshot? You've got a broken system, so you can't give a command to do the revert. And if the system is sufficiently broken, it won't be able to revert to a previous snapshot.

        >> Is there a "RESET" button which goes back to a known good state?

        > No. You automate that away. You partition system storage from user storage. You delta-track changes to remote servers.

        So ... if I don't want copies of my programs and/or data to reside in *The Cloud* ("remote servers"), I need to have a backup server in my office, too? And an emergency-boot flash drive to talk over the network to those/that server(s)? And if the Internet is not available? (Me and my laptop, out in the field.)

        That's a lot of dubious, additional complexity to gain the conceptual Nirvana of a "flat" storage interface.

    6. SCP

      Re: Hit-and-Miss

      Yes, everything is re-loaded as-was, including a potentially-broken state.

      Congratulations, you now own a brick.

  9. Mage Silver badge
    Boffin

    Windows NT

    It did have a method of treating files as a memory array, but hardly anyone used that. Something like 16 exabytes of virtual memory. The Windows on DOS (win 3.x, 9x & ME) couldn't do that, nor even create named pipes. Win9x/ME didn't have much more 32 bit NT compatibility than Win 3.x with Win32s installed.
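
    For anyone who never used it: the NT mechanism being referred to is, I believe, the Win32 file-mapping API - CreateFileMapping() plus MapViewOfFile() - which makes a file appear as an ordinary block of addressable memory. A minimal read-only sketch (example.txt is a made-up name and the file must be non-empty; error handling kept to a bare minimum):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE file = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (!mapping) { CloseHandle(file); return 1; }

        /* The whole file is now visible as ordinary addressable memory. */
        const char *bytes = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (bytes) {
            DWORD size = GetFileSize(file, NULL);
            printf("first byte of a %lu-byte file: %c\n",
                   (unsigned long)size, bytes[0]);
            UnmapViewOfFile(bytes);
        }
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }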

    1. Paul Crawford Silver badge

      Re: Windows NT

      You can memory-map files in C on Linux as well, using mmap() to make them look like RAM, which can be very handy at times. Not sure if it is limited to local storage (most likely) or can also be done with network storage though. Also available from Python and probably a few other languages.
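
      By way of illustration only, a minimal sketch of that (logfile.txt is an arbitrary name): map an existing, non-empty file read-only and treat it as a plain byte array, here just counting newlines with nothing but memory reads.

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("logfile.txt", O_RDONLY);
          struct stat st;
          if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) return 1;

          /* The file's contents now look like an in-memory array of bytes. */
          const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          if (data == MAP_FAILED) return 1;

          size_t lines = 0;
          for (const char *p = data, *end = data + st.st_size;
               (p = memchr(p, '\n', end - p)) != NULL; p++)
              lines++;                     /* no read() loop, no buffer */

          printf("%zu lines in %lld bytes\n", lines, (long long)st.st_size);
          munmap((void *)data, st.st_size);
          close(fd);
          return 0;
      }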

      1. that one in the corner Silver badge

        Re: Windows NT

        Under all the Usual Suspects operating systems, you can happily memory map a file across the network. With caveats.[1]

        You should be able to use memory-mapped files from any generic programming language - Python, Smalltalk, Lisp etc. - unless they are being a pain in the backside and not allowing you to take advantage of the OS features.

        [1] the most obvious being consistency[2] if multiple processes are mapping the same file and at least one is writing to it: across the network, those processes can now be on distinct machines, so unless you (use a library to) do the hard work, you are not guaranteed that both processes will see the same contents at the same time.

        [2] actually, I suppose *the* most obvious is that the server can be unplugged without the client being told, and your next attempt to page in the file will cause an exception! But that is hard to prevent just by linking in a helpful library.

      2. Peter Gathercole Silver badge

        Re: Windows NT

        Precedes Linux by more than a decade. mmap() was a feature of SunOS and SVR4, although the interface to it was described (but not implemented) in BSD4.2.

        According to Wikipedia, the memory mapped file concept was first seen in TOPS-20

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: Windows NT

      [Author here]

      > It did have a method of treating files as a memory array,

      Treating disks as very big blocks of memory is sort of the opposite of the thing I'm getting at, which is treating very large blocks of memory as disks.

      1. Dan 55 Silver badge

        Re: Windows NT

        If you want to share memory between applications and access it with a file descriptor then POSIX has shm_open(), but there was a System V version before that. If for some reason RAM gets full and some shared memory isn't being used then it will be swapped out to disk until needed again. In a modern Linux there's little difference between shared memory in /dev/shm and files in /tmp, apart from the fact that users delete stuff in /dev/shm less often.
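
        For what it's worth, a bare-bones sketch of that POSIX route (the segment name /demo_shm is made up; older glibc needs -lrt at link time): shm_open() hands back a file descriptor which you ftruncate() to size and then mmap() like any other file, and on Linux the object shows up under /dev/shm.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
            if (fd < 0 || ftruncate(fd, 4096) < 0) return 1;

            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) return 1;

            /* Any other process that shm_open()s "/demo_shm" and maps it sees
             * the same bytes - RAM-backed, swappable like anything else. */
            strcpy(p, "hello from process A");
            printf("wrote: %s\n", p);

            munmap(p, 4096);
            close(fd);
            /* shm_unlink("/demo_shm") would remove the segment when done. */
            return 0;
        }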

  10. Mage Silver badge
    Coat

    NV RAM never entirely went away & predates Optane

    Even in the 1980s we had NV RAM / PRAM. It was static memory with a lithium coin cell. Microcontrollers could be powered off and resume. Some could "sleep" due to having a static design. You could literally stop the CPU external clock to pause execution.

    Also magnetic core memory was non-volatile.

    Mine's the one with the 8K RAM cartridges in the pocket.

    1. Doctor Syntax Silver badge

      Re: NV RAM never entirely went away & predates Optane

      Over the years I've seen a few would-be memory technologies come and go (bubble memory is one that comes to mind - what was it again?). The few that have stood up to reality and their predecessors are the few we have today. Reality includes the cost of completely revising the way things are done to take advantage of them, or of having a commercially viable, if niche, use case. Mobile would appear to provide a suitable use case for PRAM. If it can't make an inroad there it doesn't seem likely to succeed elsewhere.

      1. Rich 2 Silver badge

        Re: NV RAM never entirely went away & predates Optane

        That made me smile. I had completely forgotten about bubble memory

      2. An_Old_Dog Silver badge

        Re: NV RAM never entirely went away & predates Optane

        Wasn't Texas Instruments big into bubble memory development? (/me/mem/fuzz-factor ~= 0.67 on this)

        Success or failure of a technology depends on multiple things: (1) can it be made reliably and cost-effectively? (2) can it be maintained reasonably-easily? (3) can it be easily interfaced to existing systems? (4) is it sufficiently-well marketed? (5) do the perceived advantages outweigh the perceived dis-advantages?

        Don't-worry-about-anything-our-new-black-box-hardware-and/or-software-will-automagically-take-care-of-everything [paternalistic pat-on-the-head] developments grate against my soul, probably because whatever it is, it removes control and repairability from my hands. (/me looks at spare 34-pin and 20-pin hard-drive ribbon cables, thinking, "I probably should flog those off.")

    2. that one in the corner Silver badge

      Re: NV RAM never entirely went away & predates Optane

      > NV RAM / PRAM. It was static memory with a lithium coin cell. Microcontrollers could be powered off and resume.

      As used in the Palm Pilot PDAs (although the power was triple-As, with a small backup whilst you switched in new batteries - which was *not* a daily occurrence!)

      > Some could "sleep" due to having a static design. You could literally stop the CPU external clock to pause execution.

      The CDP1802 COSMAC ELF was a brilliant first computer to learn assembler on, for exactly that reason. Make a simple two-transistor logic probe[1] and watch it all happening, one clock at a time.

      [1] rich sods built a couple of dozen probes and soldered them onto the system busses; flash gits. Mutter.

  11. Doctor Syntax Silver badge

    You have a grid of icons and you can't directly see your "documents". Open an app and it picks up where it was. Turn off the fondleslab and when you turn it on it's right where it was, although some things will be slower to open at first.

    That may work for a piddling little mobile. On a laptop or bigger, my documents are primarily what I need to see. "Open recent" will bring up more than one document for me to pick from, but not many. If I've had to hunt through a few recent documents to check on something, that may well have been enough to have squeezed what I was actually working on yesterday out of the list.

    The advantage of the files-and-folders way of thinking is that the organisation can be structured so that I can navigate through them to find that document I last opened 5 years ago but need to check right now.

    OTOH while Okular will open a read-only PDF where I left it 5 years ago LO Writer will only re-open a document it was writing 2 minutes ago at the beginning. I could do with that being changed.

    1. Mage Silver badge

      LO Writer will only re-open a document .. at the beginning

      Not if you use ODT format and put a name into Tools-> Options -> User Data.

      Though some recent versions of LO Writer have a bug and you need to press F5 to jump to last position.

      We do need a concept of documents as well. See project Xanadu, Apple Hypercard and also the Nebo App on iOS and Android.

      A document should have a history axis, different kinds of objects (text, images, data, audio, video etc), parents, children and siblings, and metadata. Traditional file systems are too limited. See also having a reading library via a Kobo or Kindle ereader with the metadata interface vs trying to organise books, collections, tags, subtitles, genres, series, authors, translators and editions via plain files and directories. Or Calibre on Mac/Windows/Linux.

      See any professional document management system. The user can more easily create / find / edit information via metadata. Win10 with its reliance on search is a failure for accessing programs or information.

      1. This post has been deleted by its author

  12. Mage Silver badge
    Thumb Up

    cut Oberon down into a monolithic, net-bootable binary, and run Squeak

    Great idea and great article.

    But OTOH, I wrote C++, C, and VB6 as if I was writing Modula-2 with co-routines, generic typed functions and opaque modules.

  13. MarkMLl

    No. Sorry, just /no/.

    First: "You load a program, then once in memory and running, you can then load a file into it to work on it. But you must remember to save regularly or you might lose your work. If your computer crashes it has to reboot and reload the OS. This resets it and whatever you had open is lost."

    Look me in the eye Liam, and tell me that you've never had a program crash for no obvious reason. If you don't have a copy of your magnum opus on secondary storage then the only thing you can do is revert to the same memory image: which is likely to be inconsistent, hence your work is unsalvageable.

    Second, if Smalltalk and Lisp are so great 'ow come the DoD considered neither as a foundation for Ada? (In the interest of full transparency, I wrote my MPhil on Smalltalk and related matters: I wot of which I speak).

    Third, environments such as Smalltalk and Lisp suffer from not having per-object ownership and access rights. One of the early PARC books (can't remember whether it's the Blue or the Green) makes the point that if the system owner decides to rewrite the definition of (all) arrays so that they are zero- rather than one-based... well, be it on his own head. And reviewers of the earliest published version in the Green Book make the point that while great fun such a thing is virtually unsupportable since it's impossible to work out what state a user's system image is in: the one thing you can't do is say "reboot and try it again".

    Look, I'm a firm believer in "it's my PC, and I'll pry if I want to" but without internal protection Smalltalk (and Lisp) are basically unusable: something I've pointed out repeatedly to Liam over the last ten years or so.

    In addition to that, "If you find that you need different languages for your kernel and your init system and your system scripts and your end-user apps and your customisation tools, that indicates that there's something wrong with the language that you started in." Sorry, I disagree: the languages used at the lowest levels of implementation /have/ to be able to do things that are anathema at the application level, the canonical example being handling hardware-level pointers (i.e. physical memory addresses) and [horror] performing arithmetic on them. In addition, the lowest levels of system implementation usually (a) include the code that implements at least one heap and (b) carefully avoid using such things because the risk of indeterminacy is too great. By way of contrast, the highest levels usually /demand/ dynamic storage on the heap, plus either reference counting or garbage collection. And $DEITY help programmers with a bit of app-level experience who think they are competent to write low-level microcontroller code, and that it's appropriate to have garbage-collected strings in critical code...

    And so on...

    1. Dan 55 Silver badge

      Re: No. Sorry, just /no/.

      Someone has done it. It's a JIT version of C which can also be called from the shell - you edit a file, make the change, save it, and that's the program or OS changed.

      The language is called HolyC, the OS is called TempleOS, and the developer was... troubled.

    2. timrowledge

      Re: No. Sorry, just /no/.

      “environments such as Smalltalk and Lisp suffer from not having per-object ownership and access rights”

      I claim that

      A) this is not something one suffers from. It's my objects. Nobody else gets to play there.

      B) this is not something for the language to handle, it would be for the system

      C) pretty sure gemstone can provide it

      1. MarkMLl

        Re: No. Sorry, just /no/.

        > B) this is not something for the language to handle, it would be for the system

        You're overlooking the fact that in Smalltalk and Lisp environments there is no significant distinction between the language and the OS.

        1. timrowledge

          Re: No. Sorry, just /no/.

          No, I suspect you inferred ‘operating’ in front of ‘system ‘ in my comment. Just as C has no language constructs for I/o (well, it didn’t last time I had to use it, who knows these days) Smalltalk needs no language construct for ownership etc. The system written in it can have whatever you want. And yes, perhaps some VM support would be nice, but we’ve done that plenty of times before.

          Also I think you’re forgetting that in practical usage we do still run Smalltalk on an OS and make use of that OS. The trick is to try to make it easy to do everything from within Smalltalk. We do fairly well but not perfect.

  14. Bebu Silver badge
    Windows

    Winding paths of history...

    I recall being intrigued back when (90s?) reading GO Corp PenOS doco that the application's text was executed in-place on the device's non-volatile storage, i.e. it wasn't copied into RAM.

    I don't imagine these devices had demand paging or copy-on-write, or indeed an MMU at all, but I assume the non-volatile storage was roughly as fast as the processor, or that there was a fairly large RAM cache.

    An early Australian 8-bit Z80 PC, the Applied Tech. Microbee, used 6116 CMOS static RAM which could be battery-backed to provide non-volatile storage.

    I always thought Multics approach of making everything a segment was an idea worth doing properly - 64kb segments are just plain silly but 2^64 byte segments could fly. I imagine mmap(2) took part of its inspiration from this.

    I suspect flat address spaces are pretty much the rule partly because it's the Unix way and partly because of the complete dog's breakfast the 80286 made of segmentation. Although I think at one stage on x86_32 hardware you could run the Linux kernel in one segment and user space in another. I assume the inter-segment calls were slow, so it was a space vs time trade-off.

    1. MarkMLl

      Re: Winding paths of history...

      "I always thought Multics approach of making everything a segment was an idea worth doing properly - 64kb segments are just plain silly but 2^64 byte segments could fly."

      The fundamental problem was that even after x86 segments were expanded to a maximum of 4GB each, the descriptor tables - GDT and LDT - were still each restricted to 64KB, i.e. 8K entries (descriptors are 8 bytes, so 65,536 / 8 = 8,192). If one wrote an OS as Intel (originally) suggested, and if one actually had more than one descriptor per segment (e.g. one to allow code to be executed, and another to allow a debugger to examine it), that 8K was ridiculously inadequate.

  15. Howard Sway Silver badge

    The argument for keeping different horses for different courses runs thus...

    Get back to me when you've rewritten Call Of Duty 6 in Lisp, and a high performance industrial scale database in Smalltalk, and have them running on your tiny OS as well as they do now, and we'll have another look to see whether all that work to recreate exactly what we already have was really worth it. I'm guessing neither project would be anything other than a catastrophe.

    1. that one in the corner Silver badge

      Re: The argument for keeping different horses for different courses runs thus...

      > we'll have another look to see whether all that work to recreate exactly what we already have was really worth it.

      Um, why would you, or anyone, want to recreate *exactly* what we have now? We have Call of Duty 6, you can play it now, why recreate it - and do so by changing over to a new language at the same time?

      Maybe you are thinking that CoD6 is deserving, in the future, of the cult status that many retro games already have, so we must be sure to be able to run it, exactly as it is now, on the Machines Of The Future? Well, maybe, but then we'd use the same methods as we do for the current retro games - fire up a VM and run it in that.

      In the course of time, unless you know of some fundamental reason why not (in which case, please share it), there is no reason to think that one couldn't build a video game of the scale of CoD totally natively to a Lisp-based system[1].

      But it will get to that point piecemeal, just as CoD did on the PC (what, CoD came into being as a single project, not in any way pieced together from and building upon existing materials?). But did anyone promise the new platform as a Gamer's Paradise from Day One? Maybe, if it happens, it'll take over from a different starting point and, for the first couple of years after it trickles down to all the home users, there will be a resurgence in console gaming whilst the new stuff ramps up behind the scenes.

      As for the database - why recreate it, especially as it is running on a server and we can just talk to it over the network? Heck, if it is a big industrial database it is probably already running on a different OS than its clients are!

      But we can write some new database clients using Smalltalk - after all, how many people actually write the database server itself, compared to the number who write new databases to live in that server and new clients to access that data in User-appropriate fashion?

      Or did you actually mean that you don't see any purpose to a new programming idea until it has been used totally replace the current crop, in one fell swoop?

      [1] Lisp-based doesn't restrict you to only writing raw Lisp, of course; just like the use of C in CoD doesn't limit it to being written in C (in fact, Googling it, C isn't mentioned - only C#, C++ and, oh, Lua, which is implemented in C but whose users aren't writing C, they are just using the language that is implemented on top of it). Many a domain-specific language has been implemented in Lisp.

    2. timrowledge

      Re: The argument for keeping different horses for different courses runs thus...

      Evidently you’ve not heard of Gemstone. A vastly capable database Smalltalk. Gemtalksystems dot com.

  16. Dan 55 Silver badge

    "We don't need multiuser support, or POSIX compatibility."

    I have to disagree on POSIX. A familiar shell and a familiar API to program with will mean developers have somewhere to start. If an operating system isn't natively POSIX compatible, someone comes along and writes POSIX compatibility for it anyway because it's easier to do that than it is to learn a new shell and rewrite every program when porting it.

    1. Anonymous Coward
      Anonymous Coward

      Re: "We don't need multiuser support, or POSIX compatibility."

      Back in the day when NetWare was still a thing, Novell was slowly rewriting their core C Library to look more POSIX/Unixy to make porting software to run on NetWare easier. The big challenge is that Unix typically means processes protected by an MMU from each other. But Netware had no memory protection (to speak of). It was slow work with lots of fudges. In the end, Novell took the cheat's option and ported "NetWare" to run on SUSE Linux. Even back then, Linux was winning in the server arena.

  17. Greybearded old scrote Silver badge

    Dream a Little Dream

    Mine goes like this:

    A BEAM compatible (Erlang VM for those who don't know) interpreter running directly on the hardware. Well, it already usurps the OS process separation and switching after all. It had to, feeping creaturitis had already poked holes in the separation and slowed down the switching.

    That already gives you Erlang, Elixir, Gleam and even Lua. And parallel processing if you use processes as intended.

    But we wants a visual programming language like Squeak that compiles to BEAM code. I've worked with a db that had a tile-based programming language. (No names, I mostly didn't think it was a good product. Our office database was named "Godot" for good reasons.) You tended to feel silly drawing a diagram of how to construct a sentence, but overall it was more readable than pages of a text language.

    1. Greybearded old scrote Silver badge

      Re: Dream a Little Dream

      And another thing. The whole Apps idea has it backwards. We want to concentrate on the data, and have the system load the appropriate tools for the data type you're looking at atm. Even if they are embedded in each other.

      OpenDoc had the sort of component architecture I'm thinking of. But the Apple/IBM/Novell partnership gave up on the dream of displacing MS.

      1. A Non e-mouse Silver badge

        Re: Dream a Little Dream

        Wasn't this the idea of OLE in Windows? You opened your file but it had text which was managed by Word, pictures from Paint, and tables of numbers from Excel. You just clicked on the object and all the menus changed to the relevant "application". (I did try this one time but it didn't really work: it was an easy way to bring a PC to its knees.)

        1. Richard 12 Silver badge

          Re: Dream a Little Dream

          Pretty much.

          And it does actually work now, at least for MS Office from MS Office.

          Not really for anything else, though, and I'm certain that it's actually a huge array of horrible workarounds all the way down.

        2. Greybearded old scrote Silver badge

          Re: Dream a Little Dream

          It was. MS did it badly, whodathunkit? To be fair (no fun!) the PCs of the time were short on resources. I did use it for college work in 4MB of RAM though. Yep, megabytes.

        3. SCP

          Re: Dream a Little Dream

          Sounds like a Pipedream.

  18. Wily Veteran

    EMACS?

    What the author describes sounds a lot like good old emacs (loved by many, reviled by others) on top of a PMEM system.

    Most of it is written in eLisp on top of a small interpreter written in C, it has a tiling window system, it has a "storage" system in terms of buffers which can contain executable eLisp or text rather than files, can (more or less) transparently access other resources on the network....

    Is emacs a perfect implementation? Of course not, but it can be seen as a "proof of concept" for the software part of the ideas presented.

  19. that one in the corner Silver badge

    Chickens and swans: they are harder to tell apart than one might think

    Is there *really* some magical difference between using the fondleslab and the PC? And I'm admitting here to using Windows most days[5]

    > You have a grid of icons and you can't directly see your "documents". Open an app and it picks up where it was. Turn off the fondleslab and when you turn it on it's right where it was, although some things will be slower to open at first.

    You know, that sounds like my day to day experience with the PC for, ooh, decades now. *The* "productivity tool" I use is a plain old programmer's text editor - and those have been reloading the last state for donkey's years, including cursor position[3]! And way better than that, tracking a whole lot of different sets of states that I can switch to easily.

    Just quickly comparing the Android[1] devices sitting next to the PC, on both in their normal configuration:

    * I have to start up a special program in order to 'directly see my "documents"' (although I can create an icon that will directly open a particular one): xplorer on the PC, Ghost Commander on 'phone & tablet.

    * Some programs will automagically reload the previous state, some won't; the web browser is the obvious example (which started doing that on the PC back in - not sure, was it Opera around about 1997?) but the PDF reader(s) will as well[2]

    * Other programs steadfastly refuse to reload the previous state, but do something more useful (e.g. load the night sky as it looks today, right now, by default)

    * Other programs don't (and I sometimes think I'll hunt down something that will, but it is more faff to do that than not bother; hey, I probably just need to check the options): on both, opening the photo browser program directly means I then have to navigate to whatever I want to see today, even if it is the same as that thing yesterday.

    * When I turn on the PC[4], it starts all the bits I *want* it to start, in their previous state (if that is appropriate)

    Oh well, maybe my PC is an outlier, running so much weird stuff that it is unrecognisable to Joe Bloggs.

    Okay, on the PC it is a *lot* easier to drop into the Really Oldie Days way of doing things, and I do do that (frequently) as, strangely enough, using a command line can be a really efficient way of doing *some* parts of the task. So maybe that is the big difference between the two? But if I decided to load

    [1] full disclosure, haven't used an iOS device for more than a few tens of minutes, but I'm told the experiences are similar

    [2] actually, both the PC versions win here, as they reload *all* the docs I had open last time, right at the relevant page; great for hardware refs.

    [3] seriously, donkey's years; the *current*, latest, exe for my favourite editor, the one I open first in the morning, CodeWright, is dated 01/08/2003

    [4] if I used my laptop more, it would hibernate and come back up in its last state even faster!

    [5] I have a Real OS running as well, honest, Windows is just in a VM - with sole access to this GPU, so it *is* all very hackery and clever; stop looking at me like that!

    1. _andrew

      Re: Chickens and swans: they are harder to tell apart than one might think

      Surely even your command-line has the recent history of what you were doing, and might even start in the same directory that you were previously in. Command-line users aren't savages either.

  20. Pete 2 Silver badge

    Y2K times a million

    > Firstly. We have to throw away backwards compatibility again

    And that is where it all falls apart.

    Nobody except computer science students buys platforms simply because they have a novel architecture. The real world buys computers to get stuff done. And as such, academic articles about the neatness of the architecture get short shrift.

    If it was even a little bit important, nobody would have gone down the Intel / 80x86 branch, but would have stuck with the "cleaner" addressing modes of the 68000. Yay! let's revive the architecture battles of 40 years ago.

    But they didn't. The decision makers with the money to spend chose backwards compatibility. They will always do the same again. The biggest reason being that nobody would trust an emulated layer, without testing all their code and data against it and fixing the inevitable gaps.

    1. A Non e-mouse Silver badge
      Flame

      Re: Y2K times a million

      I'm writing this on a computer using an ARM CPU, running an operating system that has been ported across multiple CPU architectures over its life. And that's before we get to Javascript/WASM.

      1. _andrew

        Re: Y2K times a million

        Indeed. We are ever so close now to the internet being our storage architecture, with our data locked up not in "apps" as the mobile devices would have it, but in "apps" that are services running on some cloud somewhere. Liam can reinvent the low-level pieces in Smalltalk or whatever he likes, but as soon as he builds a web browser on top, he can immediately do everything that a Chromebook can, or indeed most of what anyone with an enterprise IT department providing a smorgasbord of single-sign-on web apps can do, and no-one would notice.

    2. Doctor Syntax Silver badge

      Re: Y2K times a million

      "If it was even a little bit important, nobody would have gone down the Intel / 80x86 branch, but would have stuck with the "cleaner" addressing modes of the 68000. Yay! let's revive the architecture battles of 40 years ago."

      The early SMB-scale Unix computers I used were either Z8000 or 68000 based. That wasn't, at least directly, because of cleanliness of addressing modes, it's because, at system integrator level, it's what was available to buy.

      But because IBM chose Intel for their PC and the economies of scale prevailed, all that stuff became a dead end. So much for not throwing away backwards compatibility.

  21. disgruntled yank

    Ah, LISP

    Michael Stonebraker wrote that when undertaking Postgres they started with the notion of using LISP for its elegance etc. etc. The performance was found to be intolerable, and the project reverted to C.

    Or so I recall--the book is behind a paywall at ACM.

    1. that one in the corner Silver badge

      Re: Ah, LISP

      Try The Implementation Of Postgres - I wasn't presented with a paywall.

      >> Our feeling is that the use of LISP has been a terrible mistake for several reasons. First, current LISP environments are very large. ... As noted in the performance figures in the next section, our LISP code is more than twice as slow as the comparable C code. Of course, it is possible that we are not skilled LISP programmers or do not know how to optimize the language; hence, our experience should be suitably discounted.

      >> ...

      >> However, none of these irritants was the real disaster. We have found that debugging a two-language system is extremely difficult. The C debugger, of course, knows nothing about LISP while the LISP debugger knows nothing about C. As a result, we have found debugging POSTGRES to be a painful and frustrating task.

      1. disgruntled yank

        Re: Ah, LISP

        Thank you for that link, and please have an upvote.

        The reference I saw was in Stonebraker's memoirs, which for a while were freely viewable on-line. I had not seen this particular item.

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: Ah, LISP

      [Author here]

      > Michael Stonebraker wrote that when undertaking Postgres they started with the notion of using LISP for its elegance

      As interviewed recently by my colleague Mr Clark:

      https://www.theregister.com/2023/12/26/michael_stonebraker_feature/

    3. hammarbtyp

      Re: Ah, LISP

      It was a similar story with Erlang development. It was originally developed in Prolog (the third leg of the holy trinity of languages everyone should try, but virtually no one does).

      However, performance was so bad that it had to be ported to C.

      For that reason I have doubts whether there is a language that can do it all without bloating it out of existence.

      1. HuBo
        Thumb Up

        Re: Ah, LISP

        Haskell

  22. This post has been deleted by its author

  23. DS999 Silver badge

    Do not want!

    Temporary and permanent storage needs to be completely separate because programs don't always write where they are supposed to. You can mitigate that for bugs via stuff like page protection, but all the issues we've seen with malware demonstrate that there are always ways around whatever protections we can come up with. Let's not create FEWER hoops for them to jump through to modify/corrupt what should be read-only data like OS code, making it easier to create persistent malware. (All of the scary 0-day stuff you hear about from NSO Group and so forth is effectively a TSR: if you reboot your phone it is gone and they need to re-infect you to regain control of it. Apple might do well to add a "reboot phone nightly while you sleep" option to make things more difficult for NSO.)

    Fortunately the chances that we get a memory technology that is at once as fast or faster than DRAM and as cheap or cheaper than NAND get lower and lower the cheaper NAND gets. No one has even been able to get memory that's as fast and cheap as DRAM, so the chances of getting cheap enough to replace NAND are almost nonexistent at this point.

    The reason Optane failed was because it was cheaper than DRAM and faster than NAND, so it had a small niche where it could be useful to some, but it needed to be faster than DRAM and cheaper than NAND to have any chance in the mass market.

    1. Dave 126 Silver badge

      Re: Do not want!

      If I'm reading the author correctly, the idea is that you could restart your OS in an eyeblink. Every program is within its own virtualised OS, spun up from ROM media if needs be. Under the control of a very small hypervisor, which would have fewer attack surfaces than a bigger OS. Additionally, application states can be paused and resumed, backed up or even baked onto Read Only media. So you have a known-good untampered-with reference.

      On today's computers, it's only software that is stopping nasties in RAM from corrupting what's on SSD. A program in RAM with admin privileges can do what it wants to a disk; the disk isn't safe just because it is a separate physical device to the RAM.

      1. DS999 Silver badge
        Facepalm

        Re: Do not want!

        Who the hell wants to "restart the OS in an eyeblink"? The whole POINT of a reboot is to get a known good state!!

        If you aren't doing the stuff you need to do to reach that known good state, like resetting the CPU and all I/O devices and zeroing "RAM", how is that even a restart? Just because you have the capability for persistent memory that survives a reboot doesn't mean you want it ALL persistent. Otherwise how would you recover from an OS bug that locked up your GUI, for example, or malware that has scribbled on some OS data structures it shouldn't?

        You'd need at minimum to zero all memory that will be allocated by the OS when memory is requested, starting the OS with no allocated memory and requiring it to rebuild its data structures from scratch to ensure their integrity (or to, you know, allow the system to be different if you have e.g. installed kernel patches or changed configuration files!). So you gotta build page tables from scratch, start with a blank page cache, and step down from the most privileged state (which is where a system boot starts the processor) to user mode and so forth. A reboot takes time because doing all that stuff takes time, and as CPUs and I/O protocols get more complex, at best faster CPUs can only keep up. Persistent memory isn't going to change that, unless you change the meaning of "reboot" to mean basically nothing has been done, and when you want to reboot by today's meaning of the word you'll have to do whatever a REAL reboot is now called!

        And I'm just talking about a warm reboot here. A cold boot from a power off state means that the CPU and I/O devices are in an unknown state, so even if you want some sort of meaningless "fast restart" for a warm boot you sure as heck can't cut any corners in a cold start.

        1. jake Silver badge

          Re: Do not want!

          As the old (early '80s) AI koan put it ...

          A novice was trying to fix a broken Lisp machine by turning the power off and on.

          Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”

          Knight turned the machine off and on.

          The machine worked.

    2. WayneS

      Re: Do not want!

      IMO you are correct that we need both temporary and persistent storage. Temporary storage for data structures that you want to disappear on restart. After all the system may be gone for an indeterminate timeframe between shutdown and init, and the environment around the system has moved on, rendering many data structures invalid.

      The challenges are significant with regard to persistent storage and issues like memory versioning, recovery/safe modes, RDMA/DMA, resilience and durability. A single NV-DIMM is not good enough for persistent storage - it's a change in media and access method, not in the storage attributes we demand of persistent stores. Pretty quickly things devolve into a multi-node, globally shared NV-DIMM, plus a distributed memory lock manager in the OS, to achieve the levels of resilience and durability needed.

      Basically, we need to establish new OS page types - persistent text and persistent data. Then develop OS memory management to handle them correctly, re-educate ourselves, and develop operational processes to protect the pages we want protected (backup/restore/snapshot/ransomware protection etc). Not a trivial ask!

      To get the benefits of nearly-as-fast-as-SDRAM we can't dodge these bullets. And the advantage of skipping the slow bits of all the memory-copying malarkey we do now would seem worthwhile in the long term.
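
      For a rough sense of what that already looks like from user space, here is a minimal Python sketch of byte-addressable persistent memory via a memory-mapped file. The /mnt/pmem path is an assumption standing in for a DAX-mounted persistent-memory filesystem, and a real deployment would use finer-grained flush primitives than msync; the point is only that persistence needs an explicit flush step which the OS and applications have to manage.

          import mmap
          import os

          # Assumed path: a file on a DAX-mounted persistent-memory filesystem.
          PATH = "/mnt/pmem/example.dat"
          SIZE = 4096

          fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
          os.ftruncate(fd, SIZE)        # make sure the backing region exists

          buf = mmap.mmap(fd, SIZE)     # shared, read/write mapping (Unix defaults)
          buf[0:5] = b"hello"           # ordinary byte-addressed stores

          # Stores can sit in CPU caches; without an explicit flush (msync under
          # the hood) there is no durability guarantee after power loss.
          buf.flush()

          buf.close()
          os.close(fd)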

  24. Jumbotron64

    PARC and Wirth

    Just a brilliant article and essay. Even before the author got to the payoff I was beginning to wonder if Smalltalk and Niklaus Wirth would be mentioned. Not because I am brilliant ( I am most certainly not as compared to the author ) but because I ponder such things in my perpetual state of boredom, running idylls in my often idle mind ( perhaps a wetwork loop function ). But when he mentioned Smalltalk I was piqued and in my mind was urging him forward ( say his name…say his name ). And then POW…Niklaus Wirth. And here is where once again my lack of brilliance versus the author shines through. I thought that somewhere after all the talk of Object Oriented languages and PARC and Smalltalk and Apple the author would have to mention Niklaus Wirth and Pascal and eventually Apple’s object oriented dialect known as Object Pascal. But with a shout of “YES” and with a fair amount of giggling on my part, when the author got to Wirth he did so by way of Oberon. And the heavens opened and I heard a voice roughly sounding like Niklaus Wirth saying “Behold, I have seen thine Register article….and it is good !”

  25. that one in the corner Silver badge

    Well, at least this reminded me to have a play

    Despite not having had success applying Lisp or Smalltalk in the Real World[1], this article has reminded me how exciting all this was.

    Now that I have some more time on my hands, I have at least made a few notes, reminders to dig back into the piles of books (Blue, Green, Red and Purple - I still have them all in dead tree form), look up the latest installer for Squeak (and Pharo, why not) and give the middle mouse button some attention.[2]

    I might even dig out my hacked copy of Budd's "A Little Smalltalk" (although that reeks of masochism these days; for a while it was the best - yikes - that was actually available)!

    Thank you, Liam, for a definitely thought-provoking article.

    [1] we tried, back in the '80s; I was even hired 'cos I could namedrop "object browser" and draw a picture to explain CADR, although they did demand I came back looking a little less like a programmer and a bit more City of London!

    [2] But first, I am still gritting my teeth and trying to convince myself that semantic white space is bearable and Python *can* be learnt! Oh, why didn't they release the "Raspberry Lua"?

    1. Ken Hagan Gold badge

      Re: Well, at least this reminded me to have a play

      At least in Python, the "semantic white space" is merely an insistence that your indentation must agree with your scoping. As far as I'm aware, it is universally agreed that indentation which disagrees with scoping is a horrible thing, so what's the objection to taking advantage of it?

      1. Richard 12 Silver badge

        Re: Well, at least this reminded me to have a play

        Basically, because whitespace is invisible, and is often not preserved.

        Python is the only language I'm aware of that needs a "pass" statement, and that is purely and simply because of the limitation of meaningful whitespace - you cannot see where it ends.
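
          For anyone who hasn't met it, a minimal illustration: because there is no closing brace or "end", an intentionally empty block has to contain *something*, and that something is "pass".

              def not_implemented_yet():
                  pass    # an empty body needs a statement; nothing else marks where the block ends

              class PlaceholderError(Exception):
                  pass    # a stub class, to be fleshed out later

              for record in []:
                  pass    # explicitly "do nothing" for each item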

        1. that one in the corner Silver badge

          Re: Well, at least this reminded me to have a play

          > Python is the only language I'm aware of that needs a "pass" statement

          Absolutely.

          I am still such a Python neophyte that, seriously, I only learned of "pass" last week - and then it took hours of scouring to find it - it is buried hundreds of pages deep in "Learning Python", skipped over entirely in lesser beginner's guides (as they sneakily avoid situations where it is needed) and just what the heck do you google for if you don't already know the name but are hoping something like it exists?

          1. HuBo
            Holmes

            Re: Well, at least this reminded me to have a play

            IIRC, Python was developed to ease the process of grading CS homework at MIT. There's meant to be only one correct way of writing a Python program that solves any given specific task, and so if your homework matches it then it's 10/10, otherwise 0/10 (easy!). The language is not good for anything else (e.g. outside of academia; much like RISC-V).

    2. timrowledge

      Re: Well, at least this reminded me to have a play

      Load a copy of Squeak (from squeak.org) on your Pi and read a few books (legally) downloaded from http://stephane.ducasse.free.fr/FreeBooks.html and/or watch some relevant YouTube videos. And join the Squeak mailing list.

      I’ve been making a living with Smalltalk (almost entirely on ARM) for a tad over 40 years and any time I have to spend on nasty textfile languages is anathema. Just today I had to poke at a Python program to fix a problem and.. no, just no. That is so very not the way to do it.

  26. steelpillow Silver badge
    Windows

    Symbian

    TLDR, but anybody remember Symbian? It was the OS that powered the 1990s PDA revolution, on devices such as the Psion Series 5. Fast, non-volatile memory was hitting the high street and Psion recognised its potential for running code straight from it. Loading it into conventional RAM would still run faster, but the instant-on and low power of the NVRAM was seen as the way ahead.

    My own memory is getting a bit volatile these days, but ISTR Symbian was held in cheap ROM and loaded into RAM for speed, but apps were installed to and ran from NVRAM. Perhaps somebody can confirm/deny the truth of it?

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Symbian

      [Author here]

      > anybody remember Symbian?

      FOSS now, and crying out for a revival effort IMHO.

      https://github.com/SymbianSource

  27. bazza Silver badge

    Replacing one set of falsehoods with a new set of falsehoods

    The article seems to be based on the assumption that modern architectures are headed from a model of two storage classes to one.

    Except that it then proceeds to brush over the fact that in a new world we'd still have two different storage classes, despite briefly mentioning it. If you've got a storage class that's size-constrained and infinitely re-writeable, and another that's bigger but has wear-life issues, volatility makes no difference; one is forced to treat the two classes differently, and use them for different purposes. The fact that both storage classes can be non-volatile doesn't really come into it.

    And also except that one is never, ever going to get large amounts of storage addressed directly by a CPU. RAM is fast partly because it is directly addressed - an address bus. Having such a bus is difficult, and the address decoding logic becomes exponentially more difficult if you make it wider still. If you wanted to have a single address space spanning all storage, there'd be an awful lot of decoding logic (and heat, slowness, etc). That's why large-scale storage is block addressed.

    And whilst one storage class is addressed directly, and another is block addressed, they have to be handled by software in fundamentally different ways.

    One might have the hardware abstract the different modes of addressing. This kind of thing already happens; for example, if you have multiple CPUs you have multiple address buses. Code running on one core on one CPU wanting to access data stored in RAM attached to another CPU makes a request to fetch from a memory address, but there's quite a lot of chat across an interconnect between the CPUs. So, why not have the hardware also convert from byte-address fetch requests to block-addressed storage access requests? Of course, that would be extremely slow! It would be a very poor use of limited L1 cache resources.
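
    To make the software-visible difference concrete, here is a rough Python sketch (the file names are invented for illustration; a real block device would sit behind a driver rather than a plain file). Byte-addressed storage lets you touch a single location in place; block-addressed storage forces a read-modify-write of a whole block just to change one byte.

        import mmap
        import os

        BLOCK = 4096

        # Byte-addressed: map it and store to a single location in place.
        fd = os.open("byte_addressed.bin", os.O_CREAT | os.O_RDWR, 0o600)
        os.ftruncate(fd, BLOCK)
        mem = mmap.mmap(fd, BLOCK)
        mem[42:43] = b"\x7f"                   # one byte, one store

        # Block-addressed: the smallest transfer unit is a whole block.
        with open("block_addressed.bin", "wb") as f:
            f.write(bytes(BLOCK))              # create a one-block "device"
        with open("block_addressed.bin", "r+b") as f:
            block = bytearray(f.read(BLOCK))   # read the whole block in
            block[42] = 0x7F                   # change one byte
            f.seek(0)
            f.write(block)                     # write the whole block back

        mem.close()
        os.close(fd)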

    1. steelpillow Silver badge

      Re: Replacing one set of falsehoods with a new set of falsehoods

      Hi Bazza,

      Bank switching is an ancient technique for attempting to resolve the traditional dilemma. By pointing the memory bus at a different block, aka bank, you can gain some of the benefits of bulk storage along with some of the benefits of direct addressing. The technique originated in the mainframe world, and became common in the 8-bit micro revolution once demands for ROM+RAM exceeded 64 KB. For example, it was used by the 128K Speccies. It too needs its own approach to memory management and to coding demanding apps.

      Then, there is the linear storage of tape. Mainframes used it for bulk storage, and some developed automated to-and-fro write/read systems which enabled its use as dynamic storage during computation, when the thing ran out of core store. It was a bad idea because the tape wore out, but that didn't stop Sinclair reinventing it for the Microdrive - though looping the tape rather than reversing it.

      There are probably other paradigms we have missed.

      All in all, the reality is far from the binary RAM+bulk suggested by the article.

      1. bazza Silver badge

        Re: Replacing one set of falsehoods with a new set of falsehoods

        Indeed. I was going to mention expanded RAM from the old days of DOS, which I guess is a form of bank switching for PCs.

        The one thing that might do something in this regard is HP's memristor. There were a lot of promises being made, but it did seem to combine spaciousness with speed of access and no wear-life limit. Who knows if that is ever going to be real.

        Files are Useful

        I think another aspect overlooked by the article is the question of file formats. For example, a Word document is not simply a copy of the document objects as stored in RAM. Instead MS goes to the effort of converting them to XML files and combining them in a Zip file. They do that so that the objects can be recreated meaningfully on, say, a different computer like a Mac, or in a different execution environment type altogether (a Web version of Word).

        If we did do things the way the article implies - just leave the objects in RAM - then suddenly things like file transfer and backup become complicated and interoperability becomes zero. The machine and network couldn't do a file transfer without the aid of the software doing "file save as" first.

        And if the software had been uninstalled the objects are untransferrable.

        If one still saves the objects serialised to XML/Zip, one has essentially gone through the "file save and store" process, and the result may as well go to some other storage class for depositing there. There is then no point retaining the objects in RAM, because one has no idea whether the file or the in-RAM objects are newer.
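
        A small Python sketch of the trade-off (JSON stands in for the XML parts, and the file names are made up): the serialised route produces something any other program can read, while dumping the live objects ties the data to one runtime's idea of their layout.

            import json
            import pickle
            import zipfile

            document = {"title": "Quarterly report",
                        "paragraphs": ["First para.", "Second para."]}

            # Interoperable route: a neutral text format inside a zip container,
            # roughly what an OOXML .docx does with its XML parts.
            with zipfile.ZipFile("report.zip", "w") as zf:
                zf.writestr("document.json", json.dumps(document))

            # Image-style route: dump the live object graph as-is. Anything else
            # (another language, machine, or program version) must understand this
            # exact layout to get the data back.
            with open("report.pickle", "wb") as f:
                pickle.dump(document, f)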

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Replacing one set of falsehoods with a new set of falsehoods

          [Author here]

          > Files are Useful

          Absolutely, yes.

          But mainframes were useful. Multiuser minicomputers were useful.

          Even so, they were largely replaced by smaller, simpler, stupider devices, because the new ones were cheaper and faster.

          This is nothing new.

          Let's make new stupider cheaper devices, that are much easier to understand! Then we can work out what fun stuff we can do with them later on -- same as we've done every previous time we threw the whole thing in the bin and started over.

          There's nothing holy and special about what we have today.

          https://thedecisionlab.com/biases/the-sunk-cost-fallacy

          Never forget the sunk cost fallacy.

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: Replacing one set of falsehoods with a new set of falsehoods

      [Author here]

      > Except that it then proceeds to brush over the fact that in a new world we'd still have two different storage classes, despite briefly mentioning it.

      I tried to be explicit about _not_ proposing something that is entirely _instead of_ existing systems, but _as well as_ them. As being a small, simple, fun tool to hack upon, and play with, and explore and learn from, instead of gigantic massively-complex designs inherited from 50-60 years ago.

      I think that at the small end, the client end of things, yes, single-level high speed PMEM _is_ likely to become dominant over legacy dual-level storage. It's a better fit for simple, pocket devices such as phones and tablets.

      But we could use it more effectively if we discard some of the legacy baggage.

      It will require writing millions of lines of new code, yes. But make no mistake: a new version of Android or iOS or macOS or MS Windows $this_year means that anyway.

      It won't replace the racks of servers on the back end, though. Why should it? Who would benefit?

  28. Dr. Ellen

    Magnetic core memory

    Back in the age of dinosaurs, the first computer I was allowed to get really intimate with was a CDC 3100. Its memories were kept on ferrite doughnuts - TINY doughnuts - physically woven into a tapestry of wire. ALL the memory in the computer was persistent. It had 8k 24-bit words, later increased to a stunning 12k. And you just turned it on. Everything else was external, on punchcards, punched tape, or magnetic tape. If you wanted the 3100 to do the same today as yesterday, you just turned it on, and there it went. Every bit was as fast and as permanent/changeable as any other.

    There were, of course, drawbacks. Each bit in the tapestry had to have wires threaded through it, neatly, carefully, and expensively. And cramped.

  29. timrowledge

    On a Raspberry Pi 5 (not even one with an NVMe hat) my working Smalltalk image goes from double-click to ready-to-start-typing in essentially no time - an eyeblink at most. It includes all the development tools, code browsing tools - ones that use proper antialiased, proportionally spaced fonts - compiler, a bunch of games, a web application development system, documentation system, web server framework, code & version management system, graphics libraries, database connectivity (Postgres in this particular instance)... everything. Bang, there, ready. If I need to copy it across to a different machine - maybe my x64 Ubuntu box, or a colleague’s Windows machine, or a Mac - it will work identically on that.

    It just makes sense.

    1. H in The Hague

      "On a Raspberry Pi 5 (not even one with an NVME hat) my working Smalltalk image ..."

      Sounds fun. Which Smalltalk implementation do you use?

      1. timrowledge

        Almost exclusively Squeak, occasionally Cuis (a fork focussing on quality vector graphics and some new UI ideas), very occasionally Pharo (another fork focussing on some ideas about pushing boundaries and production). Very rarely, VisualWorks, but I’m not really pleased with how it has changed since I was the engineering manager back in the 90s.

  30. Anonymous Coward
    Anonymous Coward

    Encrypted data safety implications.

    It's actually pretty handy that all traces of unencrypted data conveniently vanish when the plug is yanked (assuming the disk space used for swap and tmp is also encrypted).

    If that doesn't happen, then any memory that temporarily holds unencrypted data must be explicitly wiped/scrambled before turning the device off. If the power source goes off accidentally - or even gets shut off on purpose - then that great permanent memory suddenly becomes a liability.

    Currently unencrypted data is stored pretty randomly across RAM at runtime. The idea of separating "sensitive" unencrypted from "non-sensitive" unencrypted data, to minimize the volatile memory required, sounds not only extremely tedious but also potentially dangerous security-wise.

    Of course, if we, and all of industry and business, are going to be password-free and iris-identified by Sam's 8-ball instead, then there's obviously no need to worry and we can just relax, because all our sensitive parts are in good hands anyway.

  31. danielmeyer
    Stop

    Just wow

    The main thrust of this article seems to be proposing that the future of computing is to remove the ability to turn it off and on again to get a known good system status. I can't believe that any of the IT crowd are seriously entertaining it.

    1. Anonymous Coward
      Anonymous Coward

      Re: Just wow

      "Hello, IT; Have you tried just EMPing it?"

  32. Morten Bjoernsvik

    looking forward to CVEs for memory

    With persistent memory it will be like disk today. Not looking forward to the CVEs coming. But it will create lots of work.

  33. Binraider Silver badge

    New and exciting vectors for security issues

    Non-volatile and persistent RAM is going to create all kinds of fun places to hide malware that will withstand a reboot. I wasn't a fan of Optane for this reason; and a more generally available, rather than vendor-specific, solution will have malware writers scrambling to find exploits.

    Volatile RAM was of course a target for such things as well; there is no shortage of examples where the contents of RAM could survive a soft reboot.

    No questioning the advantages of a more permanent RAM state. We think of "loading" up an OS as the default. By having it simply "there" you can cut down quite a bit of user nuisance. Not far off the idea of putting the OS, or even some common applications, into ROM (see the Archimedes).

    1. jake Silver badge

      Re: New and exciting vectors for security issues

      On the other hand, when you have a decent OS, you only need to reboot it when certain aspects of the kernel need upgrading. Going weeks, or even months between reboots means that the speed of rebooting isn't really much of an issue.

      1. Binraider Silver badge

        Re: New and exciting vectors for security issues

        Decent OS and standard business issue laptop rarely go in the same sentence!

        1. Doctor Syntax Silver badge

          Re: New and exciting vectors for security issues

          It depends. You are not constrained by the OS the H/W vendor installs. You can even buy H/W OS free. But having taken that route I still subscribe to the idea of turning it off when you're not using it.

          1. Binraider Silver badge

            Re: New and exciting vectors for security issues

            The laptop foisted onto average employees is almost certainly some variety of dogshit Dell or Lenovo on Windows 10 with a somewhat standardised OS image for security purposes.

            While one could probably use another OS on the corporate network, I have a hunch our security guys might take a bit of a dim view of doing so, uncontrolled. They tend to have kittens just for requesting some fairly basic applications let alone another OS!

  34. Stuart Castle Silver badge

    I think this shift is already under way, thanks largely to a lot of users switching to devices rather than computers. While even mobile devices do have separate RAM and storage, from a user's point of view the lines between the two are blurred, and the devices themselves encourage you to switch between apps rather than start and stop them. At work, I use a Mac, PC and my iPhone. I could tell you how much RAM my Mac and PC have, but would have to look up the RAM on my phone.

  35. prandeamus

    "Oberon is a smaller, simpler, faster Pascal. Like Pascal, it's strongly-typed and garbage-collected [...]"

    I think you lost me there. The original Pascal was stripped down in so many ways from its Algol predecessors, and one could argue it's stripped to the bone in ways which make it an excellent teaching language. Subsequent commercial implementations added back stuff that was needed for large-scale development, and that's a story in its own right. But is Oberon objectively net simpler than Pascal when it adds new features like dynamic dispatch? Was Wirth's Pascal ever garbage-collected? Because I don't recall that. In what sense is Oberon faster - it was certainly possible to write fast compilers for Pascal generating pretty decent code.

    I don't have a downer on either language, but it's things like this that make it harder to appreciate your wider debating point. I love me a good polemic, but a polemic still has to have a core of hard logic.

    1. Brewster's Angle Grinder Silver badge

      "Was Wirth's Pascal ever garbage-collected,"

      No, and I've just checked with Google.

    2. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > The original Pascal

      And there is the assumption on which your argument pivots.

      Who said we were talking about the _original_ Pascal? I didn't.

      I'm not talking about 1960s Pascal. I'm talking about the enormity of Embarcadero Delphi 12:

      «

      Minimum Hardware Requirements

      1.8 GHz or faster processor

      3 GB of RAM. 8GB of RAM is recommended.

      Between 6 GB and 60 GB of available disk space. Using an SSD is strongly recommended.

      DirectX 11 capable video card that runs at 1440×900 vertical resolution screen is recommended.

      Windows 10 Anniversary Edition is recommended and required for development for the Windows 10 store.

      »

      ... or for that matter FreePascal + Lazarus in 2024.

      1. prandeamus

        I'm not looking for a flame war, but it's not really "assumption on which your argument pivots", just pointing out an ambiguity that threw me when reading your (interesting) article. I dare say Oberon was a reaction against bloated sorta-Pascal. As was Modula-2 in some ways: Wirth had a genuine skill for reducing things to their simplest form, although sometimes his work resulted in programming environments that were a teeny bit too simple. The world definitely owes a debt to Wirth for Pascal and so on. However, that wasn't the way your line of thought was constructed: you didn't mention Delphi or Turbo Pascal or Object Pascal in that context. You said "Pascal".

        To be positive, I found the article worth a read and you make some good points, like making the effort to understand LISP, or something like it, in order to think differently about software. We should periodically jolt ourselves out of complacency.

  36. Brewster's Angle Grinder Silver badge

    There's lots I could write, but my question is Why?

    What, as a user, does it give me that I don't already have?

    If it doesn't, then where, as a manufacturer/developer, is the competitive edge or productivity gain that will allow me to trounce my rivals in doing what they are already doing?

    1. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > my question is Why?

      I thought I said. For fun.

      Because Linux etc. isn't fun any more. It's too big and hairy. I want fun little computers I can understand again.

      But you know what? I'd like them to be gigaFLOPS class things with 1TB of RAM and a dozen CPU cores.

      There is no inherent contradiction in these things. We all presume there is, but it ain't really so. It's just that the success of some multi-billion-dollar corporations rests on us continuing to believe it.

      1. jake Silver badge

        Fun little computers I can understand again.

        I've been hacking on ATmega328 processors for a couple of years now. Reminds me of the '70s. Fun, and useful.

        You can use Arduino kit for similar results if you don't like board layout and soldering, or aren't teaching grandkids the basics.

  37. ICL1900-G3

    What a great article

    Thank you so much.

  38. BenMyers

    The Achilles heel of NVDIMMs

    "A programmer friend of mine, who is a lot smarter than me and often shoots my ideas down, pointed out a snag. Even though NVDIMMs are orders of magnitude tougher than Flash SSDs, they do still wear out. One loop just incrementing a variable would burn out some memory cells in minutes."

    NVDIMMs sound like a great idea, but unless they are robust and not prone to failure, there does not seem to be a good reason to bet the farm or your OS on them.
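
    A back-of-envelope Python check of that "minutes" claim, with the endurance and write-rate figures as illustrative assumptions rather than vendor specifications (and assuming the stores actually reach the media instead of being absorbed by the CPU caches):

        CELL_ENDURANCE = 10**8        # assumed writes one cell/line can take
        WRITES_PER_SEC = 10**8        # a tight loop hammering one address

        seconds = CELL_ENDURANCE / WRITES_PER_SEC
        print(f"No wear-levelling: ~{seconds:.0f} s to exhaust one line")

        # Ideal wear-levelling spreading the same traffic over 1 GiB of 64-byte lines:
        lines = (1 * 2**30) // 64
        print(f"Ideal wear-levelling over 1 GiB: ~{seconds * lines / 86400:.0f} days")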

  39. captain veg Silver badge

    Prior art

    Good to see a mention of the Palm Pilot. I had a Handspring Treo running PalmOS and loved it deeply. It had 16MB of flash storage and that was it. No RAM. Applications ran in situ.

    My first programming gig was for a company that made its own hardware as well as the software. It ran PICK, which treats the entire disk as virtual memory. Possibly related to this, the machine had SRAM -- which I think was also usefully faster than DRAM -- so you could pull the plug, take it to a client, plug it back in and it would continue where it left off.

    -A.

  40. JohnSheeran

    Isn't this really the whole foundation of the move to something like memristor?
