The successor to Research Unix was Plan 9 from Bell Labs

To move forwards, you have to let go of the past. In the 1990s that meant incompatibility, but it no longer has to. This article is the third based on The Reg FOSS desk's talk at FOSDEM 2024. The first part of this short series talked about the problem of software bloat. The second talked about the history of UNIX, and how …

  1. Neil Barnes Silver badge
    Unhappy

    No browser, no office suite?

    And if I understood the article, a seriously non-trivial task to include them?

    Given that these two alone are probably the major reasons why almost anyone uses a computer, irrespective of OS, it does not bode well for mass adoption of Plan9.

    Perhaps, as Liam suggests, they could be containerised as 'generic linux apps' but that wasn't the feeling I got from reading this article. All very well to have an OS of wonder and beauty, but the killer may be in the phrase 'or do things like that on other computers'...

    Which is really rather depressing.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: No browser, no office suite?

      [Author here]

      > it does not bode well for mass adoption of Plan9.

      Oh, it flopped, well and truly. Far more successful than Oberon, though!

      But what I am proposing is a way to make it far more useful for ordinary desktop use, while also, in principle, making it more useful for constructing distributed systems _even mainly running legacy Unix code_. That's the key thing here.

      1. Doctor Syntax Silver badge

        It's all in TMMM

        It seems to have been the classic second-system effect. Multics, of course, was the one you build to throw away.

        Seriously, I think the reason Unix succeeded was that it was built simple so everything could be layered on top. The Unix design was and is flexible. By building stuff in that had previously been layered on top it would have become less flexible. Assumptions become limitations.

      2. This post has been deleted by its author

        1. Tets

          Re: No browser, no office suite?

          Would it be possible to make the code open source?

          I am interested in the HTTP-to-9P bridge. I like the concepts of Plan 9 and am thinking of designing a RESTful OS (REST as in Fielding's definition).

  2. Mage Silver badge
    Unhappy

    So...

    How does Plan 9 / 9Front / Inferno compare to Oberon?

    Applications are the thing. Gradually migrating to non-MS applications on Windows available on Linux allowed me to entirely switch to Linux in 2017. But I'd used UNIX first in about 1986, NT in 1994, Linux in 1998. It took a long time.

    Android succeeded for two reasons: one was Google and the other was Java. It leveraged all the programmers and apps on Symbian, which, initially due to resources and at the end due to stupid Sun licencing, couldn't run full Java, only the cut-down mobile version. Lack of applications and compatibility has hampered ARM versions of MS Windows. Apple switched from 68000 to PowerPC, x86-32 and then x86-64 only, and has more control than MS, so was able to switch to ARM Macs.

    And Android is terrible.

    Maybe it doesn't matter how good Plan 9 / 9Front / Inferno or any other OS is; it's a nearly impossible task to dent the dominance of existing platforms. Servers, gadgets and Chromebooks have Linux; TVs, tablets and phones have Android and iOS. MS couldn't save Windows Phone, and Nokia knew the phone division was a millstone; there was no MS Trojan. Nokia got 11 billion from MS for something worthless, now rents the phone brand to TCL, and has other businesses. They were successful before they did phones and still are. Amazon Fire is Android.

    Where are Sailfish, OS/2 and RISC OS today? MS once had PDAs, set-top boxes (a load were changed to Linux OTA), phones and servers. Inertia and corporate compatibility leave them dominant on the desktop and nowhere else. How long did Xbox make a loss?

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: So...

      [Author here]

      > How does Plan 9 / 9Front / Inferno compare to Oberon?

      Whoah. That is a whole other article. It may come in time.

      In terms of size, comparing it with Oberon's successor A2 (the version with SMP support, networking, and a slightly more conventional GUI), they are actually in comparable ballparks: each easily fits on a single CD with all the extras included, for instance, and is readable by a single human in a manageable time.

      Oberon directly influenced Plan 9. Plan 9's editor Acme is modelled upon Oberon. I believe Rob Pike studied under Wirth.

      The Oberon lineage is a very different one from any Unix, though. It's inherently single-user, with very weak security and memory protection (AIUI); its stability depends on a strictly-typed, memory-safe language. It doesn't have a clear model of installable device drivers and so on.

      Oberon was designed for students although it was used in production as a workstation at ETH.

      Plan 9 is a successor to a multiuser OS with strong security, authentication, etc.

      There is little to no _technical_ resemblance between them, and Oberon's design probably would never scale to mass deployment. OTOH as a teaching/educational tool it has vast potential, IMHO. All totally ignored.

      1. MarkMLl

        Re: So...

        > I believe Rob Pike studied under Wirth.

        There's definitely a "citation required" tag on that. Wirth's connection with the ALGOL lineage, including Ada, is unassailable, and most modern languages seem at least tolerant of his ideas regarding strong typing. And one of the first APL implementations was done under his supervision at Stanford.

        Robert Griesemer provides a link between Wirth and Google, including Go, but so far I've seen no definite confirmation that Wirth had a personal connection with any of the Bell researchers, or for that matter anybody on the USA's East Coast, i.e. IBM and the rest.

        > Plan 9 is a successor to a multiuser OS with strong security, authentication, etc.

        OK, but is "multiuser" relevant any more? Multi-*tasking* is definitely relevant, as is strong security between processes which might be briefly processing data on behalf of a known or anonymous user. Most desktop systems are now strictly single-user, and anything larger seems to have reverted to the 1970s mainframe model: a frontend would enqueue a query (etc.) on behalf of a user, and a backend would dequeue and action it with appropriate access permissions.

        Focusing on "multiuser" as the prime requirement and running backends with their own fixed identity might actually be undesirable: the industry needs a good hardware-enforced security model applicable primarily to daemons, and if it turns out that that can also support a traditional multiuser architecture that's just icing on the cake.

        1. ldo

          Re: OK, but is "multiuser" relevant any more?

          Definitely. Think of SAMBA servers using Linux file protections to keep users’ files protected from each other. Or go a step further, and think of the added levels of isolation available with containers and VMs.

          1. jake Silver badge

            Re: OK, but is "multiuser" relevant any more?

            "Think of SAMBA servers"

            I try not to.

            1. ldo

              Re: "Think of SAMBA servers"

              Interesting that SAMBA may be the best way of sharing files even between Linux systems, without Windows being involved at all. It has better security than NFS, for a start.

              1. Michael Wojcik Silver badge

                Re: "Think of SAMBA servers"

                Ugh. SMB is a horrible protocol.

                NFS isn't great, it's true. Unfortunately I'm not aware of an alternative that's actually unconditionally better. AFS was definitely superior to NFS, but the last I looked, the AFS implementation for Linux was incomplete and not very good. DCE's DFS is based on AFS, but while parts of DCE are still around (it's used in DCOM, for example, and Kerberos is part of DCE, though of course it predated it), I don't know that many people run DFS these days. I guess there's Coda; I've never used it.

                1. ldo

                  Re: SMB is a horrible protocol.

                  It has evolved somewhat over time, that is true. But, at least in the form of Samba, it works. The Samba developers are smart people, and they are able to offer more versatility than Microsoft’s own Windows Server.

          2. MarkMLl

            Re: OK, but is "multiuser" relevant any more?

            But the Samba backend process runs, AIUI, as its own user (i.e. "samba" in group "samba"). It assumes responsibility for checking who is trying to access a file (i.e. which user on a client system), but once it's done that it still uses samba:samba at the library and kernel level.

            I think I'd prefer that protection to be enforced at a lower level, in the same way that cross-system requests are enforced on many capability-based distributed OSes.

        2. jake Silver badge

          Re: So...

          "OK, but is "multiuser" relevant any more?"

          Absolutely.

          Take the small example of MeDearOldMum. Her computer (Slackware based) has multiple users. Her own account, Dad's account, the Admin account (she has the root password because it's her box, but I'm the admin and to the best of my knowledge she's never used it), a "guest" account for visitors to her house, my siblings and most of the grandkids have accounts for when they are visiting, etc. And yes, two or more of these accounts can be, and often are, in use simultaneously.

          1. Mage Silver badge

            Re: Take the small example of MeDearOldMum

            There are two kinds of multi-user.

            1) Different user accounts and only one user is logged in

            2) Multiple users at the same time.

            UNIX systems did both. The non-NT "Windows shell" barely did (1), as you could simply get access with no login; the login was only for the network. NT for years only had sense (1).

            1. Michael Wojcik Silver badge

              Re: Take the small example of MeDearOldMum

              Or as a variant of #1, a single user account with different roles, and some security boundary imposed when switching roles.

              I agree that most end-user computers these days are effectively in category 1. That doesn't make multiuser-in-sense-2 irrelevant for OSes in general, but I can see the argument that it's no longer a primary concern for end-user computing. There are still interactive end-user systems where it remains relevant, but it is a minority use case (thanks to the proliferation of smartphones, if for no other reason).

          2. MarkMLl

            Re: So...

            But be realistic: if you're the sort of person who still considers Slackware then you got a lot of that good attitude from your parents.

            A thoroughly above-average family :-)

            (The main reason I left Slackware for Debian was support for SPARC etc.)

      2. Mage Silver badge
        Coat

        Re: little to no _technical_ resemblance between them

        I didn't imagine there would be. I was thinking generally. I did look at Oberon OS and decided it was educational. I did manage to write DLLs in Modula2 for VB6 programs that ran on NT 4.0, Win2K and XP. One had to assign a VB string to a desired length string before passing, or "Bang!". I spent slightly less time on Oberon OS than Minix in maybe 1992, as I considered Minix for a course, but decided against it.

        And Linux succeeded in the sectors where it's a success because of compatibility, similarity, and being free, not because it's hugely better than UNIX or XENIX would be today if Linux hadn't existed. Likely some BSD + GNU combination would have replaced the commercial, expensive Unixes for servers, routers, e-ink ereaders etc. if Linux had never existed.

        1. Doctor Syntax Silver badge

          Re: little to no _technical_ resemblance between them

          " Likely some BSD + GNU would have replaced the commercial expensive Unixes for servers, routers, eink ereaders etc if Linux had never existed."

          Alternatively SCO could have realised that it was competing with free but not, as yet, as good. If they had aimed for a mass market (cut the price for single use, provided a free student edition or whatever) it's just possible we might all, and I'm not confining this to Linux users, have been using Unix on the desktop now. Linux would never have got the chance to become as good.

          They released a developer's disk that was actually free, but I don't think it was available for long, and it was only licenced for 6 months' use. In practice that wasn't enforced, so it was useful for anyone freelance supporting the paid-for deployments. With a bit of prompting they got involved in the long-running court case over Linux, took their eye off the ball and lost the SMB server market they'd dominated.

          1. jake Silver badge

            Re: little to no _technical_ resemblance between them

            Coulda, shoulda, woulda.

            It's all an accident of history ... and one that will likely not be repeated.

        2. ldo

          Re: not because [Linux is] hugely better than UNIX or XENIX

          I would argue that Linux made possible new application areas that would not have existed without it. Consider those cheap NAS boxes running SAMBA: do you think Microsoft would have allowed XENIX to be used in such an application? Also Android would not have existed without Linux. In which case maybe Windows Phone and Windows RT would have had a chance. Or maybe not. What would the world’s supercomputers be running today, without Linux? No other OS can scale to thousands of nodes without bottlenecking on some “big global lock”.

          1. jake Silver badge

            Re: not because [Linux is] hugely better than UNIX or XENIX

            "I would argue that Linux made possible new application areas that would not have existed without it."

            Nah. BSD would have picked up all that slack. When Linus started the kernel, what was to become 386BSD was already being made available to anybody with enough clues to look for it and ask. (Read up on 1991's Net/2 and the BSD Tahoe and Reno story if you are unaware of the history.)

            "do you think Microsoft would have allowed XENIX to be used in such an application?"

            Why do you think the early cisco built their own IOS? On the other hand, the early Sun Microsystems chose BSD for what became SunOS. And Minix was a wildcard, currently being (ab)used by Intel.

            Note that I'm not suggesting BSD is better than Linux (nor vice versa ... I happily use both), but I AM saying that Linux did not fill the vacuum that everybody thinks it did.

            1. ldo

              Re: BSD would have picked up all that slack.

              I notice you didn’t mention the point I made about supercomputers. Because that is some “slack” that BSD could never have picked up.

              1. jake Silver badge

                Re: BSD would have picked up all that slack.

                "I notice you didn’t mention the point I made about supercomputers."

                I chose to ignore it, rather than point out your ignorance.

                "Because that is some “slack” that BSD could never have picked up."

                BSD was a strong player in the supercomputer world back in the day. The Cray-1, arguably the first modern supercomputer, ran quite a bit of BSD code once Los Alamos was done with it. (Also arguably, most Linux machines also run a lot of BSD code, but that's a story for another day.)

                The last BSD-based supercomputer dropped off the top 500 list less than 10 years ago. Not for lack of ability, but because SUSE and IBM/RedHat pay more money to the research institutions who build the boxen.

                1. ldo

                  Re: BSD was a strong player in the supercomputer world back in the day.

                  Must have been before supers went massively parallel, back when Linux still had its “Big Kernel Lock”. The last traces of that were removed over a decade ago, whereas I believe BSD is still saddled with its own.

                  1. jake Silver badge

                    Re: BSD was a strong player in the supercomputer world back in the day.

                    FreeBSD (and derivatives (Dragon Fly comes to mind)) had "the last traces" removed at roughly the same time as Linux.

                    1. ldo

                      Re: had "the last traces" removed at roughly the same time as Linux.

                      Doesn’t seem like it.

                      1. jake Silver badge

                        Re: had "the last traces" removed at roughly the same time as Linux.

                        From the article you cite, "DragonFly BSD and FreeBSD have modern SMP support".

                        Not my words, mind. Matthew Dillon's words.

                        And from what I've seen and worked with, he's quite correct.

                        1. ldo

                          Re: DragonFly BSD and FreeBSD have modern SMP support

                          Seems that’s only on x86 architectures. If you look at the Top500 list, you will see a lot of non-x86 architectures on there. Only one OS offers massively parallel SMP on both x86 and non-x86 architectures, and that’s Linux.

  3. karlkarl Silver badge

    Plan 9 is an interesting OS.

    I wasn't convinced by the "mouse first" approach; the CLI still has a lot to offer in comparison. However, we did get later ports of Vim to it.

    The git9 client is great as a project. In many ways it is so much cleaner than upstream Git. They took a better approach than just a bunch of messy compatibility layers, unlike Windows's Cygwin/MSYS2-based approach.

    For me, last time I checked, Plan 9 lacked mmap, which is a bit of a deal breaker for porting a lot of software.
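For anyone wondering why that matters, here is a minimal Python sketch of the kind of mmap(2) usage that ported Unix software takes for granted (the temporary file and its contents are made up for illustration):

```python
import mmap, os, tempfile

# Sketch of typical mmap(2) use: map a file into memory and read/patch it
# in place, with no explicit read()/write() loop.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello plan 9")
with mmap.mmap(fd, 0) as m:   # length 0 = map the whole file, shared, r/w
    assert m[0:5] == b"hello"
    m[0:5] = b"HELLO"         # in-place update through the mapping
os.close(fd)
final = open(path, "rb").read()   # the change is visible in the file itself
os.remove(path)
```

Software built around this idiom (databases, linkers, editors) has no cheap equivalent on a system without mmap.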

    I found drawterm to be very good for remote desktop. It even has a built-in 9fs client, so your disks are instantly accessible. That said, the vncs VNC server was also very straightforward to get working and slightly more portable across platforms.

    Since I run my web browsers in a Windows VM anyway, I probably could use it day to day. I just prefer the command line for what I do.

    1. ldo

      Re: better approach than ... Windows's Cygwin/Msys2 based approach

      Damning with faint praise, if ever there was ...

      1. Michael Wojcik Silver badge

        Re: better approach than ... Windows's Cygwin/Msys2 based approach

        Perhaps, but Cygwin's made Windows bearable for me for many years. I did use Interix, U/Win, and WSL 1 back in the day, but Cygwin is better from a user perspective than any of those.

        WSL 2 may well be a superior alternative today, but as far as I'm concerned, it's not worth the effort of switching from Cygwin.

    2. jake Silver badge

      "Plan 9 is an interesting OS."

      Indeed.

      I've been running it on one box or a dozen since it was first made available. To date, I have found absolutely no use for it at all, except as a tool to learn about OS design, and as a curiosity. (I used it as my main writing platform for about a year: coding, documentation, contracts, the books I'm writing, longer posts to ElReg, dead-tree letters, etc. Honestly, I gave it a good solid chance, but I'm back to Slackware.)

      Plan9 is the poster child for a solution looking for a problem.

      But I like the silly thing. I want to find a use for it. Maybe someday.

      And no, using it as a container server for Linux applications isn't it ...

      1. Roo
        Windows

        "But I like the silly thing. I want to find a use for it. Maybe someday."

        That is Plan9 in a nutshell.

        I reckon Plan9 could be a useful OS for HPC, *if* it can scale to millions of nodes. You'd need some kind of resource allocation/management functionality in there to carve up a cluster of nodes into application-sized chunks (and stop them from stomping on each other), which could render Plan9 not Plan9 anymore. The other opportunity would be what comes after the "Web", IoT, and mobile phone OSes as we know them today. No one cares what's running under the hood in those instances; they just want their interwebz, texting, and doorbells that spy on their neighbours.

  4. disgruntled yank

    Features

    Are Javascript, CSS, and HTML5 features of any operating system in particular? I guess one could almost say that Javascript is a feature of Windows, given that the scripting host will allow one to run scripts written in a subset of Javascript.

  5. YetAnotherACUser

    ...files...

    NOT everything is a file...

    https://www.youtube.com/watch?v=9-IWMbJXoLM&pp=ygUPYmVubm8gcmljZSB1bml4

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: ...files...

      "What UNIX Cost Us" - Benno Rice (LCA 2020)

      Superb talk, that.

    2. ldo

      Re: ...files...

      On Linux, this principle is generalized slightly. You get three variants:

      * Everything is a file

      * Everything is a file descriptor

      * Everything is a file system

      The first one is the traditional Unix idea, with device files, Unix-family sockets and named pipes.

      The second one started there (pollfd), but has been taken further on Linux (e.g. eventfd, signalfd, inotify).

      The third one was also present somewhat on Unix systems (procfs). But again, Linux has taken it much further (sysfs, tmpfs, configfs, securityfs, cgroups and a host of others).
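The distinction between the variants can be shown from userspace. A minimal, Linux-specific Python sketch (the procfs path is the standard one; nothing else is assumed):

```python
import os, select, socket

# Variants 1/2: a socket is a file descriptor; the same select/poll machinery
# and plain read() that work on files work on it too.
a, b = socket.socketpair()
a.send(b"ping")
ready, _, _ = select.select([b.fileno()], [], [], 1.0)
assert ready == [b.fileno()]               # the fd signalled readiness
assert os.read(b.fileno(), 4) == b"ping"   # ordinary read() on a socket fd
a.close(); b.close()

# Variant 3: kernel state exported as a file system (procfs): the process's
# own PID read back with ordinary file I/O, no dedicated syscall.
with open("/proc/self/stat") as f:
    pid_from_procfs = int(f.read().split()[0])
assert pid_from_procfs == os.getpid()
```

The Linux-only descriptors mentioned above (eventfd, signalfd, inotify) plug into exactly the same select/poll pattern as the socket here.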

      1. _andrew

        Re: ...files...

        Bit of a shame that IP and its numbered connections fit so badly into that scheme. The way Bell Labs networking was going, with abstractions over multiplexing of communication connections, seems much more scalable and "container-able", but I don't think there's any way to get there from here, now.

        1. ldo

          Re: The way Bell Labs networking was going

          How did that work, though? How do you represent unreliable, datagram-oriented communication like UDP over file I/O?

          1. dboddie

            Re: The way Bell Labs networking was going

            The short answer is that it's done via files in /net/udp. Of course, you're going to use libraries to manage those interactions. See https://p9f.org/magic/man2html/3/ip for the low level details.

            1. ldo

              Re: Of course, you're going to use libraries to manage those interactions.

              I think it’s simpler just to stick with the good old socket API. That is also protocol-independent. For example, unix-family sockets support the SCM_RIGHTS function.
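For the curious, SCM_RIGHTS is the mechanism that lets one process hand a live, open file descriptor to another over a Unix-family socket. A minimal Python sketch (both ends live in one process via socketpair, purely for brevity):

```python
import array, os, socket

# Sketch of SCM_RIGHTS: send an open fd as ancillary data on an AF_UNIX socket.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

r, w = os.pipe()                       # an fd worth sending (pipe read end)
fds = array.array("i", [r])
parent.sendmsg([b"fd"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

msg, ancdata, flags, addr = child.recvmsg(2, socket.CMSG_LEN(fds.itemsize))
level, ctype, data = ancdata[0]
assert (level, ctype) == (socket.SOL_SOCKET, socket.SCM_RIGHTS)
received = array.array("i")
received.frombytes(data[:fds.itemsize])

os.write(w, b"hello")                        # write into the original pipe...
assert os.read(received[0], 5) == b"hello"   # ...and read via the passed fd
```

The kernel duplicates the descriptor into the receiver's fd table, so the received number may differ from the one sent but refers to the same open file.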

  6. nijam Silver badge

    > We don't know of a snappy term for this rule...

    Usually stated as "The good is the enemy of the best" or something very similar.

    1. Bitsminer Silver badge

      Sounds more like Thomas Kuhn to me. One summary has it as:

      Any replacement paradigm had better solve the majority of those [problems], or it will not be worth adopting in place of the existing paradigm.

      https://plato.stanford.edu/ENTRIES/thomas-kuhn/

      For example the issues with central mainframes were sufficient to motivate minicomputers and then, later, personal computers (PCs).

      And so on.

      1. Michael Wojcik Silver badge

        And then the issues with minicomputers and then, later, personal computers were sufficient to reinvent mainframes, in the form of web applications.

  7. Doctor Syntax Silver badge

    "Eighth Edition Unix didn't have much industry impact, and little if anything drew significantly upon the Ninth and Tenth Editions!

    The initial industry interest (and wider interest in general) grew out of the releases, primarily into academia, of the early versions. When AT&T were allowed to sell it as a product they set up a separate division that went its own way with System III (was there a System I or II? I never encountered anything between 7th ed & III myself) and later, in FOSS parlance, they forked it.

    1. Anonymous Coward
      Anonymous Coward

      Doctor Syntax Error! Missing closing double-quote on Line 1. (^8 ... ah-ah-ah ... 8^)

      1. mjflory

        At least it wasn't a Doctor Syntax syntax error!

    2. jake Silver badge

      There was no System I or System II ... there was no System IV, either.

      System III was named after a couple of Bell-Labs-only 3.0 UNIX releases, namely Columbus UNIX 3.0 and UNIX/TS 3.0(.1(?)). It was a kludge of the best bits of many internal-only variants on the theme, including the Real Time project, the virtual memory project, etc.

  8. Doctor Syntax Silver badge

    "it makes Wayland look like it was invented by Microsoft."

    It wasn't?

    1. David 132 Silver badge
      Trollface

      No, you’re thinking of Systemd.

      (Please note icon!)

      1. jake Silver badge

        That icon looks more like a GNOME.

        Just to stay on the topic of useless Linux projects.

  9. Zippy´s Sausage Factory
    Thumb Up

    I played around with Plan 9 when it was first open sourced. I really liked the ideas behind the OS but I couldn't find a way to do anything particularly useful with it. It was a fascinating experiment though.

    1. Liam Proven (Written by Reg staff) Silver badge

      > I couldn't find a way to do anything particularly useful with it

      9front is a little more polished in some ways, but yes, the stuff the P9 folks want and the stuff mortals want barely overlap.

  10. Christian Berger

    There is one important legacy from it...

    and that's UTF-8, which strips away multiple layers of complexity when dealing with international text. There are now a lot of cases where you can just handle text as text, without having to worry about whether the characters inside it are hieroglyphics or Arabic characters.
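A quick Python illustration of that point (the sample characters are arbitrary):

```python
# UTF-8 covers every script with one variable-width encoding. Sample text:
# ASCII, an Arabic letter (U+0645), and an Egyptian hieroglyph (U+13000).
s = "abc" + "\u0645" + "\U00013000"
data = s.encode("utf-8")
assert len("abc".encode("utf-8")) == 3         # ASCII: 1 byte per char
assert len("\u0645".encode("utf-8")) == 2      # Arabic letter: 2 bytes
assert len("\U00013000".encode("utf-8")) == 4  # hieroglyph: 4 bytes
assert data.decode("utf-8") == s               # lossless round-trip
# No byte of a multi-byte sequence is a valid ASCII byte, so byte-oriented
# tools scanning for '/', '\n' etc. keep working unmodified.
assert all(b >= 0x80 for b in "\u0645\U00013000".encode("utf-8"))
```

That last property (ASCII bytes never appear inside multi-byte sequences) is a large part of why existing Unix tooling survived the transition unchanged.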

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: There is one important legacy from it...

      You're right. You know what, I intended to mention UTF-8 but forgot. Dammit.

      I did squeeze a nod to /proc though.

      1. jake Silver badge

        Re: There is one important legacy from it...

        Yes, UTF-8 was first implemented on Plan9 ... but that is just an accident of history. It would have been implemented on whatever Thompson and Pike were working on at the time that ken grabbed a pencil and started sketching. Most of the spec came out of USL, not Bell Labs.

        1. stiine Silver badge

          Re: There is one important legacy from it...

          USL?

          1. jake Silver badge

            Re: There is one important legacy from it...

            Unix System Laboratories.

  11. HuBo
    Holmes

    Long scale (EU) sex-OS / und-OS (US) short scale

    Could be slightly off-topic, but as we've met the FP64 Exaflopping Frontier (2⁶³ in individual bytes), where every 2 seconds our 64-bit bytes-processed-so-far counter is reset, we might need to ensure that our Next-OS (Plan 11, HPC Linux, or Windows Home), and our CPUs, are 128-bit ready. IPv6 is already 128-bit, and so are Neoverse V2 vector units, while AVX and AMD are right around the corner at 256-bit vectors. A 128-bit OS running on 128-bit machines seems to be the next expected evolutionary step (128-bit instructions could entail up to 8x 16-bit compressed instructions, to be ILP-executed on 8x execution ports in the CPU).

    A 128-bit address space (≈ 300 EU long scale sextillions, or US short scale undecillions) could provide for an "everything is a byte" (or a cache line) perspective for the OS, simplifying it somewhat from the contemporary "everything is a file" prospect. In other words, with space to address 300 million quettaBytes, each of 300 billion humans could have 1 ronnaByte of unique addresses reserved for her/his "website" data (and other needs), without overlap with anybody else's space (that's one billion exaBytes per person). An updated version of CXL 3.0 might then be used to access remote bytes anywhere permitted.

    If this OS were 128-bit Inferno, its Dis would be updated to 128-bit words (and similarly for JVM, WebAssembly, and Erlang's BEAM). As 64-bit gets obsoleted, we probably need to get ready for 128, which should be ok until we reach more than a million times the quettaScale (no approved name for that order of magnitude yet).

    1. jake Silver badge

      Re: Long scale (EU) sex-OS / und-OS (US) short scale

      We've been working on 128-bit computing for a very long time.

      The IBM System/360 Model 85 could handle 128-bit floating point arithmetic back in 1968.

      At DEC, the VAX line called 'em octawords and HFLOATs, using four consecutive registers ("four longwords").

      If I were you, I'd be calling for 512-bit computing and wait for everybody to call me a visionary.

      1. David 132 Silver badge
        Happy

        Re: Long scale (EU) sex-OS / und-OS (US) short scale

        Pfft, Jake, you lack ambition. Only 512 bit addressing?

        Look to the future.

        5,242,880 bits should be enough for anybody.

  12. ldo

    cgroups Are Not NameSpaces

    Just a note that cgroups are about resource control (e.g. CPU and memory allocation). They are entirely orthogonal to namespaces, which are about access isolation. Linux doesn’t have “just one global namespace” because it wants to give you different ways of dividing up your access control. They could have done it the Plan9 way, with only a single concept of “namespace”, but they wanted to be more flexible than that. Thus, different processes under the same Linux kernel can have entirely different ideas of what processes are running, what the hostname is, and even what the system time is.
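That orthogonality is visible from userspace. A quick, Linux-specific Python sketch that inspects the standard procfs locations (no privileges needed):

```python
import os

# Namespaces and cgroups are separate kernel objects, each reported
# per-process under procfs. Each entry in /proc/self/ns is one independent
# namespace type this process belongs to.
namespaces = set(os.listdir("/proc/self/ns"))
assert {"mnt", "uts", "pid", "net", "ipc"} <= namespaces  # several, not one

# cgroup membership lives in a different file entirely, orthogonal to the
# namespace links above. Each line is hierarchy-id:controllers:path.
with open("/proc/self/cgroup") as f:
    cgroup_lines = f.read().splitlines()
assert cgroup_lines and all(":" in line for line in cgroup_lines)
```

Tools like unshare(1) manipulate the first set; cgroup controllers are configured through an entirely separate filesystem (cgroupfs).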

    Also, Plan 9’s “network-centric” approach only seems to work if you only use one network protocol. Remember that a single Linux kernel can handle IPv4, IPv6, NETBIOS, and even hoary old AppleTalk and DECNET.

    In short, Plan 9 was interesting as a research project, and maybe some ideas could still be taken from that. But as a workable, practicable system, I think it should be left in a museum.

    If it’s any consolation, I feel the same way about MULTICS.

  13. Brantley Coile

    Long time Plan 9 user ... and provider.

    Our network storage products run Plan 9, as does our development lab. We have Macs for browser and things.

    I used Plan 9 when I was employed briefly at Bell Labs Murray Hill. In 1995, when they opened it up, I switched to it and have been using it for all our products since. (I developed the PIX Firewall that Cisco purchased from our startup, and it used a very tiny executive I wrote, patterned after some of the bits of Plan 9.)

    My kernel is a branch from the main 9fans based kernel, is a lot smaller, and is more in line with what Dennis Ritchie described to me in 1987.

    We run Ken Thompson's file server and not Venti and Fossil. We still use the protocol IL, and other things of the original vision.

    This is not an operating system to replace Linux. It's not for the main stream.

    Brantley Coile

    CEO/Founder Coraid

    Creator of the PIX Firewall

  14. nielsl

    Microsoft WSL2 uses plan9 filesystem

    At least for the Ubuntu distribution in WSL2, the C:\ D:\ ... NTFS volumes in Windows are mounted on the Ubuntu side with mount -t drvfs.

    They must have implemented drvfs using the 9P protocol, because when you do df -Tm they show up as type 9p.

    So parts of plan9 are used by millions of WSL users on Windows desktops.

    Filesystem  Type     1M-blocks     Used  Available  Use%  Mounted on
    none        tmpfs        12020        1      12020    1%  /mnt/wsl
    none        9p          476310   373837     102473   79%  /usr/lib/wsl/drivers
    /dev/sdc    ext4        256947    83172     160652   35%  /
    none        tmpfs        12020       26      11994    1%  /mnt/wslg
    none        overlay      12020        0      12020    0%  /usr/lib/wsl/lib
    rootfs      rootfs       12017        3      12015    1%  /init
    none        tmpfs        12020        1      12020    1%  /run
    none        tmpfs        12020        0      12020    0%  /run/lock
    none        tmpfs        12020        0      12020    0%  /run/shm
    none        tmpfs        12020        0      12020    0%  /run/user
    tmpfs       tmpfs        12020        0      12020    0%  /sys/fs/cgroup
    none        overlay      12020        1      12019    1%  /mnt/wslg/versions.txt
    none        overlay      12020        1      12019    1%  /mnt/wslg/doc
    C:\         9p          476310   373837     102473   79%  /mnt/c
    D:\         9p          114471      517     113955    1%  /mnt/d
    F:\         9p         2861589  1163060    1698529   41%  /mnt/f
    G:\         9p          476310   378961      97349   80%  /mnt/g
    H:\         9p         1826900  1029079     797821   57%  /mnt/h
    S:\         9p         1826900  1029079     797821   57%  /mnt/s
    Y:\         9p         2861589     5637    2855952    1%  /mnt/y
    Z:\         9p          915188   224092     691097   25%  /mnt/z
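    A quick way to pick those mounts out from inside a distro (a sketch; on a machine with no 9P mounts it simply prints nothing):

    ```shell
    # Print the source and mount point of every filesystem that df reports
    # as type 9p — this is how WSL2's drvfs mounts of C:\, D:\, etc. appear.
    df -T | awk '$2 == "9p" {print $1, $NF}'
    ```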

  15. Michael Wojcik Silver badge

    I do not think that word means what you think it means

    To pick a trivial example, Plan 9's version of C prohibits nested #include directives.

    That's hardly "trivial", and it means "Plan 9's version of C" is not C.

    C is the programming language defined by ISO/IEC 9899, Programming languages — C. And that specification does not allow an implementation to "prohibit nested #include directives". See for example 9899:1999 (the version I happen to have handy at the moment) 6.10.2 #6, which specifically requires nested inclusion, up to an implementation-defined limit; but that limit has to conform to 5.2.4.1 in a conforming implementation, and 5.2.4.1 requires "15 nesting levels for #included files".

    If what you wrote about Plan 9's "version of C" is correct, then it's not a "version of C". It's a version of a programming language similar to C, but it is not C.

    (Prolepsis: Yes, of course I know who was responsible for pre-standard C. I wrote software in K&R C before C was standardized. Since 1990, there has been only one C, and that's standard C. Everything not meeting that standard is not C.)

    1. røyskatt

      Re: I do not think that word means what you think it means

      Moot point, since the main compiler suite isn't fully ANSI C/C89/C90 conformant anyway, but the compilers handle nested include directives just fine. The reason nested includes don't generally work on Plan 9 is that the system header files don't have include guards, and the compilers reject duplicate typedefs and enum constants (9899:1990-conformant behaviour: 6.5 Declarations, Constraints) as well as duplicate macros (I think this violates 6.8.3 Macro replacement? Not sure.) Nothing technical would stop you from writing your own include-guarded header files and including them from each other in your own work; you'd just have to avoid including any system header files in them.
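      The include-guard pattern being described can be sketched like this (file and type names are illustrative; files are written to the current directory):

      ```shell
      # point.h is reached twice below — directly from main.c and again via
      # shape.h's nested include — but its guard makes the second pass a no-op.
      cat > point.h <<'EOF'
      #ifndef POINT_H
      #define POINT_H
      typedef struct { int x, y; } Point;
      #endif
      EOF

      cat > shape.h <<'EOF'
      #ifndef SHAPE_H
      #define SHAPE_H
      #include "point.h"   /* nested include */
      #endif
      EOF

      cat > main.c <<'EOF'
      #include "point.h"
      #include "shape.h"
      int main(void) { Point p = {1, 2}; return p.x + p.y - 3; }
      EOF

      cc -o demo main.c && ./demo && echo 'guards work'
      ```

      Without the #ifndef/#define guard, the second inclusion of point.h would redeclare the Point typedef, which is exactly what a strict pre-C11 compiler (like Plan 9's) rejects.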

  16. Paul Hovnanian Silver badge

    Network-centric workstations

    "In Plan 9, networking is front and center."

    And one JCB (backhoe) can bring your civilization to an end.

    "but reconsidered for a world of networked graphical workstations"

    There are numerous applications for which a workstation, let alone a graphical one, is not needed. If it can light up the power LED, plus maybe a blinking heartbeat one, that's all I need.

  17. shemminger

    First I was a long-time Unix developer, then an early adopter on one of the first commercial products to use Plan 9 (nCube).

    There were many, many things that were broken in Plan9. The file server originally was an assembly program that only worked on certain PC hardware with certain network cards; complete crap.

    The network stack was a toy with a bad TCP implementation; ended up ripping it out and replacing it with BSD.

    The process scheduler was also a toy and performed like crap.

    And the VM system was even worse.
