Linux may soon lose support for the DECnet protocol

There's a proposal to remove the code for the DECnet networking protocol from the Linux kernel… but what was DECnet anyway? Microsoft software engineer Stephen Hemminger has proposed removing the DECnet protocol handling code from the Linux kernel. The timing is ironic, as this comes just two weeks after VMS Software Inc …

  1. UCAP Silver badge

    The protocol wars were over years ago. What we are doing now is cleaning up the battlefield of any leftover debris.

    1. R Soul Silver badge

      The protocol wars were over years ago. What we are doing now is cleaning up the battlefield of any leftover debris.

      Sadly, too much of that debris is still festering away, often mutating into new types of zombies that are hard to kill off.

    2. bazza Silver badge

      Except there are some people looking to replace tcp inside data centres, coz it's too slow...

      1. jake Silver badge

        "Except there are some people looking to replace tcp inside data centres, coz it's too slow..."

        That's just fine. We didn't build TCP/IP for data centers to run internally. We built it to connect smaller computers to data centers. What the data center uses internally should be transparent to the outside world.

        As a small example, I still use UUCP to move stuff around internally to my Usenet farm. Outside connectivity is TCP/IP.

        I do the same for large email systems.

        Why I do it this way is left as an exercise for the reader.

        1. Munchausen's proxy

          " Why I do it this way is left as an exercise for the reader. "

          You are Ned Ludd and I claim my 5 pounds.

      2. Malcolm Weir

        Re: What's the problem?

        Well yes, but the replacements (QUIC, etc) generally use UDP/IP as a lower level protocol.

        Back in the Olden Days, we divided the world up into LANs and WANs, and had different protocols for each (IP and ATM, for example).

        These days, why bother...

      3. Bruce Ordway

        tcp... too slow...

        Reminds me of a site and user complaints after NetBEUI was eventually replaced by TCP/IP.

        TCP/IP was required when upgrading from dumb terminals (coax/BNC) to PCs and terminal emulation (Cat5/RJ45).

        There were complaints because of the noticeable increase in file transfer time on the local network after NetBEUI was removed.

    3. Philip Storry


      Early in my career I specialised in Lotus Notes, which had network drivers of all kinds - TCP, NetBIOS, SPX, Banyan VINES, serial connections... I don't specifically recall DECnet being in there, but that's probably because the VAX was one of the few server options Notes never had...

      I met Notes back in 1996. I think I only ever had one production server that used SPX - a Notes server running as an NLM on a Novell Netware server. Very very rapidly everything went to TCP/IP. A decade later, those drivers were already a historical curiosity for 99% of computer professionals working with Notes.

      They got removed from later versions shortly after that.

      (Yes, there were later versions of Lotus Notes, no matter what it might have seemed like. Companies just took their own sweet time deploying them...)

      I'd say that TCP/IP had won by the year 2000. Everything since then has been mopping up operations...

      1. anothercynic Silver badge

        Banyan VINES? Christ... that's a ghost from the past.

        1. Somone Unimportant

          Long live StreetTalk!

      2. Anonymous Coward
        Anonymous Coward

        Lotus Notes .... you mean VAXNotes++. ... though the "++" is debatable :-)

      3. jake Silver badge

        "I'd say that TCP/IP had won by the year 2000."

        Earlier than that. I'd say TCP/IP won when we (figuratively and literally) flipped the switch, moving the ARPANET from NCP to TCP/IP ... Everybody rebooted at midnight, we waited on the edge of our seats for machines world-wide to come back up ... and it fucking worked, right out of the box.

        That was January 1st, 1983.

        Every other wide area Data networking protocol was doomed from that point forward. (Voice hung out in its own niche for a couple of decades, but even that's being subsumed now ... ). Most of the LAN protocols fought a valiant rear-guard action, but the writing was on the wall for them by 1990, earlier for those paying attention.

        1. Philip Storry

          True, but IPX/SPX was still being deployed for several years after that in businesses.

          By 1999 or 2000, nobody was deploying anything but TCP/IP, and maybe AppleTalk if you had a marketing department in the building.

          Hindsight is 20/20. For a while in businesses it did look like IPX/SPX might win, then Novell tanked and the protocol went with them. Eventually SPX was only deployed for access to file servers we were decommissioning, and being used for cheeky gaming sessions by technical staff.

          When Half Life replaced DOOM/Quake, SPX had absolutely no use and got stripped from the network. I was working for a consultancy at the time, and had conversations about this with a few customers. ;-)

          1. anothercynic Silver badge

            That's true... IPX/SPX still lived on for quite a while after 1983. Novell never really got over the switch of businesses moving to TCP/IP from their protocol in the late 1990s-early noughties...

            1. Philip Storry

              This is anecdote and opinion, but what I think really killed Novell wasn't the protocol switch - you could get a Netware server to speak TCP/IP.

              What killed Netware was Windows NT.

              The vast majority of clients were running some version of Windows by the mid 90's. But nobody in their right mind would run their printing and file sharing on Windows for Workgroups 3.11 - it was fine for small offices, but lacked the security controls and stability you needed for a larger site.

              Then Windows NT arrived. And suddenly you could feasibly replace your file/print server AND run applications as services - so you could replace your UNIX servers too, if you had them. (Software availability allowing, that is.)

              Windows NT wasn't perfect. It wasn't as good as Novell for file sharing - in particular it didn't do complex file permissions well until Windows 2000 shipped, and the workaround was using the unsupported CACLS command. But it ran applications a lot better than Novell did - a heck of a lot better.

              So Windows NT was Good Enough(TM, Patent Pending) and due to the ongoing lowering of storage prices per MB you could consolidate several expensive old Novell machines onto a set of fewer, cheaper new Windows NT machines.

              And those new servers also worked very well with all those new Windows clients. In fact, why not just bite the bullet and set up a Domain at some point, maybe migrate everyone over to that too?

              Novell Netware's file sharing was brilliant, but they were not that good at anything else. (Except directory services with Netware 4.x, but that's a different story.) The Windows NT/(Windows 95|Windows for Workgroups) combo landed at a time when people expected more from computers, preferably for less money. Windows NT delivered that, Novell didn't.

              The death blow was probably the atrocious NetWare protocol/logon drivers that Novell shipped for Windows 95 - for a while the advice was to use the "bare basics" drivers that shipped with Windows 95 itself! The official Novell drivers were buggy and unreliable for quite a while, as well as being rather a resource hog.

              That story was then repeated with the drivers for Windows NT4, which didn't help at all.

              But I think that the move away from Novell had started long before that, when people saw how easy it was to create a file share whilst also running a service on a Windows NT machine. It made it viable to run a branch office from one server without hassle. In theory Novell could do that, but in practice Novell just couldn't do it reliably enough so everyone moved to Windows NT.

              1. Pirate Dave Silver badge

                I disagree. What killed Novell was Windows 95. Mom-and-pop shops could go to CompUSA and pick up a shiny new piece of crap eMachine for $500, and not only did they get a new computer with an enormous 540 Meg hard drive, they also got Win95, which made sharing files stupid simple for free. No more giving me $500-$700 for 5 Novell licenses plus $1200 for a "server", when they could share files and printers just as well with Win95. Now Win95 was nowhere near as reliable or robust as Netware, but most mom-and-pops didn't care if they needed to reboot the "server" every Monday morning, since they hadn't had to lay out $1500+ for it.

                That was my experience in a small town in north Georgia. Previously Netware reigned supreme, but by the time Win98 came out, it was getting rare to find Netware in any shop with less than the magical 10 users.

                1. Liam Proven (Written by Reg staff) Silver badge

                  I disagree, but hey, different courses in different markets and things.

                  In terms of peer-to-peer networking, Win95 didn't do anything much that Windows for Workgroups couldn't do. And because WfWg 3.11 had 32-bit *file* access as well as Windows 3.1's 32-bit *disk* access, WfWg 3.11 was substantially faster, making it a big hit and everyone wanted it... even people without networking at all.

                  Nobody sane would run Win95 as a file server. So, yes, it could do P2P stuff on a Netware backbone, but so could WfWg.

                  Whereas NT made a perfectly good, solid server, and NT 4 did that _and_ had the nice easy Win9x GUI.

                  It was as solid as Netware, if not as fast. But NT could do other stuff as well. I put in a lot of boxes with Mailgate, providing dial-up networking over POTS modems. WWW access was horribly slow, but it worked and was there if you really needed it. What was important in the late 1990s was that this gave everyone email to the desktop.

                  Win9x and NT included a built-in email client, which could talk to Mailgate or other POP3 servers. No licence fees, very easy to set up, and people could email each other (which was super fast) and email customers, clients, suppliers (which was slow, but 1990s email *was* slow.)

                  Netware couldn't do that without a lot of extra work. IntraNetware didn't have this in the box: you needed an additional email server (which is where Pegasus got started) and then you needed a gateway/firewall, such as Novell BorderManager.

                  All of which meant extra $$$ of spend.

                  This came in the box with NT. NT did dial-up, PPP, and dial-on-demand. Mailgate added a POP3 and SMTP server and a proxy server.


                  It even sold to MS Exchange customers, because Exchange didn't do POP3 out of the box. Exchange expected an always-on connection and SMTP mail delivery.

                  1. Pirate Dave Silver badge

                    Around here, there were very few NT shops prior to 1999 - I can only remember 2, and they were subsidiaries from big corps, but there were likely others that I never serviced. But the smaller shops either stuck with existing Netware until their hard drives died (RAID? What RAID???), or went to Win9x. I did have a few running WfW, but none of them were happy running that "old" Windows, and most wound up going to Win9x. Then there were the oddballs - Banyan, and one or two others I can't remember. Those were popular in pawn shops for some reason, maybe they were cheap or easy to pirate.

                    I also don't recall many shops having their own internal email. Most either shared a single AOL/CompuServe dialup account, or had something set up with their ISP. This was the day of the deskphone, so most internal communication was dialling extensions and using the fancy speakerphone.

                    I got out of the "service call" business in late '99, so I don't know when those shops migrated or where they went.

              2. atheist

                bare basics

                It was the other way around.

                The Microsoft bare basics client for Netware would corrupt databases hosted on the Netware file servers.

                This caused a lot of grief.

                NT Server emulating a NetWare file server gateway lost Novell a lot of licensing revenue.

                A double whammy.

              3. Liam Proven (Written by Reg staff) Silver badge

                I agree with you... and indeed, I said so in the Reg a decade ago. :-)


          2. Michael Wojcik Silver badge

            By 1999 or 2000, nobody was deploying anything but TCP/IP, and maybe AppleTalk if you had a marketing department in the building.

            SNA is still holding on in some IBM shops.

            1. atheist

              Not on PCs, as the client could not run reliably on anything faster than a 486DX2/66MHz.

        2. Graham Cobb Silver badge

          January 1st, 1983. Every other wide area Data networking protocol was doomed from that point forward.

          Only with hindsight. It didn't look that way at the time at all!

          SNA was completely dominant in the business world - and remained so for quite a while. DECnet was probably best-of-the-rest at that time, mainly due to considerable take-up by academics and technical business users outside the US and then by the invention of clustering. But then the PC revolution happened and brought a load more fragmentation (Novell, Apple and eventually even Microsoft).

          DECnet formed a large part of the basis for OSI networking (in terms of architecture and some protocol design) which was intended by many commercial and government entities (particularly outside the US) to supersede not just the proprietary vendor protocols but also TCP/IP. But the standardisation process was just far too slow.

          1. Roland6 Silver badge

            >SNA was completely dominant in the business world

            By 1989 IBM knew the writing was on the wall for SNA. A colleague who was well into SNA showed me a release from IBM that contained some statement about the future strategy, his take on this statement was that IBM expected SNA to be dead within 10 years, hence start planning now for your (mainframe) network refresh.

          2. Anonymous Coward
            Anonymous Coward

            "DECnet formed a large part of the basis for OSI networking ...But the standardisation process was just far too slow."

            That's a rather unconventional view of reality. The OSI protocol suite failed because it was crap, complex* and close to unusable. Nobody in the real world would touch it with a shitty stick, including the OSI networking cheerleaders like the DoD. Even with the backing of GOSIP procurements, OSI failed to get anywhere. If the OSI suite was any good, it would have succeeded, no matter when it was standardised.

            * FWIW I was at a meeting 30-35 years ago where the lead architect of OSI (Zimmerman?) said he didn't know what the Session Layer in his reference model was for or why it was needed.

            1. Graham Cobb Silver badge

              My recollection of the events at the time is rather different from yours.

              The OSI network layer was good. So good that IS-IS routing and the network layer address format were both adopted by the IP world. The reality is that, at the network layer, neither OSI nor the IETF equivalent has actually been successful - everyone is still using 1970s IPv4 technology!

              The OSI higher layers were very slow to develop, and were unnecessarily complex (related issues, due to the crap standardisation process!). However some aspects of these are in daily use across the Internet today: parts of ASN.1, X.400 and X.500 for example.

        3. Roland6 Silver badge

          >January 1st, 1983.

          Remind me how many nodes were on the Arpanet back then... :)

          I'd say TCP/IP was still work in progress, what it (the ARPANet) demonstrated was that the future of networking was not proprietary.

          However, it was good enough to be included in the Berkeley Software Distribution along with another useful piece of software, namely Unix...

          I would however, agree with the author that it took until circa 2000 before TCP/IP and the IETF governed protocol suite really became the qwerty keyboard of networking.

          1. jake Silver badge

            "Remind me how many nodes were on the Arpanet back then... :)"

            None! But you have a point, there were around 400 hosts.

            "I'd say TCP/IP was still work in progress,

            I'd say "TCP/IP is STILL a work in progress."

            "what it (the ARPANet) demonstrated was that the future of networking was not proprietary."

            What it demonstrated was that networking was something that most people could find useful in their day to day lives, as long as the price of access wasn't artificially jacked up by corporations out to turn a profit.

            "However, it was good enough to be included in the Berkeley Software Distribution along with another useful piece of software, namely Unix..."

            BSD was a series of patches and additions for UNIX. One of those additions was TCP/IP. One could argue (and we did!) that it wasn't good enough to be included due to its lack of security, but Cerf obeyed his masters in the Pentagon and put the kibosh on building security in from the get-go. That's why the internet is not today, and never will be, a secure system ... and by extension, not yet ready to ship.

            "I would however, agree with the author that it took until circa 2000 before TCP/IP and the IETF governed protocol suite really became the qwerty keyboard of networking."

            Perhaps this is true in your niche. In mine, not so much.

      4. Youngone Silver badge

        I used to play a game against a mate using IPX/SPX in the mid 1990's. I can't for the life of me remember what it was called though. Some sort of real time strategy game I think, though not Age of Empires.

        1. An_Old_Dog Silver badge


          We used to play (the original) Quake after work on our company LAN via IPX/SPX, and copy files from old workstations to their new replacements using Frey Utilities, also via IPX/SPX.

          1. Philip Storry

            Re: IPX/SPX

            Gaming was the last refuge for SPX.

            I had a couple of customers that used it for that, and then moved from DOOM/Quake to Half Life - which used TCP/IP. So they then removed SPX from the network.

            Of course, this generated errors whenever the Notes server started because it was trying to bind to an SPX driver that was no longer there. Easy fix - remove the protocol from the server document in the directory, remove the driver from the line in notes.ini, restart the server. Well, easy when you know how...

            It was an interesting conversation with the technical staff at the company, and we had to slightly modify the problem description and solution so that neither their nor my management realised that the whole problem was caused by LAN gaming! The words "legacy file server access" got used a lot...

          2. Bruce Ordway

            Re: IPX/SPX

            ...Quake, oh yeah, I remember concerns about a slowing network sometime in the 90's.

            The site's hardware was pretty simple: two hubs and one bridge.

            When I checked the equipment room I noticed several ports indicating constant activity.

            A quick walk around the building revealed that the entire customer support department had discovered Quake.

            Multiple staff meetings ensued, new company policies, and finally an upgrade from hubs to switches.

            Shortly after I believe the internet was "discovered" by the same dept. and spawned a similar wave of policy changes.

          3. Liam Proven (Written by Reg staff) Silver badge

            Re: IPX/SPX

            It was Doom that allowed four-player deathmatches over IPX/SPX.


            Quake originally launched as a DOS app and could use it too, but Quake quickly became a Windows app so it could use OpenGL... and in moving to Windows, that gave it TCP/IP deathmatches.

        2. FrankAlphaXII

          The first 3 Command and Conquer games had IPX/SPX LAN networking; the 4th one, Red Alert 2, may have had it as well, but everything I was doing was TCP/IP by then.

          I think the first Starcraft and first two Warcraft games also may have had it but I know for a fact that Command and Conquer did.

      5. Yes Me Silver badge


        " I'd say that TCP/IP had won by the year 2000. "

        1989 in my book, and others would say sooner. Everything else became legacy long before Y2K. Of course, not all corporate IT departments realised it immediately.

        1. Graham Cobb Silver badge

          Re: When??

          Yes, I agree, around 89. If memory serves that is around the time we decided our big routers had to support TCP/IP properly (with line-speed routing, etc), not just DECnet/OSI.

          However, I still like to lay claim to our team having built what is still the world's fastest OSI networking router (because no one has bothered to build one since)!!

          1. R Soul Silver badge

            Re: When??

            It was that fast because there was nowhere to route those OSI packets to. Dropping packets at line speed isn't hard, even for an OSI router.

    4. Ken Hagan Gold badge

      There are still a few laggards running IPv4, I hear.

    5. Anonymous Coward
      Anonymous Coward

      I'm rather uncomfortable that an individual from Microsoft has taken it upon himself to remove multiple protocols from the Linux kernel. Particularly as many seem to have been used by former Microsoft competitors.

      1. Anonymous Coward

        If there's one company that knows about the clutter that backwards compatibility inflicts on an OS, it's Microsoft (reserved A: drive? Still? Really?).

        I have no problem with them suggesting unmaintained and unused drivers that can be removed from the kernel. Keep in mind that, while they are there, these drivers need to be tested with each release, adding to the burden of the kernel maintainers.

        1. jake Silver badge

          "these drivers need to be tested with each release, adding to the burden of the kernel maintainers."

          That testing is automated. The so-called "burden" will not change for better or worse with the removal of a well known existing protocol.

          1. Malcolm Weir

            The security threat surface is improved by eliminating exotica that may or may not have vulnerabilities...

            1. An_Old_Dog Silver badge

              Monocultures and Malware

              At the same time, monocultures, whether in nature or technology, are far more vulnerable to a single bug, virus, or exploit than are multicultures.

              1. Geoff Campbell Silver badge

                Re: Monocultures and Malware

                Whilst that's a good basic philosophy, running other protocols in addition to TCP/IP doesn't make it a multiculture. It's a monoculture with some stuff on the side.


            2. Disgusted Of Tunbridge Wells Silver badge

              Presumably these protocols are modules that by default aren't loaded?

          2. An_Old_Dog Silver badge

            Maintenance Burden

            That testing should be automated! If it is not, then there's a problem with the culture and values of the programmers working on that project.

          3. atheist

            The automated testing is mostly on VMs not real hardware. Not actual physical idiosyncratic hardware. We are all Microsoft beta testers.

      2. rcw88

        I would basically agree. Microsoft completely borked name resolution over TCP/IP when NetBIOS over TCP/IP was still a thing. I worked with DECnet/LAT, XNS, IPX/SPX and NetBEUI before TCP/IP took over - because interconnectivity JUST WORKED with TCP/IP, when everything else was painful. OSI stacks were impenetrable and hindered by massive over-engineering. Even something as simple as telnet over OSI was hard.

        It's sad, because DEC's products and software stacks were far more stable and reliable than anything Microsoft have EVER produced. When NT 3.1 fell over in a heap, VMS just kept on delivering; VMS still delivers and Server 20xx still falls over [though not quite as often].

    6. Anonymous Coward
      Anonymous Coward

      ... cleaning up the battlefield of any leftover debris.

      UXOs - UneXploded Ordnance. Sometimes one can find, in for example old telecom and electrical network management systems, ancient dinosaur protocols from the good old days, when the security feature was that Bad People didn't read manuals. Not firewalled, either.

  2. karlkarl Silver badge

    Whilst I am not particularly attached to a DECnet card, it would be good if, rather than removing it from the kernel and putting it in the bin (or the depths of the Git repo), it could be isolated into a single .h and .c file and made portable and easy to maintain / re-integrate for those interested.

    Yes, there is more to it than just a driver, there is much plumbing (especially for older GPUs too which is where my main thoughts are). However it could be quite a unique feature of Linux to deprecate and remove things in a less "breakful" manner.

    It might not be possible - if so, fair enough - but honestly I feel open-source operating systems are the only ones that could get close to achieving this.

    1. iron Silver badge

      Why would a network protocol require "much plumbing (especially for older GPUs"?

      WTF does a GPU have to do with networking? Especially a network protocol that predates the GPU by a couple of decades.

      1. karlkarl Silver badge

        Perhaps read my post again and you will see that I was talking about drivers in general by that point. My whole post was about seeing if drivers can be put into long term storage rather than destroyed.

        Most drivers need plumbing. Network drivers need e.g. the TCP/IP stack, GPU drivers need e.g. the DRM/DRI stack.

        1. OhForF' Silver badge

          As the article mentions, there was no maintainer for the protocol for over a decade, so I'd say it was already in 'long term storage' for 10+ years.

          The proposal to remove it mentions that most changes done to it were attempts to clean up.

          This shows having the code in the kernel does create additional effort for the kernel programmers.

          When nobody is able and willing to maintain it, and nobody steps forward to say they're still using it, the logical thing is to remove it and get rid of the effort needed to maintain the code in future kernel versions.

    2. Ace2 Silver badge

      They are on record as not giving a fig about out-of-tree users. If it’s not in the kernel, they do not care if interface changes break it.

      The problem of unused drivers is not in testing or anything, it’s in the interface and abstractions. There are always parts of the core code that were done a certain way to enable specific drivers/whatever to do what they needed. (Hooks before or after, things done in a certain order, callbacks for this or that.) People want to improve the core code, but if that requires redesigning DECnet, it’ll never happen.

      1. jake Silver badge

        "They are on record as not giving a fig about out-of-tree users."

        Of course not! It's not in their remit, by definition. If you want something in the mainline tree, do the work to get it put there. If you can't be arsed to do the work, why should the kernel devs waste their time on something they have/had nothing to do with?

        "If it’s not in the kernel, they do not care if interface changes break it."

        Not exactly ... Linus is on record as saying "user space is inviolate".

        1. Ace2 Silver badge

          For “not in the kernel” I should have written “in the kernel but not upstream.”

    3. Malcolm Weir

      Good news! You can get the source for old, tested, and working versions of the kernel by using the magic of code repositories!

      If some unicorn shows up ready and able to polish the DECnet drivers, support them, and (critically) provide some semblance of a justification, they'll be sitting on the shelf ready to go!

      1. Geoff Campbell Silver badge

        The magic of Open Source

        I do sometimes wonder if the posters of "something must be done!" suggestions have any idea how Open Source projects work...


    4. RichardBarrell

      You can run custom network protocols over Ethernet from user space by opening a SOCK_RAW socket and send()ing and recv()ing raw Ethernet frames on it. This works without needing to completely take control of a NIC.

      You can also drive hardware from user space on Linux. You can write a program that mmap()s files in /sys/devices/pci${X}/${Y}/. I'm told this is a common way to interact with small-production-run custom hardware from Linux, because user space is much less difficult than kernel space (e.g. gdb & lldb mostly just work in user space, whereas they are less easy in kernel space). It's also how things like Snabb Switch and Intel DPDK work.
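To make the SOCK_RAW approach concrete, here is a minimal sketch of the Ethernet II framing such a user-space program would send and receive. The helper names and payload are mine, and the EtherType is one of the IEEE values set aside for local experiments; actually transmitting the frame needs an AF_PACKET socket bound to an interface, which on Linux requires CAP_NET_RAW (typically root).

```python
import struct

# EtherType 0x88B5 is reserved by the IEEE for local experimental use,
# so it will not be confused with IP (0x0800) or ARP (0x0806) traffic.
ETHERTYPE_EXPERIMENTAL = 0x88B5

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Prefix the payload with a 14-byte Ethernet II header."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ETHERTYPE_EXPERIMENTAL)
    return header + payload  # the kernel/NIC pads frames shorter than the minimum

def parse_frame(frame: bytes):
    """Split a received frame into (dst, src, ethertype, payload)."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dst, src, ethertype, frame[14:]

# Build a broadcast frame from a locally administered source address.
frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"hello")
```

Sending it is then roughly `socket.socket(socket.AF_PACKET, socket.SOCK_RAW)`, `bind(("eth0", 0))`, `send(frame)` - the protocol logic itself stays entirely in user space.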

      1. TRT Silver badge

        Exactly! Kernel space is just that. It doesn't mean that something is now impossible, just a bit more involved if you have a reason to want it. And if the requirement is strong enough for you, a way will be found.

  3. Doctor Syntax Silver badge

    The only time I was involved with DECnet, the Ethernet addressing assumed that the NIC was made by DEC and hardcoded the DEC-specific part of the MAC. If you wanted to use DECnet on some non-DEC kit you had to overwrite the MAC. How is this handled these days, when the NICs are anything but DEC?

    We were using HP-PA hardware and first turned on the DECnet stack in the middle of the working day. This meant that all nodes using the HP server now had invalid addresses in their caches... The clamour from the users died down as their PCs re-synchronized.
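For anyone curious, the MAC rewriting described above is inherent in the protocol: a DECnet Phase IV node derives its station address from its area.node number, placed little-endian after DEC's AA-00-04-00 prefix. A small sketch of the derivation (the helper name is mine):

```python
def decnet_mac(area: int, node: int) -> str:
    """Derive the DECnet Phase IV MAC address for an area.node pair.

    The 16-bit node address is area (6 bits) * 1024 + node (10 bits),
    appended little-endian to DEC's AA-00-04-00 prefix - which is why
    every interface running Phase IV has its burned-in MAC overwritten.
    """
    if not (1 <= area <= 63 and 1 <= node <= 1023):
        raise ValueError("Phase IV allows areas 1-63 and nodes 1-1023")
    addr = (area << 10) | node
    octets = [0xAA, 0x00, 0x04, 0x00, addr & 0xFF, (addr >> 8) & 0xFF]
    return ":".join(f"{o:02X}" for o in octets)

# Node 1.1 comes out as AA:00:04:00:01:04
```

This also explains the cache-invalidation pain above: turning the stack on changes the interface's hardware address, so every neighbour's ARP-style cache entry for that host goes stale at once.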

    1. Anonymous Coward
      Anonymous Coward

      "If you wanted to use DECNet on some non-DEC kit you have to overwrite the MAC. How is this handled these days when the NICs are anything but DEC?"

      Who gives a shit? DECNet is dead, dead, dead. Even those overseeing the everything-but-the-kitchen-sink approach of the Linux kernel recognise this. Just kill the DECnet kernel code. Kill it with fire.

      1. J.G.Harston Silver badge

        I thought the underlying point with Linux was that you added whatever drivers you wanted to your installation yourself, just like RMLoading a module on RISC OS. After all, I wouldn't expect the operating system I bought to come with a driver for my JayEx LED display board (what's that? Exactly!), I'd load the driver myself when I needed it, or as /my/ default build.

        1. OhForF' Silver badge

          If you really want to you can still build a DECnet kernel module and load that, the kernel infrastructure to do that is there.

          DECnet protocol support will just no longer come out of the box with the vanilla kernel.

        2. david 12 Silver badge

          The underlying point of the linux model is that you can build the kernel yourself, from the Open Source. It doesn't traditionally come with a way of loading/unloading kernel modules, because it had to be portable to generic CPUs, which didn't generally have a hardware-supported method of loading / unloading kernel modules.

      2. Doctor Syntax Silver badge

        Did you miss reading the bit that said VMS is still alive? I doubt those maintaining it consider DECNet dead.

        1. Anonymous Coward
          Anonymous Coward

          No, I didn't miss that bit. VMS is not DECnet and DECnet is not VMS.

          If DECnet is still alive, name one person or company that currently supports, maintains or develops the protocol architecture. For bonus points, name just one enterprise that still runs DECnet (on what?) in a production setting.

          DECnet was given a lethal injection in 2000 (ish) when Compaq closed the VAX production lines. A protocol architecture which ended 20+ years ago is dead IMO. Even this article said the DECnet code in the Linux kernel became abandonware 10+ years ago. If nobody in Linux-land could be bothered with DECnet for over a decade, that's very compelling proof DECnet is dead.

          1. Tony Gathercole ...

            The beginning of the end for DECnet was much earlier in my view

            I was an administrator of various types of DEC kit from 1978 to the turn of the millennium. Supported "DECnet" on many of them, ranging from VAXen and DECSYSTEM-20s at the high end through various PDP-11 (RSX) and PC systems, as well as Ultrix and Tru64/Digital UNIX. Also worked with several third-party implementations for various flavours of UNIX - but never Linux. Saw three or four generations of the DECnet protocols (Phases II, III, IV and V/OSI). Was a founder of, and part of, the oversight team for ICI's (remains of ICI form part of AstraZeneca today) world-wide DECnet network for several years.

            In my opinion the real inflection point for DECnet was in 1987 with the announcement of DECnet/OSI (or Phase V). Previous versions were relatively straightforward to implement and support, but were significantly limited in the number of nodes that could be incorporated into the network (63 "areas", each containing a maximum of 1023 members - routing being either within an area (level 1) or between areas (level 2)). However, the OSI implementation seemed to me to be far more complex (at the DECnet level) and introduced (baffling?) multi-protocol capabilities (anybody else recall those swines called the DECnis 500/600 family?) including IP support that caused much extra work and needed (IMHO) a significantly higher level of training and understanding to implement a large network.

            I'm not saying that the move to Phase V wasn't necessary - if the world had adopted OSI rather than the IP stack then DEC would have been worlds ahead of virtually everybody - with the possible exception of ICL! But it didn't, and IP became dominant as we all know, and DECnet was sidelined and effectively died with the corporation in the Compaq and ultimately HP takeovers.

            Do I care that support for DECnet (I assume Phase V from the article) is being removed from the Linux kernel? To be honest - no; mostly because I left that part of my working life behind more than 20 years ago and left work itself behind seven years back, but also because it's irrelevant in today's IT infrastructure world - with, I guess, very minor legacy exceptions.

            However, its place in IT history should be recognised as an approach to networking in the 1970s and 1980s that was a robust and commercial alternative to IBM's rigid SNA hierarchy. I can't be pedantic about it, but I do believe that for many organisations, their DECnet environment formed a precursor to moving to IP rather than their IBM networks. I know it was for ICI in the late 1980s and early 1990s - even though, as the corporate strategy architect in 1990/91 for "Open Systems", I backed OSI in my recommendations rather than IP as the direction for future networks. Clearly I was wrong - but at least I did recommend a tactical adoption of, and full support for, a corporate global IP network until OSI was more widely available - which of course never happened.

            RIP DECnet - you did a sterling job but your time is well past.

            1. Roland6 Silver badge

              Re: The beginning of the end for DECnet was much earlier in my view

              >then DEC would have been worlds ahead of virtually everybody - with the possible exception of ICL!

              DEC would have still been worlds ahead, remember ICL was wedded to X.25 and Transport Class 3...

              Interestingly, I regarded the DEC and Sun implementations of OSI as being some of the easiest(*) to configure.

              (*)Relative term, not to be used when comparing OSI addressing to /etc/hosts

              1. ICL1900-G3

                Re: The beginning of the end for DECnet was much earlier in my view

                Don't forget CO3 which I vaguely remember was a bit like SNA? I know that you could get CO3 gateway software on VMS.

          2. Roland6 Silver badge

            >For bonus points, name just one enterprise that still runs DECnet (on what?) in a production setting.

            I wonder what those Canadian nuclear plants using legacy DEC kit are using for their networking...


            1. Liam Proven (Written by Reg staff) Silver badge

              [Author here]

              There has been some discussion about that on ClassicCmp.

              There is a TCP/IP stack for RT/11:


              And one for RSX-11:


              There doesn't seem to be one for RSTS/E but there may be a way to get SimH instances of it talking to one another.

              However, yes, true, for older DEC OSes it is non-trivial.

              However, one would hope that those machines are *not* connected to the public internet. ;-)

              1. Tony Gathercole ...

                Early ARPAnet and TCP/IP support in DEC OS

                Following your line - which actually seems to be 180 degrees away from the previous post, i.e. DEC OS running TCP/IP rather than still-active DECnet situations - worth remembering that the DEC 36-bit family were some of the very earliest ARPAnet hosts. Indeed, one of the DECSYSTEM-20 family options was for a specifically ARPAnet-attached system - although I'm unaware of any being installed in the UK or Europe. The KL10-based systems used a dedicated PDP-11 as a TIP (IIRC) - but I don't recall how the 2020 (KS10) was supported. Never had access to an Ethernet-connected DEC-20 so can't recall whether that was an option.

                ARPAnet (I'm sure implying IP stack) capabilities were built into the TOPS-20 operating system (I know that I misappropriated the ARPAnet capability bit to implement a more limited security override for academic service support staff than WHEEL or OPERATOR around 1981, for the Polytechnic I was working for at the time).

                PS: not including the non-Digital OS (e.g. TENEX) for the DEC-10 family in this discussion, but most were key early ARPAnet building blocks.

                1. Roland6 Silver badge

                  Re: Early ARPAnet and TCP/IP support in DEC OS

                  > one of the DECSYSTEM-20 family options was for a specifically ARPAnet attached system - although I'm unaware of any being installed in the UK or Europe.

                  Back then ARPAnet was largely a US academic network. The UK had Janet.

                  I suspect if Hatfield poly didn't have the option then no one in the UK did. (I am sure someone here will be able to confirm this.)

                2. jake Silver badge

                  Re: Early ARPAnet and TCP/IP support in DEC OS

                  "Indeed, one of the DECSYSTEM-20 family options was for a specifically ARPAnet attached system - although I'm unaware of any being installed in the UK or Europe. The KL10- based systems used a dedicated PDP-11 as a TIP (IIRC) - but I don't recall howthe 2020 (KS10) were supported. Never had access to an Ethernet connected DEC-20 so can't recall whether that was an option."

                  I have a printed-out routing table dated March 5th, 1982 ... This is very early on in TCP/IP time; there are fewer than 40 nodes total listed ... Shows UCL at, with UCLNET behind it, listed at, and no other direct-to-GB links. This was before DNS; most of us were using Jon Postel's list[0] for connectivity (RIP, Jon) ... I don't have (or can't find in my piling system) any info on what machines these numbers represent. I'll bet a plugged nickel that one or both of them was a big DEC box, though.

                  [0] With a nod to Jake[1] Feinler and her group.

                  [1] No relation.

            2. R Soul Silver badge

              I'd be more worried about who's maintaining the hardware and software.

              Besides, it's usually a very bad idea to put life-and-death critical systems on any kind of network.

          3. Doctor Syntax Silver badge

            "No, I didn't miss that bit. VMS is not DECnet and DECnet is not VMS."

            Fine. Now point out where I expressed any opinion whatsoever whether it should be removed from the Linux kernel or not.

          4. Down not across

            Name one person...

            If DECnet is still alive, name one person or company that currently supports, maintains or develops the protocol architecture.

            I do. I still have some old VAX and Ultrix boxen and some other ancient DEC hardware for example DECservers that boot using MOP over DECnet.

      3. Dirk Munk

        Overwriting MAC address

        With DECNET Phase IV the MAC address of the NIC was indeed overwritten by an address in the shape of AA-00-04-00-AB-CD, whereby AB-CD was calculated from the DECNET address, ranging from 1.1 to 63.1023, whereby 1 and 63 were the area addresses and 1 and 1023 the node addresses within an area. Contrary to IP, areas were not physical locations. Areas can be spread out and connected by routers. DECNET addresses are allocated to hosts, not interfaces!

        Overwriting MAC addresses was not unique to DECNET, other networking protocols (SNA-400 etc.) did it as well. You can still change the MAC address of any NIC, it is a setting in the properties of a Windows NIC.

        DECNET Phase V , if it is not in DECNET Phase IV compatibility mode, does not overwrite the MAC addresses.

    2. Duncan Macdonald

      The NIC hardware address still has to be overwritten. DECnet uses a NIC address of AA-00-04-00-xx-yy where xx-yy are the six bits of the area address followed by the 10 bits of the node address within the area.

      This (at the time elegant) fudge allowed easy DECnet routing on an Ethernet without needing dedicated routing hardware or software.

      Note at the time that DECnet Phase 4 was developed 10Mbit/sec Ethernet over thick coax cable was high speed networking and the majority of the PDP-11s in use could not manage even 1 Mip - saving processor overhead was important.
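      The arithmetic behind that fudge is simple enough to sketch (my own illustration, assuming the commonly documented encoding: the 16-bit node address is area*1024 + node, appended low byte first after the fixed AA-00-04-00 prefix):

      ```python
      # Sketch of the DECnet Phase IV MAC derivation described above.
      # Assumes: 16-bit node address = (area << 10) | node, appended
      # little-endian (low byte first) to the fixed AA-00-04-00 prefix.
      def decnet_mac(area: int, node: int) -> str:
          if not 1 <= area <= 63:
              raise ValueError("area is 6 bits: 1-63")
          if not 1 <= node <= 1023:
              raise ValueError("node is 10 bits: 1-1023")
          addr = (area << 10) | node  # 16-bit DECnet node address
          return "AA-00-04-00-%02X-%02X" % (addr & 0xFF, addr >> 8)

      print(decnet_mac(1, 1))      # DECnet address 1.1
      print(decnet_mac(63, 1023))  # the highest possible address
      ```

      So node 1.1 comes out as AA-00-04-00-01-04, which is why whole fleets of DEC machines shared that prefix regardless of who made the NIC - and why, on a router, 32 of the 48 MAC bits carried no information worth storing.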

      1. Warm Braw

        It's probably worth elaborating that a little further.

        There was no fundamental architectural reason to change the MAC address: endnodes sent out periodic "hello" messages that could have been used to map the 16-bit node address to a 48-bit MAC address. However, given that 1000 endnodes were permitted on a single LAN segment, it would have meant routers reserving potentially at least 6KB of space for the mapping - which is a big chunk of a 64K address space being used for other kernel things - and that endnodes (which may well be CPU-constrained) might have to look up a 48-bit key to find the associated 10-bit DECnet local area address. Fixing 32 bits of the MAC address got rid of those problems.

        Interestingly, the first Phase V routers were faced with a different constraint: the change from routing vectors to link state routing made it very difficult for contemporary hardware to find room for the routing database which led to some considerable arguments in the organisation.

        Also worth pointing out that DECnet made much more use of Ethernet architectural features: it used different multicast addresses to segregate endnode (host) traffic and Level 1 and Level 2 routing traffic, and used different protocol types for discrete functional operations (like remote booting).

        While DECnet's day has clearly gone, I do regret that accessing those Ethernet architectural features is still rather more trouble than it needs to be on Unix/Linux: there's an implicit assumption that the network is there for IP and convincing the network driver otherwise requires some effort. And, indeed, privilege/capability.

        1. Malcolm Weir

          Many forget that DIX Ethernet is A Thing, and even if the D and the X are gone, the work they did has not.

          (Sadly, the quick-and-dirty hacks that one Mr Bill Joy got up to late one night lasted much longer than he, or anyone else, expected...)

      2. bill 27

        IIRC, and I'm probably wrong since it's been about 3 decades, the "AA-00-04-00-xx-yy" address was rewritten to "yy-xx-00-40-00-AA" as the MAC. That way, in hardware it would do a bit compare. First mismatch and it bailed because it was for a different address.

        1. pkoning

          Nope. MAC addresses are sent left to right, and multicast bit (LSB, except for 802.5) first.

          Hardware didn't care at all about the DECnet special address; it was purely a software hack created to keep DECnet/VMS implementers happy. (They claimed that doing 48 bit address matches was too hard.)

          There is a not-well-known extension of Phase IV that adds support for arbitrary MAC addresses, i.e., it removes the prefix-setting requirement. It's called "Phase IV Prime" and documented in a spec that adds 802.5 support to DECnet. It doesn't seem to be widely implemented, but it wasn't difficult to do; the changes only take a few pages to describe.

    3. Bitsminer Silver badge

      How is this handled these days when the NICs are anything but DEC?

      In fact all Ethernet NICs can change their MAC address upon command. That's how you get "random MAC address" options as a security feature on phones or PCs.

      In DECnet, the feature allows you to send a packet directly to the node in question, since the necessary MAC address is a calculated number.

      Contrast IP over Ethernet, where designers added the ARP protocol to answer the question "what is the MAC address for". Once you have that answer you can then send a packet directly.

  4. AlanSh

    I used to love DECnet

    I joined DEC back in 1986 and was one of the first to show PATHworks to customers (anyone remember the VAXmate?). I was friends with the USA development team (I pointed out a NetBIOS bug and showed what the fix would be) and used DECnet for DOS all over the place. DECnet itself was pretty good - and as the article says, could be accessed all over the world. They were good days back then - shame it's all just TCP these days.


    1. Anonymous Coward
      Anonymous Coward

      Re: I used to love DECnet

      The VAXmate .... the Oztralian version along with the DECMate ....

    2. Geoff Campbell Silver badge

      Re: I used to love DECnet

      Pathworks (and PCSA before that) was a great system. Worked a lot better once we got the memory management stuff fully sorted so it could co-exist with other network drivers.

      They were happy days, and a much simpler time. I lost track of the number of hard drives I replaced in VAXmates, and selling the special VT-compatible PC keyboards to banks and dealing houses to replace the ones that got stuff spilled on them during games of office rugby or cricket was a constant source of income for us.

      I think perhaps I might have worked with you at some point, I was at Network Connection Ltd. around 1987-1994 sort of era, we were (or considered ourselves to be, at least) one of the leading third party PC-VAX integration houses in the UK.


      1. Steve Graham

        Re: I used to love DECnet

        I've still got a keyboard! LK250? It has a PS/2 interface, of course: I wonder if you can still get PS/2 - USB converters.

        1. Geoff Campbell Silver badge

          Re: I used to love DECnet

          LK250 certainly sounds right.

          You certainly can get converters, for not a lot of money:


  5. OpenSauce

    Reminds me of the related phrase...

    How does a Decnet network work?


  6. jake Silver badge

    Not much to worry about.

    If you need a FOSS solution for DECNet, I rather suspect that FreeBSD will support it until roughly the heat death of the Universe.

  7. spireite Silver badge

    Why remove?

    I can't imagine that it adds much payload to a distro, and I don't imagine it needs much maintenance...

    Why not leave it?

    (Genuine question, and applicable to the other old stuff)

    1. Anonymous Coward
      Anonymous Coward

      Re: Why remove?

      I think it's a case of removing unused code which holds the potential for unintended interactions - given that it's also unmaintained, such a problem would not be solved quickly. Best to remove it completely; it also makes the code tighter.

      AFAIK it'll still hang around in the archives anyway, and if anyone really wants it they're advised to put some resource into it.

    2. Graham Cobb Silver badge

      Re: Why remove?

      There are occasional significant changes to internal kernel driver interfaces or best practices. Sometimes to enable optimization available on new processors, sometimes to avoid or reduce security issues, sometimes to enable new kernel features.

      They are normally designed such that changes to existing drivers are minimised, but sometimes can't be completely avoided. With virtually unused code, no one is really sure the changes don't break something: sure, the automated tests pass, but if no one is doing real-world testing there could be bugs. So removing kernel code which does not have active maintainers is always encouraged.

  8. OldCrow 1975

    Good by PDP11

    I feel that there should be a funeral. Many hours of this man's life have gone silently into the night.

    1. Anonymous Coward
      Anonymous Coward

      Re: Good by PDP11

      True. Digressing slightly, OTOH, I like the silence - there's nothing I find more annoying than Windows' default habit of pinging, bleeping and making other noises during the day like a child that is desperate for attention. In other OS I can at least control what makes noise so I can limit it to things that actually matter, but in Windows I eventually resort to just killing the sound altogether.

      But yes, let's not forget the gear and code that got us started. I support two computer museums for exactly that purpose, but there isn't really a software museum now, is there?

      1. Anonymous Coward
        Anonymous Coward

        Re: Good by PDP11

        If it's not a software museum, WTF do you think Windows is?

        1. Doctor Syntax Silver badge

          Re: Good by PDP11

          Rubbish dump?

      2. jake Silver badge

        Re: Good by PDP11

        "there isn't really a software museum now, is there?"

        Several, actually. Here's a good place to start looking:

        Being rather biased on the subject, I'd be remiss in not mentioning The Unix Heritage Society, which you can check out for yourself at the rather imaginatively named

  9. Korgonzolla

    I've this impression that a lot of the commentators on The Reg are gnarly old Unix admins who still maintain the last boxes in their organisation running VMS, AIX, and OS/390. Getting nostalgic about VIM, having to deal with "the business" in getting time to backup to tape, and operating the first BBS in Norfolk.

    Time not spent in a state of nostalgia about how good their job was in the good old days is taken up with drinking cask ale, watching test cricket, and moving ever more slightly to the right each year.

    1. Nate Amsden

      nostalgic about VIM

      Seems like a strange statement to make about a tool that is still used by many every day. I've never personally had experience with VMS or OS/390, and only a tiny bit of AIX at a Unix software development company 20 years ago. But Vim I use daily, as do many, many others.

      As for tape, there is more capacity of tape being sold pretty much every year. I pushed for tape at my org a couple of years ago for the IT team to back up with Veeam. Offline backups (not in the tape drive) can't be hit by ransomware or any other online attack.

      1. An_Old_Dog Silver badge

        Mainframe utilization

        In my college days I visited a school with an IBM System/360-something and a Decsystem20. Admin boosted utilization of the batch-only S/360, and reduced the load on the much-more-popular, interactive, Decsystem20, by making S/360 time free. All you had to do was pay for your punch cards.

    2. Malcolm Weir

      OS/390?? That's new, that is.


    3. Warm Braw

      moving ever more slightly to the right each year

      Speaking as one of the gnarly oldsters, it seems to me that's mostly a feature of the jejune youngsters hereabouts self-consciously imitating their swivel-eyed brethren across the waters.

      I blame that new-fangled CB radio, myself. And MTV.

    4. R Soul Silver badge

      what's this vim shit?

      "Getting nostalgic about VIM"

      That's for PFYs. Gnarly old Unix admins like me get nostalgic over vi which predated vim by about 20 years.

      1. Doctor Syntax Silver badge

        Re: what's this vim shit?

        No need to get nostalgic. Just apt install nvi

      2. An_Old_Dog Silver badge

        Re: what's this vim shit?

        "Ed is the standard editor." -- Unix documentation from 6th Edition (or was it 7th?).

        1. jake Silver badge

          Re: what's this vim shit?

          ""Ed is the standard editor." -- Unix documentation from 6th Edition (or was it 7th?)."

          Bill Joy put a happy face on that back in '76.

    5. Doctor Syntax Silver badge

      "gnarly old Unix admins who still maintain the last boxes in their organisation running VMS, AIX, and OS/390."

      This gnarly old - and long retired - Unix admin-and-everything-else looked on VMS as something that was already from the past but still had to be lived with

      OS/390 was something with which I never had to deal although the same VMS-using business I mentioned in my story from c 30 years ago did also have some species of IBM mainframe.

      IBM mainframes, of course, are still current. I've no idea what lies at the bottom of their OS stack but I do know that not only is there a Linux port for them but they're capable of/very good at running multiple Linux instances.

      1. Doctor Syntax Silver badge

        I should have added that if anything still depends on DECnet now, it's likely to be some quite critical and not at all easy to replace legacy system.

    6. Anonymous Coward
      Anonymous Coward

      You're not wrong, but they get grumpy when you point it out - expect a flood of downvotes. Read any of the article comments on Devuan or systemd: you can tell who's in a position where they have to work with whatever the company or client tells them to, whether it's a good idea or tool or not; who can dictate to the client what they're going to use, for a number of reasons; and who has a clearly antiquated idea of how IT works, at least where the rubber meets the road, as they haven't done anything but supervision and policy for 20 to 25 years.

  10. Wanting more

    DECNet new fangled...

    Last time I used a VMS VAX I was connected to it via a serial cable! Early 1990s

    1. Anonymous Coward
      Anonymous Coward

      Re: DECNet new fangled...

      Last time I used "Set host ..." over DECnet was yesterday (that's 3-Aug-2022)

      1. Anonymous Coward
        Anonymous Coward

        Re: DECNet new fangled...

        Gnarly old days apparently

        set host bonnet


        vtx vogon ....

        dtr plot wombat

  11. sebacoustic

    Protocol wars over?

    There was an article in a certain IT news source last week.. I distinctly remember. Google "There is a path to replace TCP in the datacenter"

    1. PhoenixKebab

      Re: Protocol wars over?

      Well, we need to finish one protocol war before starting the next, else things could get confusing.

  12. Morten Bjoernsvik

    metrics please

    Is there any metric-gathering entity that can back up the decision?

    There is nothing as good as showing management numbers that contradict their decision.

    And it happens a lot. in the IoT space there are lots of protocols and hardware now outdated, but still being used.

    When you see 10% of your customer base still using a dead protocol with old hardware, abandoning it will cause riots.

  13. GruntyMcPugh Silver badge

    Oh, DECnet... in the mid-90s I was a computer operator for the Starlink Project (a bunch of universities and research groups pooled resources to develop software for astronomy) and we were allowed to run DECnet over JANET. We hosted various resources, like data archives, for other researchers to access etc.

  14. IGotOut Silver badge

    Let's face it.

    If you're supporting a decades-old system, using a Linux box that's a year or two old is hardly going to be the most traumatic thing to deal with.

    I ran a Windows 98 box for 15 years to support a single bit of hardware until it was retired.

    People moan about the size of installs, then complain about removing old cruft. Things just need to be pruned from time to time.

  15. Dirk Munk

    This article is about DECNET Phase IV, and not DECNET Phase V / OSI.

    DECNET Phase IV was superseded by DECNET Phase V years ago. The management of DECNET Phase V is completely different from that of DECNET Phase IV, and that is why it wasn't adopted by many customers. Which is a shame really, because once you know how to manage Phase V at every one of the seven layers of the OSI model, the magnificent idea behind the whole OSI set-up becomes clear.

    Although DECNET Phase IV and Phase V are completely different protocols (far more different than IPv4 vs. IPv6), DEC managed to build a DECNET Phase IV compatibility mode into DECNET Phase V, so a DECNET Phase IV node can communicate with a DECNET Phase V node. No such thing in IP land between IPv4 and IPv6.

    Furthermore, DECNET is completely integrated into VMS. A full file specification in VMS is node::disk:[directory]file-name.file-extension;version. An application is not aware of whether a file is on the same node or on another node. By changing the file name you can read or write a file on another node. Just a simple example.
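    A rough sketch of how such a specification breaks apart (my own illustrative regex and example file names, not an official VMS parser):

    ```python
    import re

    # Hypothetical sketch: split a full VMS file specification of the form
    # node::device:[directory]name.type;version into its parts.
    SPEC = re.compile(
        r"^(?:(?P<node>[\w$]+)::)?"        # optional DECnet node
        r"(?:(?P<device>[\w$]+):)?"        # optional device/disk
        r"(?:\[(?P<directory>[^\]]+)\])?"  # optional [directory]
        r"(?P<name>[^.;]+)"                # file name
        r"(?:\.(?P<type>[^;]*))?"          # optional .type
        r"(?:;(?P<version>\d+))?$"         # optional ;version
    )

    m = SPEC.match("HUEY::SYS$DISK:[SMITH.SRC]LOGIN.COM;3")
    print(m.groupdict())
    ```

    The transparency the comment describes comes from the fact that only the optional node:: prefix changes when the file lives elsewhere; the syntax an application sees is otherwise identical.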

    Proper OSI routers are also superior to other routers. With the same single IS-IS process, you can route OSI (DECNET Phase V), DECNET Phase IV, IPv4, IPv6, and even apple-talk.

    Of course DECNET Phase V can also handle 'real' OSI protocols like FTAM.

    There was also a DECNET Phase IV stack for the Apple Macintosh. I used it to manage a large number of Mac workstations. I could run a batch of very simple file-copy commands on my VAX workstation to synchronize the font libraries of all these Macs, for instance. I have no idea how this could have been achieved with the Mac OS itself.

    IP has always been an add-on to an operating system; DECNET was always an integrated part of VMS. I know which I prefer.

    1. pkoning

      Phase V did not, in a practical sense, supersede Phase IV.

      Yes, that was DEC's plan, but it didn't succeed. Part of the reason is the dramatically greater complexity of Phase V; part of the reason is the emergence of TCP/IP as a major market force.

      I don't remember that any OS except for VMS implemented Phase V. So in practice, any mixed-environment DEC customer would keep running Phase IV. And Phase IV was in fact entirely acceptable for just about any customer network. DEC's own internal network was large enough to make it a bit of a challenge (the solution was the DECnet analog of private subnets such as subnet 10). I never heard of any DEC customer with a network that big.

      1. Dirk Munk

        DECNET Phase V was DEC's implementation of OSI, and OSI was used by many other operating systems. You might even go so far as to say that DECNET was an OSI application, a way to tunnel DECNET traffic over OSI.

        DECNET Phase V can also use IPv4 as a transport, completely transparently to the applications. The application has no knowledge that IP is being used. There is even an RFC to use IPv6 as the transport, but it was never implemented.

        FTAM is an OSI application, not a DECNET Phase V application. It is part of the DECNET Phase V kit, and with FTAM you can do file operations between any OSI capable hosts.

        OSI has been set up to create transparent communication between different operating systems, while keeping the full functionality of each operating system. As far as I'm aware, OSI FTAM should translate an EBCDIC file to an ASCII file while it is being copied, and vice versa.

        There is also an OSI terminal application; with that you can access systems that use character-mode terminals and systems that use block-mode terminals. I know, terminals are a bit old-fashioned now, but it is just an example of what OSI was supposed to do.

        Yes, OSI is complicated, but it is very well designed. IP, on the other hand, is a kind of out-of-control hobby project without any real standards. Sounds odd? Try to find out how many RFCs have truly become officially defined standards. Just a very few; most RFCs never became a real official standard. IP is a quick and dirty solution for networking; it works, but not by design.

        1. FrankAlphaXII

          It's funny - when I was in school for networking (dual enrollment my senior year of high school; we did Net+ and A+ and started MCSE) around 2002, they were using the OSI model to teach TCP/IP.

  16. hnwombat

    Quorum call!

    /me ducks and runs

  17. Malcolm Weir

    Dual IP stacks do exist...

    "No such thing in IP land between IPv4 and IPv6."

    Well, except for dual-stack systems. The PC I'm writing this on seamlessly lets me interact with over IPv4 [] and over IPv6 [2607:f8b0:400f:803::2004], so the evidence is quite strong that there is such a thing in IP land...

    The point (as you illustrate when enthusing about OSI in general) is that TCP and UDP don't care whether you use IPv4 or IPv6, which is how it should be.
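    One concrete point of contact between the two families (a minimal sketch using Python's standard ipaddress module; the address is the RFC 5737 documentation range, chosen just for illustration): on a dual-stack host, an IPv4 peer of an IPv6 socket shows up as an IPv4-mapped IPv6 address, so one socket can serve both families.

    ```python
    import ipaddress

    # An IPv4-mapped IPv6 address (::ffff:a.b.c.d) is how a dual-stack
    # IPv6 socket represents an IPv4 peer.
    mapped = ipaddress.ip_address("::ffff:192.0.2.1")
    print(mapped.ipv4_mapped)  # the embedded IPv4 address

    # A plain IPv6 address carries no embedded IPv4 address.
    print(ipaddress.ip_address("2001:db8::1").ipv4_mapped)
    ```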

    1. Dirk Munk

      Re: Dual IP stacks do exist...

      Yes, obviously you can use IPv4 and IPv6 on a dual-stack host. But those are two completely separate network stacks; they don't know of each other's existence.

      If IP had done it the way DECNET did, you could remove the IPv4 stack from your PC, keep just the IPv6 stack, and still be able to communicate with IPv4 hosts. Quite an essential difference.

      Furthermore, any IP application would have been able to use IPv6 without rewriting the code. The application would not know that it is using IPv6.

      Suppose IP could have done that; then IPv4 would have been long gone from the internet.

      By the way, since it is very obvious that IPv4 has been totally unsuitable for the internet from the very beginning (NAT is one of those dirty workarounds), I have been using IPv6 for many years now.

      1. Doctor Syntax Silver badge

        Re: Dual IP stacks do exist...

        "since it is very obvious that IPv4 has been totally unsuitable for the internet from the very beginning "

        In the same way that it's very obvious from basic aerodynamics that the bee can't fly?

        1. Dirk Munk

          Re: Dual IP stacks do exist...

          Seems you've forgotten about the class A, B and C networks in the original version of IPv4?

          Half of the IPv4 address space was consumed by 125 organizations with a class A network.

          A quarter of the space was used by some 16,000 class B networks.

          And then there were some 2,000,000 class C networks.

          That set-up was a joke for a world-wide network, so they changed the concept and introduced Classless Inter-Domain Routing (CIDR). I remember working with equipment that could not handle CIDR, so it was a big change.

          That was still not enough address space, so NAT routing was invented. Effectively it means that port numbers are used to increase the number of possible IPv4 hosts (not addresses!) on the internet.

          So it is very obvious that IPv4 in its original setup was totally unsuitable for the internet from the very beginning, and even after all these changes it is still unsuitable because of its limited address space.

          OSI, with a 160-bit address space, was ready and working long before IPv6, with its 128-bit address space, came into being.
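          The classful shares described above follow directly from the bit layout: class A used a leading 0 bit and 7 network bits, class B a leading 10 and 14 network bits, class C a leading 110 and 21 network bits. A quick back-of-the-envelope Python check of the arithmetic (my own tally, not quoted from any RFC):

          ```python
          # Each class's slice of the 2^32 IPv4 address space under the
          # original (pre-CIDR) classful scheme.
          TOTAL = 2 ** 32

          classes = {
              # name: (networks, hosts per network)
              "A": (2 ** 7,  2 ** 24),  # leading bit 0:    7 network bits, 24 host bits
              "B": (2 ** 14, 2 ** 16),  # leading bits 10:  14 network bits, 16 host bits
              "C": (2 ** 21, 2 ** 8),   # leading bits 110: 21 network bits, 8 host bits
          }

          for name, (nets, hosts) in classes.items():
              share = nets * hosts / TOTAL
              print(f"Class {name}: {nets:>9,} networks of {hosts:>10,} hosts "
                    f"= {share:.3f} of the space")
          ```

          That gives one half, one quarter, and one eighth of the space respectively, which matches the roughly 16,000 class B and 2,000,000 class C networks mentioned above (16,384 and 2,097,152 exactly).
          
          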

          1. jake Silver badge

            Re: Dual IP stacks do exist...

            "That set up was a joke for a world-wide network"

            It was not a joke for a world-wide network built only to research networking. In fact, it worked incredibly well. As proof, I submit that you are using it to read this message right now, ~40 years later. And so can yer DearOldMum.

            "The OSI network" (whatever that is/was)? Not so much.

            Babble doesn't count. Running code trumps all.

  18. Pirate Dave Silver badge


    Not as dead as thought. When I last looked about 5 years ago, there were still a few Chinese print server boards using IPX/SPX for their initial setup. I know Trendnet was, and one other that I can't remember.

    TCP/IP might have won, but IPX/SPX was still a pretty decent protocol. If memory serves, I think it was in Win2k or maybe NT 4 where Microsoft's IPX/SPX stack was actually faster than their NetBIOS and TCP/IP stacks. But, like NLM executables, it lost its backing and slowly faded into the woodwork.

  19. VMS Software Inc.

    VSI OpenVMS V9.2 can use TCP/IP to communicate with Linux

    VSI OpenVMS V9.2, just like all the previous VSI OpenVMS versions, supports both DECnet and TCP/IP; the latter can be used to communicate with Linux systems. Native compilers for OpenVMS on x86 are currently tested by a number of our customers and will soon be available.
