OpenVMS on x86-64 reaches production status with v9.2

VMS Software Inc. has announced the release of OpenVMS 9.2, the first production-supported release for commercial off-the-shelf x86 hardware. The expectation is that customers will deploy the new OS [PDF] into VMs. Most recent hypervisors are supported, including VMware (Workstation 15+, Fusion 11+ and ESXi 6.7+), KVM (tested …

  1. karlkarl Silver badge

    This is actually very cool. Shame it isn't open-source so I personally would not recommend committing to it. Still fun to play with however!

    1. Norman Nescio Silver badge

      Microfiche

      Shame it isn't open-source

We had the BLISS/Macro-32 source on microfiche, and it came in very handy for debugging obscure problems.

      Corroboration here: https://www.osnews.com/story/27871/hp-gives-openvms-new-life/

      So while you couldn't modify and distribute the source, it was available for inspection, and jolly useful it was too.

  2. cageordie

    I wonder how many people still remember how to use it?

I wrote quite a few thousand lines of Ada on VAX/VMS 5.5 and 6. I also ran a few machines for several years before moving to SunOS, Solaris, and Linux. I wonder how many people would know how to stop the print queue without losing the in-progress job. Not that I want to go back to that job.

    1. Stumpy

      Re: I wonder how many people still remember how to use it?

      I started my career as a VMS operator. Can't remember a blasted thing about it these days though. I'd be totally lost without a comprehensive manual now.

      1. Swarthy

        Re: I wonder how many people still remember how to use it?

The only thing I still remember about VMS is the in-built versioning and that it had a database-based filesystem.

        That clustering though, that sounds tailor-made for a cloudy upstart.

        1. Paul Crawford Silver badge

          Re: I wonder how many people still remember how to use it?

I remember that as well: we had to change some software/scripts to delete a small file instead of repeatedly overwriting it, as after a while it choked the disk up with all those past versions!

I also remember the Alpha machines as stupendous in their day; just a shame they were dropped for the Itanic.

          1. coconuthead

            Re: I wonder how many people still remember how to use it?

            SET DIR/VERSION_LIMIT=3 [.mysubdir]

            CREATE/DIR/VERSION_LIMIT=3 ...

            would ensure that when a new version was created, older versions were removed.

            1. Vometia has insomnia. Again. Silver badge

              Re: I wonder how many people still remember how to use it?

              My daily tidy at college was PURGE, because my quota was only 1,000 blocks of storage, followed by RENAME *.*.* *.*.1 because the wonky numbering offended me. Dunno what that might've done with the backups tho'.
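For anyone who never met Files-11 versioning, the behaviour being tidied up here can be sketched in a few lines of Python (purely illustrative - the file name and the PURGE/KEEP semantics are modelled from the DCL commands mentioned in this thread, not from VMS internals):

```python
# Rough sketch of Files-11-style version numbering: every save creates
# NAME;n+1, and PURGE/KEEP=n throws away all but the n highest versions.
class VersionedDirectory:
    def __init__(self):
        self.files = {}  # filename -> ascending list of version numbers

    def save(self, name):
        """Each save creates a new highest version, like Files-11."""
        versions = self.files.setdefault(name, [])
        versions.append((versions[-1] if versions else 0) + 1)

    def purge(self, name, keep=1):
        """Model DCL's PURGE/KEEP=n: delete all but the n newest versions."""
        self.files[name] = self.files[name][-keep:]

    def listing(self, name):
        """Directory listing, newest version first."""
        return [f"{name};{v}" for v in reversed(self.files[name])]

d = VersionedDirectory()
for _ in range(5):            # five edits -> versions 1..5
    d.save("LOGIN.COM")
d.purge("LOGIN.COM", keep=3)  # keep only ;5 ;4 ;3
print(d.listing("LOGIN.COM"))  # ['LOGIN.COM;5', 'LOGIN.COM;4', 'LOGIN.COM;3']
```

The RENAME trick above would correspond to resetting each surviving file's version list back to [1].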

          2. Primus Secundus Tertius

            Re: I wonder how many people still remember how to use it?

What choked the VAX I worked on was the Common Data Dictionary, which somehow got larger every time we did something (now forgotten) involving Datatrieve, DEC's query language for various types of files and for their relational database.

            Various types of files: serial, sequential, indexed sequential. And various types of text: implicit CRLF, explicit CRLF, and indexed text accessible by a line number part of each record.

            But at least it was all ASCII, this was before the days of UTF-8, UTF-7, UTF-16LE, UTF-16BE.

            1. Norman Nescio Silver badge

              Re: I wonder how many people still remember how to use it?

As a means of querying arbitrary data in structured files, DATATRIEVE was really quite neat. I never used the plotting routines (PLOT WOMBAT, anyone?), but for querying what were quite large files for the time, it was a very useful tool for me.

        2. Liam Proven (Written by Reg staff) Silver badge

          Re: I wonder how many people still remember how to use it?

          > a database-based filesystem

          I think you may be confusing it with Pick there.

          RDB was the standard database for VMS. Oddly, it is now owned by Oracle.

          https://www.oracle.com/database/technologies/related/rdb-doc.html

          1. Steve Kerr

            Re: I wonder how many people still remember how to use it?

            I think I know the confusion here.

From memory (probably wrong), the filesystem was called Files-11 or some such.

The file system had extensions to create databases as part of the filesystem, for which you used FDL (File Definition Language) to specify things like fields, sizes, types, bucket sizes and indexes; with that you could basically create a database file.

It was efficient, and it meant you could use standard backups since everything looked like a file. Some really funky stuff.

My best effort was being able to pass commands to batch jobs in real time using the OPER console functionality, so you could do a REPLY/TO at the OPER prompt to direct the program what to do next; it even did sanity checks on input. The best bit was a colleague saying it was impossible and couldn't be done, and then me presenting: yeah, here's how to do it, I wrote this in a previous company. :-)

            1. rnturn

              Re: I wonder how many people still remember how to use it?

> The file system had extensions to create databases as part of the filesystem, for which you used FDL (File Definition Language) to specify things like fields, sizes, types, bucket sizes and indexes; with that you could basically create a database file.

              Wasn't that RMS (Record Management System)?

I recall working at a place that relied on it heavily. Someone in the Finance department would regularly use EVE to open up one of the RMS files to look up information, "save" instead of "quit", and screw up the indices for that file, causing that night's batch processing to crash. We wound up getting permission to restrict the offender's account to read-only access to those files. (Thanks to VMS's ACLs and rights identifiers.)

          2. Kubla Cant

            Re: I wonder how many people still remember how to use it?

            I was working on Rdb systems when it was sold to Oracle. We all went to a presentation at the local Oracle office. There they cheerily told us that future support contracts were going to be very, very expensive, because that's the only way they could fund development. This may be a software fact of life, but only Oracle would boast about it.

          3. Anonymous Coward
            Anonymous Coward

            Re: I wonder how many people still remember how to use it?

My only experience of VMS was porting a Unidata-based software solution to it, down in the Reading office of DEC. Unidata being a Pick clone. Got it working, but maintaining two versions of the application would have been painful, so it got binned off.

          4. captain veg Silver badge

            Re: confusing it with Pick

            One similarity is that Dick Pick (yes, really) was not averse to restarting version numbers at 1 in the same manner as OpenVMS. The shop where I started my "career" used Advanced Pick Open Architecture in both version 1 and 2 flavours, which followed, naturally enough, from R83. The main difference between OAs 1 and 2 was that the latter's BASIC (or perhaps Basic, or basic) was case-insensitive.

But yes, in Pick everything is a record in a database. Well, a NoSQL key-value store, as no one ever called it at the time. Other "modern" features included a runtime virtual machine and demand-paged memory virtualisation, so no distinction between RAM* and disk.

            It was a pretty amazing -- and amazingly efficient -- system, except for PICK BASIC. That really sucked. Still, we got it to do useful stuff.

            -A.

            *Even better, "our" hardware had SRAM, so you could switch it off and, some arbitrary time later, switch it back on again, and it would just continue where it had left off.

          5. Vometia has insomnia. Again. Silver badge

            Re: I wonder how many people still remember how to use it?

            Possibly due to the inclusion of RMS: native (at least from an application perspective) support of ISAM was a nice thing to have. Then again, MVS did the same and I wouldn't describe its bizarre filesystem (well, bizarre to people used to Unix & DOS hierarchies) as database-based; OS/400 was IBM's offering in that regard, or at least described to me as such.

      2. Yet Another Hierachial Anonynmous Coward

        Re: I wonder how many people still remember how to use it?

The one thing I remember is that it had plenty of comprehensive manuals. By the shelfful, or even the roomful. Documentation galore. There must have been an impressive team churning out and updating all that documentation.

        1. Ken Smith

          Re: I wonder how many people still remember how to use it?

As an ex-Deccie I often wondered whether DEC was really a publishing business that sold computers as a sideline.

          1. Scotthva5

            Re: I wonder how many people still remember how to use it?

            Back in the day I asked one of our Dectites (the DuPont name for those brave unfortunate souls) a technical question and he referred me, with some reverence I might add, to the VAX/VMS documentation library holding forth in its own vast storeroom. Had to supply my own breadcrumbs to find my way back. Never did find the answer.

            1. smithwr101

              Re: I wonder how many people still remember how to use it?

              Was that the Big Orange Wall, or The Big Grey wall?

              1. rnturn

                Re: I wonder how many people still remember how to use it?

At one point, it became the Big White Wall. That was all paperbound volumes instead of the loose-leaf binders. (Kind of a stupid transition, IMHO, when it came time to issue updates/corrections to the documentation.)

It was about that time when someone from DEC called asking if we would still want hard-copy documentation now that it was available on CD-ROMs (which was likely why they didn't care about the previous ability to insert updated pages into the old binders). The caller was puzzled when I said "Yes". And they didn't seem to understand when I tried to explain that DEC's CD-ROM drives were crazy expensive, that you needed several of them to keep all the documentation online, and that, with our being an educational institution, we didn't have pallets of cash lying around.

            2. VAX Wizard

              Re: I wonder how many people still remember how to use it?

              In my days with VAX/VMS, we called them DEChands.

I chaired many sessions at the twice-yearly DECUS symposium. In the Q&A following a Digital presentation on Install, I found that I had more answers than the Digital person. He turned to me and suggested that I do a presentation. At the next DECUS, in Dallas, I presented "Creation of an Install Package for a VAX/VMS Version of GMNet". It did amaze me that the large room was packed.

Of course, the Digital folk had trouble comprehending Position Independent Coding (PIC) in DCL. I was a strong advocate for PIC, especially in boot-time setup scripts for applications. The lexical functions in DCL were much easier to use than the likes of 'argv' in all those UNIX variants and imitators. It was actually fun to write, and it made subsequent system administrators' jobs easier.

At $DAYJOB, where I produced GMNet for the VAX, desktops migrated from IBM and ASCII terminals to Windows PCs. This was quite a step down from VAX/VMS -- until I managed to acquire a Macintosh IIci to support wide-area Apple networking for $BIGCLIENT.

          2. Vometia has insomnia. Again. Silver badge

            Re: I wonder how many people still remember how to use it?

            As another one, I think the documentation is one of our largely unsung greatest products. Especially compared to practically any e.g. Micros~1 manual that waffled on for hundreds and hundreds of pages without conveying any useful information, the DEC manuals were well-organised, covered so much stuff in a very readable manner and were full of examples and as much technical information as one could imagine. It's quite rare to see documentation as good elsewhere, but it was just The Standard™ at DEC.

            Though I also lived through The Logo Years, where the greatest corporate achievement was swapping the red and blue channels on the d|i|g|i|t|a|l logo and giving the "i"s round dots instead of square ones. And making a big point to get it right while the hardware groups just completely ignored them and continued doing dark grey or white logos. And were we Digital or digital or DEC or Digital Equipment Corporation or Digital Equipment Company? argh.

            1. david 12 Silver badge

              Re: I wonder how many people still remember how to use it?

              The early MSDOS (and PCDOS) manuals were very good, "well-organised, covered so much stuff in a very readable manner and were full of examples and as much technical information as one could imagine."

As were many other operating systems' manuals at the time. Of course, DOS (and many of the other OSes) was really simple, with nothing more complex in the MS-DOS 2.x manuals than how to write a device driver using edlin and debug.

Unix really was the standout exception, where (in spite of the man pages) the standard way of learning the OS was to learn at the feet of a guru.

              1. Vometia has insomnia. Again. Silver badge

                Re: I wonder how many people still remember how to use it?

                I learnt a lot from printing off bits of the Ultrix source code when I was at college, including a certain S.R. Bourne's cmd.c; I dunno if I should be ashamed or proud to say that, in my first job, I too used ALGOLesque macros in my C. It seemed like a good idea at the time.

              2. Alan Brown Silver badge

                Re: I wonder how many people still remember how to use it?

Yup. It was for that reason I kept a copy of the PC DOS 3.2 documentation around for decades - it explained commands in ways that the great unwashed could understand, in plain English, with illustrations.

I lost it when someone never returned it (as is the way of many things).

        2. rnturn

          Re: I wonder how many people still remember how to use it?

I was *never* unable to find in the gray/white wall the information I needed to get something done on VMS (heck, RT-11 and RSX as well). It might have been in Appendix J of the Device Driver manual, but, darn it, everything was in that docset.

          1. Anonymous Coward
            Anonymous Coward

            Re: I wonder how many people still remember how to use it?

            I got to the point that I was a walking index to it. People in the office would ask me "where is the info on xxx" and I could usually reply "Try yyy manual, chapter nnn". A far cry from today, where you have to search online & probably only get responses of 27 useless YouTube videos, and a hundred people asking the same question.

            1. katrinab Silver badge
              Flame

              Re: I wonder how many people still remember how to use it?

And if it is a Micros**t product, the Standard Generic Response is to reboot your computer and delete the cache and browser history.

          2. Vometia has insomnia. Again. Silver badge

            Re: I wonder how many people still remember how to use it?

            I was looking at some documentation on BitSavers and really disappeared down a rabbit hole. I'm still having a bit of a retro interest in PDP-10s, having never got to properly know the pair we had at college (AFAIK none were still operational within DEC by the time I started there; if they were, they were few and far between) and it's interesting playing around with their various OSes on emulators. But the documentation is incredibly detailed and has far more info than I could ever want to know, even going into immense detail about the logic gates used by the CPU. That was admittedly the field service engineers' manual (not sure if customers could get that particular stuff) but everything was described in enormous amounts of detail; though it can be a bit overwhelming for someone whose knowledge of TOPS never amounted to much and who'd forgotten more than she could remember...

        3. Kabukiwookie

          Re: I wonder how many people still remember how to use it?

There must have been an impressive team churning out and updating all that documentation.

          No, just developers who don't claim their code is their documentation.

          1. Warm Braw

            Re: I wonder how many people still remember how to use it?

            Actually, there was an impressive team. In fact, several teams.

            Firstly, you have to realise this was before on-demand printing so even the logistics were impressive. Estimates of product sales determined initial print runs for manuals - and indeed the number of distribution tapes for the software itself. Inventory had to be warehoused. Update pages had to be distributed periodically to relevant customers.

            And whole teams of skilled technical writers worked with developers from day one to capture and explain the concepts, interfaces, commands and provide programming examples.

In some cases, the documentation was better than the code: sometimes there was a scramble to remove from the manuals features that had been dropped at the last moment to get the product out the door, and occasionally it was too late...

            1. Anonymous Coward
              Anonymous Coward

              Re: I wonder how many people still remember how to use it?

              And whole teams of skilled technical writers

              My wife was one of them. In her later jobs it was always a cause of great frustration that she constantly had to convince managers that tech writers should sit with the engineers and actually use the products as they were being developed/tested, it wasn't sufficient to simply polish up the spelling & grammar of whatever notes the developers felt like sharing afterwards.

              1. Scene it all

                Re: I wonder how many people still remember how to use it?

I was a developer at DEC in those days and we worked closely with the tech writers all the time. They were in a separate "cost center" but sat with us and attended all the group meetings. Sometimes they were even involved in technical decisions: if we had to choose between a few approaches, we would ask them "which of these is the easiest to describe accurately?"

            2. Kabukiwookie

              Re: I wonder how many people still remember how to use it?

Though I completely agree that this is impressive compared to current practices, this level of interaction should be the baseline when developing, not the exception.

It seems now that IT is full of people gluing together pieces of code with sticky tape and pushing it out to production, with the threshold for pushing it out being as low as 'it works'.

              1. Anonymous Coward
                Anonymous Coward

                Re: I wonder how many people still remember how to use it?

                That may be too high a bar. "It must work, it compiled."

                1. rnturn

                  Re: I wonder how many people still remember how to use it?

                  > "It must work, it compiled."

                  Ha. I used to jokingly reply to the people who said that: "Well, that just means you probably spelled everything correctly."

              2. Norman Nescio Silver badge

                Re: I wonder how many people still remember how to use it?

It seems now that IT is full of people gluing together pieces of code with sticky tape and pushing it out to production, with the threshold for pushing it out being as low as 'it works'.

                It's the concept of 'Minimal Viable Product', sadly approached from below. Often found in concert with the 'Minimal Marketable Product'.

                Roman Pichler, 9th October 2013: The Minimum Viable Product and the Minimal Marketable Product

            3. Stoneshop

              Re: I wonder how many people still remember how to use it?

              Firstly, you have to realise this was before on-demand printing so even the logistics were impressive. Estimates of product sales determined initial print runs for manuals - and indeed the number of distribution tapes for the software itself. Inventory had to be warehoused. Update pages had to be distributed periodically to relevant customers.

When I started at DEC I was assigned as site engineer for DEC's Nijmegen (JGO) site[0]. Housed in a former Singer[1] plant, about half of it was repair and parts stock; the other half was documentation, which was nearly all stock, with maybe three or four VAXen running the inventory housekeeping (repair had a far bigger bunch of VAXen plus a few PDPs, but they were basically the network and parts hub for all of Northern Europe). During the years I was there, repair got a separate building designed around cleanrooms, rather than cleanrooms sitting in a former and not-so-clean warehouse. And I saw the documentation group installing a serious Kodak laser printer so that they could do on-demand printing of manuals.

              #201462

              [0] the Philips, now NXP, chip bakery that looks like a DIL package was next door.

              [1] Yes, the sewing machine company. They had a division that made papertape-controlled typewriters that could even do bits of arithmetic using a box full of relays and the guts of an actual mechanical calculator with solenoids pushing the buttons. This gave you a bit of business automation, in the 1950s.

      3. Mark 65

        Re: I wonder how many people still remember how to use it?

        About all I can remember is purge /keep=n

        1. Wellyboot Silver badge

          Re: I wonder how many people still remember how to use it?

It's twenty years since I last used VMS on a daily basis; I think I'd have to sit for a while dredging my memory for the correct command syntax.

Around 1990 my company's secretarial pool* moved from Wang WP to VMS WordPerfect. That was the only time I've ever seen a 'user training session' with zero issues about a new system; they loved the long names & file versioning VMS brought to the party.

*Filled with typists waiting to become a senior manager's personal secretary. Anthropologists didn't need to paddle up the Congo to observe the law of the jungle; just wait for a promising junior manager to walk in needing some urgent work done.

          1. Alan Brown Silver badge

            Re: I wonder how many people still remember how to use it?

There are secretaries and there are typists.

My mother was city secretary for a while - which is a senior management role. One of the MBA-type managers on the same floor thought he could dump his typing jobs on her, and found out the hard way what "secretary" means (literally, "keeper of the secrets").

            The CEO and others thought it was hilarious (Mother wore the label "Office Doberman" proudly and was very protective of all her staff)

            1. Vometia has insomnia. Again. Silver badge

              Re: I wonder how many people still remember how to use it?

              Ha, that is quite amusing. Encountered a few of the MBA type in the early days of "a computer on every desk": some refused because "typing is women's work".

              One of my earliest mentors was a former secretary who was often given typing work and ISTR rationalised that if she was having to type up other people's programs, she may as well start writing them herself. Not only did I learn technical stuff from her but she also made significant improvements to my absolutely lamentable grammar. She really had her work cut out there.

      4. trindflo Bronze badge
        Devil

        Re: I wonder how many people still remember how to use it?

I remember it being painfully easy to impersonate someone else in mail with the right privileges, and how to compose an 'exit' command that resulted in an access violation from the mount subsystem, which could be set up as an easter-egg alias for frustrated users.

        1. Little Mouse
          Terminator

          Re: I wonder how many people still remember how to use it?

          I remember the error "Terminator not found" being used in hilarious ways by some of my colleagues.

          Those long nights just flew by.

      5. Liam Proven (Written by Reg staff) Silver badge

        Re: I wonder how many people still remember how to use it?

        [Author here]

        > lost without a comprehensive manual

        I see what you did there.

        The Grey Wall and Orange Wall have now been replaced with a Silver Disc, I suspect.

        1. Phil O'Sophical Silver badge

          Re: I wonder how many people still remember how to use it?

          I remember the first CDROM we ever received, it was MicroVAX documentation. One of the guys in the office had an early portable audio CD player and someone said "I wonder what happens if we try to play the CDROM". He put it in, put the headphones on, and pressed buttons. Nothing happened, he turned up the volume, and then...we heard the screech through the headphones at the other side of the room. He dropped the CD player in shock, and was not a happy bunny :-)

      6. fidodogbreath

        Re: I wonder how many people still remember how to use it?

I'd be totally lost without a ~~comprehensive manual~~ outdated wiki and some surly forums now.

        Updated to reflect modern software documentation practices.

    2. John Gamble

      Re: I wonder how many people still remember how to use it?

As an end-user and writer of DCL files? Still a lot; let me get back to using F$PARSE functions.

      As an administrator? I'd need a crash course.

I wonder if the Whitesmiths C compiler is still available? And of course I'd have to hunt down all those ZOO archives.

      1. Stoneshop

        Re: I wonder how many people still remember how to use it?

        As an administrator?

I'm one, although the systems are slated to be phased out - that should have been done six years ago already.

        The main sticking point is that a lot of the programs that were running on them were written in Pascal, and the pool of competent developers with the right domain knowledge was getting rather dry. So stuff was rewritten in C on Linux, or turned into whatever it is that you feed into WebLogic or JBoss.

        Oh well. Less than five years to retirement.

        1. award

          Re: In the bad old days of WordStar, WordPerfect, DisplayWrite, MultiMate

Ahh... lots of fond memories of programming in VAX Pascal (circa the mid-80s).

I suspect if I had to go back to it now I would be disappointed :-( (having done a lot of work since with Delphi/Object Pascal).

  3. sorry, what?

    VMS...

    Verbosely Meaningful Syntax.

    I used VMS (on VAXes) for about 4 years as a user. The one real benefit I remember from that use in the late 80s was how meaningful the commands were. The disadvantage - lots of typing due to the verbosity of the CLI.

    1. Phil O'Sophical Silver badge

      Re: VMS...

You could abbreviate any CLI word to its shortest unique form and, at least in the early days, everything was guaranteed to be OK down to four letters. No need to type "directory" when "dir" sufficed.

      1. Warm Braw

        Re: VMS...

        DCL was (in part) intended to unify the CLI across the various DEC operating systems.

        There was also a DCL command interpreter for RSX-11 (amongst other earlier OSes) that was basically just a massive macroprocessor that consumed DCL and produced MCR commands.

        dir
        was actually more succinct (and indeed meaningful) than
        pip /li
        but it came at the cost of some very heavy string processing!

        1. Vometia has insomnia. Again. Silver badge

          Re: VMS...

I was quite impressed to find it in even some of the oddest places. Lost the bus ID plug for that DSSI drive? No problem, just log in to it across the DSSI bus and talk to it using its own DCL interpreter to ask it nicely to use the bus ID in question.

      2. Ian Johnston Silver badge

        Re: VMS...

And if you were defining a new command, you indicated its truncation point with an asterisk, so "dis" or anything more would match "dis*assemble".
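The asterisk convention is simple enough to model; here is a toy Python matcher (an illustration of the rule described above, not DCL's actual table-driven parsing):

```python
def make_matcher(pattern):
    """Model a DCL-style verb definition such as 'dis*assemble':
    the part before '*' is mandatory, the rest may be truncated."""
    required, _, optional = pattern.partition("*")
    full = required + optional

    def matches(word):
        word = word.lower()
        # A valid abbreviation is a prefix of the full verb that is
        # at least as long as the mandatory part.
        return full.startswith(word) and len(word) >= len(required)

    return matches

m = make_matcher("dis*assemble")
print(m("dis"), m("disassemble"), m("di"), m("disasm"))
# True True False False
```

Note that "disasm" fails: an abbreviation must be a true prefix, not just start with the mandatory part.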

      3. Steve Graham

        Re: VMS...

        It took me years to completely unlearn command abbreviation when we switched to SunOS.

        1. Roland6 Silver badge

          Re: VMS...

After TOPS-10 and VMS, combined with a focus on usability, I thought the terseness of the Unix two-finger-typist command line was a bit of a joke.

          In some respects it is amazing Unix (and its imitators) took off.

  4. dboyes
    Go

    It would be really nifty if this ran on s390x. VMS was one of the most credible competitors to z/OS, and the sophisticated clustering would be a fantastic option for the mainframe crowd.

    1. Warm Braw

      Given that clustering first showed up in 1984 - the same year as the first Apple Macintosh - it was pretty remarkable for its era. I'd argue that the technology presaged the notion of cloud computing given the way processes could be migrated from failing nodes and capacity added and removed as required.

It was also an important example of system design rather than simply hardware or software design: the early versions depended both on the CI (Computer Interconnect) hardware and on operating system support.

      Having spent a chunk of my early career working on both RSX and VMS I admit to some nostalgia, but they are little more now than evolutionary curiosities in their specific implementations. Many of the concepts they spawned, though, remain incredibly relevant, even 40ish years later.

      1. Bitsminer Silver badge

It was also an important example of system design rather than simply hardware or software design: the early versions depended both on the CI (Computer Interconnect) hardware and on operating system support.

I agree. The OS and VAX hardware were mutually co-dependent, in a good way. They had a vision, including the clustering, that would have lasted decades... if only Sun had not killed the VAX with SPARC performance.

That was one of the big holes in the VAX/VMS plan: CISC, including ridiculous instructions like MOVC5 and SOBGEQ that compiler writers couldn't use. Ordinary C code would outperform them.

CRC calculated an arbitrary cyclic redundancy code. It was in firmware for most implementations, except that the MicroVAX II implemented it via an instruction exception branching to software emulation. So the backup utility, which did CRCs on tape blocks, was terribly slow. Hand-built code was quicker.

        I think it was VMS 5.3 or 5.4 where the CRC in the backup utility was finally replaced with hand-tuned code.
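As a rough illustration of why the emulated path hurt, here is a bit-at-a-time CRC loop in Python (the polynomial is the common IEEE CRC-32 one; the actual VAX CRC instruction took a caller-supplied table and initial value, so treat this as an analogy for the per-byte work, not a model of the instruction):

```python
import zlib

def crc32_bitwise(data):
    """Bit-at-a-time reflected CRC-32 over a block of bytes - the sort
    of per-byte loop a software fallback ends up running."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):  # eight shift/XOR steps per byte
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

block = b"some tape block contents"
assert crc32_bitwise(block) == zlib.crc32(block)  # table-driven library agrees
print(hex(crc32_bitwise(b"123456789")))  # 0xcbf43926, the standard check value
```

A table-driven version does one lookup per byte instead of eight shift/XOR steps, which gives a feel for the gap between a microcoded instruction and trap-and-emulate.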

        The Alpha was a big improvement on CPU performance, but it was late to the party.

        And DEC / Compaq marketing could never let go of their "big iron" vision despite ordinary workstations matching the CPU power of the big boxes. They couldn't admit they would make more money selling 100k or a million workstations instead of a few thousand servers.

        1. Liam Proven (Written by Reg staff) Silver badge

          > And DEC / Compaq marketing could never let go of their "big iron" vision [...]

          It wasn't just marketing. The company management had massive difficulties coming to terms with the microprocessor revolution, and the VAX 9000 disaster nearly killed them.

          One analysis:

          https://www.sigcis.org/files/Goodwin_paper.pdf

          The company spent about a billion building a mainframe-class VAX, although their own engineers warned them that the company's own microprocessors would outperform it for less money before the R&D was done.

          Management overruled them.

          This is why DEC nearly went under, AIUI, and Compaq saved them. But as a PC vendor, Compaq was too beholden to MS to push ahead with the Alpha.

          Intel now owns the Alpha tech and IP.

          Given that Arm is stomping all over x86 these days in performance, price, MIPS/FLOPS per Watt, any measure you like, and given just how far ahead of the rest of the industry Alpha was in its day -- remember that the Arm only became a performance contender because of the StrongARM, designed by, you guessed it, DEC -- I reckon Intel could even now successfully revive Alpha.

          The code is there. The compilers etc. are there. Alpha today still has better supporting infrastructure than Itanium ever attained.

          But for Intel, Alpha is NIH.

          1. Alan Brown Silver badge

            "The company management had massive difficulties coming to terms with the microprocessor revolution"

            If you look into things you find the same story repeated even in smaller spaces

            Motorola fixated so heavily on its 6800 family (@ $250 each) that it ignored customer demands for cheaper, simpler CPUs, leading a bunch of its engineers to leave for MOS Technology and produce the simpler 6501 (pin-compatible) and 6502 (@ $45 each)

            Intel fixated on being a "general silicon" company and neglected its CPU development, so its three best engineers left to form Zilog - and the Z80 rewrote history

            IBM refused to use Zilog's Z8000 in the IBM PC because they were owned by Exxon (who were attempting to move into the same space IBM owned) and ended up using the vastly inferior Intel 8088 instead - which eventually forced Intel to stop focussing on being a "memory" company and concentrate on CPUs

            Commodore nearly killed the 6502 when they bought MOS because Jack Tramiel was concentrating solely on pocket calculators. They lost a deal selling the Steves' (Woz and Jobs) version 2 product as a Commodore because Tramiel and Jobs couldn't stand each other (but they still used 6502s). They also lost a deal for another 6502 computer sold through Radio Shack (not the PET) because Tramiel insisted on including millions of pocket calculators in any sales deal. Radio Shack went to Zilog instead and the TRS-80 was born. The story of the PET is similarly complicated and only happened because Tramiel was finally convinced that pocket calculators were a dead-end market (but he did manage to put a sharp deal over on a very young Bill Gates in the process, which made an enemy for life out of Microsoft)

            Intel is a one-hit wonder whose other products have virtually all turned to custard in their hands.

            They got supremely lucky with the 8086 because of corporate politics - not because it was good. If the 9000 pound gorilla hadn't used their CPU, they'd be an afterthought in silicon history.

            It's a bit like warfare: if you win you keep using the same tactics (more and bigger), whereas if you lose you adapt. Intel kept winning with x86 for so long that they could simply afford to dump everything else and follow the MS "Embrace, extend, extinguish" approach to competition at times. At the start of the 21st century the x86 CPU was the least efficient of all the architectures available (wattage and silicon cost for performance), but it was on every desktop already

          2. Warm Braw

            a billion building a mainframe-class VAX

            Digital was always rather profligate in its development spending. There was not, in the end, a huge differentiation between the first VAXes - the 11/780, 750 and 730 were all designed by different teams but didn't have the range of price/performance that could really justify three separate manufacturing lines and inventory. Not to mention the proliferation of other overlapping hardware ranges and operating systems.

            The first terminal servers had two completely different architectures, one based on PDP-11 and one based on the Motorola 68000. There was perhaps too much latitude for people to just go off and build stuff.

            But most IT companies end up being consumed by their legacy - their cashflow depends on supporting their existing customers whereas their survival depends on leaving them behind.

            1. Stoneshop

              the 11/780, 750 and 730 were all designed by different teams but didn't have the range of price/performance that could really justify three separate manufacturing lines and inventory.

              Not when you consider the machines themselves, but if you add their operating environment you can see why a customer would buy an 11/750 or an 11/730 where they could not have justified an 11/78x.

              11/78x: needs 3-phase power, drawing 3kW minimum, and is the size of a decent wardrobe[0][1]

              11/750: half as high, half as wide, single phase power, you could run it from a wall socket provided you didn't have a toaster or a kettle on that circuit too, and a standard domestic aircon unit could manage the cooling. Storage still needed at least one extra cabinet.

              11/730: standard-width 3ft high cab that had room for a disk unit as well and it didn't object to sitting in a corner of an office area (but defensive and offensive anti-cleaner options were never offered). The 11/725 was essentially a repackaged 11/730 explicitly meant to sit next to someone's desk, like the BA23 and BA123 packaged mVAXes appearing a little later.

              [0] I know of someone who had an 11/780 in his bedroom, and built his bed on top of it. I suspect he didn't run it when he wanted to sleep.

              [1] one time I was asked to gut a decommissioned 11/780 so that it could be turned into a bookcase for the Orange Wall.

          3. Stoneshop

            This is why DEC nearly went under, AIUI, and Compaq saved them.

            "Save" is rather charitable. "Plundered" is more like it: Compaq wanted DEC for its services business, and got with it a hardware side that it didn't know what to do with. Which in turn made Compaq a nice morsel for HP.

            Also, the VAX 9000 happened late in the 1980s; I was still with DEC FS and got sent to Galway for a month of training. Two years later Alpha was everywhere, along with the 6xxx and 7xxx as the biggest VAXen, and the few 9xxx that had been installed (about a dozen in the whole of NL) were about to be scrapped if they hadn't been already.

            Compaq only came along after several years of Bob <spit> Palmer, close to Y2K.

            1. Anonymous Coward
              Anonymous Coward

              Bob <spit> Palmer

              Amen to that. I'd add more, but I have issues with high blood pressure already, so instead, why not try e.g.

              https://www.nytimes.com/1998/09/10/business/digital-employees-tell-of-threats-by-gates-over-product.html (paywall)

    2. Vometia has insomnia. Again. Silver badge

      VMS and MVS did a lot of stuff quite differently tho'. I'd say VMS kinda straddled what MVS and VM did; VM was much more pleasant to use than MVS for interactive computing (I mean CMS compared to TSO... bleh; though ISPF certainly has its fans) but I'd much rather use VMS than either. MVS probably did huge amounts of batch better and I guess they were much of a muchness with TPS and DB, though VMS had the advantage of clustering.

      Looking back, one of the interesting things is that the S/3x0 adherents always point out the huge number of IO channels and multiple paths but during the same time period DEC (and ICL, for that matter) were moving to NAS with what's essentially RAID 0+1 in common parlance. I'm not sure IBM had anything like that at the time.

      I would say that there would be zero chance of VMS ever running on an S/3x0 given that architecturally, the IBM CPUs were so very different to Vax processors, though I am reminded of some stuff I read about the Jupiter project, where the KC10 was intended to have sufficiently flexible reprogrammable microcode that it could be, in essence, a 36-bit Vax if that's what you wanted it to be; plus the intention of the KC10's IOPs to handle bus-and-tag natively, since so many PDP-10 shops ran IBM-compatible storage connected to their 10s with converter boxes. I'm not sure if any of that was in any sort of development or just a wish list, tho'. And I'm not sure if IBM ever intended its 3x0/z-series to be anything other than what they are.

      1. Liam Proven (Written by Reg staff) Silver badge

        I don't really know anything much about MVS or the S390 architecture in detail, TBH.

        But VMS on x86-64 could easily incorporate KVM support -- the hard parts are all already done in hardware.

        If VMS were to embrace this, it could potentially be competitive essentially as a hypervisor. A VMScluster of VMS boxes as a single-image hypervisor, load-balancing by passing virtual machines to one another, could in principle do stuff that VMware, Xen, Hyper-V etc. have no good way of doing.
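          Purely as a thought experiment on that load-balancing idea, the placement decision might look something like this (all node names and load figures here are hypothetical; a real system would feed in live metrics and drive an actual migration API such as libvirt's):

          ```python
          # Toy sketch of the cluster load-balancing idea above: pick the
          # least-loaded node as the migration target for a guest VM.
          # Illustration only -- nothing here reflects any real VMS or
          # hypervisor implementation.

          def migration_target(loads: dict[str, float], source: str) -> str:
              """Return the least-loaded node other than the guest's current host."""
              candidates = {n: load for n, load in loads.items() if n != source}
              return min(candidates, key=candidates.get)

          # Hypothetical cluster: nodea is busy, so its guest moves to nodeb.
          cluster = {"nodea": 0.82, "nodeb": 0.35, "nodec": 0.61}
          assert migration_target(cluster, "nodea") == "nodeb"
          ```
          
          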

        1. Vometia has insomnia. Again. Silver badge

          True, though that's VM's territory rather than MVS (z/OS) and it's a very different beast. But now you mention it, I've often thought a mix of VM-style LPAR/hypervisor/whatevs and VMS's clustering technology would be quite an interesting combination.

          1. Roland6 Silver badge

            >a mix of VM-style LPAR/hypervisor/whatevs and VMS's clustering technology would be quite an interesting combination.

            Wasn't Parallel Sysplex IBM's attempt at this?

            1. Vometia has insomnia. Again. Silver badge

              I dunno, that's way beyond anything I know about; a gap I should probably try to remedy. I vaguely remember hearing about IBM's variously close-coupled and loose-coupled technologies way back but never in all that much detail. The only time I had a proper chance to play around with IBM big iron was at DEC, ironically (ahem) and by which time there were other distractions like the burgeoning interweb.

  5. TiptreeGeek
    Thumb Up

    Best OS I have ever worked on!

    Started working on OpenVMS way, way back in 1990 - still the best OS IMO, & I have worked & continue to work on many. No OS even now has the features that were available back in the 80's. Linux/Unix clustering products - MEH! Shared storage - nothing has a patch on it; stability - nothing now is as rock solid apart from mainframes.

    Perhaps current OS (& cloud) vendors should take a look at what VMS can do instead of adding hard-to-support bells & whistles with 40-minute failover times for hosted VMs.

    1. elDog

      Totally agree.

      It was rock solid. Had a wonderful versioning file system. Multi-level hardware protection.

      I loved DCL and the very rich set of process variables. I even enjoyed TPU and EVE - trying to parlay them into a real IDE.

      What a pity that DEC was sabotaged. Not sure by whom. But VMS still lives on in the innards of Windows NT....

      1. Mark 65

        Re: Totally agree.

        I also remember the CMS source control system. Spent many a day working with that.

      2. Liam Proven (Written by Reg staff) Silver badge

        Re: Totally agree.

        As I said earlier: I think the only company that sabotaged DEC was DEC itself.

        Principally, the VAX 9000 project very nearly bankrupted the company.

        https://en.wikipedia.org/wiki/VAX_9000

        However, some Soviet-era DEC clones show what could have been; for instance, this:

        https://www.facebook.com/groups/retrocomputers/posts/5616564051706763/

        «

        In 1990, a computer appeared in the USSR based on a 16-bit Soviet-designed K1801VM2 (N1801VM2) processor with the PDP-11 instruction architecture; it was called the Soyuz Neon PK-11/16. The computer had several graphics modes: 832x300 with 4 colors per pixel, 416x300 with 16 colors per pixel, and 208x300 with 256 colors per pixel, from a palette of 65536 colors. The processor ran at 8 MHz. RAM could be from 512 to 4096 KB; ROM was 16 KB. The computer had 3-channel digital sound implemented on two KR580VI53 (K859VI53) counters installed in series (functional copies of the Intel 8253 chip), with volume control or FM modulation. It also had a connector for an MSX-standard computer mouse, a parallel port implemented on the KR580VV55 chip (a functional analog of the Intel 8255), a serial port on the K1801VP1-035 chip of the RS-232C standard with an additional highly sensitive input for organizing a local network, a port for connecting a floppy drive and an MFM hard disk, and a backbone parallel interface (MPI) for connecting additional controllers and memory units. It ran the DEC RT-11 operating system with the ASPekt graphic manager. Do you think the Soyuz Neon PK-11/16 could have competed with the Atari ST and Commodore Amiga 500+ of the time, if it had had access to the same software library as those machines?

        https://www.youtube.com/watch?v=rkvFMyKIkrs&t=12s

        https://www.youtube.com/watch?v=mk6XmiTs_fg&t=13s

        https://www.youtube.com/watch?v=WX30aIOrInA&t=22s

        https://www.youtube.com/watch?v=F32pon-Aibo&t=76s

        https://www.youtube.com/watch?v=NSSBYgkBhjM&t=129s

        http://dgmag.in/N17/DowngradeN17.pdf

        https://zx-pk.ru/.../29407-proekt-otkrytoj-repliki-soyuz...

        http://www.emuverse.ru/wiki/Союз-Неон_ПК-11/16

        https://vk.com/wall-96248735_1595

        https://ru.wikipedia.org/wiki/Союз-Неон_ПК-11/16#:~:text=

        https://ru.wikipedia.org/wiki/1801BMx

        »

        1. Vometia has insomnia. Again. Silver badge

          Re: Totally agree.

          Even having worked there around the time, the story behind the Vax 9000's demise is quite opaque. IMHO the problem wasn't so much with the concept or the technology but that there was a lot of in-fighting, particularly between the large and medium systems groups that went back very many years. Some of it spilled out into the public domain and I remember some unhelpful claims that DEC's large systems were always problematic which probably didn't do a great deal to improve customer confidence. :| Similar story with the Jupiter project to build the next gen PDP-10 (and the cancellation of which pissed off a *lot* of people) with a similar rationale, but looking at it a bit further I have to wonder if the politicking and frequent cancellations of insufficiently trendy systems were bigger factors than the same "a smaller Vax has just as much performance". I remember one particularly vocal ex-DECcie bemoaning the problem of "small computer thinking" that she saw as blighting the company.

  6. securityfiend
    Thumb Up

    Aah, the memories...

    Decnet, LAT, Pathworks, batch queues, autogen, Clustering.. That was a fun time...

    1. Bitsminer Silver badge

      Re: Aah, the memories...

      Although DECnet was never going to scale beyond a few hundred or a thousand nodes.

      I recall a big European research network that had dozens of international links on the then-speedy X.25 packet-switched network. A friend mentioned in passing that he had once accidentally reset the DECnet config of his Swiss MicroVAX to the wrong address, thereby somehow crashing the whole network. Until he put it back.

      There were days of complaints about why the network had crashed but no-one ever determined who or how.

      1. Graham Cobb Silver badge

        Re: Aah, the memories...

        There is a reason Radia Perlman (one of the main DECnet routing architects) wrote her PhD thesis on Byzantine routing failures (faulty or malicious nodes disrupting routing). DEC's own DECnet was probably the largest network dependent on fully-automatic routing at the time (although the Arpanet was bigger, it was more hierarchical and less automated in those days before OSPF and IS-IS were created).

        See http://www.vendian.org/mncharity/dir3/perlman_thesis/ for the thesis.

      2. Vometia has insomnia. Again. Silver badge

        Re: Aah, the memories...

        There was DeathNet Phase V which could do so; though it was conceptually quite different and sufficiently complex that its configuration could be quite a headache for the ill-prepared, i.e. me.

        1. Roland6 Silver badge

          Re: Aah, the memories...

          Agree Phase V aka OSI networking wasn't for the ill-prepared.

          I found configuring Phase V relatively trivial, but then I had spent a year or so digesting the OSI documentation and working with all OSI implementations being prepared for market.

          1. Vometia has insomnia. Again. Silver badge

            Re: Aah, the memories...

            I remember what seemed like years of swearing from some of the people working on getting Phase V (I think; though it may have been some Philips/Motorola variant of OSI) to talk to ICL's OSLAN, which struck me as turning out to be much more difficult than I thought it should be. I saw some of the documentation which seemed... well, it was pretty full-on, anyway.

            1. Graham Cobb Silver badge

              Re: Aah, the memories...

              Unfortunately Phase V wasn't exactly OSI. OSI networking had got bogged down in the standards process so Phase V was an "intercept" (meaning - it had to ship so we made guesses as to the standard). The network layer was close, and some of the applications (like X.400) came later and so were fully standardised, but others were way off (CMIP, for example).

              But X.400 and X.500 were notoriously difficult to get interoperating even with fully compliant implementations.

              1. Vometia has insomnia. Again. Silver badge

                Re: Aah, the memories...

                I remember a few organisations throwing everything behind X.400 in the mid '90s because they were convinced it was The Future™. I think they gave it at most a couple of years before concluding that it was a massive headache to configure and for something that was slower and much more likely to lose your mail than just sending it by post.

            2. Roland6 Silver badge

              Re: Aah, the memories...

              >ICL's OSLAN...

              Yes, I think that was the only one that implemented Transport Class 3, whereas everyone else implemented Class 4...

              Much of the differences and interop issues were resolved by the MAP/TOP initiative, however, MAP/TOP wasn't "purist" OSI ... but it was the only OSI initiative that actually delivered anything to market.

              Aside: By the time this was achieved, the rise of Unix workstations had done their work and laid cuckoo's eggs of TCP/IP across many organisations - leading to the (valid) question: why pay for a network stack when one is bundled "for free" with Unix and been using it internally for a few years to move stuff around the office LAN...

        2. Anonymous Coward
          Anonymous Coward

          Re: DECnet was never going to scale beyond a few hundred or a thousand nodes.

          OSI networking fixed issues like network size, routing, autoconfig, *proper* multi vendor interoperability , etc, long before IPv6 ever will.

          DECnet Phase V integrated both DECnet classic (Phase IV) and a decent implementation of OSI networking. You needed to have a bit of a clue to get by on (anybody's) OSI though, whereas people capable of appearing to understand IPv4 were widely available at low low prices, and if it didn't work, you could always blame (by that time) Windows.

          Shame Teracloud seem about as commercially clueless as HPQ.

      3. Stoneshop
        Boffin

        Re: Aah, the memories...

        Although DECnet was never going to scale beyond a few hundred or a thousand nodes

        Um? Within DEC they had a sufficiently high number of nodes that they had to increase the number of hidden areas to accommodate all the systems. DECnet IV addresses consist of a 6-bit area and a 10-bit node number, with the possibility to make area 63 'hidden'. Nodes in that area could not be reached directly from systems in other areas, only through the area router, so that you had to use commands like COPY 18.271::63.119::USERS:[WHOEVER]FILE.EXT []DESTINATION.EXT, with 18.271 the hidden area router and 63.119 the hidden node that you needed to get a file from (of course you would actually use names instead of numerical addresses). Similar when going the other way. With a lot of handwaving you can think of it as somewhat like private IPv4 ranges.

        But DEC had sites where they needed to have >1023 nodes hidden. And of course you could have multiple areas 63, each behind their own hidden area router, but you would have to go via the hidden area routers if one of the nodes was in the 'other' area 63. Messy. So they tweaked DECnet to allow configuring areas 60 and up (not quite sure there) to be hidden, so that you could connect from 63.xyz to 62.abc without involving those hidden area routers.

        This all from memory; it's been over a decade since we phased out DECnet at my current workplace, and we didn't have hidden areas anyway.
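        The 6-bit area / 10-bit node split described above can be sketched in a few lines of Python (a toy illustration only, nothing to do with the real DECnet code):

        ```python
        # Toy sketch of DECnet Phase IV addressing: an address is 16 bits,
        # a 6-bit area (1-63) in the high bits and a 10-bit node (1-1023)
        # in the low bits -- hence the 63-area and 1023-node limits.

        def pack_decnet(area: int, node: int) -> int:
            """Encode an area.node pair into its 16-bit Phase IV form."""
            if not 1 <= area <= 63:
                raise ValueError("area must be 1..63 (6 bits)")
            if not 1 <= node <= 1023:
                raise ValueError("node must be 1..1023 (10 bits)")
            return (area << 10) | node

        def unpack_decnet(addr: int) -> tuple[int, int]:
            """Decode a 16-bit Phase IV address back into (area, node)."""
            return addr >> 10, addr & 0x3FF

        # The hidden-node example above: 63.119 round-trips intact.
        assert unpack_decnet(pack_decnet(63, 119)) == (63, 119)
        ```
        
        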

        1. Vometia has insomnia. Again. Silver badge

          Re: Aah, the memories...

          Wasn't it 62.*? I thought 63 was... actually, I can't remember, just that life became a bit easier when I managed to wangle a 41.* address for my VaxStation. Which lived on the wrong end of a Kilostream in my spare bedroom.

          Edit: actually I see on a second reading that you already explained all that. I shouldn't try to reply to stuff at 4am...

    2. Twanky

      Re: Aah, the memories...

      Pathworks??

      Nooooooo!

      1. Red Or Zed

        Re: Aah, the memories...

        Pathworks on Macs!

        Eventually it worked, mostly. Huge amount of network storage available for Computer Aided Drawing people on Macs, down to the main servers. Lovely.

        Except when the Mac crashed and showed the Sad Face of "I have ennui" instead of, say, an actual error message.

  7. Lorribot

    Memories?

    Still in daily use on Itanium where I work.

    Can't wait to move it to VMware as getting hardware to work with it can be tricky. Hardware requirements and drivers have always been very specific.

    Dave Cutler was involved in its conception before he moved on to NT4. Shame he never bothered to copy the clustering into Windows, as the MS version has always been on the wrong side of pants in comparison.

    1. Anonymous Coward
      Anonymous Coward

      Re: Memories?

      Some of the clustering folks moved to Sun, and worked on what became Sun Cluster.

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: Memories?

      Cutler didn't move on to NT4.

      NT 4 was, essentially, the second major version.

      NT started with NT 3.1 and then 3.5 and 3.51 before NT 4. Dave Cutler was the project lead on all of NT, not just NT 4.

      I would argue that NT 4 is when the MS rot set in, myself. Moving the GDI into the kernel was an egregious error, trading design cleanliness for performance.

      1. R Soul Silver badge

        Re: Memories?

        "I would argue that NT 4 is when the MS rot set in"

        Some of us would argue that rot started long before MS-DOS shipped.

      2. AndyMulhearn

        Re: Memories?

        I would argue that NT 4 is when the MS rot set in, myself. Moving the GDI into the kernel was an egregious error, trading design cleanliness for performance.

        I remember being surprised at this choice at the time, and it doesn't look any better looking back. I think DC had moved on before the change was implemented.

    3. Anonymous Coward
      Anonymous Coward

      Re: Memories? Cutler not VMS

      Cutler was involved in a team at DEC that pitched a new version of VMS. It was already well established by this stage. Cutler's team lost, mainly for technical reasons. Cutler left DEC and took that work to MS, and it became the basis of the NT kernel. It's easy to spot the code written at DEC in the NT source code: it's the very clean, stable, easy-to-read code. The stuff on top is the usual MS spaghetti.

      Cutler was made team lead for the VMS project pitch not because he was the most senior tech on the team, but because he was almost the most junior. None of the much more experienced tech people on the team wanted to be lead, given the hassle of a mostly managerial, paper-pushing position. A familiar story.

      1. Vometia has insomnia. Again. Silver badge

        Re: Memories? Cutler not VMS

        Cutler's somehow gained this reputation as being "DEC's OS guy" which irritates a lot of people and quite justifiably so; not least as some of his outspoken attitudes can be problematic and really sell other people and other teams short as a result. I suspect I don't know the half of it; but I've certainly seen enough to know that whatever boasts it might like to make about its provenance, NT 3.51 is not VMS, and while I like VMS a lot, it's also quite controversial to say it's the pinnacle of DEC systems. Particularly among the TOPS-10 and TOPS-20 guys who would even stop taunting each other long enough to comment! But not just them.

        1. Anonymous Coward
          Anonymous Coward

          Re: Memories? Cutler not VMS

          The story out of the Mill in Maynard at the time was that Cutler did not jump but was pushed. No one was sad to see him go. The DEC-influenced code in NT was purely the low-level kernel services. That code looked very DEC. Everything else on top was very un-VMS. The usual incoherent MS mess. But for Win32 dev, NT 3.51 was great compared with the alternative. But NT 4.0 broke everything and it was back to MS BSOD land on a regular basis.

          Never had many dealings with DEC big iron, so no adventures in TOPS land. Went the PDP-8, PDP-11, VAX route - minis all the way. My only dealings with VMS were as a dev machine host environment, on VAX 11/780s. Rock solid. Would have been mid-1980s. Always funny when some vendor rolled out some "revolutionary" new feature on PCs 10 / 15 / 20 years later and it was something that had been bog standard on VMS setups many years before. The big difference was that it actually worked flawlessly on the DEC setup.

          I really miss DEC. It was one of those great tech companies. Destroyed by those idiots at Compaq.

          1. Vometia has insomnia. Again. Silver badge

            Re: Memories? Cutler not VMS

            Destroyed by the greedy and short-sighted board ousting Ken in favour of Greasy Bob. I know Ken wasn't without his faults but DEC under Palmer was just a really painful experience. I couldn't bear it any longer and jumped ship before Compaq happened (in hindsight not my best decision but that's another story) but I've heard some horror stories about the culture clash: even under several years of Palmer's misrule, Compaq's dubious ethos came as quite a shock by all accounts.

  8. spireite Silver badge

    Obligatory one word question?

    Why?

    I mean, it's cool in many ways, but much like the Rad BASIC article.... Why?

    1. Alan Brown Silver badge

      Re: Obligatory one word question?

      Because VMS is still needed (think decades-long support projects) and the few remaining working Alpha boxes become fewer every year

      There's a reason OpenVMS can charge $9k-plus per instance and have people gladly pay up

  9. Mister Dubious
    WTF?

    Rampant age-ism

    "Alpha kit is long in the tooth," so what can we say about x86? That it bites?

    1. F. Frederick Skitty Silver badge

      Re: Rampant age-ism

      "what can we say about x86?"

      It might try to gum you to death.

  10. Anonymous Coward
    Anonymous Coward

    No. Though I did a lot of VAX/VMS work in my University days and early career, it isn't a platform I particularly miss. I found it rather painful in many ways, and it is depressing to think that Microsoft essentially took some of the worst features of VMS, stripped out all the good clustering features, and bastardized it as Windows NT.

  11. rmullen0

    Switching to OpenVMS after Windows 10

    If Microsoft doesn't fix the hardware requirements for Windows 11, I'm switching to OpenVMS when Windows 10 support ends.

    1. Benegesserict Cumbersomberbatch Silver badge

      Re: Switching to OpenVMS after Windows 10

      VMS++ == WNT

  12. Chris King

    I started with VMS on a VAX 8800 nearly 35 years ago - one of DEC's first true SMP boxes - at Uni, then went on to run it on various VAX and Alpha platforms for a number of employers, eventually ending up as the "VMS Wizard's Apprentice" in my current gig.

    The central VMS cluster and various offshoot machines went out the door nearly ten years ago and the Wizards have all retired, except for me.

    I can see this as being useful to extend the life of existing code, and to help as a migration tool, but I don't see any new deployments happening. Not many folks are getting trained in it these days, and I regularly get approached on LinkedIn by people looking to recruit an experienced VMS greybeard on the cheap.

    1. Stoneshop
      Facepalm

      You want a *what*?

      and I regularly get approached on LinkedIn by people looking to recruit an experienced VMS greybeard on the cheap.

      Just a few years after Y2K a recruiter called me to ask where he could find a junior VMS operator. "I am not one, nor do I know of one. You're just not going to find someone today who started working as a VMS operator recently enough that you could still rate them as junior. I wish you a good day, and good luck if you decide to keep on searching."

    2. Anonymous Coward
      Anonymous Coward

      That is the thing with so-called "agencies." They pay pennies and reap thousands if they can get away with it.

      I stopped dealing with them years ago. There are better ways to find work than to let yourself be ripped off.

  13. Chris King

    Public clusters?

    There's not much in the way of training material and knowledge exchange going on these days.

    Until a few years ago, there were public-access VMS clusters out there, like Deathrow. These acted like the BBSes of old, and that one used VTX to maintain a knowledgebase and host forums. I found it useful to develop DCL scripts without having to run an emulated VAX or Alpha of my own.

    That cluster had machines named after serial killers - I usually hung out on DAHMER:: (their Alpha) but they also had VAXes (GEIN:: and what was the other one?)

    I wonder if there would be any mileage in VSI reviving such platforms as public demonstrators and a way to rekindle interest?

    1. Vometia has insomnia. Again. Silver badge

      Re: Public clusters?

      Blimey, I'd forgotten all about VTX! I remember a keyboardless VT320 next to the coffee machine that would broadcast variously interesting information to anybody hanging around who paid it any attention. Until they replaced it with one that still had the screen-saver set to half an hour, the sort that needed input from the absent keyboard, serial traffic not being enough to reawaken it. Still, watching its blank screen was more informative than any of Greasy Bob's infamous broadcasts.

    2. Roland6 Silver badge

      Re: Public clusters?

      Well, given all the work being done on the Pi with respect to clustering, perhaps a port of OpenVMS to Pi is in order - not much good for production but good for learning, particularly if it were priced either free or nearly free.

      1. Graham Cobb Silver badge

        Re: Public clusters?

        I think that, unfortunately, VMS clustering had different goals from today's Pi clusters (generally).

        The primary goal of VMS clustering was to allow shared filesystem access: disk files could be shared transparently, efficiently and safely from any cluster node (and tolerating failures in nodes and interconnect). The primary mechanism for achieving this was the distributed lock manager, but combined with significant re-engineering of the filesystem code. It was optimised for fast, LAN connectivity between the cluster members.

        Of course, lots of other things were also made to work cluster-wide (particularly later) but filesystem sharing was the main driver.

        In the Linux world, shared access to the same file (even without clustering) is very rare. In most cases (such as databases) a daemon process owns the file access. In effect, VMS has the same daemon, of course, but it was built into the kernel (in Exec mode) and an integral part of the clustering software. So, VMS-style clustering would not really be a kernel feature in Linux.

        Of course, there are cluster filesystems on Linux. These are typically aimed at wider area sharing - so have different tradeoffs - and do not (mostly) include a distributed lock manager (such things also exist separately as well, of course).

        I'm not at all sure that VMS-style clustering helps very much with the scenarios that Pi-clustering is being used for. But I am not in the Pi community.
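          The daemon-vs-shared-access distinction above can be seen on a single Linux box: coordinated access to one file needs explicit advisory locks that each process must opt into, whereas the VMS lock manager arbitrated this cluster-wide as part of the filesystem. A minimal sketch (single node only, using Linux `flock(2)` semantics; the file path is arbitrary):

```python
import fcntl
import os
import tempfile

# Scratch file to contend over.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
open(path, "w").close()

# Two independent open file descriptions of the same file,
# standing in for two processes wanting shared access.
f1 = open(path, "r+")
f2 = open(path, "r+")

# First handle takes an exclusive advisory lock.
fcntl.flock(f1, fcntl.LOCK_EX)

# A second non-blocking attempt is denied while the lock is held -
# roughly what the VMS distributed lock manager did across a whole
# cluster, rather than per machine.
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    contended = False
except BlockingIOError:
    contended = True

print(contended)  # True: the second opener must wait or back off
```

          The point of the sketch: nothing stops a third process from ignoring the lock entirely, which is why Linux services funnel file access through a daemon instead.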

        1. Stoneshop

          Re: Public clusters?

          Of course, there are cluster filesystems on Linux. These are typically aimed at wider area sharing - so have different tradeoffs - and do not (mostly) include a distributed lock manager (such things also exist separately as well, of course).

          There are at least three different clustering models to use with Linux, and you'll have to pay your dues to Ken, Dennis and the ghost of Brian to help choose the right one for your situation.

        2. Roland6 Silver badge

          Re: Public clusters?

          >I think that, unfortunately, VMS clustering had different goals from today's PI clusters (generally).

          I was looking at the Pi hardware platform as a vehicle on which to learn about clustering, i.e. give the user a choice of OS's - plug in either a Raspbian or RaspVMS SD and away you go...

          One of the challenges many of today's computing students have is gaining firsthand experience of different approaches. Looking back to the 70's~80's, it was normal for people to be exposed to multiple networking architectures and OS's (as evidenced by ElReg comments) - today getting exposure to anything outside of the Windows/Linux and TCP/IP networking domains is difficult.

  14. Jake Hamby

    The OpenVMS hobbyist program is great fun

    I first encountered OpenVMS in college (Cal Poly Pomona, a small state university in Los Angeles), where the campus VAXcluster, which was later upgraded to Alpha CPUs, felt both intriguing and anachronistic at the same time. With the online help and docs, it was verbose and cumbersome but also powerful. It was very popular for email, IRC (there was an IRC client .EXE that everyone copied a link to), and for our programming assignments, which were all in Ada (too bad DEC Ada was never kept up to date and is no longer one of their supported languages).

    Since 1999, Compaq/HP/HPE and now VSI have had a hobbyist / noncommercial license program where you can request and receive by email license keys good for a year at a time, not just for VMS but the majority of layered products as well (compilers, LSE, DECwindows Motif, disk defrag, etc.). DEC foolishly sold off their Rdb database to Oracle, but apparently Rdb is still so mission-critical that Oracle keeps issuing statements that they plan to support it for the foreseeable future due to its use in telecom and lottery systems (while Oracle has already dropped support for VMS in their eponymous db). Rdb is only available for VMS. As a hobbyist / developer, you can download Rdb and all the add-ons for free from Oracle's site and use it without any license keys.

    I've been able to accumulate vintage Alpha and Itanium (rx2620 blade servers which I've just this week set up and configured with VMS V8.4-2L3 after being in storage at my mom's house for several years) systems, and even two VAXstations bought in 2000, which are worth a lot more today than I paid for them, but not for running VMS, since the VAX version is so relatively lacking in features now.

    My sales pitch for OpenVMS is that it's mission-proven, has the best clustering ever (and it's been IP routable since V8.4, so you're no longer limited to a LAN or attempting to route weird "LAVC" Ethernet frames), and most importantly, it has an asynchronous I/O architecture very similar to Windows NT (gee, I wonder why). The $QIO function has been part of VMS since the beginning, while async I/O is alien to the UNIX world and still very nonportable. Linux has its own libaio, POSIX async I/O, and now the new hotness is io_uring, which is very interesting but also brand new.

    Async didn't seem important during UNIX's long reign, but with middleware like Node.js becoming so popular, and with languages being more suited to writing code that uses callbacks (newer versions of C#, RxJava, Kotlin, Rust, anything with lambdas), I foresee a potentially bright future for OpenVMS on x86-64, perhaps even for new customers. It's certainly been a lot of fun over the years for me to play with and learn how to write code for. I find it aesthetically pleasing, although others find it ugly (all the "$" and capitalized words).

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: The OpenVMS hobbyist program is great fun

      You might enjoy this:

      https://virtuallyfun.com/wordpress/2022/04/18/ready-to-run-openvms-vm-student-kit-from-vsi/

    2. Vometia has insomnia. Again. Silver badge

      Re: The OpenVMS hobbyist program is great fun

      I'm a bit despondent that there's no longer any sort of hobbyist programme for Vax: even if it would be quite limited by today's standards, there's something nice about running it on original hardware. Okay, "original" in my case would be Vax KA43 and KA46 CPUs which are microcoded but YKWIM. From what I can ascertain, the main reason is because the paperwork is too confused to figure out who owns it though it might just be a case that someone decided it was too much bother. PAK expiry was always a PITA.

  15. Rob Davis

    ARM and RISC V?

    Perhaps a port to ARM and RISC V might be something to consider.

    DEC Alpha was RISC and could run VMS, though the Alpha's ISA is significantly different from ARM and RISC-V, and so would work against being able to re-use much of the original knowledge gained for these newer chips.

  16. Lloyd A

    How clustering should be

    I have fond memories of VMS. The clustering in particular was so far ahead of its time. One of the sites I looked after had a cluster of over 100 devices, even though the official supported limit was only 96. The use of 10Base2 did make for some interesting cluster events occasionally.

    1. Stoneshop

      Re: How clustering should be

      Clustering started with CI interfaces, stubbornly stiff coax cables (four per system, two in, two out) and Star Couplers. Maximum cable length was 30m/100ft IIRC, so all supported systems (11/750 and up) basically had to be in the same computer room, or maybe ones on both sides of a corridor. LAVC allowed a bit more leeway, but that was primarily meant for workstations as a way to have them being served the storage from the big'uns down in the computer pen.

      As various newer/faster interconnects appeared you could also cluster over DSSI, FDDI and SCSI, and for about a decade we had a cluster running over dedicated gbit Ethernet, with five nodes in three DCs 70km apart.

  17. Boo Radley

    MicroVAX

    I started working at DEC right as the MicroVAX was introduced. I installed VAX 8600 systems for a year or so, and installed the first 8500 in Ohio. 8 years later I found that 8500, since upgraded to an 8550, with all its peripherals, sitting on the dock at OSU Surplus. I asked what they had planned for it, and they told me I could have it, free of charge, if I'd just get it off their dock.

    It took three trips to get it home, then four days to disassemble everything, move it into the basement, then reassemble it. I never fired up the CPU though, since my house didn't have 3-phase power, but I used the RA and TU drives with my MicroVAX II ($35 at OSU Surplus). Not a terribly efficient use of power, but it networked nicely with my Win 3.1 and Win 95 machines. I still have my VMS manual set (the Orange Wall) in storage.

    1. Stoneshop
      Boffin

      Re: MicroVAX

      Most systems that claim to need 3-phase power actually just need several circuits, as their power draw would exceed what the average single circuit can supply. And if you're designing a system's power input and need it to use more than one circuit you may as well expect it to sit in a computer room or a data center, and go 3-phase as that is what data center floor managers prefer. Vastly.

      But even if such a system has a bunch of hefty power supplies, sitting on separate phases, what comes out is DC and as long as their inputs can draw what they need it's extremely rare for them to actually care which of the phases they're on.

      Fans, if they're hefty squirrel-cage ones like the 11/78x had, may care about getting supplied from a real 3-phase feed, but that can be faked using a properly dimensioned capacitor or a motor speed controller meant to run 3-phase motors on a single-phase feed.

  18. trevorde Silver badge

    Oldie but a goodie

    Worked at a company about 20 yrs ago which had a single customer on their sole remaining VMS product. The customer enacted the escrow provisions and took over maintenance from the last, disinterested developer with VMS experience. The system is probably still going and hasn't been rebooted!

    1. F. Frederick Skitty Silver badge

      Re: Oldie but a goodie

      That sounds familiar - it wasn't a warehouse management system in a rice factory by any chance?
