Before the PC: IBM invents virtualisation

Virtualisation is not a novelty. It's actually one of the last pieces of the design of 1960s computers to trickle down to the PC – and only by understanding where it came from and how it was and is used can you begin to see the shape of its future in its PC incarnation. As described in our first article in this series, current …


  1. TheOpsMgr

    IPL your own S/360 guest on VM... ahh happy days!

    Thanks for this trip down memory lane!

    My first job as a graduate programmer in 1990 was writing code for IBM's NetView network management product and to test our code we needed our "own" NetView system.

    So - fire up some JCL to IPL an OS/360 guest on top of VM. Job done.

    So 20 years later when I send the guys in my teams on VMWare training courses and they come back all fired up on Virtualisation nirvana I have to chuckle and remember that there is nothing new under the sun :-)

    1. ps2os2

      Ahh Happy Days??

      Ahh first of all there is no "JCL to IPL an OS/360"

      IPLing is first a microcode function, then reading a bootstrap record off a drive, then reading the drive for a specific dataset, then reading that dataset and loading the OS (at least for OS/360, 370, OS/390 and z/OS). JCL doesn't really exist in the OS till you start an initiator (OS/360 days, not now); for z/OS, Master is started with dummy JCL (it doesn't exist).

      Ahh, as for the second point, either your timing is off or your narrative is wrong: NetView did not exist for OS/360, and the first OS to support it wasn't until the 70's, and then it was called NCCF. NetView came as a follow-on to NCCF, perhaps in the late 70's or 80's. So you are off by at least 10 years on the NetView.

  2. StevieB

    Shared Memory

    As I recall, the LPARs are managed by PR/SM and are fairly fixed divisions of the hardware. The VM operating system would then run in an LPAR and allocate those resources in a more dynamic, fluid way. Of course, I last did this in the late 80's/early nineties, all may have changed, as it does.

  3. John Doe 6

    The HARDWARE is the problem

    As I see it the PC hardware is simply not suited for virtualization; we need Intel-based heavyweight servers which are NOT necessarily DOS/Windows compatible. We need hardware supporting much more throughput than DOS/Windows compatible servers can provide. I may be wrong... but again... I may be right.

    1. TeeCee Gold badge

      Re: The HARDWARE is the problem

      Er, I think you've just specified a mainframe, but one crippled with the x86 baggage.....

      1. Paul Crawford Silver badge

        @hardware is the problem

        Yes and no:

        Yes because the whole x86 is a horrible kludge that only became successful due to DOS/Windows (and the resulting applications) running only on it, and so Intel was able to spend sh*t loads of money to make a basically crap design good value for money.

        Shame it was not spent on the ARM...

        No because of why you are likely to want a VM, and this is where I disagree with the author saying "the VM should emulate the same hardware as the host":

        Typically a strong reason for a VM is you want or need to run some horrible old OS+application that you have no practical or economic method of replacing. As such the VM has to look like *supported hardware* for possibly a decade or older OS.

        It is so nice being able to move a Windows VM from one type of x86 host to another without needing to change drivers, re-install, re-licence, etc. Host could be Windows or Linux, CPU could be AMD or Intel, running 32 or 64 bit modes, and my old w2k and XP VMs and the various useful but difficult (or expensive) to replace applications work just fine!

  4. Mike 140

    down memory lane

    Aah! CP/67. I remember it well from my days at Big Blue. Another of those not-quite-official products that kicked the arse of the mainstream offerings. And so became the grudgingly (to the suits) acceptable VM/370.

    1. AJames

      System 370 hardware

      Along with the System 370 hardware that added the hardware support for virtualization that the System 360 lacked.

  5. Bruce Hoult

    what took the x86 so long?

    Something that hasn't been mentioned is that both the 68000 family (from the 68010 onward) and the PowerPC family supported full hardware-assisted virtualization without the kind of partial-emulation hacks needed in VMware and without the active cooperation of the guest operating system as required by the likes of Xen.

    1. TeeCee Gold badge

      Re: what took the x86 so long?

      See the first article in the series, in particular the bit about execution privilege rings.

      If you are like me, the words "pig's ear" will spring irresistibly to mind......

  6. Admiral Grace Hopper

    See also: VME

    The virtual machine environment done properly.

  7. Mickey Finn

    No Mention of the System38/AS400/iSeries...

    That if IBM had thought about it a bit, could have been the personal computer of choice rather than the Wintel PC itself.... It was and still is far more advanced than the PC/Mac, with seamless movement of user data from one piece of hardware to another and a file-system-level database as standard; the parts of the design that aren't 64 bit are 128 bit, and have been for nearly 20 years...

    Instead what has actually happened is that these systems are reduced to having a poor stab at emulating the inferior unix concepts.

    1. TeeCee Gold badge

      @Mickey Finn

      Yes, but.

      The '38 required a room[1] to put it in, air conditioning was de rigueur and so was three-phase power. There was a certain amount of fresh air in the box, but that's only 'cos the internal "piccolo" drives went out of fashion in favour of external arrays over its long lifespan.

      By the time the small '400s came out, commodity PCs already had their feet well under the table, Novell had the fileserver business sewn up and Windows was busily pwning the desktop. Also here you have to remember that the price premium on small '400s was waaay over that of a PC server and they still sold like hot cakes as thousands of '38 shops, who'd been hammering on the limits for ages, all sought to upgrade and expand at once. IBM have never cut prices when they don't *have* to. Also here you have to remember that the original "small" AS/400s were only small by comparison to the full-fat ones. It wasn't 'til the "F" (I think??) series shipped that the PC fileserver sized ones came out.

      Bloody brilliant the '38 was though. 200 users hammering the shit out of it in realtime, half a dozen heavy batch tasks running and that "all the grunt of a 286" CPU kept it all spinning away nicely. The one that always makes me smile though is the '38s 4TB memory model (as seen in that nice "memory pyramid" piccy in the "welcome to your shiny new System/38" manual). Effectively saying: "Nobody will ever need more than 4TB"[2]. How Bill reckoned 640k would be adequate in the light of that..........

      The only Achilles' heel of the '38 was its communications. Firstly in that they were the last of the Great Black Arts in this business and secondly 'cos it never got any LAN capabilities and 64kbits[3] was as fast as it went.

      [1] Ok, you could squeeze one into a shipping container. Just.

      [2] Ok, as that's a single-level model and includes disk, tape etc. it's looking shonky now, but still.....

      [3] Yes, I *know* the winnie connectors on the comms controllers only went to 56. There was another way......

      1. John Smith 19 Gold badge


        Saw my first AS400 in a cupboard.

        Thought it was an oversize air conditioning unit till I looked closer.

        I think that was a B model, so pretty much a baby even by the standards of the day.

      2. Mickey Finn


        The comparison in the article was about virtualisation being invented a long time back as part of the S360 environment...

        Well I started my DP career on the "new" at the time S370, in reality it was a bit of 360 and a bit of 370... And the three machines in my computer room, and their peripherals occupied somewhere around 1/3 acre of floor space...

        I was taking it as read that miniaturisation was a fact.

        Oh, and along with the air conditioning, we had our own emergency power supply which ran on diesel, and a halon gas fire protection system.

        And I realise that SNA was a proprietary comms system, but that would also have sped up over the years.... (as a proprietary system though, it was almost faultless and failsafe).

        Basically what I am saying is that in its heyday the AS400 had far more potential than the crude PC that we actually got, and that, apart from the obvious room for improvement in every system mentioned by commenters and the original author, it would have been a better starting point, had IBM had a bit more vision.

      3. lynn

        s/38 single-level-store

        one of the shortcomings of the simplified s/38 single-level-store was that it treated all disks as a common pool of storage, with scatter allocation across the pool. As a result all disks had to be backed up as a single integral filesystem, and any single disk failure required a whole-filesystem restore (there is folklore about the extended length of time needed for a complete restore after a single disk failure). single disk failures were a fairly common failure mode, and the s/38 approach scaled up poorly to environments with 300 disks (or more) ... aka on any disk failure, take down the whole system while the complete configuration was restored (or for the length of time the system would be down for a complete backup).

        this shortcoming was motivation for s/38 to be early adopter of RAID technology ... as means of masking single disk failures.
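        The restore problem above can be made concrete with a toy model (hypothetical file and disk counts, nothing like the real s/38 allocator): scatter each file's blocks across a pool of disks and count how many files a single disk failure touches.

        ```python
        import random

        def scatter_allocate(n_files, blocks_per_file, n_disks, seed=1):
            """Place each file's blocks on randomly chosen disks (scatter allocation)."""
            rng = random.Random(seed)
            return {f: [rng.randrange(n_disks) for _ in range(blocks_per_file)]
                    for f in range(n_files)}

        def files_hit_by_failure(layout, failed_disk):
            """Files losing at least one block when a single disk dies."""
            return [f for f, disks in layout.items() if failed_disk in disks]

        layout = scatter_allocate(n_files=1000, blocks_per_file=32, n_disks=8)
        hit = files_hit_by_failure(layout, failed_disk=0)
        print(len(hit), "of", len(layout), "files need restoring")
        ```

        With 32 blocks scattered over 8 disks, a file avoids the failed disk only with probability (7/8)^32, roughly 2%, so one dead disk damages nearly every file - hence the whole-filesystem restore.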

    2. aircombatguy

      concepts from the much maligned TSS/360

      The "single-level store" concept around which the S/38, AS/400 and i-series were built came from TSS/360 -- which was unfortunately way ahead of its time (and of hardware capable of running it with decent performance).

    3. lynn


      The massive (failed) Future System effort in the early 70s was going to completely replace 370 and drew heavily on single-level-store design from TSS/360. The folklore is that when FS failed, several people retreated to rochester and did a simplified, FS subset as S/38. misc. past posts mentioning future system

      I had learned a lot at the univ. watching tss/360 testing and its comparison with cp67/cms. Later at the science center in the 70s (during the future system period), I continued to do 360/370 stuff ... including a page-mapped filesystem for CMS (which never shipped in standard product) ... avoiding a lot of the tss/360 pitfalls (I would also periodically ridicule the FS effort ... with comments that what I had already had running was better than their bluesky stuff).

  8. Mr Spindles

    Another Trip Down Memory Lane

    Still have my "green card" somewhere in a box...

  9. Peter Gathercole Silver badge

    When considering multiprogramming on S/370

    You just cannot ignore the Michigan Terminal System (MTS).

    When IBM was adamant that it would not produce a time-sharing OS for the 360, the University of Michigan decided to write their own OS, maintaining the OS/360 API, allowing stock IBM programmes to work with no change, but allowing them to be multi-tasked.

    IBM actually co-operated; the S/360-65M was a (supposedly) one-off special that IBM made just for Michigan, and provided dynamic address translation, which allowed virtual address spaces for programs. This resulted in the S/360-67, which was one of the most popular 360 models and influenced the S/370 design.

    I used MTS between 1978 and 1986, at university at Durham and when I worked at Newcastle Polytechnic, on a S/370-168 and an Amdahl 5870 (I think), and I found it a much more enjoyable environment than VM/CMS, which was the then IBM multitasking offering.

    Look it up, you might be surprised what it could offer. There are many people with fond memories of the OS.

    On the subject of Amdahl, they produced the first hardware VM system with their Multiple Domain Facility (MDF), which I later used when running UTS and R&D UNIX on an Amdahl 5890E. During an oh-so-secret-under-non-disclosure-agreement briefing in about 1989, we were told by IBM about a project called Prism, which was supposed to be a hardware VM solution that would allow multiple processor types (370, System 36 and 38, and a then unannounced RISC architecture, probably the RS/6000) in the same system, sharing peripherals and (IIRC) memory. Sounds a lot like PR/SM on the zSeries! Took them long enough to get it working.

    1. Fred Bauer

      Fond memories

      Wow! I'd forgotten that there were UK Universities running MTS. Somewhere I still have some MTS manuals from my days at Rensselaer Polytechnic Institute, along with my yellow card. Thanks for the memories!

      1. Werner McGoole


        I too used MTS in Durham from 1976. It still staggers me that we could do interactive image processing on it while there were a zillion other users also hammering away. Happy days.

  10. esv

    Marketing baby, marketing.....

    Nothing new under the sun, just better marketing, wasn't VMware started by ex-ibmers too?

  11. Anonymous Coward


    OK, so I'll admit this is incredibly pedantic - but Windows 3.1 did not bluescreen (the blue screen only appeared in XP); instead it would throw 'General Protection Fault's at the drop of a hat (and sometimes while you were still wearing it) ... IIRC these screens were black with white text, but it's been a while since I saw one :s

    1. Nick Ryan Silver badge

      re: Bluescreen

      Not that wikipedia is always the fountain of all truth, but:

      Personally I have more fond memories of the Guru Meditation errors...

      1. ceebee

        oh Amiga :)

        those orange boxes with the meditations used to be the bane of my Amiga days!

    2. Anonymous Coward

      First in XP...?

      NT4 *certainly* BSoD'ed - there was even a screensaver doing the rounds which looked just like one, with which to scare your fellow admins :-)

      1. Charlie Clark Silver badge

        Blue or black screens

        Doesn't really matter, but it was NT that introduced the blue ones. Windows 3.x was still DOS and would crash if the wind changed. The error messages were pretty useless and based on the more informative OS/2 ones. You got no logging, but hey, we had pretty colour icons.

  12. Jon Massey


    Lies, the BSOD existed well before XP, in both the DOS and NT lineages.

  13. Ryan 7

    So, Linux is kinda short for

    "LINus' Unitary Computing Service"? (MultiCS -> Unix (UniCS) -> Linux (LinUCS))

    1. lynn

      CTSS, Multics, CP40, CP67, etc

      note that some of the CTSS people (MIT IBM 7094) went to the 5th flr of 545 tech sq and did MULTICS; others went to the science center on 4th flr of 545 tech sq and did (virtual machine) cp40, cp67, vm370, etc

  14. Beachrider

    Time sharing vs Virtualization...

    There were several early-mainframe attempts at 'what timesharing meant'. Only the CP67-folk actually went down the operating-system virtualization approach. There was a perfectly-viable, often used time-sharing component of OS/MVT. 'TSO' was rather inefficient for another 7 years, though. There were other timesharing systems on OS/370 like Wylbur, Roscoe and others.

    BUT the focus here is on Virtualization, so I wanted to add that AIX also has WPARs, which are Solaris-containers-like virtualization mechanisms.

    1. lynn

      360/67, tss/360, cp67, mts, orvyl/wylbur

      There were quite a few customers sold 360/67 with the promise of running tss/360. when tss/360 looked like it was going to be difficult to birth ... many switched to os/360 or cp67. Michigan did its own (virtual memory) MTS system and Stanford did its own (virtual memory) Orvyl/Wylbur system. Later the Wylbur part was ported to os/360

  15. Wisteela

    Like it

    Great article. Very interesting.

  16. Chris 69

    Fascinating Stuff

    You can read a load more, and beautifully written if I recall correctly, about the history of CP/CMS and VM if you google for "melinda varian"

  17. John Smith 19 Gold badge

    Sort of 2 half OS's working together.

    While not a popular approach, it is one serious software architects *should* keep in mind in case they hit a tricky situation where perhaps the hardware is not quite up to the job.

    Thanks for the article and once again reminding the yoof that in the computer business it is *very* unlikely that the new game-changing totally unique tech you just invented is actually *anything* like as unique as you think it is.

  18. Yes Me Silver badge

    Credit where credit's due, please

    When IBM announced VM, with the slogan 'Today IBM announces tomorrow' (or words to that effect), a patched copy of their ad appeared the same day on the notice board of the Computer Science Dept at the University of Manchester, reading 'Today IBM announces yesterday.' Because, of course, that Department had invented virtualisation some years before, and it was implemented, in a simple form, in the Ferranti Atlas designed mainly in the Department.

    1. John Angelico

      Finally! Something recognized from YOUR side of the pond...

      The Ferranti, later swallowed up by ICT into what became ICL(IIRC).

      The GEORGE series of OSs on 1900 range hardware had a lot of stuff that PCs took a long time to catch up on.

      Virtual store, flat memory, device independence, workfiles, OS-level file versioning, user management and accounting. Ah, nostalgia:

    2. noah


      In the summer of 1970 I had a summer job programming the Atlas at the University of London. An amazing machine -- the tape drives would have looked right at home in an Avengers episode.

      Four years later I joined IBM, and my first job was to write the core of what eventually evolved into VM/Passthru.

      I should say that I never was directly involved in anything to do with virtualization on the Atlas, as I just programmed it in FORTRAN (FORTRAN V, actually, with recursive ALGOL-inspired block structure!)

      I do recall learning later a couple of interesting facts about the Atlas: 1) I believe it was the first machine to use inverted page tables, decades before IBM trumpeted that "innovation" on its RISC machines and 2) as I recall, it took an interrupt to fire each hammer on its high speed line printer, thereby simplifying the control logic in the printer, but putting some significant timing constraints on responsiveness of the OS interrupt handlers.

  19. Anonymous Coward


    sorry, but the statement on VMware doing full software virtualization is not correct.

    in their current versions, KVM, Xen and VMware do the same thing:

    they use assistance in hardware/firmware from AMD-V or Intel-VT processors,

    if that's not available / possible, they do not work / they do a software emulation.



    1. Liam Proven Silver badge

      Re. ABEND - the author responds

      I will cop to some errors in this piece, including missing IBM's latest name-change from zSeries to System z, and that VMware does indeed now use hardware VT if it's available - and indeed, according to several comments, /requires/ it for 64-bit guests.

      This article series was a long time in gestation and when I started researching it, VMware was still adamantly maintaining that its software virtualisation was better than Intel's hardware implementation.

      However, this comment titled ABEND is notably incorrect in almost every detail.

      KVM does not fall back to software VT if no hardware VT is available; it *requires* hardware VT support. Without it, you can't use KVM at all.

      Xen falls back to paravirtualisation, meaning that it needs modified guest OSs.

      VMware and VirtualBox both use software VT if no hardware VT is available; in VirtualBox, enabling hardware support is an option - you can run without it. I have not tried this yet in VMware but it might be possible.

      This is not "do[ing] the same thing"; it is doing 3 different things: failing, offering different, incompatible functionality, or switching to an emulation-based alternative.
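      As an aside, the hardware assists in question show up on Linux as CPU flags: `vmx` for Intel VT-x, `svm` for AMD-V. A minimal sketch of classifying them (the sample cpuinfo text below is made up):

      ```python
      def vt_support(cpuinfo_text):
          """Classify hardware virtualisation support from /proc/cpuinfo-style text."""
          flags = set()
          for line in cpuinfo_text.splitlines():
              if line.startswith("flags"):
                  flags.update(line.split(":", 1)[1].split())
          if "vmx" in flags:
              return "Intel VT-x"
          if "svm" in flags:
              return "AMD-V"
          return "none"  # KVM can't run here; VMware/VirtualBox fall back to software

      # Made-up sample; on a real box you would read open("/proc/cpuinfo").read()
      sample = "processor : 0\nflags : fpu vme svm lahf_lm\n"
      print(vt_support(sample))  # AMD-V
      ```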

  20. John Hughes

    S360 first to use microcode for compatibility?

    "What’s more, the S/360 was the first successful platform to achieve compatibility across different processors using microcode,"

    Well, maybe if you ignore the ICT 1900 series.

    The 1901, 1902/3, 1904/5 and 1906/7 were all different processors, some microcoded, that had the same programming interface.

    (System 360 announced 7/4/1964, 1900 announced 29/9/1964, 1st 1904 delivered January 1965, first deliveries of the System/360 were in "mid 1965")

    (Start work on a family of compatible machines in April, announce in September, demo in October, deliver in January. Doesn't sound like the UK of today, does it?)

  21. Wolfhound57

    ICL v IBM dress codes

    I think anyone who worked on ICL machines might dispute whether IBM were first with virtual computing. We used, and some of Fujitsu's ICL-derived kit still uses, VME (Virtual Machine Environment), which originated in the late '60s/early '70s. Strange though it might seem, at the time ICL machines were technically superior to IBM's, and if ICL could have got their reliability problems sorted and their engineers to wear suits a la IBM engineers rather than cardigans, they might still be around today.

    1. Julz

      VME History

      Modern virtualisation still has a way to go to catch up with the past...

    2. John Hughes

      ICL VME was later than System/370

      System/370 (the major introduction of virtual memory and hence virtual machines(*)) was released in 1970.

      The ICL 2900 series (with the VME operating system) was released in 1974.

      And the ICL concept of a virtual machine was not really comparable with what IBM was doing:

      "The 2900 Series architecture uses the concept of a Virtual Machine as the set of resources available to a program. The concept of a "Virtual Machine" in the 2900 Series architecture should not be confused with the way the term is used in other environments. Because each program runs in its own Virtual Machine, the concept may be likened to a process in other operating systems, while the 2900 Series process is more like a thread."

      (Some claim that IBM was forced to come up with the whole "virtual machine" thing because they just never managed to make a simple multi user os - it was easier for them to build a system that allowed one OS per user).

      ((*) Yes, there was _one_ member of the System/360 range that had virtual memory, but it was a special case).

  22. Mainframer

    Reflections on MDF

    Two points I will make from the perspective of a developer of Amdahl’s Multiple Domain Facility (MDF), perhaps one of the first ;-) “hardware-assisted” virtualization platforms:

    - MDF began as an offshoot of a strategy for minimizing the lead time required for Amdahl to respond to changes IBM was making in their mainframe architecture of that era, i.e. S370 and beyond. These changes included hardware that provided microcode underlying S370 architecture. It was observed that these underlying hardware structures could be used to offer an efficient hardware-based virtualization platform that became MDF. IBM promptly followed suit with offerings providing logical partitions or LPARs. An interesting side story is that we didn’t have many engineering models to develop and test our MDF code on. Consequently, another team inside Amdahl developed a simulator that simulated the modified S370 architecture and it was actually based on IBM’s VM/370. This simulator was vital for developing our MDF code on.

    - Inside IBM 370 mainframes were a set of high-level instructions that assisted the execution of operating systems, including MVS and VSE, when executed under VM/370. These instructions reduced the overhead associated with paging and I/O of the “guest” operating system when executed under VM. The nature of these instructions is comparable to techniques Intel and AMD provided years later in their products. Performance improvements associated with these instructions (VMA, PMA, …) were dramatic. There is a certain feeling of déjà vu watching virtualization unfold in the Intel world.

  23. Jon Press

    Other VM trivia...

    A primitive form of e-mail was possible by directing your virtual card punch to another user's virtual card reader - punch a virtual deck from a local file and it would turn up magically in the other VM, from where it could be read back. By the further magic of RSCS and some judicious source routing you could get your file around the world.

    There was, as I vaguely recall, also a prototype "desktop" machine that supported the S/370 instruction set and ran VM.

    Although VM/370 was used a lot both for OS development work and for migrating customers from DOS to OS/370 it was also (with CMS) a refreshing alternative to IBM's Time Sharing Option, famously subject to the criticism from Stephen "yacc" Johnson that "Using TSO is like kicking a dead whale down the beach".

  24. davejesc

    Don't forget MUSIC/SP

    MUSIC/SP - Multi User System for Interactive Computing, dreamt up at McGill University, was an exceptional late-70's/early-80's multi-user operating system which also ran under VM (or autonomously), allowing hundreds of simultaneous users in the resource space of just a few CMS users.

    While not really a virtualization system itself, it gave the end-user the impression that they were working on their own interactive "machine".

    TSS (and later the premier transaction processing system in the world, CICS - Customer Information Control System) also provided (and provide) multi-user environments which give the user the impression of one's own machine, but not really virtualization, per se.

    The combination of VM/370 and MUSIC/SP ended up being one of the most dynamic, efficient, cost-effective, amazing and user-friendly multi-user environments that IBM has ever produced.

    Thanks for the walk down memory lane.

    BTW, was not PR/SM really just a specialty version of VM?


    1. lynn

      VMA, virtual machine microcode assist

      cp40 & cp67 provided virtual machine support by running the virtual machine in problem state, taking the privilege/supervisor state interrupts for supervisor state instructions and simulating them. Later, for vm/370 and 370, virtual machine microcode assist was provided on the 370/158 and 370/168, which would execute frequently executed supervisor state instructions according to virtual machine rules.
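      That trap-and-simulate scheme can be sketched in miniature (LPSW and SSM are real privileged S/360 opcodes, but the "instruction set" and state layout here are invented for illustration; this is not real CP67 code): the guest runs unprivileged, and privileged opcodes trap to the hypervisor, which simulates them against the guest's virtual state.

      ```python
      PRIVILEGED = {"LPSW", "SSM"}  # sample supervisor-state opcodes

      def emulate(op, arg, state):
          """Hypervisor's simulation of a privileged instruction, applied to the
          guest's *virtual* state rather than the real machine."""
          if op == "SSM":
              state["mask"] = arg    # set system mask, virtualised
          elif op == "LPSW":
              state["psw"] = arg     # load program status word, virtualised

      def run_guest(program, state):
          """Run the guest in problem (unprivileged) state; privileged opcodes trap."""
          for op, arg in program:
              if op in PRIVILEGED:
                  emulate(op, arg, state)   # trap to the hypervisor
              else:
                  state["acc"] = arg        # toy stand-in for direct execution
          return state

      s = run_guest([("LOAD", 7), ("SSM", 0xFF)], {"acc": 0, "mask": 0, "psw": 0})
      print(s)  # {'acc': 7, 'mask': 255, 'psw': 0}
      ```

      The microcode assists lynn mentions effectively moved the common cases of `emulate` into the hardware, cutting the cost of each trap.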

      A superset of this was extended for the 370/138 & 370/148, called ECPS ... which included dropping parts of the vm370 supervisor into microcode. There was an attempt to ship all 138/148 machines with VM370 pre-installed ... sort of an early software flavor of LPARs ... which was overruled by corporate hdqtrs (at the time there were various parts of the corporation working on killing vm370).

      A much larger and more complete facility was done for 370/xa on 3081 called SIE.

      Amdahl came out with a "hardware" only "hypervisor" function ... sort of superset of SIE ... but subset of virtual machine configuration.

      IBM responded with similar facility PR/SM on the 3090 ... which was further expanded to multiple logical partitions as LPARS. PR/SM heavily relied on the SIE microcode implementation ... and for a long time a vm/370 operating system running in an LPAR couldn't use SIE ... because it was already in use for LPAR. It took additional development where vm370 running in an LPAR (using SIE) could also use SIE for its own virtual machines (aka effectively SIE running under SIE).

  25. Beachrider

    The VM fanbois are sunning themselves..

    OK. I loved VM in the 70s. Mainly because CMS was more usable than TSO on tty. Then the terminal technology changed and full-screen mechanisms improved. Since 1981, TSO has been fine and presents a richer programming environment.

  26. Wile E. Veteran

    MTS Rocked!

    I had the good fortune (and I really mean that) to be a user of MTS at Wayne State U. in the little town of Detroit (a few miles from the center of the Earth, located in Ann Arbor). I was also the first president of Wayne's MTS Users' Group, which provided feedback to Michigan for bugs (very few) and system improvements (very many).

    Under MTS, the 360/67 supported hundreds of simultaneous users, giving each the illusion of having the complete mainframe to themselves. That's what mainframes are good at. Individual workstations, even PC's can beat them soundly on a MIPS vs MIPS basis, but what they CAN'T do is deliver those MIPS to hundreds (or thousands on newer iron) simultaneously.

    1. lynn

      MTS & LLMPS

      Lincoln Labs did LLMPS ... a stand-alone monitor that ran some number of applications (in pure 360 mode). Lincoln Labs got a duplex 360/67 (originally for running tss/360) ... but was then the 1st place (outside of science center) to install cp67 (univ. I was at, was 2nd place outside of science center to install cp67).

      Folklore is that MTS started out being built off LLMPS at its core.

  27. Stephen Channell

    Punch me when you're done!

    Ah, VM’s virtual punch was so groundbreaking for providing inter-process messaging that they built a very good email system (PROFS) on top of it.. it was cool punching a daemon with work and getting punched when it finished.

    Have VMware, Xen & Hyper-V emulated CP/CMS? Not quite yet; even 20 years ago when I last used it, VM could:

    • Emulate older devices that aren't manufactured any more (handy for DOS/VSE)

    • Emulate new kit on older boxes (VM/SP could emulate XA on older machines)

    • Emulate a vector facility on a box without one

    • Lash several mainframes together as one (when using the punch for IPC)

    • Operate a reverse proxy with VTAM.

    But sadly it was also the host for the first global email virus & denial of service.. when somebody wrote a REXX script that displayed a Christmas tree and punched itself to everyone in your PROFS address book.. took down the whole VNET.

    1. ps2os2

      Christmas tree

      I do not believe it was a REXX exec that brought down the system(s); rather it was a VM exec that was the villain.

      REXX didn't come out until maybe 10 or so years after the culprit had been identified. I suppose the valid question is whether the exec was a worm or virus or malware. If push came to shove I would call it malware.

      1. lynn

        xmas tree

        vmshare reference 10dec87 to xmas exec on bitnet

        risk digest reference (21dec87)

        misc. past posts mentioning bitnet (used technology similar to internal network)

        this is old post where I try and reproduce the effects of a 1981 rexx xmas tree that used FSX for 3279

        note that the bitnet 1987 xmas exec was almost exactly a year before the morris worm on the internet.

  28. bumpy


    I remember working on a VM/370 system. I was testing an MVS/370 virtual machine during business hours and used a VM command in the wrong way - the production VM just froze - much rumbling was heard from the console room. I quietly quiesced my test system and all was well. Good times. Nice to be able to test your new sysres volume during the day (when done correctly!)

    1. lynn

      virtual paging under virtual paging

      VM370 supported the memory of the virtual machine with demand-paged virtual memory managed by an approximation to global LRU. MVS/370, running in a virtual machine, managed its own virtual pages (in what it thought was "real memory") with its own LRU approximation.

      LRU, or least recently used, assumes that the page which hasn't been used for the longest time is the least likely to be used in the future, so it can be paged out and the real storage allocated for some other use.

      It was possible for MVS/370, with its LRU paging, to get into a pathological situation when running under VM370 (with its LRU paging). VM370 would select an MVS/370 virtual-machine page for replacement (page-out) because it hadn't been used for the longest time (aka least recently used). However, if MVS/370 was also paging (using LRU page replacement), that same page was exactly the one MVS/370 would decide to use next, invalidating the assumption behind VM370's least-recently-used page replacement.
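      The essence of that pathology – the LRU victim turning out to be the very next page referenced – can be sketched with a tiny simulation. This is a hypothetical illustration, not any real VM370/MVS code: a minimal LRU "memory" driven by a cyclic reference pattern one page larger than the frame count, which is exactly the access pattern a guest's own LRU reclaim presents to the host.

      ```python
      from collections import OrderedDict

      class LRUCache:
          """Minimal LRU page 'memory': holds at most `frames` pages."""
          def __init__(self, frames):
              self.frames = frames
              self.pages = OrderedDict()
              self.faults = 0

          def touch(self, page):
              if page in self.pages:
                  self.pages.move_to_end(page)        # now most recently used
              else:
                  self.faults += 1
                  if len(self.pages) >= self.frames:
                      self.pages.popitem(last=False)  # evict least recently used
                  self.pages[page] = True

      # A guest cycling through 5 pages on a host granting it 4 real frames:
      host = LRUCache(frames=4)
      for _ in range(100):
          for page in range(5):   # cyclic reference pattern
              host.touch(page)

      print(host.faults)  # 500 – every single reference faults
      ```

      With 4 frames and 5 pages touched in rotation, LRU always evicts the page that will be requested next, so the hit rate is zero – LRU's worst case, and the same trap the stacked VM370/MVS LRU replacers could fall into.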

  29. lynn

    vm history

    Lots of history in Melinda's history document; I recently provided her with a single-file PDF version

    and a Kindle version

    built from her multi-file PostScript version

    The Science Center first did CP/40, having added virtual memory hardware to a 360/40. CP/40 morphed into CP/67 when they were able to obtain a 360/67, which came standard with virtual memory hardware. Later, CP/67 morphed into VM370 when virtual memory became standard on 370s. Lots of past posts mentioning the Science Center

    TSS/360 was the "official" software for the 360/67, but it had numerous difficulties. Running the same simulated application script (Fortran edit, compile and execute), I got better throughput and response for 35 simulated CP67/CMS users than the IBM SE got with 4 simulated TSS/360 users (running on the identical 360/67 hardware).

    In the 70s, the massive (failed) Future System effort (which was going to completely replace the 370) heavily used the single-level store from TSS/360.

  30. david 12 Silver badge

    Multics/unix/toy operating systems

    And it seems ironic now that unix was widely seen as a toy operating system – not just because it couldn't virtualise, but also because (after the multi was stripped out of MULTICS) security was so broken.

    1. Peter Gathercole Silver badge

      @david 12

      It is quite clear that the security model for UNIX is one of the weakest remnants of the original UNIX development.

      In a lot of cases it is actually much *weaker* than that provided by Windows NT and beyond.

      But the difference is that it is actually used properly, and has been almost everywhere UNIX has been deployed. It was fundamental to the original multi-user model, and you always had the concept of ordinary users and a super-user.

      Multics, VAX/VMS, and possibly several other contemporary OSes had better security models, but the UNIX model was adequate for what it had to do, and was well understood. In fact, the group model on UNIX, with non-root group administrators, has fallen so far out of use that it is practically absent in modern UNIXes (ever wondered why the /etc/group file has space for a password? Well, this was it)

      When it comes to virtual address spaces (programs running in their own private address space, mapped onto real memory by address-translation hardware), UNIX has had this since the time it was ported to the PDP-11. Virtualised memory (i.e. the ability to use more memory than the box physically has) first appeared on UNIX on the Interdata 8/32, with the 3BSD additions to UNIX/32V, and then in BSD releases on the VAX.

      The first AT&T release that supported demand paging was SVR3.2, although there were internal versions of R&D UNIX 3.2.5 which supported this.

      1. Peter Gathercole Silver badge


        The R&D version of UNIX was 5.2.5, not 3.2.5. This equated to SVR2 with some AT&T internal developments, including demand paging, enhanced networking (STREAMS [which could have Wollongong TCP/IP modules loaded] and RFS), an enhanced multiplexed filesystem (not that I remember exactly what that gave us) and many more I can't remember.

      2. Paul Crawford Silver badge

        @Peter Gathercole

        "the UNIX model was adequate for what it had to do, and was well understood."

        I think that hits the nail on the head. Unix has a simple model of file (and thus essentially everything) permissions. Groups allow a more complex take on that, but most users' eyes glaze over when an explanation gets beyond the you/others-can-read/write/execute point.

        Windows NT+ on NTFS has ACLs that potentially offer much finer-grained control, but they are a bugger to understand and follow the consequences of, so they are rarely used effectively. A lot of legacy Windows programs just break if you try to implement a properly secure system, so it becomes rather useless to the end user :(

        So in reality, and most certainly for home users, Windows is basically broken and Linux is fine. Not by capability, but by complexity and working defaults.
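        The simplicity being praised here is visible in the permission bits themselves – nine bits, three rwx triplets for user, group and other. A small sketch using Python's standard `stat` module constants (nothing hypothetical beyond the example modes chosen) decodes them:

        ```python
        import stat

        def describe(mode):
            """Render a numeric mode as the familiar rwxrwxrwx string."""
            bits = [
                (stat.S_IRUSR, 'r'), (stat.S_IWUSR, 'w'), (stat.S_IXUSR, 'x'),
                (stat.S_IRGRP, 'r'), (stat.S_IWGRP, 'w'), (stat.S_IXGRP, 'x'),
                (stat.S_IROTH, 'r'), (stat.S_IWOTH, 'w'), (stat.S_IXOTH, 'x'),
            ]
            return ''.join(ch if mode & bit else '-' for bit, ch in bits)

        print(describe(0o644))  # rw-r--r--  owner read/write, everyone else read
        print(describe(0o750))  # rwxr-x---  owner full, group read/execute, others nothing
        ```

        The entire model fits in one octal number you can hold in your head – which is exactly why it gets used correctly, where NTFS ACLs mostly don't.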

  31. pizzafritte

    zVM - Mainframe VM today

    Hi, I enjoy reading your posts about VM history. If you want to learn about today's VM, z/VM, feel free to visit

    You might be interested reading about the currently available z/VM V6.1 and the statements of direction for SSL and Live Guest Relocation. (2009)

    See also, (2010)

    Regards, Pam C (VM-retired)

    1. lynn

      From Annals Of Release No Software Before Its time

      posts from two years ago ... after new feature presentation at hilltopics meeting

      In the latter half of the 70s, the internal, virtual-machine-based HONE system (which provided online worldwide sales & marketing support) implemented loosely-coupled single-system-image support, with front-end load balancing and recovery (but not live migration/relocation). In the early 80s this was extended when the US HONE datacenter gained 2nd and 3rd replicated datacenters at geographic distances.

      However, in the late 60s, there were two CP67 commercial online timesharing service-bureau startups (a sort of precursor to modern-day cloud computing). By the mid-70s, at least one had migrated to a VM370 base and had made numerous enhancements, including single-system-image support, front-end load balancing and live guest migration (being able to transparently vary a processor complex offline for service/maintenance, with all the virtual guests migrated to other processors in the complex).

  32. pizzafritte

    Gather at the July VM Workshop 2011 - technical and affordable

    If you would like to gather with other z/VM and Linux on System z enthusiasts, consider attending the upcoming VM Workshop 2011. A very affordable $100 (yes, one hundred) registration fee for 2½ days of presentations, roundtables, and peer interaction.

    July 28-30, 2011 at Ohio State Univ. in Columbus, Ohio.

    If you are interested in other events, or you have mainframe VM events to be posted, send a VM web feedback form (click Contact-VM on the VM web page) so that events can be included on the events calendar.

    Regards, Pam C

    1. Grey Dave

      VM Workshop - Looks like fun

      Still working for a living, I can't jump on the nostalgia opportunity. It's been 45 years since my first interactive computing on M44/44X, almost 43 since I first logged on to CP-67/CMS. I shipped the first three releases of VM/370, explained shadow tables to the microcode people, and ran a couple of the Future Systems task forces. Enjoy.

  33. dlc.usa
    Thumb Up


    Dang. Seriously late to the party again. 65 comments and here I am.

    Great article, especially if you didn't know any of this before reading it.

    This is mostly about history. Today you can order a z114 for less than a hundred grand (USD), though a working environment costs at least one order of magnitude more. What will you get for that vis-à-vis other platforms? In short, your money's worth.

    z/VM can virtualize itself, as could VM/370.

    Those z cores can run 24/7 at 100% and PR/SM and CP enable that (yes, I'm ignoring spin cycles).

    My point? If you are in a position to check this out for the benefit of your employer and you choose not to because some college professor told you the mainframe is dead, you are not doing your job very well at all. That's it, plain and simple.

  34. Anonymous Coward


    I spent two years at IBM's Mohansic lab in Yorktown, working on the 360/67 duplex system. IBM was its own worst enemy: work began on coding TSS as a duplex system, and months went by before all that coding was scrapped and coding for a simplex system could reach a somewhat working state.

    IBM was able to deliver a small number of working systems to customers. By that time hardware was reaching the point of having a computer on your desk, and the need for a monster system like the 360/67 was dead.

    The computer room we worked in was approx 100' by 75' for just one system with its memory and I/O devices. The TSS project ran 24 hours a day, 7 days a week, with only Christmas Day and New Year's Day off. It was not unusual to see a programmer walk in at 2am in pajamas for her one-hour slot on the machine.

    A little more involved than writing an APK for Android. How times have changed my world. I still think a computer is not a computer unless you can stand inside it.


    1. lynn

      TSS/360 duplex

      Somebody recently quoted a Soviet press article about a US/Soviet track meet, where the Soviets came in 2nd and the US came in next to last.

      In the early 70s there was a TSS/360 duplex benchmark showing it ran 3.8 times faster than the same TSS/360 on a single processor. Somebody wrote it up as TSS/360 having far superior multiprocessor support (much superior to any other operating system in handling multiprocessors, i.e. any other system would at most get twice the throughput).

      It turns out that both TSS/360 benchmarks actually had much inferior throughput compared to CP67. The scenario was that the single-processor 360/67 had 1mbyte of storage, and the TSS/360 kernel was so bloated that only a small amount was left for application execution (so the single-processor benchmark was page thrashing). The two-processor 360/67 had 2mbytes of storage, which left enough for more efficient application execution (after what was taken by the bloated TSS/360 kernel), getting nearly four times the throughput.
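      The storage arithmetic behind that "superlinear" speedup is easy to make concrete. A tiny sketch – the kernel footprint below is a made-up illustrative figure, not a measured TSS/360 number – shows how a fixed-size resident kernel means doubling real storage more than doubles what's left for applications:

      ```python
      # Illustrative only: assume a fixed resident kernel footprint and see
      # what remains for applications as total real storage doubles.
      kernel_mb = 0.75   # assumed (hypothetical) kernel size

      app_mb = {total: total - kernel_mb for total in (1.0, 2.0)}
      for total, app in app_mb.items():
          print(f"{total} MB total -> {app:.2f} MB left for applications")

      # 1.0 MB total -> 0.25 MB left for applications
      # 2.0 MB total -> 1.25 MB left for applications
      ```

      Under these assumed figures, doubling storage quintuples application memory – which is why the duplex system stopped page-thrashing and looked 3.8 times faster, without any multiprocessor magic.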

This topic is closed for new posts.

Other stories you might like