Why won't you DIE? IBM's S/360 and its legacy at 50

IBM's System 360 mainframe, celebrating its 50th anniversary on Monday, was more than just another computer. The S/360 changed IBM just as it changed computing and the technology industry. The digital computers that were to become known as mainframes were already being sold by companies during the 1950s and 1960s - so the S …


This topic is closed for new posts.
  1. John Smith 19 Gold badge

    You say "cloud" I say mainframe. You say "browser" I say "dumb terminal."

    Which of course means that who runs the mainframe runs you.

    Something to think about.

    1. RobHib

      @John Smith 19 -- Re: You say "cloud" I say mainframe. You say "browser" I say "dumb terminal."

      "Which of course means that who runs the mainframe runs you."

      Perhaps, but not quite. We hacked our 360!

      Fellow students and I hacked the student batch process on our machine to overcome the restrictive limitations the uni placed on our use of it (this was long before the term 'hacker' came into use).

      (I've fond memories of being regularly thrown out of the punch card room at 23:00 when all went quiet. That 360 set me on one of my careers.)

    2. Anonymous Coward

      Re: You say "cloud" I say mainframe. You say "browser" I say "dumb terminal."

      True, IBM had the "cloud" in place in the 60s.

    3. Michael Wojcik Silver badge

      Re: You say "cloud" I say mainframe. You say "browser" I say "dumb terminal."

      Which of course means that who runs the mainframe runs you.

      Whereas with PCs, you're untouchable?

      Honestly, why do people still trot out these sophomoric oversimplifications?

  2. naylorjs

    S/360 I knew you well

    The S/390 name is a hint to its lineage: S/360 -> S/370 -> S/390 (I'm not sure what happened to the S/380). Having made a huge jump with the S/360, IBM tried to do the same thing in the 1970s with the Future Systems project. That turned out to be a huge flop: lots of money was spent on creating new ideas that would leapfrog the competition, but it ultimately failed. Some of the ideas emerged in the System/38 and then the original AS/400s, like having a query-able database as the file system rather than what we are used to now.

    The link to NASA with the S/360 is explicit in JES2 (Job Entry Subsystem 2), the element of the OS that controls batch jobs and the like. Messages from JES2 start with the prefix HASP, which stands for Houston Automatic Spooling Program.

    As a side note, CICS was developed at Hursley Park in Hampshire. It wasn't started there, though. CICS system messages start with DFH, which allegedly stands for Denver Foot Hills - a hint to its physical origins. IBM swapped the development sites for CICS and PL/1 long ago.

    I've not touched an IBM mainframe for nearly twenty years, and it worries me that I have this information still in my head. I need to lie down!

    1. Ross Nixon

      Re: S/360 I knew you well

      I have great memories of being a Computer Operator on a 360/40. They were amazingly capable and interesting machines (and peripherals).

    2. QuiteEvilGraham

      Re: S/360 I knew you well

      ESA is the bit that you are missing - the whole extended address thing, data spaces, hyperspaces and cross-memory extensions.

      Fantastic machines though - I learned everything I know about computing from Principles of Operation and the source code for VM/SP - they used to ship you all that, and send you the listings for everything else on microfiche. I almost feel sorry for the younger generations that they will never see a proper machine room with the ECL water-cooled monsters and attendant farms of DASD and tape drives. After the 9750's came along they sort of looked like very groovy American fridge-freezers.

      Mind you, I can get better mippage on my Thinkpad with Hercules than the 3090 I worked with back in the 80's, but I couldn't run a UK-wide distribution system, with thousands of concurrent users, on it.

      Nice article, BTW, and an upvote for the post mentioning The Mythical Man Month; utterly and reliably true.

      Happy birthday IBM Mainframe, and thanks for keeping me in gainful employment and beer for 30 years!

      1. Anonymous Coward

        Re: S/360 I knew you well

        I started programming on an IBM 360/67 and have programmed several IBM mainframe computers since. One of the reasons for the ability to handle large amounts of data is that these machines communicated with terminals in EBCDIC characters - IBM's 8-bit counterpart to ASCII. It took very few of these characters to drive the 3270 display terminals, while modern x86 computers use a graphical display and need a lot of data transmitted to paint a screen.

        I worked for a company that had an IBM 370/168 with VM running both OS and DOS workloads. We had over 1500 terminals connected to this mainframe across 4 states. IBM had envisioned VM/CMS as the way forward; CICS was only supposed to be a temporary solution for handling display terminals, but it became the mainstay in many shops. Our shop had over 50 3330 300MB disk drives online, with at least 15 tape units.

        These machines are in use today, in part, because the cost of converting to x86 is prohibitive. On the old 370, CICS screens were defined separately from the program. JCL (Job Control Language) was used to initiate jobs, but unlike modern batch files it would attach resources such as a disk drive or tape unit to the program. This is totally foreign to any modern OS - Linux or Unix can come close, but MS products are totally different.
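        As a rough illustration of the EBCDIC point (a hypothetical sketch; Python happens to ship codecs for the common EBCDIC code pages, cp037 being the US one):

```python
# Same text, two encodings: EBCDIC code page 037 vs ASCII.
# Only the Python standard library is used.
text = "HELLO 360"

ebcdic = text.encode("cp037")   # letters land in 0xC1-0xE9, digits in 0xF0-0xF9
ascii_ = text.encode("ascii")   # letters in 0x41-0x5A, digits in 0x30-0x39

print(ebcdic.hex())  # c8c5d3d3d640f3f6f0
print(ascii_.hex())  # 48454c4c4f20333630

# Round-trips cleanly, as a 3270 data stream would expect:
assert ebcdic.decode("cp037") == text
```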

    3. ItsNotMe

      Re: S/360 I knew you well

      "I've not touched an IBM mainframe for nearly twenty years, and it worries me that I have this information still in my head. I need to lie down!"

      "Business Application Programming"..."Basic Assembly Language"..."COBOL"...hand written Flowcharts...seems as though I have those same demons in my head.

    4. Stephen Channell

      Re: S/360 I knew you well

      S/380 was the "future systems program" that was cut down to the S/38 mini.

      HASP was the original "grid scheduler" in Houston, running on a dedicated mainframe and scheduling work to the other 23 mainframes under the bridge. I nearly wet myself with laughter reading DataSynapse documentation and their "invention" of a job control language. 40 years ago HASP was doing Map/Reduce to process data faster than a tape drive could handle.

      If we don't learn the lessons of history, we are destined to IEFBR14!

      1. Steve I

        Re: S/360 I knew you well

        IEFBR15, surely?

  3. Pete 2 Silver badge

    Come and look at this!

    As a senior IT bod said to me one time, when I was doing some work for a mobile phone outfit.

    "it's an IBM engineer getting his hands dirty".

    And so it was: a hardware guy, with his sleeves rolled up and grime on his hands, replacing a failed board in an IBM mainframe.

    The reason it was so noteworthy, even in the early 90's was because it was such a rare occurrence. It was probably one of the major selling points of IBM computers (the other one, with just as much traction, is the ability to do a fork-lift upgrade in a weekend and know it will work.) that they didn't blow a gasket if you looked at them wrong.

    The reliability and compatibility across ranges is why people choose this kit. It may be arcane, old-fashioned, expensive and untrendy - but it keeps on running.

    The other major legacy of OS/360 was, of course, The Mythical Man-Month, whose readership is still the most reliable way of telling the professional IT managers from the wannabes who only have buzzwords as a knowledge base.

    1. Amorous Cowherder

      Re: Come and look at this!

      They were bloody good guys from IBM!

      I started off working on mainframes around 1989, as a graveyard-shift "tape monkey" loading tapes for batch jobs. My first solo job was as a Unix admin on a set of RS/6000 boxes. I once blew out the firmware and a test box wouldn't boot. I called out an IBM engineer after I had completely "futzed" the box; he came out and spent about 2 hours with me, teaching me how to select and load the correct firmware. He then spent another 30 mins checking my production system with me, and even left me his phone number so I could call him directly if I needed help when I did the production box. I did the prod box with no issues because of the confidence I'd gained and the time he spent with me. Cheers!

    2. ItsNotMe

      Re: Come and look at this!

      "It was probably one of the major selling points of IBM computers (the other one, with just as much traction, is the ability to do a fork-lift upgrade in a weekend and know it will work.) that they didn't blow a gasket if you looked at them wrong."

      Maybe...maybe not.

      When I was in school for Business Applications Programming way back when...I was entering code on an S/370 for a large application I had written...and managed to lock it up so badly...the school had to send everyone home for the day, so the techs could figure out just what the heck I had done wrong.

      Took the operators a good part of the evening to sort it.

      First time anyone had ever managed to hose one they told me. Felt quite proud, actually. :-)

      1. Mark Cathcart

        Re: Come and look at this!

        I'm sure that's what you think, but the 360 was very simple to debug and problem-solve, and it had a simple button to reboot and clear all of memory. So it's not clear what you could have done that would have persisted across a reboot.

        Sure, you could have written on the 29MB hard drive, or over the stand-alone loader at the front of the tape if you were booting from tape... or got the cards out of sequence if booting from the card reader... but that was hardly the mainframe's fault...


        IBM Mainframe guy and Knight of VM, 1974-2004.

    3. G.Y.

      Re: Come and look at this!

      An ex-IBM guy told me this was the reasoning behind the coat-and-tie rule: if you see a man in a suit and tie, you figure he assumes the machine will work, and that he will not have to crawl on the floor fixing it.

  4. John Hughes

    16 bit byte?

    A typo for 6 bit? (E.G. ICT 1900)?

    "The initial 1900 range did not suffer from the many years of careful planning behind the IBM 360."

    -- Virgilio Pasquali

    1. David Beck

      Re: 16 bit byte?

      The typo must be fixed, the article says 6-bit now. The following is for those who have no idea what we are talking about.

      Generally, machines prior to the S/360 were 6-bit if character oriented or 36-bit if word oriented. The S/360 was the first IBM architecture (thank you Drs Brooks, Blaauw and Amdahl) to provide both data types with appropriate instructions, to include a "full" character set (256 characters instead of 64), and to provide a concise decimal format (2 digits in one character position instead of 1). 8 bits was chosen as the "character" length. It did mean a lot of Fortran code had to be reworked to deal with 32-bit single precision or 32-bit integers instead of the previous 36-bit.

      If you think the old ways are gone, have a look at the data formats for the Unisys 2200.
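      A sketch of that packed ("concise") decimal format in Python (a hypothetical illustration of the encoding, not IBM code): two BCD digits per byte, with the last nibble holding the sign, C for plus and D for minus:

```python
def pack_decimal(n: int) -> bytes:
    """Encode an int as S/360-style packed decimal (2 digits/byte, sign last)."""
    sign = 0xC if n >= 0 else 0xD
    digits = [int(d) for d in str(abs(n))]
    if len(digits) % 2 == 0:       # pad so digits + sign nibble fill whole bytes
        digits.insert(0, 0)
    nibbles = digits + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

def unpack_decimal(b: bytes) -> int:
    """Decode packed decimal back to an int."""
    nibbles = [n for byte in b for n in (byte >> 4, byte & 0xF)]
    *digits, sign = nibbles
    value = int("".join(map(str, digits)))
    return -value if sign == 0xD else value

print(pack_decimal(1234).hex())                 # 01234c
print(unpack_decimal(bytes.fromhex("01234c")))  # 1234
```

      So 1234 occupies three bytes rather than four character positions - the space saving the post refers to.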

      1. Bob Armstrong

        Re: 16 bit byte?

        One of the major design issues through the '60s and '70s was word size. Seymour Cray's CDCs were 60 bits, which has seductively many factors. But in the end, 2^n-bit words won out.

  5. John Hughes


    Virtualisation

    Came with the S/370, not the S/360, which didn't even have virtual memory.

    1. Steve Todd

      Re: Virtualisation

      The 360/168 had it, but it was a rare beast.

    2. Mike 140

      Re: Virtualisation

      Nope. CP/67 was the forerunner of IBM's VM. Ran on S/360

      1. Grumpy Guts

        Re: Virtualisation

        You're right. The 360/67 was the first VM - I had the privilege of trying it out a few times. It was a bit slow though. The first version of CP/67 only supported 2 terminals, as I recall... The VM capability was impressive. You could treat files as though they were in real memory - no explicit I/O necessary.

    3. David Beck

      Re: Virtualisation

      S/360 Model 67 running CP67 (CMS which became VM) or the Michigan Terminal System. The Model 67 was a Model 65 with a DAT box to support paging/segmentation but CP67 only ever supported paging (I think, it's been a few years).

      1. Steve Todd

        Re: Virtualisation

        The 360/168 had a proper MMU and thus supported virtual memory. I interviewed at Bradford university, where they had a 360/168 with which they were doing all sorts of things that IBM hadn't contemplated (like using conventional glass teletypes hooked to minicomputers to emulate the page-based, and more expensive, IBM terminals).

        I didn't get to use an IBM mainframe in anger until the 3090/600 was available (where DEC told the company that they'd need a 96 VAX cluster and IBM said that one 3090/600J would do the same task). At the time we were using VM/TSO and SQL/DS, and were hitting 16MB memory size limits.

        1. Peter Gathercole Silver badge

          Re: Virtualisation @Steve Todd

          I'm not sure that the 360/168 was a real model. The Wikipedia article does not think so either.

          As far as I recall, the only /168 model was the 370/168, one of which was at Newcastle University in the UK, serving other Universities in the north-east of the UK, including Durham (where I was) and Edinburgh.

          They also still had a 360/65, and one of the exercises we had to do was write some JCL in OS/360. The 370 ran MTS rather than an IBM OS.

          1. Steve Todd

            Re: Virtualisation @Peter Gathercole

            As usual Wikipedia isn't a comprehensive source. See for example

          2. henrydddd

            Re: Virtualisation @Steve Todd

            The 370-168 was real. I worked for GTEDS, which used one. We had to run VM because we had old DOS programs which ran under DUO. We evaluated IMS/DC, but it was DOA as far as being implemented. The 360-67 was a close imitator: it was a 65 plus a DAT (Dynamic Address Translation) box. The IBM System/38 was a dog that no one wanted. Funny thing is that it had a revolutionary approach, but I was part of a team that evaluated one, and it was a slow dog. We chose a Prime 550 instead. Do any of you remember much about Amdahl computers and Gene Amdahl? They produced a competitor to the 360.

  6. Anonymous Coward

    Not everything was invented for the S/360 or came from IBM. Disk storage dated from IBM's 305 REMAC - Random Access Method of Accounting and Control machine - in 1956. COBOL - a staple of mainframe applications - dated from 1959 and was based on the work of ENIAC and UNIVAC programmer Grace Hopper.

    That'd be Admiral Grace Hopper… it was in the US Navy that she became acquainted with the ways of computer programming.

    1. Admiral Grace Hopper

      Quite so.

    2. bob, mon!

      Also, it was RAMAC, not REMAC

      As for COBOL: Grace Hopper retired three times, and was brought back by the Navy twice, before they finally let her go as a Rear Admiral.

      1. Anonymous Coward

        Re: Also, it was RAMAC, not REMAC

        In fact Grace Hopper was one of the few people for whom what was normally a euphemism, "let her go", was strictly factual.

      2. FrankAlphaXII

        Re: Also, it was RAMAC, not REMAC

        You may want to remember that she wasn't Active Duty, and I don't believe she ever was; women couldn't hold Active Duty billets until the 70's. She was in the US Navy Reserve, which is a different component from the active-duty Navy, with different personnel policies. They use Reservists as full-timers an awful lot in that service branch: a bunch of the frigates and almost all of the few remaining non-Military Sealift Command auxiliaries are crewed by full-time USNR crews with Active-component officers. The entire US Department of the Navy (including the Marine Corps) also has a real penchant for letting officers retire and then bringing them back, just to let you know. They're better about preserving skills and institutional memory than most of the Army, and just about the entire Air Force, though.

        Knowing the Defense Department, she was probably brought back to active-duty status and then dropped back to Selected Reserve status more than twice; probably at least once every year that she wasn't in the Retired Reserve. With flag rank it is most probably different, and there is probably a difference between the way the Navy manages officers and the way the Army does, which is what I'm used to. Oh, and to make things complex, we also have full-timers in the Army Reserve; we call them AGR. They're mostly recruiters and career counselors, and some other niche functions are often reservists too, usually because the Combat Arms that run the Regular Army don't especially see the need for them. If they could do the same to Signal and MI and get away with it, believe me, they would.

        I'm a reservist, and every year when I go back on Active Duty for Annual Training, or get deployed, I technically get "let go" when my AD period or deployment ends: I go back into the Selected Reserve at part-time status and get another DD-214 for my trouble, with my extra two weeks (or year to 18 months) calculated in days and added to my days of service (effectively my retirement tracker at this point). That continues until the contract the US Government has with me ends and I choose not to renew it, take my pension and transfer into the Retired Reserve; or until Human Resources Command chooses not to allow me to renew my contract, if I really bomb an evaluation or piss someone off, don't get promoted, and am dismissed or retired early.

  7. Chris Miller


    Maintenance

    This was a big factor in the profitability of mainframes. There was no such thing as an 'industry-standard' interface - either physical or logical. If you needed to replace a memory module or disk drive, you had no option* but to buy a new one from IBM and pay one of their engineers to install it (and your system would probably be 'down' for as long as this operation took). So nearly everyone took out a maintenance contract, which could easily run to an annual 10-20% of the list price. Purchase prices could be heavily discounted (depending on how desperate your salesperson was) - maintenance charges almost never were.

    * There actually were a few IBM 'plug-compatible' manufacturers - Amdahl and Fujitsu. But even then you couldn't mix and match components - you could only buy a complete system from Amdahl, and then pay their maintenance charges. And since IBM had total control over the interface specs and could change them at will in new models, PCMs were generally playing catch-up.

    1. David Beck

      Re: Maintenance

      So true re the service costs, but "Field Engineering" was a profit centre, and a big one at that. Not true regarding having to buy "complete" systems for compatibility, though. In the 70's I had a room full of CDC disks on a Model 40, bought because they were cheaper and had a faster linear-motor positioner (the thing that moved the heads), while the real 2311's used hydraulic positioners. It was a bad day when there was a puddle of oil under the 2311.

      1. Chris Miller

        Re: Maintenance

        That was brave! I genuinely never heard of anyone doing that, but then I never moved in IBM circles.

        1. tom dial Silver badge

          Re: Maintenance

          Done with some frequency. In the DoD agency where I worked we had mostly Memorex disks as I remember it, along with various non-IBM as well as IBM tape drives, and later got an STK tape library. Occasionally there were reports of problems where the different manufacturers' CEs would try to shift blame before getting down to the fix.

          I particularly remember rooting around in a Syncsort core dump that ran to a couple of cubic feet from a problem eventually tracked down to firmware in a Memorex controller. This highlighted the enormous I/O capacity of these systems, something that seems to have been overlooked in the article. The dump showed mainly long sequences of chained channel programs that allowed the mainframe to transfer huge amounts of data by executing a single instruction to the channel processors, and perform other possibly useful work while awaiting completion of the asynchronous I/O.

          1. David Beck

            Re: Maintenance

            It all comes flooding back.

            A long CCW chain, some of which are the equivalent of NOPs in channel talk (where did I put that green card?), with a TIC (Transfer In Channel, think branch) at the bottom of the chain back to the top. The idea was to take an interrupt (PCI) on some CCW in the chain and get back in time to convert the NOPs to real CCWs, continuing the chain without ending it. Certainly that's the way the page pool was handled in CP67.

            And I too remember the dumps arriving on trolleys. There was software to analyse a dump tape, but that name is now long gone (as is the origin of most of the problems in the dumps). Those were the days when I could not just add and subtract in hex but multiply as well.
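            For the curious, the shape of such a chain can be sketched in Python (the 8-byte CCW layout and the NOP/TIC command codes and CC/PCI flag bits follow the S/360 channel architecture; the chain address and the helper function are invented for illustration):

```python
import struct

# S/360 CCW, 8 bytes: command(1), data address(3), flags(1), unused(1), count(2).
NOP, TIC = 0x03, 0x08        # control no-op and Transfer In Channel ("branch")
CC, PCI = 0x40, 0x08         # command-chaining and program-controlled-interrupt flags

def ccw(cmd: int, addr: int, flags: int, count: int) -> bytes:
    """Assemble one channel command word, big-endian as on the S/360."""
    return struct.pack(">B3sBBH", cmd, addr.to_bytes(3, "big"), flags, 0, count)

CHAIN_BASE = 0x1000          # storage address of the chain (made up)
chain = [
    ccw(NOP, 0, CC | PCI, 1),   # placeholder; PCI interrupts the CPU so software
    ccw(NOP, 0, CC, 1),         # can rewrite the NOPs as real CCWs in flight
    ccw(TIC, CHAIN_BASE, 0, 0), # loop back to the top without ending the chain
]
program = b"".join(chain)
assert len(program) == 24       # three doubleword CCWs
```

            The channel walks this list on its own; the CPU issues one start-I/O and is free until the PCI arrives - the huge I/O capacity the post above describes.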

    2. John Smith 19 Gold badge

      @Chris Miller

      "This was a big factor in the profitability of mainframes. There was no such thing as an 'industry-standard' interface - either physical or logical. If you needed to replace a memory module or disk drive, you had no option* but to buy a new one from IBM and pay one of their engineers to install it (and your system would probably be 'down' for as long as this operation took). So nearly everyone took out a maintenance contract, which could easily run to an annual 10-20% of the list price. Purchase prices could be heavily discounted (depending on how desperate your salesperson was) - maintenance charges almost never were."


      Back in the day, one of the scheduler software suppliers made a shedload of money (the SW was $250k a pop) by making new jobs start a lot faster, letting shops put back their memory upgrades by a year or two.

      Mainframe memory was expensive.

      Now owned by CA (along with many things mainframe) and so probably gone to s**t.

    3. Mike Pellatt

      Re: Maintenance

      @ChrisMiller - The IBM I/O channel was so well specified that it was pretty much a standard. Look at what the Systems Concepts guys did: a DEC-10 I/O-and-memory-bus to IBM channel converter. We had one of those in the Imperial HENP group so we could use IBM 6250 bpi drives, as DEC were late to market with them. And the DEC 1600 bpi drives were horribly unreliable. The IBM drives were awesome. It was always amusing explaining to IBM techs why they couldn't run online diags, on the rare occasions when the drives needed fixing.

      1. henrydddd

        Re: Maintenance

        Any of you guys ever play around with EXCP in BAL? I had to maintain a few systems that used it because a non-standard tape unit was involved.

  8. Peter Simpson 1

    The Mythical Man-Month

    Fred Brooks' seminal work on the management of large software projects was written after he managed the design of OS/360. If you can get past the mentions of secretaries, typed meeting notes and keypunches, it's required reading for anyone who manages a software project. Come to think of it... *any* engineering project. I've recommended it to several people and been thanked for it.

    // Real Computers have switches and lights...

    1. Madeye

      Re: The Mythical Man-Month

      The key concepts of this book are as relevant today as they were back in the 60s and 70s - it is still oft quoted ("there are no silver bullets" being one I've heard recently). Unfortunately fewer and fewer people have heard of this book these days and even fewer have read it, even in project management circles.

      1. Yet Another Anonymous coward Silver badge

        Re: The Mythical Man-Month

        Surely "there are no silver bullets" doesn't apply anymore now that we have the cloud and web and hadoop and node.js ?

        1. Lapun Mankimasta

          Re: The Mythical Man-Month

          Silver bullets don't kill managers, only werewolves, vampires and the like. You'd still have managers even with the cloud, etc.

      2. Anonymous Coward

        Re: The Mythical Man-Month

        Indeed -- I usually re-read the anniversary edition once a year or so. (Amusingly, there is a PM book out called "The Silver Bullet" that is not bad but won't slay the werewolf.)

  9. robin48gx


    The first to use transistors instead of valves, and a binary front panel.

    24 x 24-bit multiply too, and if you paid extra you got FORTRAN (I only used assembler on it though).

  10. WatAWorld

    Was IBM ever cheaper?

    I've been in IT since the 1970s.

    My understanding from the guys who were old timers when I started was that the big thing with the 360 was the standardized op codes: they would remain the same from model to model, with enhancements, but never would an op code be withdrawn.

    The beauty of the IBM S/360 and S/370 was that you had model independence. The promise was made, and the promise was kept, that after re-writing your programs in BAL (the 360's Basic Assembler Language) you'd never have to re-code your assembler programs ever again.

    Also, the relocating loader and method of link editing meant you didn't have to re-assemble programs to run them on a different computer. Either they would simply run as is, or they would run after being re-linked. (When I started, linking might take 5 minutes, where re-assembling might take 4 hours, for one program. I seem to recall talk of assemblies taking all day in the 1960s.)

    I wasn't there in the 1950s and 60s, but I don't recall anyone ever boasting about how 360s or 370s were cheaper than competitors.

    IBM products were always the most expensive, easily the most expensive, at least in Canada.

    But maybe in the UK it was different. After all, the UK had its own native computer manufacturers that IBM had to squeeze out, despite patriotism still being a thing in business at the time.
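    The relocating-loader idea described above is easy to sketch in Python (a toy model with an invented object format, not the real S/360 object-deck layout): the object code carries a list of offsets whose contents the loader rebases by the load address:

```python
def relocate(code: list[int], reloc_offsets: list[int], load_base: int) -> list[int]:
    """Toy relocating loader: add the load address to each word that
    the relocation list marks as holding an address."""
    loaded = code.copy()
    for off in reloc_offsets:
        loaded[off] += load_base
    return loaded

# "Program" whose words at offsets 1 and 3 hold addresses within itself;
# the same deck can be loaded anywhere in storage without re-assembly.
code = [0x47, 0x0004, 0x58, 0x0008]
print([hex(w) for w in relocate(code, [1, 3], 0x2000)])
# ['0x47', '0x2004', '0x58', '0x2008']
```

    Re-linking only has to redo this kind of address fix-up, which is why it took minutes where a full re-assembly took hours.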

    1. Roland6 Silver badge

      Re: Was IBM ever cheaper?

      Good question, particularly as we now have several decades of experience of 'cheap' computing and the current issue of vendor-forced migration from Windows XP.

  11. PyLETS

    Cut my programming teeth on S/390 TSO architecture

    We were developing CAD/CAM programs in this environment starting in the early eighties, because it's what was available then, based on the use of this system for stock control in a large electronics manufacturing operation. We fairly soon moved this Fortran code onto smaller machines: DEC VAX minicomputers and early Apollo workstations. We even had an early IBM PC in the development lab, but this was more a curiosity than something we could do much real work on initially. The Unix-based Apollo and early Sun workstations were much closer to later PCs, once those acquired similar amounts of memory, X Windows-like GUIs, more respectable graphics and storage capabilities, and multi-user operating systems.

  12. Gordon 10 Silver badge

    Ahh S/360 I knew thee well

    Cut my programming teeth on OS/390 assembler (TPF) at Galileo - one of Amadeus' competitors.

    I interviewed for Amadeus's initial project for moving off of S/390 in 1999 and it had been planned for at least a year or 2 before that - now that was a long term project!

    1. David Beck

      Re: Ahh S/360 I knew thee well

      There are people who worked on Galileo still alive? And ACP/TPF still lives, as z/TPF? I remember a headhunter chasing me in the early 80's for a job in Oz: Qantas looking for ACP/TPF coders, $80k US. Very tempting.

      You can do everything in 2k segments of BAL.

      1. yoganmahew

        Re: Ahh S/360 I knew thee well

        Likewise started at Galileo and still programming z/series mainframes.

        Amadeus have been saying for years that they don't do mainframe, so I'm a little surprised that they still are :-/

        One quibble with the article: "The mainframe is expensive and, at its core, it is also proprietary." The mainframe is not expensive per transaction, and most hardware is proprietary - ask Mr Intel to tell you how his chips work and he'll give you a big PFO. Most bits of the mainframe are open systems (the source is available to registered users), OSA cards being one of the few exceptions I'm aware of.

      2. Mark Cathcart

        Re: Ahh S/360 I knew thee well

        "David Beck" - Yes of course there are people still alive, and some of us are still at work and less than 60 that worked on Galileo.

  13. No, I will not fix your computer

    Too early for a Godwin?

    I wonder why it is that many articles and discussions about the history of IBM (and mainframes) avoid the subject of their part in the holocaust?

    While it's true they deny awareness of the use of their counting machines, it's a matter of record that they supplied the punch-card census machines (mainly through Dehomag, the IBM subsidiary in Germany, and Watson Business Machines in the US).

    There's the financial impact on IBM: they made a lot of money from Nazi Germany before and during WW2. Then there's the technological importance of drawing all information to a central place for processing (which is, in essence, "mainframe" technology).

    Surely you can discuss these two crucially important historical milestones without implicitly approving of them. Or is it really that insignificant? Or is the subject avoided purely because of an "ooh, we'd better not discuss that" mentality?

    1. Anonymous Coward

      Re: Too early for a Godwin?

      "I wonder why it is that many articles and discussions about the history of IBM (and mainframes) avoid the subject of their part in the holocaust?"

      I wonder why it is that many articles and discussions about the history of Boeing (and long-range passenger travel) avoid the subject of their part in burning 100,000 civilians to death in Hiroshima and Nagasaki simply to intimidate Stalin?

      Because it's not relevant? Many, many US companies made money selling stuff to the Nazis (the US being the only non-Axis country to increase exports to Nazi Germany); Norway sold them iron ore which was used to build tanks.

      1. Madeye

        Re: Too early for a Godwin?

        The key difference between IBM and Norway is that the Norwegian iron ore was, for the most part, used to annihilate combatants. The IBM/Dehomag kit was used to tabulate civilians with a view to calculating when their usefulness to the Reich had expired. The book "IBM and the Holocaust" by Edwin Black makes the case that the Holocaust would not have been possible on such a large scale without IBM tabulating machines.

        I take your point that an article on the S/360 may not be the place to raise this issue, unless you were to take the view that the later mainframes were a derivation of the earlier tabulating machines. They did largely the same thing, after all, just substantially faster.

        1. Destroy All Monsters Silver badge

          Excel enables the next Holocaust!

          The book "IBM and the Holocaust" by Edwin Black makes the case that the Holocaust would not have been possible on such a large scale without IBM tabulating machines.

          So how did Stalin and Mao do better without the devil punch card machines?

          1. Madeye

            Re: Excel enables the next Holocaust!


            You are effectively telling me that size matters, such that Mao and Stalin's purges are more noteworthy as they killed more people. Leaving aside that this is highly debatable, it is a question of efficiency (interesting use of the word "better" btw ;-) )

            Suppose we asked IBM whether it would prefer to be the largest tech company in the world or the most efficient. Even pre-'93 I expect the answer to that would be "most efficient", because efficiency will lead to growth (well, unless your competitors all club together and start shooting at you, as happened to the Germans).

            In truth, the situations are different. Neither Stalin nor Mao were facing a war on 2 fronts at the times of their worst excesses. They had no need for the people they purged. On the contrary, the Nazis needed to make efficient use of the workers in the death camps and they made a telling contribution to the war effort. How much sooner would the war have ended had the Nazis not had access to IBM tabulating machines?

            Important though this is, is it relevant to this thread? Possibly. Is it relevant to this site? Certainly. Who were operating the machines? I was chilled to the bone when I understood how the IT consultants of their time were so complicit in the deaths of millions. Previously I had always considered my profession to be mostly harmless.

            1. Anonymous Coward
              Anonymous Coward

              Re: Excel enables the next Holocaust!

              In fact, as Stalin discovered in 1941, he did need the people he purged, the brightest in the command of the Red Army. Oops. In WW2, Stalin did not so much commit mass murder as have an army with very high casualty rates because of the poor quality of the remaining leadership. It took until 1944 for the Red Army to have battle-hardened generals again, with the notable exception of Chuikov (who was a military near-genius; his tactics at Stalingrad were superb and he thoroughly deserved to take the German surrender... however, I digress)

              Mao's death rate was due to simple incompetence; the biggest disaster being the Great Leap Forward, when the transformation of agriculture took place without anybody considering what would happen to the rural farmers. Stalin's was similar. The high death rate of Russians and Chinese in WW2 was at the hands of the Germans and Japanese. No computers necessary.

              We are fortunate too that Hitler made very inefficient use of slave labour, allowing a high death rate and ensuring that nobody ever got to be any good at their jobs. More intelligent bureaucrats who suggested using technicians in technical jobs were simply ignored by the SS. I suggest that the tabulating machines of the day were of little use to the Germans, not because they were themselves defective, but because Nazi "philosophy" meant that they could never be used in an efficient manner.

              Fortunate indeed for the rest of us.

              1. Vociferous

                Re: Excel enables the next Holocaust!

                > Mao's death rate was due to simple incompetence

                That's being too kind. The ruling junta knew millions would die, they simply didn't care: the individual was worthless, only the collective mattered. Insofar as they cared about the suffering and death at all, it was in a "one must crack eggs to make an omelet" way.

                > Nazi "philosophy" meant that they could never be used in an efficient manner.

                There were conflicting goals wrt the German use of slave labor: to produce products and services, and to exterminate undesirable humans.

              2. Madeye

                Re: Excel enables the next Holocaust!


                While I can't argue with your observations on Mao and Stalin, I do question your last paragraph.

                The Germans took the personal details of all those interned in the death camps and recorded them on the punch cards. The details recorded fell into 3 broad categories: their undesirability (Jewish, Communist, agitator etc), their useful skills (machinist, baker etc) and their current state (sick, unproductive etc)

                They would use the tabulating machines to run queries such as "find me the most undesirable people without currently useful skills who may well die in the next 3 months". Those returned by the queries would be lucky to see out the day.

                I would say the Nazis were far from inefficient at identifying the individuals with whom they could most easily dispense. You are, however, right in that these skilled workers could have been far better utilised by a non-genocidal regime.

          2. G.Y.

            Re: Excel enables the next Holocaust!

            Stalin DID have punch-card machines to run the purges on -- this is one of the first items sold after the embargo on the USSR disappeared.

          3. No, I will not fix your computer

            Re: Excel enables the next Holocaust!

            >>So how did Stalin and Mao do better without the devil punch card machines?

            Stalin was "killing" from 1927 - 1953 (26 years) as opposed to WW2 which was broadly 6 years.

            The majority of Stalin's targeted killing was regional (people on specific land) and very easy to identify.

            Mao was a similar story, there was no specific "target", just people on land.

            I'm not sure whether you intended it or not, but you're emphasising the point of my post: the machines IBM supplied enabled accurate targeting and filtering of people with specific traits who were integrated into an existing community - unlike Mao and Stalin. If you honestly don't see the difference between the implementation of targeting sectors of society using technology, and states that treated all their people as animals, then you have no appreciation for history.

          4. Acme Fixer

            Re: Excel enables the next Holocaust!

            I once read about the bodies exhumed being identified as the Czar's family. Also something about Stalin saying once they're dead they're no problem. I once read there were four times as many Russians killed as there were Holocaust victims. That's 25 million people. So it looks like Stalin had too many problems to be solved by a punch card. Sorry for being so OT.

      2. Lars Silver badge

        Re: Too early for a Godwin?

        "Norway sold them iron ore". No that was Sweden, mostly shipped from Narvik in Norway though. Norway was occupied by Germany and to use "sell" in a situation like that would be silly.

      3. No, I will not fix your computer

        Re: Too early for a Godwin?

        @Robert Long 1

        You seem to be conflating using "an aircraft" with "machines specifically designed to enable the final solution".

        Don't get me wrong, I do understand what you're saying: loads of people traded with the Nazis before, during, between and after the wars (the Bush family made their fortune from trading war bonds and other financial instruments; the presidential race may not have been possible without it). But there's a huge difference (or possibly a fine line, depending how you look at it) between war profiteering and explicitly supplying a ground-breaking technology specifically designed to enable extermination of sectors of society.

        Put another way, if this technology (on which IBM is founded) had been used for constructive social issues, say a national insurance or healthcare system, don't you think that IBM would have held it up as pioneering tech? I'm not even talking about the relative rights or wrongs of the system, merely the fact that it's not discussed. Because technologically it was an achievement, on which IBM is founded: it's not just the money they made, it's the central database, standard interfaces, correlated data, centralised de-duping, automatic data processing - all those properties that define a mainframe.

        I've got absolutely no axe to grind with respect to the American involvement in war, I just find it interesting that the subject is avoided, and given the impact of IBM and the concepts on which mainframes are based, it seems to me relevant history.

        1. Anonymous Coward
          Anonymous Coward

          Re: Too early for a Godwin?

          "national insurance or healthcare system, don't you think that IBM would have held it up as a pioneering tech?"

          Mainframes were, and still are, used for all of those systems. This is WWII we are talking about. They were not selling mainframes in the 1930s-40s. They were selling tabulation machines. Nothing to do with the mainframe architecture.

          "explicitly supplying a ground breaking technology specifically designed to enable extermination of sectors of society."

          It wasn't specifically designed for extermination, any more than Excel is specifically designed to count dead bodies. It can be used for that purpose, but hardly what IBM/Microsoft had in mind.... What IBM had in mind, both in the US, Germany, and everywhere else, was to use tabulation to provide census information. That was the original purpose of the tabulator. It was created for the US census. The Nazis chose to use their census information and further applications created on the tabulators for evil purposes, but you can't blame IBM for what they chose to do with information. There is nothing inherently malicious about a tabulator or computer. This isn't the case of a gun manufacturer providing guns to a sociopath and then claiming they didn't know what was going to happen. This is the case of a hammer manufacturer supplying hammers to someone assuming they were going to use those hammers for the same things everyone uses hammers for and then that person turning out to be a sociopath who uses hammers to kill people.

      4. Acme Fixer
        Thumb Down

        Re: Too early for a Godwin?

        I don't see the relevance of the airplanes in the WW II effort to The Holocaust. I suppose the next thing you'll come up with is that the Colossus was responsible for millions of deaths. Duh! Besides, there were many more people killed in the fire bombings of Tokyo and Dresden than there were people killed in Nagasaki and Hiroshima.

    2. disgruntled yank

      Re: Too early for a Godwin?

      Considering that a) S/360 development started a decade and a half after V-E day, and b) somehow the author doesn't get around to mentioning Thomas Watson, Sr., yes, too early. And since NASA is mentioned, why don't we discuss the folks brought over from Peenemunde?

      And I think that ca. 1945 the central place to which information was drawn for processing was generally called a "file cabinet".

    3. Anonymous Coward
      Anonymous Coward

      Re: Too early for a Godwin?

      Hundreds of international companies sold the Nazis goods prior to WWII. IBM sold the Nazis tabulating equipment prior to the start of WWII, but they sold everyone tabulating equipment. IBM was the dominant force in data processing. It is like saying Microsoft is complicit in the Darfur genocide because they used Windows and Outlook to communicate or consolidate information. For all I know they did, but that doesn't say anything about Microsoft other than they sell software which people can use to accomplish their good or evil aims. The blame lies not with the computer or the maker of the computer, but the people using the computer.

      1. david 12 Silver badge

        Re: Too early for a Godwin?

        IBM sold to the Nazis DURING WWII. They weren't 'on the American side', they were on both sides of that war. They didn't just let the German division work for the Germans: they oversaw the German division, and sold required supplies to it.

        How did they get away with it? Well, apart from the corruption of the American political process, after the war the American government hired German scientists and engineers, and traitors like IBM, to help fight the emerging cold war.

      2. Acme Fixer

        Re: Too early for a Godwin?

        Microsoft doesn't sell software, they sell a license to use their software. Wow, think about that one for a while!

    4. Starkadder

      Re: Too early for a Godwin?

      This article is not about the history of IBM but about the S/360 and its descendants. These machines did not appear until twenty years after the end of WW2. Axe grinders should post their complaints elsewhere.

      1. Anonymous Coward
        Anonymous Coward

        Re: Too early for a Godwin?

        Actually, Apple and Microsoft windows Vista caused the Holocaust.

  14. JPL

    The Naming of Parts

    I believe the S/360 was named after the number of operators required to keep it running.

    1. Anonymous IV

      Re: The Naming of Parts

      On the contrary; the /360 referred to 360° of coverage! The IBM reps had a lot of trouble trying to explain /370 when that series got introduced...

      1. Michael Wojcik Silver badge

        Re: The Naming of Parts

        On the contrary...

        HTH. HAND.

  15. Miss Config

    "On the latter, a total of five S/360s had helped run the Apollo space programme, with one of IBM's mainframes being used to calculate the data for the return flight of Neil Armstrong, Buzz Aldrin and Michael Collins - the team that put boots on the Moon for the first time."

    But the model they used was NOT the most powerful available

    at the time and even as Apollo 11 was waiting for launch

    NASA were terrified that somebody would actually ask them about that.

    Actually there was a good reason for that attitude.

    In the space programme, where reliable systems are literally

    a matter of life and death, what counts is reliability

    rather than being 'technically advanced'.

    And reliability can only be tested the hard way over time,

    by which stage there is something more 'advanced' somewhere.

    1. Tom 13

      Re: what counts is reliability

      Yep. And even with IBM guaranteeing the backward compatibility, at NASA in the 1960s I'd have hated to do the compatibility cross-checking to certify a new mainframe just to have the latest and greatest gizmo.

  16. MrScott123

    Why won't the mainframe die?

    Why won't the mainframe die? It's because it works! Having been involved in mainframes for decades and then also working in the PC area, I can say the mainframe world is somewhat organized and has some sense, where the PC world is a nightmare. Ever hear of a virus on a mainframe? How about malware? Both created and prevalent on PCs but not the mainframe. I recall Bill Gates saying that "We won't make the same mistakes that the mainframe has done" but alas, the PC world made the same mistakes, and more of them.

    1. Yet Another Anonymous coward Silver badge

      Re: Why won't the mainframe die?

      Put it this way - nobody is running around in a panic wondering how they are going to migrate 1000s of users off a mainframe platform they only installed 10 years ago.

    2. Anonymous Coward
      Anonymous Coward

      Re: Why won't the mainframe die?

      Oh, please! I may have even written one or two myself.

      1. Steve I

        Re: Why won't the mainframe die?

        "I may have even written one or two myself." One or two what? Mainframe viruses? No - you haven't.

    3. Michael Wojcik Silver badge

      Re: Why won't the mainframe die?

      How about malware? Both created and prevalent on PCs but not the mainframe.


      Those who forget history &c.

      This belief that malware is impossible under the various IBM mainframe OSes is simply ignorance generalized from the paucity of evidence to the contrary. But - as the Reg Commenter Chorus is so fond of reminding us - absence of evidence is not evidence of absence.1

      In fact it's not difficult to create malware of various sorts in various IBM mainframe environments. Take CICS - prior to the introduction of Storage Protection in CICS/ESA 3.3, all transactions in a given CICS region shared the same address space and could stomp on one another quite happily. This was rarely a problem because mainframe sysadmins were picky about what programs they'd let be installed on their machines, and those programs were generally written in HLLs that offered amazing features like array bounds checking. But a malicious CICS application, running in a region without SP, can do all sorts of nasty things.

      And it's still necessary to get permissions right in mainframe environments. If you're not careful with terminal permissions, a malicious app can issue 3270 Receive Buffer commands and screen-scrape data from privileged apps. And so forth.

      Mostly there hasn't been a lot of IBM mainframe malware because it has been too expensive for researchers (of whatever shade of hat) to play with,2 and because there's a correspondingly smaller hacker community to help, and because it's not sexy like, say, breaking a major website.

      1More precisely, it's not proof of absence. It is evidence, as any Perfect Bayesian Reasoner could tell you; it's simply weak evidence, and determining its probability with any useful accuracy so you can adjust your model is difficult.

      2That situation has changed, for older mainframe OSes, with Hercules.
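      To make the shared-address-space point concrete, here is a toy Python sketch (not CICS, obviously - just an illustration of two "transactions" sharing one unprotected region of storage):

```python
# Toy model: two transactions share one address space with no storage
# protection -- nothing stops B scribbling over A's data.
region = bytearray(16)                 # the shared "region"
region[0:4] = b"ACCT"                  # transaction A stores its record
region[2:6] = b"XXXX"                  # buggy/malicious transaction B writes
                                       # outside its lane, stomping A's bytes
assert bytes(region[0:4]) == b"ACXX"   # A's record is silently corrupted
```

      With storage protection (or an HLL's bounds checking) the second write would fault instead of silently corrupting another transaction's data - which is the point being made above.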

  17. John Tserkezis

    I'm a spring chicken.

    Only experience here is a regular x86 rack mount server that emulates the old OS and software, along with a copy protection USB dongle. The original terminal lines were translated to something that could be transported over ethernet, and each client had an ethernet emulation terminal client that mimicked the old green screen terminals. Except in this case, the colour was customisable. I would hope so!

    I had to continually set the buggers up because the users were cutting and pasting data between the terminal window and Excel.

    The whole time I'm thinking "there MUST be better modern software that isn't such a pain in the arse".

    1. JLH

      Re: I'm a spring chicken.

      I used a real IBM 3090 mainframe when I was a PhD student.

      Real 3270 terminals with the proper spring action keyboard and a twinax connection from the office down to the server room.

      When we got an Ethernet connection for that machine guess what arrived?

      An IBM PC in a box, complete with ESCON (?) channel adapter card and huge cables, plus an Ethernet card. Yep, the mainframe used a lowly PC as an ethernet 'bridge'.

      1. Acme Fixer

        Re: I'm a spring chicken.

        The line was RG-62/u 93 ohm coaxial cable, not twinax. I pulled hundreds of meters of it.

  18. Anonymous IV

    No mention of microcode?

    Unless I missed it, there was no reference to microcode which was specific to each individual model of the S/360 and S/370 ranges, at least, and provided the 'common interface' for IBM Assembler op-codes. It is the rough equivalent of PC firmware. It was documented in thick A3 black folders held in two-layer trolleys (most of which held circuit diagrams, and other engineering amusements), and was interesting to read (if not understand). There you could see that the IBM Assembler op-codes each translated into tens or hundreds of microcode machine instructions. Even 0700, NO-OP, got expanded into surprisingly many machine instructions.

    1. bob, mon!

      Re: No mention of microcode?

      I first met microcode by writing a routine to do addition for my company's s/370. Oddly, they wouldn't let me try it out on the production system :-)

      1. John Smith 19 Gold badge

        Re: No mention of microcode?

        "I first met microcode by writing a routine to do addition for my company's s/370. Oddly, they wouldn't let me try it out on the production system :-)"

        I did not know the microcode store was writeable.

        Microcode was a core (no pun intended) feature of the S/360/370/390/4300/z architecture.

        It allowed IBM to trade actual hardware (e.g. a full-spec hardware multiplier) for partial (part-word or single-word) or completely software-based (microcode loop) implementations, depending on the machine's spec (and the customer's pocket), without needing a recompile, as at the assembler level it would be the same instruction.

        I'd guess hacking the microcode would call for exceptional bravery on a production machine.
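        As an illustration of that trade-off (a hypothetical sketch, not real S/360 microcode): a low-end model without a full hardware multiplier could implement the same multiply instruction as a shift-and-add microcode loop, invisible to the assembler programmer:

```python
def soft_multiply(a: int, b: int, word_bits: int = 32) -> int:
    """Shift-and-add multiply -- the kind of loop microcode might run on
    a machine lacking a full hardware multiplier. Illustrative only."""
    product = 0
    for bit in range(word_bits):
        if (b >> bit) & 1:          # for each set bit of the multiplier...
            product += a << bit     # ...add a shifted copy of the multiplicand
    return product & ((1 << (2 * word_bits)) - 1)  # double-width result
```

        Same result as a hardware multiplier, just many micro-steps slower - which is exactly the spec/price knob described above.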

        1. Anonymous Coward
          Anonymous Coward

          Re: No mention of microcode? - floppy disk

          Someone will doubtless correct me, but as I understood it the floppy was invented as a way of loading the microcode into the mainframe CPU.

          1. Where not exists

            Re: No mention of microcode? - floppy disk

            I can't say whether the floppy was developed for this purpose or not, but I certainly recall performing IMLs using the built-in (5 1/4") floppy drives in IBM boxes.

          2. Anonymous Coward
            Anonymous Coward

            Re: No mention of microcode? - floppy disk

            MTARS (I don't even remember what it stands for) for the Honeywell DPS/8x, 8xxx and 90 hardware loaded from floppy of course; by the time of the /8000 and /90 models, they were on 5 1/4 inch instead of 8 inch.

            1. Acme Fixer

              Re: No mention of microcode? - floppy disk

              Might be maintenance test and repair system? I remember seeing GHAT tapes on our Bull FE's desk. We started out with a level 66. One of our VPs told me that our Computer Services director was always asking for "More core, more core!" We only had 1.5 megabytes.

        2. Grumpy Guts

          Re: No mention of microcode?

          I did not know the microcode store was writeable.

          It definitely wasn't. It allowed the possibility for the underlying hardware to support different instruction sets. A machine supporting APL natively was considered at one point - I think one was built in IBM Research - but it was never implemented as a product.

          1. Michael Wojcik Silver badge

            Re: No mention of microcode?

            A machine supporting APL natively was considered at one point - I think one was built in IBM Research - but it was never implemented as a product.

            I'm not sure whether you mean "such a machine in the 360 line" or "any such machine". IBM did sell an APL computer - the 5100.

  19. Kubla Cant

    A thousand engineers toiled to eventually produce one million lines of code.

    1,000 lines of code per engineer, then. It doesn't seem much.

    I honestly don't know anything about this kind of programming, but I assume whatever code they were using would generate an op-code per line, like assembler. I'd guess that you'd be lucky to send a line to a printer with 1,000 op-codes.

    Can anyone explain?

    1. Gordon 10 Silver badge

      The above point on Micro-codes probably explains most of it...

    2. tom dial Silver badge

      The rule of thumb in use (from Brooks's Mythical Man Month, as I remember) is around 5 debugged lines of code per programmer per day, pretty much irrespective of the language. And although the end code might have been a million lines, some of it probably needed to be written several times: another memorable Brooks item about large programming projects is "plan to throw one away, because you will."
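      Kubla Cant's figures and the Brooks rule of thumb actually line up; a quick back-of-envelope using the numbers above:

```python
LINES_TOTAL = 1_000_000     # total code, per the article
ENGINEERS = 1_000
LINES_PER_DAY = 5           # Brooks's debugged-lines-per-day rule of thumb

lines_each = LINES_TOTAL // ENGINEERS       # 1,000 lines per engineer
days_each = lines_each / LINES_PER_DAY      # 200 working days each --
                                            # roughly a year of effort
```

      So 1,000 lines per engineer is not "not much" at all: at five debugged lines a day it is about a working year apiece, before counting the code written and thrown away.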

      1. Anonymous Coward
        Anonymous Coward

        Programming systems product

        The main reason for what appears, at first sight, low productivity is spelled out in "The Mythical Man-Month". Brooks freely concedes that anyone who has just learned to program would expect to be many times more productive than his huge crew of seasoned professionals. Then he explains, with the aid of a diagram divided into four quadrants. Top left, we have the simple program. When a program gets big and complex enough, it becomes a programming system, which takes a team to write it rather than a single individual. And that introduces many extra time-consuming aspects and much overhead. Going the other way, writing a simple program is far easier than creating a product with software at its core. Something that will be sold as a commercial product must be tested seven ways from Sunday, made as maintainable and extensible as possible, be supplemented with manuals, training courses, and technical support services, etc. Finally, put the two together and you get the programming systems product, which can be 100 times more expensive and time-consuming to create than an equivalent simple program.

      2. Anonymous Coward
        Anonymous Coward

        "...plan to throw one away, because you will."

        Unless you're Microsoft. In which case you just merrily ship it. After all, your hundreds of millions of unpaid beta-testers are just longing to start finding all the bugs for you. And you can be sucking in revenue at the same time!

  20. Savvo

    UNIVAC 1

    "ENIAC ... was capable of 5,000 additions, 357 multiplications and 38 divisions in one second.


    "UNIVAC 1 was ... faster with an addition time of 120 milliseconds, multiplication time of 1,800 milliseconds, and division time of 3,600 milliseconds."

    So UNIVAC 1 could manage 8.33 additions, 0.56 multiplications or 0.28 divisions per second? And this was faster? Microseconds perhaps?

    1. Ian 55

      Re: UNIVAC 1

      I was wondering how far down the comments I'd have to go before someone mentioned this.

    2. Michael Wojcik Silver badge

      Re: UNIVAC 1

      The division figure quoted elsewhere is "3.9 milliseconds". So yes, it looks like the UNIVAC I numbers in the article are in microseconds, not milliseconds.
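      A quick sanity check of the units, using the article's own figures:

```python
# The article's UNIVAC I "120 millisecond" addition time only makes
# sense if it actually means microseconds:
adds_if_ms = 1 / 120e-3     # ~8.3 additions/sec -- far slower than
                            # ENIAC's 5,000, which would hardly be "faster"
adds_if_us = 1 / 120e-6     # ~8,333 additions/sec -- comfortably faster
                            # than ENIAC, as the article claims
```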

  21. Lars Silver badge


    But why try to push Google and similar into the article? Google, with some one million Intel(?) processors running Linux in different parts of the world, has nothing to do with the IBM 360 or the word mainframe. IBM has a PC story, though.

  22. Anonymous Coward
    Anonymous Coward

    "Why won't you DIE?"

    I suppose that witty, but utterly inappropriate, heading was added by an editor; Gavin knows better. If anyone is in doubt, the answer would be the same as for other elderly technology such as houses, roads, clothing, cars, aeroplanes, radio, TV, etc. Namely, it works - and after 50 years of widespread practical use, it has been refined so that it now works *bloody well*. In extreme contrast to many more recent examples of computing innovation, I may add.

    Whoever added that ill-advised attempt at humour should be forced to write out 1,000 times:

    "The definition of a legacy system: ONE THAT WORKS".

    1. Anonymous Coward
      Anonymous Coward

      Re: "Why won't you DIE?"

      I could have added antibiotics to the list, but unfortunately they ARE showing signs of approaching death. And I don't see any crowds of merry-makers dancing in the street for that reason.

    2. dlc.usa

      Re: "Why won't you DIE?"

      I presume the person responsible for the mention of DIE is unaware that z/OS systems DIE all the time. The acronym is Disabled Interrupt Exit and coding one requires knowledge and authorizations beyond that of most applications programmers.

  23. Miss Config

    Slide Discs And Slide Rules

    As for NASA using IBM, the scene I always remember from the movie 'Apollo 13'

    is the moment that Houston DOES realise that 'we have a problem'

    and the guys at mission control start reaching frantically for ........

    their slide discs.

  24. Miss Config

    Pay Per Line Of Code

    In the 1960s IBM would not only sell you a mainframe,

    they'd actually program it for you.

    And charge you PER LINE OF CODE. Thank goodness there were no openings for abuse.

    1. Grumpy Guts

      Re: Pay Per Line Of Code

      I worked for IBM UK in the 60s and wrote a lot of code for many different customers. There was never a charge. It was all part of the built in customer support. I even rewrote part of the OS for one system (not s/360 - IBM 1710 I think) for Rolls Royce aero engines to allow all the user code for monitoring engine test cells to fit in memory.

  25. DL_Engineer

    A Small Correction

    ENIAC didn't supply data for missiles, it computed the firing tables for artillery. Apart from that, good article

  26. This post has been deleted by its author

  27. Anonymous Coward
    Anonymous Coward


    It was of course widely believed that in reality the difference between the low end and the high end mainframes was purely the arrangement of jumpers on one of the circuit boards. Yes, I know that's a myth. It was only in-situ upgrades for which that applied [/snark].

    Incidentally, none of the valves shown look like computer valves to me; they look like audio and a few RF types. As I recall, British computer valves (made by Mullard...) usually had the B7G or B9G base. But I am ready to be corrected.

  28. dlc.usa

    Sole Source For Hardware?

    Even before the advent of Plug Compatible Machines brought competition for the Central Processing Units, the S/360 peripheral hardware market was open to third parties. IBM published the technical specifications for the bus and tag channel interfaces allowing, indeed, encouraging vendors to produce plug and play devices for the architecture, even in competition with IBM's own. My first S/360 in 1972 had Marshall not IBM disks and a Calcomp drum plotter for which IBM offered no counterpart. This was true of the IBM Personal Computer as well. This type of openness dramatically expands the marketability of a new platform architecture.

  29. RobHib

    Eventually we stripped scrapped 360s for components.

    "IBM built its own circuits for S/360, Solid Logic Technology (SLT) - a set of transistors and diodes mounted on a circuit twenty-eight-thousandths of a square inch and protected by a film of glass just sixty-millionths of an inch thick. The SLT was 10 times denser than the technology of its day."

    When these machines were eventually scrapped we used the components from them for electronic projects. Their unusual construction was a pain; much of the 'componentry' couldn't be used because of it. (That was further compounded by IBM actually partially smashing modules before they were released as scrap.)

    "p3 [Photo caption] The S/360 Model 91 at NASA's Goddard Space Flight Center, with 2,097,152 bytes of main memory, was announced in 1968"

    Around that time our 360 only had 44kB memory, it was later expanded to 77kB in about 1969. Why those odd values were chosen is still somewhat a mystery to me.

    1. David Beck

      Re: Eventually we stripped scrapped 360s for components.

      @RobHib - The odd memory was probably the size of the memory available for the user, not the hardware size (which came in powers-of-2 multiples). The size the OS took was a function of what devices were attached and a few other sysgen parameters. Whatever was left after the OS was user space. There was usually a 2k boundary since memory protect keys worked on 2k chunks, but not always; some customers ran naked to squeeze out those extra bytes.
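      The 2k-chunk point can be checked against RobHib's figures with quick arithmetic (assuming "44kB" and "77kB" mean binary kilobytes, which is an assumption):

```python
# How many 2 KB protect-key chunks do the quoted user-space sizes make?
CHUNK = 2 * 1024
for kb in (44, 77):
    chunks = kb * 1024 / CHUNK
    print(f"{kb} KB = {chunks} protect-key chunks")
# 44 KB is a whole 22 chunks; 77 KB works out to 38.5 chunks --
# consistent with "usually a 2k boundary ... but not always".
```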

  30. Steven Jones

    The first clones...

    It's worth mentioning that in 1965 RCA produced a mainframe which was a semi-clone of the S/360, and almost bankrupted the company in an attempt to compete with IBM. It was binary compatible at the non-privileged code level, but had a rather different "improved" architecture for handling interrupts faster by having multiple (in part) register sets. The idea, in the days when much application code was still written in assembly code, was that applications could be ported relatively easily.

    The RCA Spectra appeared over in the UK as well, but re-badged as an English Electric System 4/70. Some of these machines were still in use in the early 1980s. Indeed, UK real-time air cargo handling and related customs clearance ran on System 4/70s during this period (as did RAF stores). Of course, English Electric had become part of ICL back in 1968. Eventually, ICL were forced to produce a microcode emulation of the System 4 to run on their 2900 mainframes (a method called DMA) in order to support legacy applications which the government was still running.

    In a little bit of irony, the (bespoke) operating systems and applications mentioned were ported back onto IBM mainframes (running under VM), and at least some such applications ran well into the 21st century. Indeed, I'm not sure the RAF stores system isn't still running it...

    Of course, this had little to do with the "true" IBM mainframe clone market that emerged in the late 1970s and flowered in the last part of the 20th century, mostly through Amdahl, Hitachi and Fujitsu.

    1. Anonymous Coward
      Anonymous Coward

      Re: The first clones...RCA Spectra

      There are a few installations still running assembler applications developed on Spectra...

    2. David Roberts

      Re: The first clones...

      Beat me to it with the RCA reference.

      In fact the first few System 4 systems were RCA Spectras because the System 4 production line wasn't up to speed.

      First mainframe I ever saw when I started out as a Cobol programmer.

      On the microcode emulation - I think you will find that it was DME (Direct Machine Environment iirc) not DMA.

      Emulators for the ICL 1900 and LEO 326 were also produced, and allegedly the 1900 emulation had to be slugged because it was faster than VME/B on the 2900 series for a long time, which was seen as a disincentive to upgrade to the new systems (does this have a familiar ring?).

      So these VMWare people were late to the game :-).

      Oh, and can we have a Modified Godwin for anyone who mentions XP end of life when not directly relevant?

      1. Steven Jones

        Re: The first clones...

        You are quite right - DME, not DMA (it's been many years). DMA is, of course, Direct Memory Access. I was aware there was also a 1900 emulation too, and there were those who swore by the merits of George (if I've remembered the name properly). Of course, the 1900 had absolutely nothing to do with the DNA of the IBM S/360.

  31. earl grey

    Speaking of remembering

    Who recalls LCS and how it was made?

    How about the earlier 7080 and 7090 systems that could be programmed from the console?

    The valves in the 083....where were they located?

    You could actually read the data from a 7 track tape directly off tape with a viewer.

    Card register anyone?

    1. David Beck

      Re: Speaking of remembering

      @Earl Grey -

      LCS, I think little old ladies with needles, threading cores.

      Note that the S/360 could be programmed from the panel as well; you just had to write in machine code, in hex, a bit like kick-starting a PDP-8.
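      For the curious, a hedged sketch of what keying in "machine code, in hex" amounted to - RR-format S/360 instructions are just two bytes, and a tiny (deliberately incomplete) opcode table is enough to decode the classic examples:

      ```python
      # RR-format S/360 instructions: an opcode byte, then R1 and R2 packed
      # into one byte as two 4-bit fields. Only a small subset of the real
      # instruction set is listed here, for illustration.

      RR_OPCODES = {
          0x05: "BALR",  # Branch And Link Register
          0x07: "BCR",   # Branch on Condition Register
          0x18: "LR",    # Load Register
          0x19: "CR",    # Compare Register
          0x1A: "AR",    # Add Register
          0x1B: "SR",    # Subtract Register
      }

      def decode_rr(byte0: int, byte1: int) -> str:
          """Decode a two-byte RR instruction into assembler mnemonic form."""
          mnemonic = RR_OPCODES[byte0]
          r1, r2 = byte1 >> 4, byte1 & 0x0F
          return f"{mnemonic} {r1},{r2}"

      print(decode_rr(0x1A, 0x12))  # hex "1A 12" -> "AR 1,2" (add R2 into R1)
      ```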

      BTW, Fred Brooks got the job managing S/360 since he did so well on a special project for a 7000 Series "super computer", the 7030, better known as Stretch. Some of the 7000s were a bit compatible with each other I think, or at least used the same data format, 36-bit words with one's complement integers; the 7090 and 7094 were sort of compatible. There were all sorts of 70x0 machines that weren't, though.

      I spent a bad summer working on old Fortran code which ran perfectly on a 7094 and not at all on a model 50.

  32. Anonymous Coward
    Anonymous Coward

    Linux on Z

    Write a story about that will you? Rumor is that wildenten did the implementation, then were blasted with lead for being disobedient, since Americans were refused resources for implementation... too close to the HQ radiation hive to run in stealth mode.

    Oh, and the freelancers were PO'd since a leak would knacker the project so no one talked.

  33. Anonymous Coward
    Anonymous Coward

    Some small corrections...

    > These included large and bulky drums filled with mercury through which electronic pulses were sent and amplified

    Mercury was used for delay lines - long glass tubes filled with mercury. Drums were large and bulky but had a magnetic coating - a primitive hard drive, if you like.

    1. Lars Silver badge

      Re: Some small corrections...

      And I would like to add that there was indeed a time when IBM representatives were the most disgusting persons you would ever meet, and I know things have changed - gone to MS, perhaps. One of the most "funny" things was a 45kg girl (a customer) who told me about a 120kg IBM service guy who refused to move the punch card device he was supposed to fix because it was +20kg. So this girl moved it across the room herself, for his convenience. There was a lot of this around the '60s and '70s; on the other hand I would not, perhaps, write this had I not known a guy at IBM in 1965. Oh Christ, have I met some IBM representatives.

  34. Glen Turner 666

    Primacy of software

    Good article.

    Could have had a little more about the primacy of software: IBM had a huge range of compilers, and having an assembly language common across a wide range was a huge winner (as obvious as that seems today in an age of a handful of processor instruction sets). Furthermore, IBM had a strong focus on binary compatibility, and the lack of that in some competitors' ranges made shipping software for those machines much more expensive than for IBM.

    IBM also sustained that commitment to development, which meant that until the minicomputer age they were really the only possibility if you wanted newer features (such as CICS for screen-based transaction processing, VSAM or DB2 for databases, or VMs for a cheaper test-versus-production environment). Other manufacturers would develop against their forthcoming models, not their shipped models, and so IBM would be the company "shipping now" with the feature you desired.

    IBM were also very focussed on business. They knew how to market (eg, the myth of 'idle' versus 'ready' light on tape drives, whitepapers to explain technology to managers). They knew how to charge (eg, essentially a lease, which matched company's revenue). They knew how to do politics (eg, lobbying the Australian PM after they lost a government sale). They knew how to do support (with their customer engineers basically being a little bit of IBM embedded at the customer). Their strategic planning is still world class.

    I would be cautious about lauding the $0.5B taken to develop the OS/360 software as progress. As a counterpoint consider Burroughs, who delivered better capability with fewer lines of code, since they wrote in Algol rather than assembler. Both companies got one thing right: huge libraries of code which made life much easier for applications programmers. DEC's VMS learnt that lesson well. It wasn't until MS-DOS that we were suddenly dropped back into an inferior programming environment (but you'll cope with a lot for sheer responsiveness, and it didn't take too long until you could buy in what you needed).

    What killed the mainframe was its sheer optimisation for batch and transaction processing and the massive cost if you used it any other way. Consider that TCP/IP used about 3% of the system's resources, or $30k pa of mainframe time. That would pay for a new Unix machine every year to host your website on.

    1. Anonymous Coward
      Anonymous Coward

      Re: Primacy of software

      Yet Berners-Lee notes in his memoir that the first website outside of CERN came up on an IBM mainframe (and was written in Rexx).

  35. Richard Freeman

    a full sized piano has 88 keys

    "ENIAC: 18,000 square feet and 30 tones of computer"

    only 30 Tones? so it didn't have a full sized keyboard then?

    Does that mean that Eniac was an early synth?

  36. Tank boy

    One of Murphy's Laws of combat applies here

    If it's stupid and it works, it isn't stupid.

  37. Vociferous

    It's a bit amusing...

    ... a decade or so ago, I saw a documentary about how sci-fi had envisioned the future, and how things had really turned out. One of the points was that in old sci-fi, computers are these centralized, monolithic, massively powerful, structures, while in reality the net had meant that computation was done on individual networked PCs, decentralized and distributed.

    Now we're seeing a move from reasonably powerful personal computers to super-thin clients and cellphones, necessitating moving the actual computing to cloud servers. Microsoft Office is already available as a streamed online app, and companies like nVidia and Valve are moving towards streaming graphics-intensive games to weak portable hardware.

    Maybe the documentary was wrong. Maybe the future hadn't arrived yet. Maybe the future is HAL 9000, and decentralized computing of networked PCs was a brief historical footnote.

  38. Anonymous Coward
    Thumb Up

    Great article

    It's partly informative and interesting articles like this that make me keep coming back to this site. Really great!

  39. garynb

    Why have none of you (other) nitpickers

    jumped on this part?

    "The worldwide market for Complex Instruction Set Computing (CISC) - the architecture used in mainframes - ..."

    This would seem to imply that PC x86 processors are of RISC architecture. Not so.

    My bit of trivia: which model of S/360 was not microcoded, i.e. had its instructions hardwired in circuits?

    Hmmm, how to write the answer so it doesn't immediately give itself away?

    2**6 + 2**3 + 3 = ???

    1. Ian 55

      Re: Why have none of you (other) nitpickers

      As ever, it depends on your definition of 'RISC'.

      Haven't x86 CPUs been effectively RISC for the common instructions, with the rarer legacy stuff done via lots of microcoded instructions, since the 80486?

  40. 404

    How old is the author?

    "The 1930s and 1940s saw government, academics and businesses start to build their digital computers to crunch large volumes of data more quickly than could be done using the prevailing model of the time: a human being armed with a calculator"

    A calculator. A CALCULATOR? Really now.


    1. Francis Boyle Silver badge

      Old enough

      to remember the mechanical calculator - the preferred business machine for most of the twentieth century. Slide rules were for science and engineering types.

      1. 404

        Re: Old enough

        I stand corrected. I think. Wouldn't the scale of the calculations require slide rules? Or just armies of humans with adding machines?

        I do remember my parents whaling away at a desktop adding machine, paying bills and taxes in the early '70s.

        Thing was as big as a large 4-slice toaster.

  41. rbf

    I did some IBSYS Fortran on a university computer for an actuarial calculation which got me transferred to the IT department which had just got a Model 50 with 2311s and 256K -- the minimum to run a 7090 emulator. Started in OS/360 PCP. These days still tweaking channel programs to optimise performance.

    The microcode came on special-stock punch cards which were mounted on swing-out gates. One of our programmers on a DOS machine was having trouble with Test and Set, so I wrote some code to exercise the instruction and store the condition codes, then showed the results to our non-IBM hardware engineers, who at length agreed that TS was not working correctly. They changed the microcode. The next day JES2 would not run. The dump showed it waiting after a TS. The engineers had swapped microcode cards between the two machines.
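    A rough sketch of the TS semantics that story hinges on - atomically set a byte to all-ones and report whether it was already set (on the real machine the result lands in the condition code). The class and harness below are invented for illustration, with a lock standing in for the hardware's atomicity guarantee:

    ```python
    import threading

    class TestAndSetByte:
        """Toy emulation of the S/360 TS instruction's semantics."""

        def __init__(self):
            self._byte = 0x00
            self._lock = threading.Lock()  # stand-in for hardware atomicity

        def ts(self) -> int:
            """Set the byte to all-ones; return condition code:
            0 if the leftmost bit was zero, 1 if it was already one."""
            with self._lock:
                was_set = 1 if self._byte & 0x80 else 0
                self._byte = 0xFF
                return was_set

    flag = TestAndSetByte()
    print(flag.ts(), flag.ts())  # -> 0 1 : only the first caller wins the lock
    ```

    A broken TS - one that never reliably reports "already set" - would let two tasks both think they own the lock, which is exactly the sort of failure the exerciser above was written to expose.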

    Mainframes are not winning new applications because IBM overcharges for classic mainframe workloads. You can run pretty much all the new stuff on much less expensive restricted mainframe processors specialised for Linux, Java, etc., which is great for virtualisation.

    Banks, governments, insurance companies have strong motivations to migrate their classic applications off and a number of companies are ready to help them. But the heavy duty big stuff is real tough to move.

    Though many sneer at mainframes, the fact remains that the most spectacular project failures happen in the new technology arena.

  42. Anonymous Coward

    Mission critical

    'Searches on Google and status updates on Facebook are the new mission-critical: back in the day a "mission critical" was ERP and payroll.'

    This is false, although it is false in an interesting way. If I do a search on Google then I get some results: perhaps I will get the same results if I search again, but perhaps not; and perhaps the results will be useful, but perhaps not. Or, possibly, I'll get no results at all, so I'll need to resubmit the query. It doesn't matter very much: all that actually matters is that the results are good enough to keep me using the service and that they contain sufficient advertising content that Google makes money. Similar things are true for Facebook.

    But if I send a bunch of money to someone from my account then it matters very much that the money gets to the recipient, that it gets to the right recipient, that it leaves my account, and so on. In desperation it might be OK for none of these things to happen, so I can try again. It is never OK for only some of them to happen: if my bank fails to send my rent to my landlord while taking it from my account then I will end up living in a cardboard box under a bridge.
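    The all-or-nothing property being described can be sketched with an ordinary database transaction; the account names and amounts below are made up:

    ```python
    # Debit and credit happen inside one transaction, so either both land
    # or (on failure) neither does - the "never only some of them" rule.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    db.executemany("INSERT INTO accounts VALUES (?, ?)",
                   [("me", 1000), ("landlord", 0)])

    def transfer(conn, src, dst, amount):
        try:
            with conn:  # commits on success, rolls back on any exception
                conn.execute(
                    "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                    (amount, src))
                cur = conn.execute(
                    "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                    (amount, dst))
                if cur.rowcount == 0:
                    raise KeyError(dst)  # no such recipient: undo the debit too
        except KeyError:
            pass  # transfer failed cleanly; balances are untouched

    transfer(db, "me", "landlord", 700)
    transfer(db, "me", "nobody", 700)  # fails: rolled back, money not lost
    print(dict(db.execute("SELECT name, balance FROM accounts")))
    # -> {'me': 300, 'landlord': 700}
    ```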

    Searches on Google are simply not mission-critical: moving money around has always been mission-critical.

    The clever trick that organisations like Google and Facebook have done is to recognise that certain sorts of activity, such as search, don't actually need to work very reliably, which allows for enormous scaling.

    1. Michael Wojcik Silver badge

      Re: Mission critical

      The clever trick that organisations like Google and Facebook have done is to recognise that certain sorts of activity, such as search, don't actually need to work very reliably, which allows for enormous scaling.

      Right. This is why we have things like eventually-consistent scaled-out databases for these sorts of workloads, which don't need to worry about skew, stale data, and other inconsistencies for many of their operations, and so can bypass CAP-theoretical limitations by ignoring the problem.
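      A minimal caricature of that relaxation, assuming a last-write-wins merge (just one of several possible reconciliation policies): replicas accept writes independently and reconcile later, so readers may briefly see stale data - fine for search, not for ledgers.

      ```python
      def merge_lww(*replicas):
          """Merge replica dicts of key -> (timestamp, value); newest write wins."""
          merged = {}
          for replica in replicas:
              for key, (ts, val) in replica.items():
                  if key not in merged or ts > merged[key][0]:
                      merged[key] = (ts, val)
          return merged

      a = {"profile": (1, "old"), "status": (5, "shipped")}
      b = {"profile": (3, "new")}  # a concurrent write seen only by replica b

      # Until the merge runs, a reader hitting replica a sees the stale profile.
      print(merge_lww(a, b))  # -> {'profile': (3, 'new'), 'status': (5, 'shipped')}
      ```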

      Though ironically this has led to considerable research efforts by Google and the like into improving consistency in massively-distributed, partition-vulnerable systems, giving us technology like Spanner. In that sense we're coming back full circle.

  43. The H'wood Reporter

    IBM - 10 years late to the party with VM

    The Burroughs B5000 had virtual memory starting at the introduction in 1963. Unfortunately, the Burroughs Headquarters on Tireman Ave in Detroit had no idea how powerful the computer they brought out really was. Telling people how cool VM is doesn't do anything for solving their problems.

    People were supporting 256 users on a B5500 in 1972 with a whopping 128 KB of core memory and a response time < 2 seconds.

    IBM understood how to market technology, Burroughs didn't. IBM won, Burroughs merged.

    Side note: the Burroughs B series was programmed using Balgol, a derivative of Algol; there was essentially no assembly language. The systems were also hardware stack machines, which allowed you to scroll through the stacks of the machine on a nixie-tube display on the door (B6700, B7700).

  44. Michael Wojcik Silver badge

    CICS is not an OS

    IBM's Customer Information Control System (CICS) is the application server/operating system used to manage transactions on mainframes

    Well, one out of three ain't bad, I suppose.

    CICS is not an operating system. Neither is it the only IBM mainframe application engine to include a transaction monitor; IMS/TM runs plenty of applications, and I believe there are still some running under TSS as well.

    But I'll grant you "application server", for a loose definition of "server".

    CICS is a single task that loads and calls application programs in its address space, providing them with various facilities and APIs, including transaction coordination, IPC, terminal I/O, etc. But it runs on top of an underlying OS (z/OS, MVS, MVT, VSE, etc).

    (IMS/TM is a transaction monitor and message-queuing system that loads and calls application programs as messages become available for them. It's sort of a z Series inetd, except with a transaction monitor built in, and it's often paired with its hierarchical-database sibling, IMS/DB.)
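    As a toy analogy (emphatically not real CICS), the dispatch model described above - a long-running task that owns a table mapping transaction IDs to application programs and calls them as requests arrive - looks something like the following. The transaction names and handlers are invented for illustration:

    ```python
    # Toy CICS-flavoured dispatcher: one resident task, a program table,
    # and application code loaded/called per incoming transaction ID.

    def txn_hello(data):
        return f"HELLO {data}"

    def txn_bal(data):
        return f"BALANCE OF {data} IS 100"

    PROGRAM_TABLE = {"HELO": txn_hello, "BAL1": txn_bal}  # loosely, a PCT/PPT

    def dispatch(transid, data):
        program = PROGRAM_TABLE.get(transid)
        if program is None:
            return "ABEND: unknown transaction"  # loosely, a transaction abend
        return program(data)

    print(dispatch("HELO", "WORLD"))  # -> HELLO WORLD
    print(dispatch("XXXX", ""))      # -> ABEND: unknown transaction
    ```

    The point of the analogy is the correction above: the dispatcher is itself just an application running under the real operating system, not an OS.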

This topic is closed for new posts.

Other stories you might like