A tale of mainframes and students being too clever by far

Take a trip back to the 1980s with a Monday morning cautionary tale of cleverness, COBOL and mainframe programming in today's Who, Me? While some who lived through the 1980s remember it as a freewheeling time of Knight Rider, Magnum PI and, er, Worzel Gummidge, for Register reader Sam it was a period of hard work and code …

  1. Phil O'Sophical Silver badge

    Ah, the days before memory protection seemed necessary...

    VAXen have a block-move assembler instruction MOVC3, which works much like the C function memcpy(). I remember trying to debug a program that would fall over occasionally, and when it did the resulting memory image made no sense. I eventually found that I had the arguments to the MOVC wrong, and when executed it shifted my entire executable program address space by 8 bytes. After that none of the symbols matched, and the debugger had no idea what was where.
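    A minimal C sketch of the same failure mode (plain memcpy() standing in for MOVC3; the buffer layout and names are invented for illustration): a copy whose length argument is wrong runs past the intended region and silently shifts whatever sits next to it.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* A flat 16-byte "address space": 8 bytes of code followed
         * immediately by 8 bytes of symbol table. */
        char image[16];
        memcpy(image, "CODE1234", 8);
        memcpy(image + 8, "SYMBOLS!", 8);

        /* Intended: overwrite the 8 code bytes.
         * Actual: the length argument is wrong, so the copy runs 8 bytes
         * past the code and clobbers the symbol table too. */
        const char patch[16] = "NEWCODENEWSYMS!!";
        memcpy(image, patch, 16);     /* should have been 8 */

        /* The symbols no longer match; a debugger has no idea what is where. */
        assert(memcmp(image + 8, "SYMBOLS!", 8) != 0);
        puts("ok");
        return 0;
    }
    ```

    With no memory protection, the overrun lands wherever it lands; with it, at least the damage stays inside your own process.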

    At least that would only have affected my local program memory space, VAX/VMS having good memory protection. Way back in uni we used ICL systems with a home-grown OS. It had an 'ALTER' command that worked much like a POKE in BASIC and would allow you to read/change locations in your local address space; you could pause a running program and poke around in it. There was also an assembler instruction which would let you send a command to the OS from within a program, a la "system()" in modern C. Both useful, but no one had tried combining them until one of my fellow students did. It turned out that the system() function was treated as I/O and handled asynchronously: the command was queued and the program paused until it was executed, then resumed with the result of the command. Of course, on a busy system a paused program could be swapped out and another scheduled until the async operation completed. If you passed an ALTER command like that, since the system didn't have virtual memory, by the time the ALTER command was run there was no guarantee that the address it was changing belonged to the calling program any more...

    My fellow student ran his program, and shortly afterwards every terminal in the room hung. Odd, he thought, but crashes weren't uncommon. We went for coffee, and when we returned the system was back, so he tried again. Same result... At that point he had the wit to call the computer centre and say "umm, I think that might have been me", so the end result was a thank-you from the sysadmin and credit for finding a bug, and not any punishment for crashing the main undergrad system twice in an hour.

    1. hmv

      Re: Ah, the days before memory protection seemed necessary...

      I believe that IBM 360s had hardware memory protection from their introduction in 1964 and I dare say Univacs also had it.

      Of course you had to run an operating system that made use of it, or there may have been bugs in the protection mechanism.

      1. bombastic bob Silver badge

        Re: Ah, the days before memory protection seemed necessary...

        I believe that IBM 360s had hardware memory protection

        I think they did. They used a 'base register' of sorts for relocatable code, such that the OS would assign one of the GP registers as a 'base register' for data, jumps, yadda yadda. Then your code could be loaded wherever the OS wanted it and timeshare nicely with everyone else. Included with that (apparently) was a memory protection setup so that you wouldn't read/write outside of your own memory space. I forget exactly how it worked, though... (had to study it for this one computer architecture class, which used IBM 360s as an example, and that was about it).

        1. Old Used Programmer Silver badge

          Re: Ah, the days before memory protection seemed necessary...

          S/360s used base and displacement addressing. For some instructions, you could add the contents of a register as well. In the assembler (ALC), you assigned a base register yourself. In a compiled language, the compiler did so. It was quite common to have multiple base registers since the displacement was only a 12-bit value. There were also coding techniques to shift the base register contents as needed.
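          A rough C sketch of that addressing arithmetic (register numbers and values invented for illustration): the effective address is base plus optional index plus a 12-bit displacement, which is exactly why one base register covers only 4096 bytes and big programs needed several.

          ```c
          #include <assert.h>
          #include <stdint.h>
          #include <stdio.h>

          /* effective address = base register + index register + 12-bit
           * displacement, truncated to the 24-bit address space. */
          static uint32_t effective_addr(uint32_t base, uint32_t index,
                                         uint16_t disp) {
              assert(disp < 4096);              /* displacement is only 12 bits */
              return (base + index + disp) & 0xFFFFFF;
          }

          int main(void) {
              uint32_t r12 = 0x008000;          /* base register, programmer's choice */
              /* Anything within 4 KB of the base is reachable... */
              assert(effective_addr(r12, 0, 0xFFF) == 0x008FFF);
              /* ...beyond that you need a second base register
               * (or one of the tricks to shift the first). */
              uint32_t r11 = r12 + 4096;
              assert(effective_addr(r11, 0, 0x010) == 0x009010);
              puts("ok");
              return 0;
          }
          ```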

          The CDC 6000 series, on the other hand, used a base/bound pair with a flat address space between them. The register contents were set up by the OS. Within your program, it looked like you started at address 0 and went up from there.

        2. Anonymous Coward
          Anonymous Coward

          Re: Ah, the days before memory protection seemed necessary...

          "I forget exactly how it worked, though... "

          Memory was divided into blocks of 4096(?) bytes. Each block had a hardware 4 bit tag - which an OS privileged instruction could set to values from 0 to F.

          IIRC** tag value '0' was "unprotected" - and the OS marked its storage with 'F'. Each user program's allocated memory area was then tagged at runtime with one of the values '1' to 'E' - limiting the system to 14 user programs multi-tasking.

          The OS main code in privileged state P2*** would signal the user program's permitted protection value before handing cpu control to it in the unprivileged P1 state. Any attempt by the user program to access memory of a different tag value in the range 1 to F would cause a memory protection error interrupt into the OS in state P3.

          ** After nearly 60 years my memory may be a bit out on exact detail - even though I lived and breathed that architecture for 10 years.

          *** The cpu had four possible "states" that influenced how the hardware operated. P1 was for user programs; P2 the OS main code; P3 OS interrupt handling; P4 OS power fail.
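          The scheme described above can be sketched as a toy model in C (block size, key values and function names are illustrative only, following the commenter's hedged recollection rather than a verified manual):

          ```c
          #include <assert.h>
          #include <stdio.h>

          /* Memory split into blocks, each carrying a 4-bit tag:
           * 0x0 = unprotected, 0xF = OS, 0x1..0xE = one per user program. */
          #define BLOCKS 8
          static unsigned char tag[BLOCKS];

          /* Returns 1 if a program running with `key` may touch `block`;
           * a mismatch would raise a memory protection interrupt (state P3). */
          static int access_ok(unsigned key, unsigned block) {
              if (tag[block] == 0x0) return 1;  /* unprotected: anyone may */
              return tag[block] == key;         /* otherwise tags must match */
          }

          int main(void) {
              tag[0] = 0xF;                     /* OS storage */
              tag[1] = 0x3;                     /* user program 3's allocation */
              tag[2] = 0x0;                     /* unprotected scratch */

              assert(access_ok(0x3, 1));        /* own storage: fine */
              assert(access_ok(0x3, 2));        /* unprotected: fine */
              assert(!access_ok(0x3, 0));       /* OS storage: protection error */
              puts("ok");
              return 0;
          }
          ```

          One tag value per program is also where the 14-concurrent-programs limit in the description comes from: only 0x1 to 0xE are left once 0x0 and 0xF are spoken for.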

          1. David Beck

            Re: Ah, the days before memory protection seemed necessary...

            The "four processor states" was an RCA thing; the IBM 360s only had privileged and non-privileged. Both designs had hardware memory protect in either 2k or 4k "pages". Everybody used 2k if they used memory protect at all. I worked on both systems in the late 1960s and 1970s. Lots of DOS/360 users had small memory machines, 32KB and under; DOS would run in a 16KB machine, not just the OS but also its compiler family, COBOL/Fortran/PL/1. They needed a minimum partition of 8KB to run. The PL/1 G compiler (about 60 overlays) had to swap out the root code for one large overlay to fit. If a new release of the OS bumped the OS over the 8KB boundary, all hell broke loose and you went hunting for bytes. Since you lost all of a 2k protect segment if one byte was used, the solution (until you could squeeze out those last 8 bytes you needed; the linker and loader worked on doubleword boundaries) was to turn off memory protect. God I miss those days.

            The Airline Control Program (ACP), which was the environment for IBM's airline package for years, used 2KB protection and 2KB overlays; that is, no transaction code could exceed 2KB. You just chained a bunch together if you needed more. It was still better than coding in today's environments.

            BR 14

      2. Anonymous Coward
        Anonymous Coward

        Re: Ah, the days before memory protection seemed necessary...

        "Of course you had to run an operating system that made use of it, or there may have been bugs in the protection mechanism."

        In the late 1960s the English Electric Computers (EEC) System 4 range was IBM 360 compatible - produced from a relationship with RCA. The EEC System 4/50 was pretty much a clone of the RCA Spectra 70/45. EEC produced several OSs for their System 4 range - the top end one being the disk based "J".

        EEC matched the spec of the RCA top end Spectra 70/55 - but did their own hardware design by bringing together their merged in-house design talent of the successful KDF9 and LEO 3. This machine was to have two distinct models - the System 4/70 and the 4/75. The latter having virtual memory addressing.

        After much engineering debugging, prototype #1 of the 4/70 reached the stage where it could run the first cut of a customised version of the EEC "J" disk OS. For the first stages the memory protection mechanism wasn't activated in the OS.

        Came the day when it was ready to try user program multi-tasking - now with the memory protection keys. It worked - and then a user program crashed with a memory protection error.

        The problem was that the 4/70 cpu microcode used a few words of low memory as transient working storage for some decimal operations. The OS had protected all low memory as part of itself - and the decimal operation was running with the user program protection key.

        A quick hardware mod made the relevant low memory unprotected. Problem solved. Except that the key worked on a minimum block size of memory. That area also contained the words (CAW/CCW) used to direct IO controllers. Not really a problem as only the OS could issue IO commands - until someone discovered that a user program could exploit this lack of protection in a very devious manner.

        1. G.Y.

          almost Re: Ah, the days before memory protection seemed necessary...

          The RCA spectra was ALMOST compatible with the 360. They "fixed" some particularly crazy op-codes, causing a $200M write-off of the whole series

          1. Anonymous Coward
            Anonymous Coward

            Re: almost Ah, the days before memory protection seemed necessary...

            "They "fixed" some particularly crazy op-codes, causing a $200M write-off of the whole series"

            That is interesting. Can you expand on that - or possibly a link to somewhere with an explanation?

          2. David Beck

            Re: almost Ah, the days before memory protection seemed necessary...


            I worked on the RCA designs from 1967 until 1980 or so. Spectra, followed by the RCA series and, after the Univac purchase of the RCA computer division, the Univac Series 90 machines. The Series 90 designs were EOL'ed as part of the slimming exercise after Univac and Burroughs merged two $4 billion companies into one $4 billion company. This was 1986/87. Siemens, a licensee of the designs left over from RCA days, continued with new machines from Fujitsu but still using software (BS2000) which originated in Cherry Hill, NJ in the RCA days.

            If you want to play with non-IBM mainframes, both OS2200 (or EXEC8 as we like to call it) and MCP are available for download with emulators. Be prepared: it's a different world.

      3. NorthIowan

        Re: Ah, the days before memory protection seemed necessary...

        The newer Univacs of 1979 had memory protection and I would think some of the earlier ones would have had it too. The 1100/80s I started on could address 16M words, but a program could only address 262K words at once without doing bank swapping. If I remember correctly, you could swap banks whenever you wanted to, but the OS decided which banks were in memory.

    2. aberglas

      The days before memory protection WAS (less) necessary...

      Back then, memory protection was less necessary because people programmed in sensible languages. Cobol, Fortran, Algol etc. cannot corrupt memory. Burroughs made a point of relying on their compiler rather than memory protection. The original author simply stumbled upon a bug in the compiler.

      Then we "advanced" to programming in C.

      1. Blackjack Silver badge

        Re: The days before memory protection WAS (less) necessary...

        Actually it was mostly that computer viruses were not much of a problem.

        In the eighties however... that had changed.

        One of the selling points of the Commodore 64 was that the OS came on a read-only ROM chip, so not only was it basically virus proof (the term malware didn't exist back then) but you could play around without permanently ruining the thing.

        Why? Because in the eighties computer viruses were already a big problem.

        1. someone_stole_my_username

          Re: The days before memory protection WAS (less) necessary...

          You seem to have your timelines mixed up.

          The first virus was created in 1986. By that time, the C64 was all but obsolete.

      2. Anonymous Coward
        Anonymous Coward

        Re: The days before memory protection WAS (less) necessary...

        "Cobol, Fortran, Algol etc. cannot corrupt memory."

        After a system update, a long-established Fortran program kept crashing. The program had declared a multi-dimensional array such that some of its sparse elements overwrote portions of code. The affected areas had previously been subroutines that were only used before the program wrote any data to the array. The system update meant that the affected areas now contained subroutines that were used later.

        The array had been too large for the user program memory - and it had been getting a free ride by unintentionally overwriting code space.

      3. David Beck

        Re: The days before memory protection WAS (less) necessary...

        I would like to call BS, at least on the Burroughs comments.

        The B5500 and all of its children, including the current lot, had/have hardware protection at the word level. Each word in memory has a number of tag bits which tell the hardware what can be done with that word. Protection from over-write or read is at the segment level, again done with hardware segment limit registers which contain the storage limits accessible by the currently executing code. All code is marked execute only and data are separated into different segments depending on the compiler based definitions.

        I think Bell and Newell cover this design but I'm old and can't remember the real technical stuff, just the fluff level I wrote above.

    3. Anonymous Coward
      Anonymous Coward

      MOVC3, and as a bonus, MOVC5

      There's MOVC3.

      And there's MOVC5.

      MOVC5 has parameters (operands) to specify the lengths of the source and destination buffers, and to specify a "fill" byte to be used if there's a need for things to be padded.

      A bit like some of the memcpy_ variants which eventually arrived two or three decades later.

      Still, who needs to worry about details like that.
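      From the description above, MOVC5's copy-with-truncate-or-pad behaviour can be sketched in C roughly as follows (the helper name and exact semantics here are an illustration of the idea, not the VAX definition):

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <string.h>

      /* Copy up to srclen bytes; if the destination is longer, pad the
       * remainder with a fill byte.  The destination length always wins,
       * so the copy can't overrun it - unlike a bare memcpy(). */
      static void movc5_like(const char *src, size_t srclen,
                             char fill, char *dst, size_t dstlen) {
          size_t n = srclen < dstlen ? srclen : dstlen;
          memcpy(dst, src, n);
          memset(dst + n, fill, dstlen - n);
      }

      int main(void) {
          char buf[8];
          movc5_like("HI", 2, ' ', buf, 8);            /* short source: pad */
          assert(memcmp(buf, "HI      ", 8) == 0);
          movc5_like("TOOLONGSOURCE", 13, ' ', buf, 8); /* long source: truncate */
          assert(memcmp(buf, "TOOLONGS", 8) == 0);
          puts("ok");
          return 0;
      }
      ```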

  2. ColinPa

    Clearing storage may not be good for you

    I remember a newish programmer working on a mainframe subsystem getting a lot of storage (1 GB) and, being a good programmer, deciding to clear it all using one instruction. This was very expensive, as it meant the OS had to allocate the pages on the paging data sets etc. Then, because we were constrained for RAM, the OS had to page in (and out) all of the storage as it was used. The performance people looked into why start-up was so slow and found this clearing was causing the problem.

    They found

    1) It only actually used about 10KB of storage - not 1GB

    2) If you allocate on a page boundary, the storage is cleared for you by the OS - so making the clearing unnecessary.

    The "best practices" taught at college were not always the best, more like guidance.
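    The two findings translate into a familiar modern idiom. A hedged C sketch (sizes shrunk so it actually runs; the story's 1 GB behaves the same way): asking the allocator for zeroed memory lets the OS hand out demand-zero pages, while clearing it yourself touches, and therefore materialises, every page.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t big = (size_t)64 << 20;    /* 64 MB stand-in for the 1 GB */

        /* calloc() can return demand-zero pages: nothing is touched until
         * a page is actually used, so this is cheap however big it is. */
        char *a = calloc(big, 1);
        if (!a) return 1;
        assert(a[0] == 0 && a[big - 1] == 0);   /* already cleared for us */

        /* The expensive, unnecessary step from the story would be:
         *     memset(a, 0, big);
         * which forces every page to be allocated (and, under memory
         * pressure, paged in and out) even if only 10KB is ever used. */

        free(a);
        puts("ok");
        return 0;
    }
    ```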

    1. Pascal Monett Silver badge

      The first best practice is understanding the environment you're working in, not blindly repeating lessons you have been taught.

      I'm a Notes programmer. I have been for most of my career now. Back in the 90s, the documentation indicated that, when looping through documents in a view, the old document would be removed from memory.

      I believed that and programmed accordingly until one day I was working on a database that had a really large number of documents that I had to loop through. Somehow, my code was never able to get to the end. I debugged several times, thinking maybe there was a document corruption issue, but the code never failed on the same document. It was incomprehensible.

      Until I had a thought : what if all these documents were still in memory and not being removed as I had been taught ? I toyed with that idea for a few minutes and then thought, what do I have to lose ? So I changed the loop structure to not just drop the previous document, but delete it from memory (not from the database, from memory). I tried my code again and it worked flawlessly.

      Lesson learned : even official documentation can get it wrong sometimes.

      1. MiguelC Silver badge

        Re: (not from the database, from memory)

        ah yes, I remember some time ago a coworker learning that deleting records from an in-memory array meant it could also delete them from the database, depending on the parameters used for creating the damn thing.... not a fun weekend for him, recovering information from backups and transaction logs.... at least he learned (as did others - natch - by example) that everything should be thoroughly tested before deploying to production, even seemingly small changes.

      2. Doctor Syntax Silver badge

        "even official documentation can get it wrong sometimes."

        Or maybe not keep up with changes. Too much trouble....

        1. Cynic_999 Silver badge

          Or you are using the documentation for a different version of the OS or application ...

          1. Nunyabiznes

            As Built

            I worked on a simulator back in the day that had two rooms for documentation. One for the official books provided by manufacturer and various subcontractors, and one for the as built/as modified/as hacked to make work late at night.

            The official books were written after the 1st unit was built, and all subsequent units were supposed to be built based off of those books. As we know, that is not what happens.

            As we found discrepancies, we copied the page out of the official book, made appropriate notes, and inserted into our "as built" books. It made for interesting troubleshooting sometimes when you had to trace a circuit through several pages of several books and then add in the unofficial changes also.

            1. Doctor Syntax Silver badge

              Re: As Built

              One of Brooks' less quoted dicta was that the documentation should be the first thing started and the last thing finished.

              1. Anonymous Coward
                Anonymous Coward

                Re: Mythical Man Month - F. P. Brooks

                "One of Brooks' less quoted dicta was that the documentation should be the first thing started and the last thing finished."

                Wise words, not just for software either.

                Further reading: the full book, freely (legitimately?) downloadable.

        2. Hairy Wolf

          Thirteen months...

          Thirteen months - the time from me reporting a documentation error to tech pubs coming back and asking if I could check their correction. No! I don't have that system any more. I wasn't even testing the UI in the first place, I just stumbled upon it.

          1. Anonymous Coward
            Anonymous Coward

            Re: Thirteen months...

            "[...] tech pubs coming back and asking if I could check their correction. "

            The new mainframe range had a large cupboard full of ring-binder manuals covering all the system and applications software. Much of this was a "work in progress".

            Every week a package would arrive with corrections to be applied. Many of these were instructions to make a hand-written change to a line or paragraph. Only occasionally there would be a reprinted whole page.

            We soon found that it was very, very time consuming - especially writing the small corrections into our set of manuals. So the correction instructions were put in a pile - with the certainty that before long there would be a re-issue of the whole manual.

        3. Anonymous Coward
          Anonymous Coward

          "Or maybe not keep up with changes."

          The more detailed the documentation - the more chance that a detail will be wrong.

      3. nintendoeats Bronze badge

        From my experience writing official documentation for a C library, another possibility is that the exact circumstances under which the files are removed from memory are more complex than anybody wanted to explain.

        There is always a push to KISS. Judicious use of the word "typically" is often the compromise between giving the full story and saying something untrue. "Typically, when you XXX from XXX the XXX will be deleted and replaced with the XXX from XXX".

        Ideally we would have enough information in play for the user to take an educated guess at the true ruleset, but it's hard to tell your editor "I included this sentence so that the reader can solve an exciting puzzle in search of deeper knowledge!"

      4. ElPedro100

        Ah the good old days...

        ...when Notes documentation was, at best, patchy and debugging tools, well, buggy.

        Have a beer from someone who shared the pain =>>

        1. Outski Bronze badge

          Re: Ah the good old days...

          There was a sort of @Formula debugger for a while, but Damien ripped it out when he rewrote the @Formula engine cos it was so poor

      5. Anonymous Coward
        Anonymous Coward

        "Lesson learned : even official documentation can get it wrong sometimes."

        I was posted to an overseas subsidiary where the person supporting the mainframe OS was a bit put out by this youngster from the UK. One of their outstanding problems was the occasional random corruption of memory.

        It didn't take me long to diagnose it as a hardware problem - probably electrical noise. The OS man insisted it could not be - as there wasn't any parity error being signalled. He then pedantically explained that the manual said that the parity checking would catch such corruption - therefore it could not be as I said.

        Having cut my career teeth trouble-shooting hardware/software on the prototype of that mainframe - I knew that the data bus from the memory and through the cpu was not parity protected. Character handling instructions serially fetched data words from memory and then stored them via that route. The Character Handling Unit registers were particularly sensitive to transient noise from faulty earthing on things like card readers or tape decks. Read good data from memory - write corrupted data back to memory.

    2. swm Silver badge

      Re: Clearing storage may not be good for you

      I believe that on an early version of MULTICS some programmer cleared a large array by columns instead of rows. This caused a page fault for just about every memory access.

      On the IBM 1620 using FORTRAN copying the value of one uninitialized variable to another variable would sometimes clear all of memory because the flag on the uninitialized variable wasn't set so it would copy until it hit a digit with a flag set. But all of the set flags were cleared out by the copy so it never stopped. A program like:

      I = J (J not initialized)

      (depending on the order of I and J in memory) was all that was needed.
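      A toy C simulation of that runaway copy (not an accurate 1620 model; memory size and layout are invented): each cell is a digit with a flag bit, and a field copy stops only when it reads a flagged digit. Copying from an uninitialised (unflagged) field overwrites each flag one cell before reading it, so the terminators are destroyed just ahead of the copy and it sweeps to the end of memory.

      ```c
      #include <assert.h>
      #include <stdio.h>

      #define MEM 32
      static struct { int digit; int flag; } mem[MEM];

      /* Copy the field starting at src to dst, one cell at a time;
       * stops after copying a flagged (field-terminating) digit. */
      static int field_copy(int dst, int src) {
          int copied = 0;
          while (dst < MEM && src < MEM) {
              mem[dst] = mem[src];          /* digit and flag move together */
              copied++;
              if (mem[src].flag) return copied;   /* found a terminator */
              dst++; src++;
          }
          return copied;                    /* ran off the end of memory */
      }

      int main(void) {
          /* Initialised fields elsewhere in memory have their flags set... */
          for (int i = 10; i < MEM; i += 4) mem[i].flag = 1;

          /* ...but copying from cell 0 to cell 1, with cell 0 uninitialised
           * (flag clear), clears each flag one cell before it is read, so no
           * terminator ever survives and the copy wipes everything. */
          int n = field_copy(1, 0);
          assert(n == MEM - 1);
          for (int i = 10; i < MEM; i += 4) assert(mem[i].flag == 0);
          puts("ok");
          return 0;
      }
      ```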

      The IBM 704 had a clear memory button that was active only when the machine was running!

  3. Will Godfrey Silver badge

    A question for 'Sam'

    Did failure to complete the course impede your employment progress, or did it actually prove beneficial?

    1. keithpeter Silver badge

      Re: A question for 'Sam'

      Seemed a bit harsh to remove 'Sam' - unless he decided to stop going himself. Clearly not deliberate sabotage. Could have been a good learning point about machine arch differences &c

      Icon: used to work in FE. Have taught day release (not IT). Usually committed students.

  4. Pete 2 Silver badge

    The support "get out"

    > and the machines did not work in quite the same way.

    Which leads to one of the most popular responses from software support teams when faced with an irate developer who has just wiped their entire disk (the only copy).

    "Well it works OK on our system"

  5. Olivier2553

    Strange association

    After high school, I did not want to go to university; instead I did a two-year technical degree in computer stuff. At the end of the two years we had a couple of months of internship, which I did in a lab developing digital cameras for television. My task was to interface a math coprocessor to a machine running a Motorola CPU - design the hardware, program the driver, all in assembler... The students were assigned a professor who was in charge of following what work was being done by the student. For some reason known only to the university administration, the professor working with me was the person in charge of teaching us COBOL. No need to say that he never really understood what I was doing.

  6. This post has been deleted by its author

    1. Nunyabiznes

      Re: Colleges vs. Real world.

      I was forced to sit through a class on basic computing - including how to build a PC from scratch - even though I was the lead tech at the computer sales/repair business I was working at. They wouldn't let me test out of the class, or even just write them a check for the credits.

      I can be quite a jack*ss when irritated, so I made a point of proving the instructor wrong at least once every class. Since attendance was mandatory as part of your grade, I was there every class. She didn't particularly care for me, for some reason. :shrug:

      1. J.G.Harston Silver badge

        Re: Colleges vs. Real world.

        After several years coding, designing and building hardware, summer work designing and building computerised medical equipment, a summer teaching coding, I had to endure two years at university being taught how to type.

      2. thosrtanner

        Re: Colleges vs. Real world.

        Hardly the instructor's fault you had to attend her course

    2. oldfartuk

      Re: Colleges vs. Real world.

      I learned COBOL66 on Manchester University's DEC 11/750. I loved COBOL, and we were taught the new-fangled "Michael Jackson Structured Program Design Methodology".

      However, for some reason I never fathomed the lecturer assigned to teach us COBOL was a Canadian Chemistry professor.

      I went into the first lecture and came out knowing less than when I started. After a second lecture, with my knowledge reduced even further, I decided I could do better on my own. I handed in every assignment on time and scored 90% in the final exam. I even wrote a batch, non-interactive pacman game in it: you put your (up to 10) moves as NSEW in a text file, then ran the program, and it printed out the screen on the 132-col A3 lineprinter. It took days to play one game.

      1. Bob.

        Re: Colleges vs. Real world.

        I learned PASCAL at Manchester (one Term in the Electrical and Electronic Department)

        I even bought the text book. £16? My weekly rent on my room was £13

        I wrote my first simple Pascal program. It would not compile! Just said Error in Compiling with no further details. I wasn't a novice to programming, been messing with BASIC and Assembler for some years.

        My instructor(s) were baffled too. The code was perfect.

        After some days/sessions, we realised we had a newer version of Pascal that required the program to start with a semicolon.


        I also did a Ferranti CAD package course connecting the internals of their chips for our projects. Wireframe green screen graphics. 450 NOR gates.

        A cross between programming a ROM and PCB layout design.

        This also needed 'compiling'. But us students had the lowest priority on the system and it usually took 24 hours.

        I don't know who was hogging all the processor time, or maybe the system was buggered.

        Still I had a great 3 years in Manchester. And it has a great reputation. Many parts being better than Oxford and Cambridge.

        Lots of famous names came out of there. Not me, I'm just a decent engineer.

        1. Justin Case

          Re: Colleges vs. Real world.

          Oh! Pascal!

          Still have the book on my shelf... its pages are yellowing and its spine has seen better days. A bit like me really.

  7. Stuart Castle Silver badge

    Fairly certain I've posted this, but..

    Back in the olden days, I used to run a computer lab dedicated to the then-trendy (and always rather vaguely defined) subject of "Multimedia". We had 40 good (for the time) spec machines, and 10 really high spec (again, for the time) machines. The really high spec machines were primarily used for video editing, so each machine had a good sound card, good CPU, and a video capture card with its own 4 Gig SCSI drive.

    All was running well until one day several of the applications we'd installed started failing due to missing files. Thinking we had a virus, I started investigating one of the machines. As policy dictated, I pulled it from the network and logged in with the local administrator account. I found that all the image files were missing. Unfortunately some of our software required those JPEGs, GIFs and PNGs, so it failed. The virus scan came up clean. Not being one to entirely trust the virus scanner (after all, how did I know that the scanner itself wasn't infected and reporting a false negative?), I put the machine back on the network, wiped it and re-installed the software.

    Sure enough, the next day, the machine had all the software installed, and was working. So, I allowed students to use it. The day after, the problem came back.

    I discussed it with my colleagues, one of whom looked sheepish, and said "I know what's causing the problem". Apparently, in an effort to keep the disk usage down, he'd written a script that, overnight, logged on to every student machine, searched for every conceivable kind of image, copied every image to his own HDD, then cleared out both those images and any student browser caches left on the machine. He said he copied the images to his machine because it's *ahem* evidence. We eventually came up with a solution to the problem, in that we adapted his script to ignore the folders that the broken software was installed in.

    Thankfully, the 4 gig drives for the capture cards were never made available on the network, so his script could not access them even if it tried to.

    Which brings me to a second story. Those capture cards made their drives available to the OS as what appeared to be a normal HDD. I say appeared, because the user couldn't directly access the data, and it had quite a rigid file system. The root of the drive had folders for various file types (JPG, WAV, AVI etc), and in each folder were the project folders for user work. The weird (and I actually think quite nifty) part is that the folder structure in each of these root folders was exactly the same. The root folders all gave access to the same data; it was just converted on the fly to the data type shown in the root folder. So, your captured video file would be in the AVI folder, and you could access the individual frames in the JPG, PNG and GIF folders, with the audio track(s) being in the WAV folder. Obviously, the conversion took a finite amount of time, so whereas Premiere might take 10 milliseconds to open a JPG file on a conventional HDD, it would probably take a second to open a JPG on this drive (because the card was extracting it from the video on demand).

    A student complained to me that his project was taking > 20 minutes to open in Premiere. I thought that was odd, so went to investigate. Now, on opening, Premiere checked all the media in each project, and what he'd done was to import the entire JPEG folder on the drive into his project, then put that on the timeline (Premiere will treat consecutively numbered images as frames in a video) and imported the WAV for the audio track. I explained what was wrong, deleted both the JPEG and WAV folders from his project, then imported the video file from the AVI folder. All of a sudden, Premiere was able to open the project in seconds, because it wasn't asking the capture card to extract and convert thousands of frames.

    1. Anonymous Coward
      Anonymous Coward

      Curious - what capture cards were you using?

      1. Anonymous Coward
        Anonymous Coward

        Sounds a lot like one of these:

        Video capture card with integrated SCSI and FireWire controllers

        1. Anonymous Coward
          Anonymous Coward

          Thanks - years ago (about the time of this article), I used to work for a distributor of a lot of video kit for the UK. I was meant to provide tech support, but wasn't given access to the stuff we were supporting!

      2. usbac

        I think the card the OP was talking about was the Perception Video Recorder from DPS. We used to sell these things to video production outfits...

        I distinctly remember the whole folder structure with different media types thing. It was very cool that the drivers (with help from some special onboard co-processors) could do the media conversion on the fly.

        We used to equip them with several 9GB SCSI drives. These were 5 1/4" full height drives. They each weighed about 10 pounds! We used some HUGE tower cases. I think the biggest we ever sold had 6 of these drives in it. The whole tower (with enough power supplies to run everything) weighed over 100 lbs.

    2. Henry Wertz 1 Gold badge


      I saw a scanner like this; it was kind of cool. No scanning software required: it showed up as a small hard disk. Turn it on, and the disk was empty. Push "scan", and a single JPEG would show up with the scan in it. (In actuality there was no hard disk; it just had a MB or two of RAM to store the scan.)

    3. tcmonkey

      Used to do something vaguely similar to discourage users from copying their pirated music collections to network shares (this was in the days before smartphones were everywhere). Every night a script would run that would identify any .mp3 or similar files on shares and replace them all with a link to a fantastically awful copy of Rick Astley's inimitable "Never Gonna Give You Up". We're talking a terrible quality file, 8 bit, 8kHz mono sort of thing. It pretty much solved the problem overnight (boom boom). A passworded zip file would have saved them, but apparently nobody thought of that.

      1. Anonymous Coward
        Anonymous Coward

        Hold on, are you saying you invented Rick Rolling?

        1. tcmonkey

          Nah - this was about 2010 - the concept was already around by that point.

  8. Adrian 4 Silver badge

    Well, duh ..

    "When the mainframe recovered, the program request list (including Sam's move of doom) was run again. Again, everything fell over."

    They didn't think of re-running the programs _after_ the one that had crashed?

    Seems like the fault was with the college's system programmers, not Sam.

  9. Captain Scarlet


    Everyone should know you can only learn by breaking things!

  10. Sparkus Bronze badge

    Ah yes, the old HCF opcode problem ;-)

  11. Henry Wertz 1 Gold badge


    "The college IT administration was not happy. My instructor was even more unhappy since he felt it reflected upon him personally.

    I did not finish that class. I did not continue my education at that college."

    Crazy. When I was in college (1999), I was in a parallel programming class and we'd just done an assignment on using shared memory; someone in my class managed to crash the CS department's 16-processor SGI. The professor (who was from one of the ex-Soviet countries and sounded rather high-strung now and then), at the beginning of class, slaps a hand onto the table: "So!!! Someone has crashed the SGI! Which student has account 17!!" Everyone's looking around, and this student looks like he's going to crap his pants. He points: "You!!!!" (About then the student is probably expecting to get expelled or something.) Then, in a normal conversational voice: "Shared memory should not crash the entire system, we'll need to write up a bug report to send to SGI."

    Honestly, that's the reasonable response for a student's program unintentionally crashing the system.

    1. martinusher Silver badge

      Re: You!!!!

      >"Shared memory should not crash the entire system, we'll need to write up a bug report to send to SGI."

      He's right, of course.

      Honestly, if you're going to let a bunch of (clever) students loose on your system you've got to either be very clear about what they should and shouldn't be doing, or you need to keep them in a sandbox that's beyond just 'secure'. You should also lecture them (per the initial story) about the evils of exploiting side effects; you should only be doing that if you're the NSA and you're trying to worm your way into someone's computer (and even then.....).

  12. IGnatius T Foobar !

    Memory protection

    It's hard to feel sympathetic to anyone who chose to purchase a computer without memory protection and then expected a multiuser workload to operate on it. Even the most primitive operating system of all time had the "General Protection Fault" exception. The most I was ever able to damage my uni's computers was with a fork bomb, but that was on a line of computers on the Sperry side of Unisys, not the Univac side.

    (My university was in Pennsylvania, you see, and at the time there were rules about universities having to buy computers from vendors in the same state if possible. Blue Bell, Pennsylvania was home to Unisys.)

    At a university, one is not exactly likely to have the best of the best in terms of system administrators. This is the stuff of legend, after all. But at least this was back in the old days when universities actually taught something.

    1. vogon00

      Re: Memory protection

      > " universities actually taught something." - Seconded!

      My major beef with recent grads is that they know the latest shiny bells-and-whistles stuff, but they have little or no comprehension of the 'lower-layer' stuff that makes it all work.

      Latest one was a recent grad in all things webby and Azure. He seemed unable to accept (or believe, I don't know which) that the resources he requires in the all-powerful Azure estate may not be reachable on a mobile platform that relies on cellular connectivity, no matter what 'xG' is in use:

      The phrase that got me involved was "I don't know why it doesn't work, the IP address is valid!". Cue a *long* discussion of IPv4 vs v6, firewalls and the general state of cellular connectivity in our portion of the universe. He's a bright guy, and picked up the concepts he needed very quickly.

      I blame the FE Establishments and/or syllabus for not at least introducing the idea of lower-order system components. It's not *all* about Layer 7 by a long chalk.

      1. A.P. Veening Silver badge

        Re: Memory protection

        It's not *all* about Layer 7 by a long chalk.

        No, a lot of the problems are at layers 8 (users) and 9 (managers), but what layer is education?

      2. heyrick Silver badge

        Re: Memory protection

        "the general state of cellular connectivity in our portion of the universe"

        One of my main gripes with Android is that when connectivity is established, everything suddenly seems to want to do stuff in the background (Google Play Services I'm looking at you) and the bloody OS seems to want to prioritise this.

        It's only a brief hiccup on 4G.

        But if my phone decides that all it can see is EDGE, then if we're lucky that's something stupid like 15K/sec and many retries. The phone's connectivity essentially dies as far as the end user is concerned. I left it once. Four minutes and it was still churning data in the background and busy-waiting foreground apps like the browser.

        I think it ought to be mandatory for Google's OS employees to get dumped in some place like, I dunno, Wyoming maybe? Where there's barely a person for a hundred miles and mobile connections that are somewhere between patchy and nonexistent. Then they might learn that when the user wants to look up something, shitty 30MiB updates (when you have specified "only update on WiFi") are unacceptable. But if they live in sight of a radio tower and get 4G, 5G, a hundred megabits a second, they're simply never going to experience this and, so, the behaviour of their creations will still suck in poor reception areas.

        1. vogon00

          Re: Memory protection


          I wish the current crop of young devs would realise that bandwidth is NOT limitless, and what is available is shared, probably with a high contention ratio. I recall I made a savage-ish post a long while back about some idiot mobile app that tried to auto-complete after every keystroke by sending said keystroke to the backend in real time.

          Try that on GPRS!

          Just starting a telemetry and update project where I have advised the other players to assume that the cellular IP connection is not available, as opposed to assuming it really being 'always on' like wot they want. Also, I'm of a mind to suppress the use of GPRS/EDGE to avoid 'expectation' issues....the jury is still out on that decision.

        2. TSM

          Re: Memory protection

          Oh God so much this. I run with data off a lot of the time (because many of the games I have on my phone are only playable in this state; turn data on and they spend so much time retrieving and displaying ads that they forget about the game part) and this frustrates me every time I turn it back on.

          A moment's thought would lead one to the conclusion that if the user has just turned data services back on, it's probably because they want to DO something that requires data, and therefore care should be taken to prioritise the thing the USER wants to do and not the fifty background apps the user isn't trying to use that all see the connection go back on and decide now would be a really good time to use it.

          > Google Play Services I'm looking at you

          And GMail. Those seem to be the top tier, then after that all the other background apps get a go, and at some indefinite point in the future it might deign to consider the app you're actually opening.

          > It's only a brief hiccup on 4G.

          Oh how I wish. Even on 4G, turning data on pretty much locks up my phone for a few minutes. You can try to do other things but there's no guarantee anything will work, and I struggle to access any data until after GMail at the least has had its fill. Most of the time, after GMail I also have to wait for Slack et al. to finish looking for stuff before I can do anything.

          1. Anonymous Coward
            Anonymous Coward

            Re: Memory protection

            For the games, may I suggest a firewall? I've been using NetGuard on my Android phone for quite a while - it blocks by program, and can (ex.) allow wifi data but not cell data. Works quite well for that particular case.

      3. J.G.Harston Silver badge

        Re: Memory protection


      4. Sam not the Viking

        Re: Memory protection

        We employ some freshly-graduated engineers and although their discipline is not computer science, they are engineers. I was astounded that they had received no formal teaching in computer programming at university. None. Zero. They could skilfully use the commercial programmes they had sampled at uni, but anything else was a mystery.

        Things were different in my day, and that was longer ago than I care to state.

  13. Stevie Silver badge


    Interesting story.

    What I'd really like to know is what model of Univac it was, and what the @MAP processor had been told to do when said program was collected.

    Because, on the early-1980s-era Univac computers I worked on, the physical architecture had the memory partitioned into banks (which were the original purchase units for memory), and upon booting the memory was partitioned into "I-banks" and "D-banks". Instructions went into I-banks and data into D-banks. Different cabinets, too.

    Writing a COBOL program that exceeded its I-bank or D-bank boundaries would trigger the so-called "guard mode exception" so beloved of those who *didn't* compile with extra option 7.

    It's not that I disbelieve this tale of "overwriting the operating system", it's just that exceptional claims require exceptional proof.

    Now if it had been an ICL computer ...

    1. ICL1900-G3 Bronze badge

      Re: Bah! - ICL!

      If it had been an ICL computer, there would have been no problem. The 1900 had hardware datum and limit registers, and any attempt to go off piste would result in program termination, no matter which OS you were using. The 2900s were even more secure.

      1. Stevie Silver badge

        Re: Bah! - ICL!

        Oh yeah? Well, at JB Machine Tool division we had a PLAN program running on a 1901T under GEORGE II+ that on rare occasions would push data intended for a table into a jump address in its "I-Bank". No diagnostic. Just a weird error a few more cycles of stuff-doing later, when the stack was popped.

        Took yonks to diagnose and fix. They called it the Rainy Tuesday bug.

  14. Blackjack Silver badge

    Ah the eighties...

    What I remember from the eighties are cartoons, like the eighties Astroboy, a short run of Conan the boy from the future, Gobots and Transformers, Thundercats and Silver Hawks.

    Yes I saw Knight Rider and Magnum PI too, but I liked cartoons more.

  15. Anonymous Coward
    Anonymous Coward

    "When the mainframe recovered, the program request list (including Sam's move of doom) was run again. Again, everything fell over."

    One morning the software developers' service mainframe crashed. Standard procedure - memory dump to tape - reboot - get the users running again - print off the dump.

    Except that the system would crash again before the print had hardly started.

    Eventually we did the dump print with the system down. Even 1mbyte of memory produced a substantial wodge of printout.

    The operators tried to continue - but the crashes kept happening. Fortunately I spotted the root cause in the dump - and headed for the lift to one of the many terminal rooms. A user was sitting unhappily at the terminal towards which I was heading. When I approached there was an outburst about "this crap system keeps falling over - every time I get this far into my program entry".

    It was a real-time BASIC program - and the user had decided to add a subroutine starting at line 1000000. It hadn't occurred to them that the system going down repeatedly when they entered that line might be significant. The line number was way outside the BASIC interpreter's anticipated range - with no contingency.

    1. swm Silver badge

      "It was a real-time BASIC program - and the user had decided to add a subroutine starting at line 1000000."

      The original BASIC only used a maximum of 5 digits for the line number. Users using longer line numbers were continually surprised at the results:

      Deleting line 123456 wouldn't work - you had to delete line 12345 etc.
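      A hypothetical illustration of how that happens — this is not the actual Dartmouth BASIC code, just a sketch of a line-number parser that reads at most 5 digits and silently ignores the rest, so "123456" is stored as line 12345:

      ```c
      #include <stdio.h>

      /* Read up to 5 leading decimal digits of a line number,
       * silently dropping any digits beyond the fifth. */
      long parse_line_number(const char *s)
      {
          long n = 0;
          for (int i = 0; i < 5 && s[i] >= '0' && s[i] <= '9'; i++)
              n = n * 10 + (s[i] - '0');
          return n;
      }

      int main(void)
      {
          printf("%ld\n", parse_line_number("123456"));  /* prints 12345 */
          printf("%ld\n", parse_line_number("100"));     /* prints 100 */
          return 0;
      }
      ```

      With a parser like that, the line the user *typed* and the line the interpreter *stored* silently diverge, which is exactly why deleting 123456 did nothing but deleting 12345 worked.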

  16. Anonymous Coward
    Anonymous Coward

    Crashed an AS400 on College Registration day.

    When I was admin of a college campus academic lab, on registration day I would write and run a query on two tables in a database on an AS400 to get student info, both to create network IDs and to share student info with the library. It was the only time I ever touched this database, or the AS400, during the year. One year I forgot the join and, well, you can imagine what happened. I guess what should have happened is that my bad query would fill my quota and fail. Sadly, the AS400 admin had not set any user limits, so my malformed query ran and ran until it filled the box, and the machine fell over. Panic ensued, and the AS400 admin told me I was going to lose my job over this mistake; his boss said *he* should be fired for not limiting users and allowing this to happen. In the end we all got through it with no one losing their jobs.

  17. Bob.

    In the late 70s, we only had one computer for the whole County (Aberdeenshire).

    It toured the schools in a Mobile Library Van, specially converted.

    While the operator got a VDU and a keyboard, we got punched cards and teleprinter output.

    Still, I fell in love with technology, and my first 10-card program also included an 11th: PRINT "Hello World"

    Later I moved to England and we even had a Computer Club (with one computer at first in the Maths Dept)

    I joined and spent many happy hours there. We had a couple of great teachers too who joined in.

    A band of happy geeks who were considered odd but learnt a lot for later life/careers

    Actually we played games whenever we could, loaded via cassette tape.

    Anyone remember Colossal Adventure? Google it

    Or we wrote our own simple games

    But most of us did become social, settled down, had kids, became non-geeks(ish)

  18. Torben Mogensen

    Ray-tracing on a Vax

    When I was doing my MSc thesis about ray-tracing in the mid 1980's, we didn't have very good colour screens or printers, so to get decent images, you had to use dithering, where different pixels were rounded differently to the available colours to create dot-patterns that average to the true colour. One such technique is called error distribution: When you round a pixel, you divide the rounding error by 4 and add this to the next pixel on the same row and the three adjacent pixels in the next row. This way, the colours in an area would average to the true colour.
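    The equal-weights error-distribution scheme described above (a simpler cousin of Floyd–Steinberg dithering) can be sketched in C roughly like this. To be clear, this is an illustration of the technique quantising to 1-bit black/white, not the thesis code, and the function name `dither` is my own:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Quantise an 8-bit greyscale image to black/white, dividing each
     * pixel's rounding error by 4 and adding it to the next pixel on the
     * same row and the three adjacent pixels on the row below, so that
     * each area still averages to the true grey level. Row-major, w*h. */
    void dither(unsigned char *img, int w, int h)
    {
        /* working copy in int, as accumulated error can leave 0..255 */
        int *buf = malloc((size_t)w * h * sizeof *buf);
        for (int i = 0; i < w * h; i++) buf[i] = img[i];

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int old = buf[y * w + x];
                int q   = old < 128 ? 0 : 255;   /* nearest available level */
                int err = old - q;
                img[y * w + x] = (unsigned char)q;

                /* distribute err/4 to the four neighbours */
                if (x + 1 < w)     buf[y * w + x + 1]       += err / 4;
                if (y + 1 < h) {
                    if (x > 0)     buf[(y + 1) * w + x - 1] += err / 4;
                                   buf[(y + 1) * w + x]     += err / 4;
                    if (x + 1 < w) buf[(y + 1) * w + x + 1] += err / 4;
                }
            }
        }
        free(buf);
    }

    int main(void)
    {
        enum { W = 8, H = 8 };
        unsigned char img[W * H];
        for (int i = 0; i < W * H; i++) img[i] = 100;   /* uniform mid-grey */
        dither(img, W, H);
        long sum = 0;
        for (int i = 0; i < W * H; i++) sum += img[i];
        printf("average after dithering: %ld\n", sum / (W * H));
        return 0;
    }
    ```

    The pattern of 0s and 255s looks speckled up close, but the local average stays near the original grey value — which is also why one ridiculously wrong pixel poisons everything below and to the right of it, as described in the next paragraph of the story.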

    I ran the ray-tracer program (written in C) on the department Vax computer, but I had an annoying problem: At some seemingly random place on the image, a "shadow" would appear making all the pixels below and to the right of this be really odd colours. I looked at my code and could see nothing wrong, so I ran the program again (this would take half an hour, so you didn't just run again without looking carefully at the code). The problem re-appeared, but at a different point in the image! Since I didn't use any randomness, I suspected a hardware fault, but I needed more evidence to convince other people of this. I used a debugger and found that, occasionally, multiplying two FP numbers would give the wrong result. The cause of the shadow was that one colour value was ridiculously high, so even after distributing the error to neighbouring pixels, these would also be ridiculously high, and so on.

    To make a convincing case, I wrote a short C program that looped the following:

    1. Create two pseudo-random numbers A and B.

    2. C = A*B; D=A*B;

    3. if (C != D) print A, B, C, and D and stop program.

    This program would, on average, stop and print out different values for C and D after one or two minutes of running (but never with the same two numbers), and this convinced the operators that there was something wrong, so they contacted DEC. DEC sent out some engineers, who found that there was a timing problem: the CPU would sometimes fetch the result of the multiplication from the FPU slightly before it was ready. By increasing the delay slightly, they got rid of the problem.
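    A reconstruction of that checker might look like the following. Hedged: the function name `fpu_mismatches` and the bounded iteration count are my additions — the original simply looped until a mismatch appeared:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Compute the same product twice and compare. On healthy hardware the
     * two results are bit-identical, so this reports zero mismatches; on
     * the faulty VAX FPU it would trip within a minute or two. 'volatile'
     * discourages the compiler from folding both multiplies into one. */
    long fpu_mismatches(long iterations)
    {
        long found = 0;
        for (long i = 0; i < iterations; i++) {
            double a = rand() / (double)RAND_MAX;
            double b = rand() / (double)RAND_MAX;
            volatile double c = a * b;
            volatile double d = a * b;
            if (c != d) {
                printf("a=%.17g b=%.17g c=%.17g d=%.17g\n", a, b, c, d);
                found++;
            }
        }
        return found;
    }

    int main(void)
    {
        printf("%ld mismatches in 1000000 multiplies\n",
               fpu_mismatches(1000000));
        return 0;
    }
    ```

    On working hardware this should report zero mismatches no matter how long it runs, which is what made the occasional non-zero result such convincing evidence of a hardware fault.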

    1. HereAndGone

      Re: Ray-tracing on a Vax

      You expected to compare a floating point result for EQUALITY?

      That wasn't a computer problem, that was a programmer problem!

      1. Anonymous Coward
        Anonymous Coward

        Re: Ray-tracing on a Vax

        Not a case of equality, 2.5 * 2.5 == 6.25, but **repeatability** - will the same operation give the same answer given the same operands? If the answer is "no", there's a clear hardware problem.

    2. vogon00

      Re: Ray-tracing on a Vax

      Nice answer to the suspected problem. I shudder to think what would happen these days....even if you could stimulate a response from the manufacturer.

      Everyone would dodge the issue and blame anyone else; God forbid actually taking some ownership of the issue on behalf of someone else!

  19. Efer Brick

    Invaluable learning experience

    Should've been happy

  20. rmstock

    Try your COBOL code with gnucobol

    Check here

    and here

    $ cobc -V

    cobc (GnuCOBOL) 3.0-rc1.0

    Copyright (C) 2018 Free Software Foundation, Inc.

    License GPLv3+: GNU GPL version 3 or later <>

    This is free software; see the source for copying conditions. There is NO

    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

    Written by Keisuke Nishida, Roger While, Ron Norman, Simon Sobisch, Edward Hart

    Built Jan 05 2019 21:34:12

    Packaged Apr 22 2018 22:26:37 UTC

    C version "4.6.1 20110627 (Mandriva)"
