New year, new bug – rivalry between devs led to a deep-code disaster

Welcome, gentle reader, and rejoice, for with the new year comes a new instalment of Who, Me? in which Reg readers recount tales of tech trouble for your edification. This particular Monday, meet a programmer we'll Regomize as "Jack" who had a rival we'll call "Irving." The two of them were programmers at a small concern that …

  1. Michael H.F. Wilkinson Silver badge

    Never worked on an Amiga, but I thought the "Guru Meditation" error messages had a lot more style than the bland "segmentation fault" messages that drive so many of our students to distraction.

    1. David 132 Silver badge

      Part of the Amiga quirkiness. Early models had various B-52s song titles etched onto the motherboards - Rock Lobster and so on - because the designers were fans of that band.

      The Guru Meditation was essentially the equivalent of a Windows BSOD - once you saw it, that was it, game over, nothing you could do other than reboot - but yes, it was arguably more stylish.

      And due to the lack of hardware memory protection on the Amiga, you'd see the Guru quite frequently.

      Later versions of the Kickstart ROM introduced the yellow Guru Meditation, which in theory was recoverable, but in my experience - nah. Just a different tint to accompany your reboot :)

  2. Korev Silver badge
    Coat

    Sadly, Jack's tenure at the small development concern he'd sabotaged did not survive the incident.

    So he jacked his job in?

    1. Robin

      Is that icon you, getting your jacket?

  3. Wally Dug
    Alert

    Jack and Irving

    Jack and Irving - I like what you did with the names this week. Well, actually, as an Amigan (I still use it under emulation), I don't like what you did with the names - I'm starting to get cold sweats already and I'll probably have nightmares tonight :-(

    1. David 132 Silver badge

      Re: Jack and Irving

      But did you also spot the blooper in the article, which erroneously describes the Amiga 2000 as having a 68020?

      Probably the author meant A1200. The B2000 had, as standard, the same 68000 as the 1500, 500 and 1000 before it, thanks to Commodore’s glacial pace of hardware development.

      1. Anonymous Coward
        Anonymous Coward

        Mea culpa

        "But did you also spot the blooper in the article, which erroneously describes the Amiga 2000 as having a 68020?"

        I'm the original submitter.

        You're right - I misremembered that detail. It must have been machines with A2620 or A2630 accelerator cards, presumably including A2500s, that were able to run the application - but not vanilla A2000s.

        In my defence, I haven't touched an Amiga in over 30 years - I had to look up the above details on Wikipedia. Still, I'm guilty, in a very small way, of the same error "Jack" was guilty of - failing to verify assumptions. *sigh*

      2. MonkeyJuice Bronze badge

        Re: Jack and Irving

        Oh, thank goodness. As an owner of the A2000 I was also thrown by this, but naturally I assumed my failing memory was to blame.

  4. Bebu
    Big Brother

    The real lesson...

    Don't (re)write C etc. code in assembly. While you may be smarter than the compiler, the compiler is a plodder (like Irving :) and usually doesn't make alignment and other foolish mistakes. Get a better compiler, or pass the suboptimal assembler code emitted by the existing compiler through your own optimisation stage.

    Compilers back then did sfa* optimisation, so even adding a peep-hole optimiser was a win (see the sketch below).

    *classical technical term rendered by Terry Pratchett as adamus con flabello dulci+ [Jingo]

    +Sir Terry was a gentleman.
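
    To make "peephole" concrete, here's a minimal sketch in C of a single such rule, assuming 68k-flavoured assembler text and an invented store/reload pattern - real passes match dozens of patterns like this one.

        /* Peephole sketch: a store immediately followed by a reload of
           the same location into the same register is redundant, so the
           reload is dropped, e.g. "move.l d0,-8(a6)" / "move.l -8(a6),d0" */
        #include <stdio.h>
        #include <string.h>

        static int redundant_reload(const char *a, const char *b) {
            char op1[8], src1[32], dst1[32], op2[8], src2[32], dst2[32];
            if (sscanf(a, "%7s %31[^,],%31s", op1, src1, dst1) != 3) return 0;
            if (sscanf(b, "%7s %31[^,],%31s", op2, src2, dst2) != 3) return 0;
            return strcmp(op1, op2) == 0 &&   /* same op and size           */
                   strcmp(dst1, src2) == 0 && /* reloads what was stored    */
                   strcmp(src1, dst2) == 0;   /* into the original register */
        }

        int main(void) {
            const char *code[] = { "move.l d0,-8(a6)",
                                   "move.l -8(a6),d0",
                                   "add.l d1,d0" };
            int n = sizeof code / sizeof code[0];
            for (int i = 0; i < n; i++) {
                puts(code[i]);                /* keep this instruction      */
                if (i + 1 < n && redundant_reload(code[i], code[i + 1]))
                    i++;                      /* skip the pointless reload  */
            }
            return 0;
        }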

    1. MiguelC Silver badge

      Re: The real lesson...

      There are times that the difference in performance matters.

      In my graphic computation project at uni (30 years ago... God, I'm old!), I used some embedded assembler code in my Visual C++ project to load a polygon vector file. While most of my colleagues' code took several minutes to load the largest files, mine did it in a couple of seconds - it loaded the first file so quickly the teacher first thought my code just didn't work. He only realised it had done its thing when I told him to press the menu button to show the 3D object, and all worked :)

      1. Spazturtle Silver badge

        Re: The real lesson...

        The Linux kernel devs are currently replacing all the assembly with C because with modern compilers C is actually faster in most cases.

        Assembly only really has a place in things like audio and video codecs, which are less code and more mathematical wizardry.

        1. UCAP Silver badge

          Re: The real lesson...

          There are arguments for saying that even audio/video codecs can be written in C with no loss of performance - modern optimisation techniques have got so good that assembler language is only really useful in a few edge cases (e.g. embedded board boot-up code).

          1. imanidiot Silver badge

            Re: The real lesson...

            Mostly the argument is that compilers have gotten so good it's VERY hard for humans to actually do a better job (and most of the time they will actually do worse).

            1. tin 2

              Re: The real lesson...

              This. That's the reason, and most certainly NOT that C is in any way inherently better at it than assembly!

              1. heyrick Silver badge

                Re: The real lesson...

                C is better in that there's an awful lot of crap that you can dispense with.

                You don't need to keep the stack balanced. You don't need to waste time wondering about the best way to optimise branches. You don't need to keep track of registers and, if using named definitions, make sure you're not using the same one for different reasons at the same time. You don't need to worry about stack frames so crash traces make sense. You don't...

                Maybe in the '80s and early '90s with slow processors assembler was the best choice.

                But these days? Let the compiler deal with all that crap, it's what it's there for.

                1. Anonymous Coward Silver badge
                  Boffin

                  Re: The real lesson...

                  It's impossible for C to be better than assembler for any specific case. When you run C through the compiler, the output IS assembler. So if a human was skilled enough (extremely unlikely now) they could produce assembly code that's just as good. (Or indeed if you're skilled enough with your butterfly - xkcd 378)

                  HOWEVER

                  The overarching benefit of C (or any other high level language) is that it's processor agnostic. Assembly needs to be written for each target architecture, whereas in C the compiler deals with that (mostly)

                  1. gnasher729 Silver badge

                    Re: The real lesson...

                    The other advantage of C vs assembler is that it takes much less time to implement and test an algorithm, and you can use the time saved to improve your algorithms. Especially when you reach the point where you make code faster by making it more complicated.

          2. david 12 Silver badge

            Re: The real lesson...

            Any compliant C compiler will emit crap arithmetic code because the standard demands it: 32-bit arithmetic emits a 32-bit number. If you don't see the problem with that, it's not your area.

            And modern C compilers are almost all pretty crap with 8-bit processors: everything gets promoted to (at least 16-bit) int. That's not a problem an optimising compiler can fix.
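
            A small illustration of the promotion rule in question - this is standard C and runs anywhere, but on an 8-bit micro the first addition forces the compiler into multi-byte int arithmetic whether the target likes it or not (the variable names are mine, purely for demonstration):

                /* C's integer promotions: uint8_t operands are promoted
                   to int before arithmetic, so the sum is computed in at
                   least 16 bits. */
                #include <stdint.h>
                #include <stdio.h>

                int main(void) {
                    uint8_t a = 200, b = 100;
                    int promoted = a + b;    /* 300: int arithmetic, as the
                                                standard demands */
                    uint8_t wrapped = a + b; /* 44: narrowed only on store */
                    printf("%d %u\n", promoted, (unsigned)wrapped);
                    return 0;
                }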

          3. An_Old_Dog Silver badge

            Re: The real lesson...

            1. If your program is too slow, don't tweak it with calls to assembler code. Get a better algorithm! (If it's I/O bound, well, you're stuck. Or are you?)

            2. Never replace C (or any other higher-level language) with assembler before you've profiled your code. Your "hot spots" rarely are where you think they are.

            3a. What sort of lousy assembler lets you accidentally begin assembling an instruction on an illegal address?

            3b. If you don't know how to intentionally create an instruction on an illegal address, you should not be professionally programming in assembly language.

            1. robinsonb5

              Re: The real lesson...

              > 3a. What sort of lousy assembler lets you accidentally begin assembling an instruction on an illegal address?

              Actually, neither the 68000 nor the 68020 can execute instructions from an odd address - so I suspect what actually happened in the story was that an instruction's word- or longword-sized *operand* was on an odd address, which would cause a bus error on a 68000 but be OK on a 68020. (Probably using move.l to copy the credit string as quickly as possible!)
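
              A hypothetical reconstruction in C of that sort of speed hack (not the actual code from the story): each longword assignment compiles to a move.l on 68k, and if either pointer can be odd it bus-errors on a 68000 while a 68020 just carries on, more slowly.

                  /* Copy a string a longword at a time. Only safe if both
                     pointers are even-aligned - and undefined behaviour in
                     portable C terms regardless. */
                  #include <stdint.h>

                  void copy_fast(char *dst, const char *src, unsigned len) {
                      uint32_t *d = (uint32_t *)dst;
                      const uint32_t *s = (const uint32_t *)src;
                      while (len >= 4) {
                          *d++ = *s++;        /* move.l (a0)+,(a1)+ */
                          len -= 4;
                      }
                      dst = (char *)d;
                      src = (const char *)s;
                      while (len--)           /* byte-sized tail */
                          *dst++ = *src++;
                  }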

        2. NXM Silver badge

          Re: The real lesson...

          No no no and no again.

          I write in assembler because C is too slow and too hard to handle - no good at all for real-time stuff. I did an addressable LED driver on one of the cheapest processors available, which gives you about 4 instructions in which to do anything, and C would take too long.

          1. druck Silver badge

            Re: The real lesson...

            Writing an application in assembler is a whole heap more of masochism than just a driver.

            1. Rich 2 Silver badge

              Re: The real lesson...

              I once wrote an entire flight simulator control in assembler.

              And by “flight simulator”, I mean a real one - you sit (sat) in it, it had hydraulics, and (literally) cost millions

              It didn’t feel the least bit masochistic - one of the very best and most fun jobs I’ve ever done.

              Oh, and it worked perfectly. Never failed.

              1. FIA Silver badge

                Re: The real lesson...

                How easy was that codebase to maintain? Especially long term? (I'm working on a code base in a 'dead language' at the moment; the code is very well written, but that's neither here nor there, as we can't easily hire developers to maintain it. And this is a high-level language, not assembly.)

                How easy was it for someone else to update it after you've left?

                What architecture is it, and does it run reliably on newer versions of the same? How easy is it to port?

                None of these may have been design considerations, but they may later turn out to be issues.

                1. swm

                  Re: The real lesson...

                  I have converted many algorithms from one language to another (LISP to C to Java to C++ etc). It is fairly straightforward. Even converting programming styles is fairly easy.

                  1. ricardian

                    Re: The real lesson...

                    About 20 years ago I converted a Pascal program to C by using MS Word then tweaking the final result. I'd never encountered Pascal before but found an ancient Pascal textbook in a charity shop.

        3. gnasher729 Silver badge

          Re: The real lesson...

          I remember about tripling the speed of some video encoder by replacing an assembler function that handled one pixel as fast as possible with C code that used compiler intrinsics for vector operations, and then unrolling a loop eight times. So if you want to optimise speed plus programming effort, it's high-level language.
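
          For flavour, a minimal sketch of that approach - the encoder and its target aren't named above, so this assumes x86 with SSE2 and an invented brighten() kernel, purely for illustration: sixteen pixels per operation with saturating adds, plus a scalar tail.

              /* Process 16 pixels at a time with SSE2 intrinsics instead
                 of one at a time; _mm_adds_epu8 clamps at 255 for free. */
              #include <emmintrin.h>
              #include <stddef.h>

              void brighten(unsigned char *px, size_t n, unsigned char amount) {
                  const __m128i add = _mm_set1_epi8((char)amount);
                  size_t i = 0;
                  for (; i + 16 <= n; i += 16) {
                      __m128i v = _mm_loadu_si128((__m128i *)(px + i));
                      _mm_storeu_si128((__m128i *)(px + i),
                                       _mm_adds_epu8(v, add));
                  }
                  for (; i < n; i++) {        /* scalar tail */
                      unsigned s = px[i] + amount;
                      px[i] = (unsigned char)(s > 255 ? 255 : s);
                  }
              }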

        4. swm

          Re: The real lesson...

          Multics was written in PL/1, as the higher-level language gave better control of the system design. It was thought that they would lose a factor of 2 in performance but make it up in cleaner code. What happened was that various constructs in PL/1 were slow, so they changed the compiler's code generator to optimize those cases. This benefited everyone, as their code also ran faster.

          There was a switch for the compiler to optimize the code, but it was soon discovered that the compiler ran faster with the switch on (less code was generated), so they took the switch out and always optimized.

          Higher-level languages have come a long way (but the original FORTRAN compiler for the IBM 704 generated code that was quite fast).

        5. pirxhh

          Re: The real lesson...

          It also has its place in crypto, where you want to make sure that different code paths take the same time. Compilers may optimize too well in that case; you don't want minimal time but constant time (to avoid leaking information about keys).
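
          A minimal example of the constant-time idiom in C: the loop touches every byte whether or not a mismatch has already been found, so timing reveals nothing about where the secrets differ. (The catch alluded to above: an optimiser is free to rewrite this, which is why crypto folk end up inspecting, or writing, the assembly.)

              /* Compare two buffers in time that depends only on n,
                 never on the position of the first mismatch. */
              #include <stddef.h>

              int ct_equal(const unsigned char *a, const unsigned char *b,
                           size_t n) {
                  unsigned char diff = 0;
                  for (size_t i = 0; i < n; i++)
                      diff |= a[i] ^ b[i];    /* no early exit */
                  return diff == 0;
              }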

      2. jmch Silver badge

        Re: The real lesson...

        I remember a university assignment [redacted] years ago to implement the same sorting algorithm in C and in assembler, to compare the relative speeds. The C code ran in a few seconds. The assembler seemed to me to be already done as soon as I had pressed 'Enter' on the command line invoking it.

        (of course it's not to be taken lightly, and there are indeed many things that can go awfully wrong coding in assembly.)

      3. martinusher Silver badge

        Re: The real lesson...

        (The difference between buffered and direct file I/O?)

    2. Gene Cash Silver badge

      Re: The real lesson...

      > modern optimisation technique

      > compilers have gotten so good

      > with modern compilers

      May I remind people this was 1985ish, which was nearly FORTY years ago?

      C compilers back then were mostly described by the technical terms "crap" and "shite"

      GCC wasn't even released until 1987, and to quote Wikipedia: "by 1990 GCC supported thirteen computer architectures, was outperforming several vendor compilers, and was used commercially by several companies"

      So back then, rewriting C in assembly was a valid path, and all the moaning about how stupid that is, is way off target.

      1. mattaw2001

        Re: The real lesson...

        Not only were the compilers crap, the CPU architectures were typically not well matched to C code, the OS debugging infrastructure was pitiful, the list goes on....

        Although I would argue modern CPU architectures also don't run C well, as at the language level C does not understand multiple actors like DMA, multiple cores, etc. All hail our new Rust / Go / Java overlords's!

        1. Anonymous Coward
          Anonymous Coward

          Re: The real lesson...

          "overlords's"? WTH is that? Did you do that to ensure it would be wrong?

          1. mattaw2001

            Re: The real lesson...

            Mobile autocorrect on android, bane of my existence when it comes to spelling, punctuation or grammar.

    3. Phil O'Sophical Silver badge

      Re: The real lesson...

      Most compilers have a switch to control whether unaligned memory accesses are permitted, so it's still perfectly possible to have a build environment set up for a 68020 which will generate code that crashes on a 68000 - in any high-level language.

    4. Michael Wojcik Silver badge

      Re: The real lesson...

      "Compilers back then did sfa* optimisation"

      The Amiga 2000 was released in 1987, so the events in this story happened no earlier than that. There were certainly optimizing compilers in 1987. Optimizing compilers go back at least as far as IBM FORTRAN II for the 1401. With C compilers, there were optimizing compilers for RISC platforms such as SPARC (Sun's compiler) and the IBM RT (e.g. Hi-C) in 1987, and even some optimizations in compilers for the PC such as Turbo C.

      GCC was first released in 1987, and by the end of the year was up to version 1.16 (and supported C++, contra the claims in a widely-reproduced potted history you can find online). I don't have the source for GCC 1.x handy, but I'd be surprised if there wasn't at least some basic optimization — things like constant folding and strength reduction — in it.

      That said, it's also known that GCC was optimized only for certain architectures, particularly 68000, prior to the early '90s when interest in using it outside the GNU project really grew (partly in response to Sun's decision to charge for their compiler).

      Compilers varied widely in the late '80s and early '90s. Some did a lot of optimization; some did very little; some were good on some ISAs and bad on others (or only supported a single ISA).

      1. Terje

        Re: The real lesson...

        What you fail to remember is that during that era, optimisation by the compiler was better than nothing, and varied a lot, but was nowhere close to competing with assembler written even by someone not very good at it. And it took a long time for the compilers to catch up to someone good at writing optimised assembler.

    5. ricardian

      Re: The real lesson...

      Back at work in the 1980s we got one of the first IBM PCs and a C compiler. The C compiler was "Aztec" and it was selected because (allegedly) the CEGB were using it in power station design! Memories of writing batch files to compile, link and create .exe files. And TSR (Terminate & Stay Resident) programs were not unknown.

  5. Anonymous Custard Silver badge
    Pirate

    Out in the fields

    Not only the lowest common denominator of hardware, but also of the range of common usage scenarios.

    We had something recently here which when rolled out royally screwed up everyone working in the field.

    Everything had been "tested" of course before roll-out, but always from the comfort and safety of the company LAN when sat in a cosy office.

    But when run on-site with the customer breathing down your neck and access only via VPN into the mothership network, the less than optimal speed and other under-the-hood differences between the connection methods were enough to completely screw up the remote users.

    And given all this was actually aimed primarily at field usage, there were some interesting questions asked at the post mortem as to why it hadn't actually been field-tried before release...

    1. Anonymous Coward
      Anonymous Coward

      Re: Out in the fields

      That sounds like you're describing Google applications. Designed on the latest hardware on a 10G LAN.....fuckers.

      1. tin 2

        Re: Out in the fields

        and Slack, Teams. Throw a bit of a dodgy network into the mix and their error handling is revealed to be... well... absent.

        1. mattaw2001

          Re: Out in the fields

          I couldn't help but laugh when my younger brother came to visit my parents in the hilly mobile-signal blackout of Cornwall and tried to use Google Maps to get there!

          1. G.Y.

            Re: Out in the fields

            That's what offline maps are for

            1. An_Old_Dog Silver badge

              Re: Out in the fields

              Quoting a technophile associate of mine: "Nobody uses [hard-copy] maps anymore!"

              1. jake Silver badge

                Re: Out in the fields

                "Quoting a technophile associate of mine: "Nobody uses [hard-copy] maps anymore!""

                I've been working with and on the bleeding-edge of the technical world for over half a century now.

                I still much prefer a good paper map to the electronic equivalent.

                1. gnasher729 Silver badge

                  Re: Out in the fields

                  I recommend Pocket Earth, which lets you download maps ahead of time. I think Apple Maps also does this now.

              2. Mast1

                Re: Out in the fields

                "Nobody uses [hard-copy] maps anymore!"

                .......because, at least in the UK, when their phones/tablet devices have got water-logged, they whistle up the local mountain rescue team.

      2. An_Old_Dog Silver badge

        Re: Out in the fields

        ... on the latest hardware on a 10G LAN and/or 6G, always-on, no-shadow, high-signal-strength, mobile-phone radio.

    2. rafff

      Re: Out in the fields

      "why it hadn't actually been field-tried before release..."

      Probably, as at several places I worked, there was no pre-prod environment to test in.

    3. Pete Sdev Bronze badge

      Re: Out in the fields

      Ah, good old localhost-syndrome, sibling to it-works-fine-on-my-machine.

      Actually, that's just given me an idea: recreate "Stopit and Tidyup" but with common naive programming mistakes. Need an ML model to generate narration in Terry Wogan's voice.

      1. Yet Another Anonymous coward Silver badge

        Re: Out in the fields

        We had the opposite: worked in the field but not in the office.

        A Bluetooth connection between an oil field data logger and a PDA, with a pairing that was "simplified".

        Great in the middle of nowhere with not a cell phone to the horizon. Took it to a trade show with 10,000 visitors, all with multiple Bluetoothy gadgets screaming for attention, and our device went to have a little cry in the corner.

        1. David 132 Silver badge
          Happy

          Re: Out in the fields

          Ah, yes. Back in my day working at $LargeTechCorporation, we were frequently called on to do demos of our latest & greatest at CES, Microsoft TechEd and similar all-comers trade shows.

          The marketing people could never understand why I always put my foot down with a firm hand :) and insisted on cabled Ethernet for demos wherever possible.

          "But the cables look so ugly! With WiFi $latestversion we can get just as much performance and it'll support our work-anywhere, truly-mobile marketing message!"

          Yes, Chuckles. And WiFi worked perfectly in the lab.

          But in the Moscone Center, or Olympia/ExCel, with hundreds of other companies' demo networks, and (tens of) thousands of attendees' devices... it would always crash and burn. How's your 4K video streaming demo NOW, Mr Marketing Know-it-All?

          1. John Brown (no body) Silver badge

            Re: Out in the fields

            Oh, yeah, seen that on a smaller scale too. A shared building, and reception used desktops with WiFi, despite there being Ethernet wall points. Installers set it all up on a Sunday and left. By Monday I was called in to find out why only one of the 3 reception computers was connecting. Turns out it wasn't two out of three specific devices failing - it was actually whichever one was switched on first that worked. A quick WiFi channel scan found about 35[*] wireless access points across at least 15[*] distinct networks, all fighting over the channel space. Putting reception on to yet more WiFi was the final straw. The fix was to patch in the Ethernet ports to the correct switch :-)

            Sometimes, WiFi is in use "because cool", not because it's actually required.

            * Numbers made up, it was a few years ago now, but whatever the numbers were, they were too big for the available airspace.

  6. Caver_Dave Silver badge
    Boffin

    Strange story about field testing

    I used to work with a multinational team that tested telephone networks in the time of GSM rollout.

    They had to drive around the country in a van with multiple handsets mounted on a rack and monitoring the signal reception parameters.

    They had to test in Greece, but the Greek speaker was off on some kind of sick leave.

    So they sent the Chinese speaker, as he could at least order what food he liked from the Chinese take-aways for the 2 weeks of his stay.

    1. David 132 Silver badge
      Happy

      Re: Strange story about field testing

      What's Greek for "A Number 27 and a side of Number 14, please?"

      1. J.G.Harston Silver badge

        Re: Strange story about field testing

        shik gho sun dai mgoi.

  7. Howard Sway Silver badge

    his name should go first in the About box

    This reminds me of the time I worked at a terribly corporate firm with in-house development teams who all did completely their own thing, some of which were led by really bad programmers. On a project to create a suite of Windows applications to replace the old terminal-based stuff, one guy decided it would be neat to have an autoscrolling "rolling credits" style About box which listed himself as "Lead Programmer and Designer", under our manager's name, which came first as "Software Development Manager". Said manager was very pleased when he saw this during development. Unfortunately, the application was an atrocity of highly original and unusable interface design, which crashed many times a day, resulting in massive complaints from the poor users - who had been supplied with a handy list of names for who was responsible, and let everybody else in the company know it.

  8. Brynstero0

    I am Jack's

    I am Jack's P45

    (the often omitted Fight Club reference)

    1. Yet Another Anonymous coward Silver badge

      Re: I am Jack's

      Or in Full Metal Jacket mode

      This is my P45. There are many like it, but this one's mine.

      My P45 is my best friend. It is my life.

  9. imanidiot Silver badge

    Fair enough

    Usually with these tales of Who, Me? I consider the mistakes and screwups actual mistakes and screwups that anybody could have made, and not deserving of termination. This tale, however? Jack deserved his firing. The "whose name first" issue was petty in the first place, and implementing program-breaking (even if accidental) code with a "time trap" for swapping the names was petty in the extreme.

    1. robinsonb5

      Re: Fair enough

      The funniest aspect is that of the "wedge" Amigas only the 500+ even had a battery backed clock as standard - so plenty of machines wouldn't have even known what the true date was!

  10. tin 2

    Amiga pedantry. Sorry.

    A2000 still had a 68000. Possibly it was upgraded or was a 2500, but it didn't have an 020 -because- it was a 2000.

    1. l8gravely
      Gimp

      Re: Amiga pedantry. Sorry.

      I can't be arsed to remember, but I think the A2000 could use a 68010 by default? I remember swapping one into my A1000 at the time, before I then upgraded to an A2500 with a 68020... but the memories are really dimm and I might be just confused. And yup, I'm confused. I must have just had an A2000 which I then upgraded with a 68010 myself. I don't think I ever had an onboard accelerator card with 68020 or 68030 processors, 68881 math coprocessor and MMU chip. But I do remember dropping $800 for an 80gb Quantum 3.5" SCSI drive since I was done with swapping floppies all the time. This was during the era of the "stiction" problem with Seagate ST-mumble-mumble drives, which people would have in their Amigas and PCs and which would not spin up if you powered them down too long.

      Memories!

      1. WolfFan

        Re: Amiga pedantry. Sorry.

        I think that you mean 80 MB, not 80 GB. Gigabyte drives didn't exist yet. Not for desktop machines, anyway. Your friendly neighborhood Big Iron might have a gig or two or three worth of storage, but not a desktop, and most definitely not 80 GB. And it would cost several orders of magnitude more than $800.

        Around that time my Mac Plus at home had a 60 MB external SCSI drive. It cost $600. I thought that I would never run out of storage. Earlier this month I got an email attachment bigger than that. Ah, the Daze of Youth!

        1. Yet Another Anonymous coward Silver badge

          Re: Amiga pedantry. Sorry.

          >a 60 MB external SCSI drive. It cost $600. I thought that I would never run out of storage

          When external full-height 1GB SCSI drives dropped to 1000 quid, we bought one for every workstation. Unlimited storage, never have to copy data on and off tape again!

          I just bought a 256GB SD card for the dashcam, so small I will lose it, for the price of the SCSI terminator.

        2. David 132 Silver badge

          Re: Amiga pedantry. Sorry.

          > Gigabyte drives didn't exist yet.

          Indeed. I remember when in the 2nd year of University, a friend got a 1GB 3.5" IDE drive for his PC - this would have been around '94.

          We were all incredibly impressed at how humungous it was and, of course as is the Sacred Tradition, doubtful about how he'd ever fill it. To quote Blackadder, "More capacious than an elephant's scrotum".

          I had a 60MB-ish drive in my Amiga A1200 at the time and thought THAT was pretty huge.

      2. Gene Cash Silver badge

        Re: Amiga pedantry. Sorry.

        > but the memories are really dimm

        Nice

      3. TFL

        Re: Amiga pedantry. Sorry.

        The 68010 was a simple drop-in replacement for the original 68000 CPU, though it didn't buy you much. As you note, no MMU, no math co-processor. I'd done the same with my A500.

        One company made some neat add-ons, such as an IDE drive controller that would fit in the A500! Little daughter board under the CPU, plugged into the CPU socket, with the IDE ribbon connector beside.

        1. David 132 Silver badge
          Thumb Up

          Re: Amiga pedantry. Sorry.

          And of course the obligatory 512KB RAM with Real Time Clock board for the trapdoor expansion slot! I had an A590 MFM/RLL drive unit for the sidecar slot too - a whole 20MB of expansion, and Workbench icons appeared INSTANTLY!

      4. mirachu Bronze badge

        Re: Amiga pedantry. Sorry.

        MMU chip? ...Oh yeah, 68851? Usually the CPU would have the MMU, if any. I think a vanilla (non-EC or LC) 68030 was the easiest way to get one.

  11. ColinPa Silver badge

    Test on the slowest box

    My boss told me about when he was a new grad, when new terminals were being designed with colour and which could draw graphics. All the developers and proper testers got the latest kit for testing the software. He got a machine, as he said, powered by a rubber band. This machine had a problem: it would display, then a second later redisplay the same stuff. Development didn't believe him (he was only a new grad), until one of the developers sitting at the terminal for a different problem saw it too.

    The root cause was that the display software was displaying everything twice - but because the developers' and testers' terminals were so fast, they didn't see it happen.

    1. Korev Silver badge

      Re: Test on the slowest box

      My old work's QA team tested running Oracle DBs on a single Pentium 4 with a gig of RAM, slow even at the time. Their reasoning: "We know Oracle scales well; we want to know what happens to our software when the database server is overloaded"

      1. Jou (Mxyzptlk) Silver badge

        Re: Test on the slowest box

        Citrix should do that. Maybe they would finally go ahead and fix the causality errors on loaded network connections.

        Minor variant: You click, move mouse, doubleclick, move mouse, click. Executed on the remote end: Move mouse, click, move mouse, doubleclick, click.

        Major variant: You type, and some keys get lost and others get switched in pairs.

        1. Yet Another Anonymous coward Silver badge

          Re: Test on the slowest box

          >You type, and some keys get lost and others get switched in pairs.

          You want the keys to arrive in a particular order?

          You need to upgrade to our super-duper(tm) Enterprise grade version

          1. ChrisC Silver badge

            Re: Test on the slowest box

            "You want the keys to arrive in a particular order?"

            Why yes Mr Morecambe, I very much would...

            Regards,

            A. Preview

            1. David 132 Silver badge
              Pint

              Re: Test on the slowest box

              Bravo. If only I had more than one upvote. Have a pint instead!

            2. Jou (Mxyzptlk) Silver badge

              Re: Test on the slowest box

              Aw, I'm from across the Channel, and a bit east... Can someone tell me the joke behind the reference? Something from Eric Morecambe, or something referring to the lively and always happy Morecambe beach? I mean, I am proud to get most Monty Python references, but I don't know any Morecambe references...

              1. Yet Another Anonymous coward Silver badge

                Re: Test on the slowest box

                Search for Morecambe+Wise show and Andre Previn

                You're playing all the wrong notes!

                I'm playing all the right notes, but not necessarily in the right order.

          2. Jou (Mxyzptlk) Silver badge

            Re: Test on the slowest box

            > You need to upgrade to our super-duper(tm) Enterprise grade version

            But we ordered the full intellilink package!

        2. CrazyOldCatMan Silver badge

          Re: Test on the slowest box

          Citrix should do that

          Likewise Microsoft with the Vista & Windows 8 debacles.

      2. vogon00

        Re: Test on the slowest box

        As I've said before, I used to be a professional tester, which was enormous fun :-)

        Along comes carrier-grade VOIP, and I managed to add an item of test equipment - the Shunra Storm - to the list of project test equipment. At the time, it was the newest and smartest bit of WAN simulation kit available, and allowed all-too-real simulation of WAN link speed, insertion of jitter, packet duplication and packet loss (in one or both directions) and other packet-related skullduggery. Sounds hard, but it was comparatively easy to describe the topology to emulate and the packet-effect simulations to apply, thanks to the then *very* innovative use of Visio as a front end.

        I cannot overstate the importance and impact that device had... and all in a little 2U box. Beers & props to the developers.

        My colleague and I were either loved or feared by the devs - loved for finding all sorts of 'retry' bugs with the packet-loss tricks, and feared due to the latency tests, which exposed one or two gaping holes! After a while, the call agent stopped falling over and started to become more resilient, handling some godawful network conditions without collapsing in an irrecoverable heap!

        If you worked on the XCD5000 and/or the NN 1460 SBC, then we probably know each other :-) Good times.

        1. watersb

          Re: Test on the slowest box

          > allowed all-too-real simulation of WAN Link speed

          That sounds lovely. I could really have used one of those at $redacted_company_name to demonstrate the 1990s home computing experience that we were building on our Silicon Valley network, one hop away from MAE-West in San Jose.

          1. jake Silver badge

            Re: Test on the slowest box

            If you had put out the word, I'd have loaned you a BERT or two of one description or another ... we were (mostly) done with them by 1988.

            Had three of them in MAE-West, another four at the Bryant Street CO in Palo Alto, and a couple more at Stanford.

            Not my money, I hasten to add ... they were originally BARRNet issue (I think).

    2. swm

      Re: Test on the slowest box

      Also, test on the slowest network.

      1. Anonymous Coward
        Anonymous Coward

        Re: Test on the slowest box

        And also, on the slowest user.

  12. Stuart Castle Silver badge

    I've talked about it before, but where I used to work, we used a custom written Equipment management system to track who had what equipment. This system tracked ad-hoc loans, as well as allowing equipment to be booked. Some of the equipment required a risk assessment to be filled out and approved before booking, and because students on different courses required different equipment, the system only offered equipment available on the courses the student was doing.

    I wrote the booking system; one colleague wrote the backend API it relied on, plus the ad-hoc loaning application; and a second colleague designed a nice user interface. Web design is not a strong point of mine, and the booking system was to be web based, so that colleague designed a non-functional site and I provided the functionality.

    Then we got reports that the equipment list page was taking several minutes to load for some students. I knew the reason for this. When the site displayed the page listing the equipment, obviously it needed the list, which was one API call, but it also needed a couple of other bits of info, which the web service providing the data required separate API calls for. As such, each item of equipment needed three extra API calls. Given that some courses required access to over 100 items of equipment, some students' bookings could generate >300 API calls. Most of the students were booking equipment on a gigabit network, but given that a lot of students leave their coursework to the last minute, come coursework hand-in time the server could easily be dealing with a couple of hundred students booking equipment at the same time. So it could easily be dealing with tens of thousands of API calls, which really slowed it down.

    I pointed out the problem and suggested a solution (add a new API that returned an array of all the equipment registered to the course, containing all the information needed), but my colleague swore blind it must be my "iffy" code. I pointed out that the code on that page did nothing but call the APIs, then output the results, but he still didn't believe me. So I added code to the page on the test server that logged what was being done, when, how long it was taking, and the results. It also logged the start and end of each page download. I showed him the log for one of our courses with the most equipment. It was >10 sides of A4 and was mainly the results of calling the same 3 APIs over 100 times on different bits of equipment. He said "Fucking hell... Let me look into this" and went away with the logs. Within 2 days I had the API I'd requested, and the load times for the page for the biggest courses went from >10 minutes to <1.
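
    A toy sketch of the shape of that problem (every api_* name here is an invented stand-in, not the real system): three round trips per item versus one batched call returning everything.

        /* Counting round trips: the per-item pattern vs the batched API. */
        #include <stdio.h>

        static int round_trips = 0;

        static void api_item_details(int id)      { round_trips++; (void)id; }
        static void api_item_availability(int id) { round_trips++; (void)id; }
        static void api_item_risk(int id)         { round_trips++; (void)id; }
        static void api_course_bundle(void)       { round_trips++; }

        int main(void) {
            enum { ITEMS = 100 };

            round_trips = 1;                  /* the initial equipment list */
            for (int id = 0; id < ITEMS; id++) {
                api_item_details(id);         /* three extra calls per item */
                api_item_availability(id);
                api_item_risk(id);
            }
            printf("per-item: %d round trips\n", round_trips); /* 301 */

            round_trips = 0;
            api_course_bundle();              /* one array with everything  */
            printf("batched:  %d round trips\n", round_trips); /* 1 */
            return 0;
        }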

    1. J.G.Harston Silver badge

      That reminds me of the f***wits who don't understand "display sized to X" is not the same as "resize to X".

      Last week I got an email with about three lines saying (paraphrased) "Thanks for the update. Bob." I wondered why it took ten seconds to open. I glanced at the inbox listing and wondered why it said (252M) at the end of the line. How TF is a three-line email 252M? Then I thought... uh oh... There's a little image under the signature line. Yep. Menu -> Display image. It was three times the width of my monitor, set to display in the email as 160 pixels wide.

    2. Trixr

      What a royal pr*ck of a colleague, though. No-one you work with should make you jump through those hoops, unless you're known to be completely incompetent. Especially something they could have replicated themselves with a test account in conjunction with your initial observations of the problem. Or maybe he was just lazy.

      Thankfully I've only encountered one or two colleagues like that in my career. One demanded full logging of a particular problem to "prove" it was not our config (a simple SMTP delivery issue where the destination was rejecting our messages due to policy) - never heard anything else after I provided all 10GB of text message log files for that day.

      1. watersb

        > "No-one you work with should make you jump through those hoops, unless you're known to be completely incompetent. Especially something they could have replicated themselves."

        This really hit home for me. I have been thinking about this behaviour, and I think it's quite a bit more common than we might believe - especially among the tech-savvy.

        It comes down to the essential experience of computer vs human: it's quite literally unbelievable how quickly a modest, modern computer can perform the wrong thing, or perhaps the only-incidentally-correct thing. The senior programmer with decades of experience with these beasts is particularly susceptible to this form of blindness.

        We should teach the kids at university how to test and measure the execution of existing systems. Yet we still spend all of the time teaching them to create new problems.

        There's a proverb in here somewhere.

        1. Caver_Dave Silver badge

          Used to work for a mil-aero board manufacturer doing the low level software including Built-In-Test.

          I had to prove every hardware fault in minute detail before it would be fixed.

          Something I definitely don't miss, but it was a good learning experience that I still benefit from.

  13. Marty McFly Silver badge
    Pint

    About box Easter Eggs

    I do miss the days of clever nuances secreted within the About box messages. Alas, on the corporate software side we had to start signing documents attesting to 'No undocumented features or functions'. The last of those fun bits were EOL'd about 20 years ago.

    1. Yet Another Anonymous coward Silver badge

      Re: About box Easter Eggs

      But if you have no "undocumented features", what do you call the bugs?

      1. John Brown (no body) Silver badge
        Joke

        Re: About box Easter Eggs

        "But if you have no "undocumented features", what do you call the bugs ?"

        A sacking offence for lying on the declaration? :-)

        1. Anonymous Coward
          Anonymous Coward

          Re: About box Easter Eggs

          Unintended features!

      2. Doctor Syntax Silver badge

        Re: About box Easter Eggs

        I was, I think, one of the guinea pigs to get a system reviewed for security under the BT project (the name escapes me) following on from Prince Philip's Prestel mailbox getting hacked. I was asked to confirm our system had no undocumented features. I suggested he go to BT procurement and get them to get Microsoft to sign such a declaration. I think he did - the question was dropped from the checklist. I later found out his background wasn't IT at all; it was perimeter security.

    2. David 132 Silver badge

      Re: About box Easter Eggs

      ISTR that in Windows 3.x, if you clicked repeatedly on the Windows Flag logo in the About box, it would eventually switch to a bitmap picture of Bill Gates instead and a scrolling list of the developers.

      And let's not forget the Wolfenstein-alike hidden in Excel 95, or the entire flight-sim that came with Excel '97. The latter was pretty much the straw that broke the camel's back and caused - given that the business world at the time was waking up to code security issues in general - the total clampdown on Easter Eggs at Microsoft and beyond.

      And of course, I must include the obligatory XKCD.

      1. Anonymous Coward
        Anonymous Coward

        Re: About box Easter Eggs

        IIRC it was a grey teddy bear

        Anyway, I do remember playing the games in Excel!

  14. martinusher Silver badge

    Rewriting in assembler 'to go faster'?

    That's what doomed the project/product. Interpersonal rivalry was just the garnish.

    I could understand taking an original application and rewriting it in C -- parts of it, anyway -- because it would make maintenance and extension easier. Never the other way around.

    1. John 110
      Headmaster

      Re: Rewriting in assembler 'to go faster'?

      @martinusher

      Might be handy to go back and read some of the earlier comments rather than just jumping in.

      [HINT: this is Amiga era software...]

      1. jake Silver badge

        Re: Rewriting in assembler 'to go faster'?

        "[HINT: this is Amiga era software...]"

        [HINT: So is martinusher. Well, wetware anyway.]

  15. aerogems Silver badge
    Holmes

    I've always thought

    Developers should always be working on machines spec'd to the minimum system requirements for the app. That way, you know the min requirements are actually going to give acceptable performance, not just "it runs, but is barely usable."

    1. M.V. Lipvig Silver badge

      Re: I've always thought

      Or even the same platform. At a previous company the developers on this one platform would do all their work on Apples, deploy to Windows, then spend the next week fixing it. The penny finally dropped for them and they started testing on Windows before deploying.

    2. nintendoeats

      Re: I've always thought

      Hmmm, except that software often goes through periods where performance is not acceptable during development. If you give the devs minimum spec machines, then you are needlessly hampering them during those times. Also, sometimes there are features which simply require more grunt than the minimum specs.

      Example, I was working on a 3D Display system. Most users were on integrated graphics, but I had an Nvidia card. When working with high detail objects (especially those with transparency), the Intel GPU was nowhere. We developed a system to degrade those objects automatically, but developing that system obviously required lots of testing and fine-tuning. If I had been stuck with the Intel the whole time, that process would have been painful on the deepest levels.

  16. Scott 26

    Judging by the up and down votes on most comments, it feels like El Reg has its own Jacks and Irvings....

    (not a dev, so most of the points raised go over my head and I can't wilfully vote on any of the comments)

  17. Anonymous Coward
    Anonymous Coward

    Wrong punctuation

    "in it's stride" should be "in its stride".

    "It's" is short for it is.

    "Its" is something belonging to it.

  18. Boris the Cockroach Silver badge
    FAIL

    Praise be for logs

    Happened to me.

    Night shift had a big crunch, destroyed the production cell.... had to rebuild everything the next day... of course the boss is doing boss things like blaming me... got the logs out of the machine showing the edit on the program was at 8pm.... 3 hrs AFTER I went home.........

    The night shift guy was duly spitted and roasted when he turned up....

  19. M.V. Lipvig Silver badge

    Had similar

    I was the platform manager for a now-bankrupt Canadian telecom system. I'd used the system as a tech so knew what the users needed to use it effectively. Worked with the vendor, had backups going flawlessly, and all equipment could be accessed perfectly in a linear approximation of the circuit layouts the techs used.

    In comes a new manager, and he has a brother-in-law to hire. Unfortunately (or fortunately, looking at the long view) the manager decided I was to be the sacrificial goat. He hired the BIL, assigned him as my second for training, and once he thought the BIL had a good handle on it, I was moved out. I was able to jump groups, which the boss gladly approved.

    Anyway, here's the pettiness. The BIL decided that if he was going to make his mark, he was going to have to change what I did. He first got out his Linux for Dummies book and "cleaned up" the vendor-developed backup scheme, then he proceeded to redesign the spans to be different from how I had them. About a year later the platform was unusable, the last usable version that I did was long gone, and the techs went back to just logging directly into the equipment to test.

    Later, when senior management came to me to find out why I screwed it up (they tried throwing me under the bus) I pointed out that I had been replaced as platform manager on this date, and here's the email showing that my admin privileges had been revoked. Without admin privileges, I had no access to make any changes after X date, and it was working properly up until then. I never heard anything about it again. Curiously, I never saw that manager or his BIL again either, although I worked there another 10 years.

  20. ldo

    To Require Alignment Or Not To Require Alignment?

    I thought for a very long time that CPUs should permit unaligned accesses to multibyte objects without crashing. Yes, it hurts performance. But on the other hand, it simplifies programming somewhat by removing a source of crashes, and I definitely feel that correctness should come before efficiency. Let the programmer decide whether they want to pack things or not!
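
    Whichever way the CPU designers jump, portable C can sidestep the question with a helper like this minimal sketch: memcpy makes the possibly-unaligned read legal, and on CPUs that tolerate unaligned access the compiler collapses it to a single load anyway.

        /* Read a 32-bit value from any address, aligned or not, without
           alignment traps or undefined behaviour. */
        #include <stdint.h>
        #include <string.h>

        uint32_t load_u32(const void *p) {
            uint32_t v;
            memcpy(&v, p, sizeof v);  /* optimised to one load where legal */
            return v;
        }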

    1. Doctor Syntax Silver badge

      Re: To Require Alignment Or Not To Require Alignment?

      You reminded me, for the second time recently in these pages, of a FOSS project that wasn't an emulator. Worked fine until it didn't and it didn't because it crashed on the splash screen of a product I ran under it. A lot of other users reported issues with other products.

      A bit of git-slicing took me to the responsible commit. It added a case option that said if the driver reports it needs 24 bits per pixel, use 32 instead. The devs' argument seems to be that using 24 is slower for gaming - hint, guys, if it crashes you'll never know whether it's slower or not.

      Obviously, knowing where to look, I just edited the function on every release, rebuilt, and continued using it. I used to run their test suite on every release; it failed multiple tests on the unedited version and ran more or less perfectly with that one edit, but eventually they started refusing to accept results from the fixed version. I suppose that was easier than having the results of their obduracy staring them in the face every month or so. Eventually there was a big rewrite and I couldn't be arsed to go looking for where they'd moved that erroneous assumption, so I started using a VM instead.

      Eventually I moved to different H/W and the problem went away but it's one project I've never really been able to take seriously ever since.

    2. gnasher729 Silver badge

      Re: To Require Alignment Or Not To Require Alignment?

      I remember using a processor where unaligned access trapped and was emulated in software. Fine if it happens once. Bad if you allocate an unaligned array with a million integers. A million traps slow you down.

  21. hoofie2002

    Happy Birthday

    Many years ago I was working on some C code in Oracle on Unix that did a lot of data analysis and reporting on pharmacy dispensing scripts, which was then sent back to the drug companies.

    The previous developer - who was f&&&ing useless, but his mother was a Director... - had put in code to print "Happy Birthday" everywhere when it was his birthday.

    In a Production System.

    Across all the printed reports.

    Which were sent out to Customers.

    Twat.

    1. This post has been deleted by its author
