A practical demonstration of the difference between 'resilient' and 'redundant'

Monday is upon us, and with it comes a cautionary tale of how one Register reader's overconfidence led to his undoing, thanks to an unexpected interfacing with a belt buckle, in today's edition of Who, Me? Our story comes from "Dan", a lead system admin at what he described as a "rather large company" where the vast majority …

  1. Anonymous Coward
    Anonymous Coward

    The old demonstration of if only

    It’s amazing how many businesses don’t see the need to invest in redundancy.

    It can be eye wateringly expensive, but so is the alternative of not having it when disaster strikes.

    It’s even worse when businesses build redundant systems but don’t configure them properly, simply ticking a box on the DR plan to say there is redundancy when in reality it requires a huge manual failover of every different system.

    1. UCAP Silver badge

      Re: The old demonstration of if only

      It’s amazing how many businesses don’t see the need to invest in redundancy

      Absolutely agree.

      As far as many bean-counters are concerned, critical failures are million-to-one chances, and hence are so unlikely that there is no point spending money to protect against them. What they always seem to forget is that million-to-one chances in IT occur nine times out of ten.

      1. Anonymous Coward
        Anonymous Coward

        Re: The old demonstration of if only

        UCAP,

        "What they always seem to forget is that million-to-one chances in IT occur nine times out of ten, on average !!!"

        FIFY :)

        1. WonkoTheSane
          Thumb Up

          Re: The old demonstration of if only

          Ah. "The Colon Effect", as mentioned in the historical document, "Guards! Guards!"

      2. jmch Silver badge

        Re: The old demonstration of if only

        "As far as many bean-counters are concerned, critical failures are million-to-one chances"

        Brings to mind Feynman's observations when investigating the Challenger disaster that each management level somehow thought that the probability of a failure was an order of magnitude less than what the level below them thought it was.

        1. Sam not the Viking Silver badge

          Re: The old demonstration of if only

          Feynman pointed out that the difference in probability (of failure) was three orders of magnitude: 1:100 as opposed to 1:100,000. As these numbers are so big, he wisely illustrated the argument by suggesting that, in management's opinion, a shuttle could be launched every day for the next 300 years with only one failure.

          I would like to emphasise jmch's remark by pointing out that the accountant's "million to one" is ten times less likely than NASA's management-estimate.
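
          A quick back-of-the-envelope check of those figures, as a minimal sketch in Python (the launch cadence and estimates are just the ones quoted above):

          launches = 365 * 300          # one launch a day for 300 years
          print(launches)               # 109,500 - close enough to the 1-in-100,000 management estimate
          print(1_000_000 // 100_000)   # 10 - the accountant's million-to-one is ten times less likely again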

      3. Terry 6 Silver badge

        Re: The old demonstration of if only

        I love this quote from one of Churchill's cabinet ministers, Alan Lennox-Boyd:

        Accidents in the main arise from the taking of very small risks a very large number of times. A thousand-to-one chance against an accident may not be rated very high, but for every thousand people that take it there will be an accident.

        1. Anonymous Coward
          Anonymous Coward

          Re: The old demonstration of if only

          I understand the point of the quote, but can I just say: Alan Lennox-Boyd did NOT understand probabilities!

          1. Rich 11

            Re: The old demonstration of if only

            He may not have done, but if he did and he'd said "if a thousand people take it, there is a 63% chance there will be an accident" he would then have had to sit down and explain it to a bunch of politicians, a prospect that would make even the most capable and patient of maths teachers baulk.

            1. This post has been deleted by its author

            2. xeroks

              Re: The old demonstration of if only

              I'm clearly not as au fait with stats and probability as I thought. Why only 63%?

              While I understand that after 1000 rolls of that awkwardly sized 1000-sided dice there's not a 100% chance of an accident, I don't understand why it's as little as 63%.

              1. Anonymous Coward
                Anonymous Coward

                Re: The old demonstration of if only

                It's easiest to think of it the other way around. What is the chance of no accidents happening?

                The first time, it's 0.999 (999 times out of a thousand, everything is fine). A thousand occurrences with no accident has a probability of 0.999 raised to the power of a thousand. In Excel, POWER(0.999, 1000) returns just under 0.368. So the chance of one or more accidents happening per thousand times is just over 0.63.

                Calculating the odds of exactly one accident happening after a thousand occurrences is left as an exercise for the reader.
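
                If anyone would rather sanity-check that outside Excel, here's a minimal sketch in Python (assuming, as above, an independent 1-in-1000 chance each time):

                from math import comb

                p = 1 / 1000                 # chance of an accident on any single occasion
                n = 1000                     # number of occasions

                p_none = (1 - p) ** n        # ~0.368: no accidents at all
                p_at_least_one = 1 - p_none  # ~0.632: one or more accidents
                p_exactly_one = comb(n, 1) * p * (1 - p) ** (n - 1)  # the "exercise for the reader": also ~0.368

                print(p_none, p_at_least_one, p_exactly_one)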

                1. Rich 11

                  Re: The old demonstration of if only

                  What is this "Excel" of which you speak? Dost thou not grasp the concept of electrickery calculators and/or log tables? Even Newton's approximation?

        2. Anonymous Coward
          Anonymous Coward

          Re: The old demonstration of if only

          Which is why reliability of critical systems needs to have a number of 9s. For instance, a 99.9% reliability sounds great - unless it's the brakes on my car. At that level, getting into an accident due to brake failure would be a near certainty, as they get pushed dozens of times per trip, times a dozen trips per week...
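
          To put a rough number on that near certainty, a minimal sketch in Python (the per-application figure and usage pattern are only the illustrative ones above):

          reliability = 0.999          # 99.9% chance the brakes work on any single application
          applications = 24 * 12 * 52  # a couple of dozen pushes a trip, a dozen trips a week, for a year

          p_no_failure = reliability ** applications
          print(p_no_failure)          # ~3e-7 - a failure-free year would be vanishingly unlikely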

  2. ShadowSystems

    An SFW tale to share?

    Noooo... *Nervous looks left & right* Definitely NSFW, sorry. Besides, the NDA on what happened to the moose hasn't expired yet. *Cough*

    1. Chris G

      Re: An SFW tale to share?

      But.... Is the moose okay?

      1. David Robinson 1

        Re: An SFW tale to share?

        Not after it had bitten my sister.

        1. BebopWeBop

          Re: An SFW tale to share?

          Sure it was not the other way round?

          1. UCAP Silver badge
            Joke

            Re: An SFW tale to share?

            Have you seen his sister?

            1. the spectacularly refined chap

              Re: An SFW tale to share?

              Rimmer:

              I've got something for you. A lateral thinking question.

              Cat:

              A lottery what?

              Rimmer:

              Ahhh, I knew I could rely on you. What caused this accident?

              Cat:

              [alert.] What accident?

              Rimmer:

              No, no. It's a question, alright? Are you ready? It's 1971, a man-

              Cat:

              Was he Swedish?

              Rimmer:

              ... Yes?

              Cat:

              A moose! [Rimmer sags in resignation.] It was a moose! He swerved to avoid it, and hit a tree! Oh, and the moose is on the road, by the way - not in the car driving. Oww! Yea-ah! [dances out of the room.] Oww! Yea-ah!

            2. The Oncoming Scorn Silver badge
              Pint

              Re: An SFW tale to share?

              Is this the sister with a brother-in-law called Svenge, who's an Oslo dentist and star of many Norwegian møvies: "The Høt Hands of an Oslo Dentist", "Fillings of Passion", "The Huge Mølars of Horst Nordfink"?

              1. Ken Moorhouse Silver badge

                Re: a brother in law called Svenge

                Did he need to go to the GUM Clinic after these exploits?

      2. ShadowSystems

        Re: An SFW tale to share?

        I can't say, that's part of the NDA. Otherwise I could tell you about how-

        *Gets tackled from the side, dragged out in shackles, a gag, & a black sack over his head with large thugs advising him of the terms of the NDA*

        1. J. Cook Silver badge
          Joke

          Re: An SFW tale to share?

          *giggles*

          That reminds me that that one time where [redacted] did [redacted] to that system, and boy was [redacted] pissed! I didn't even know that [redacted] HAD trained goats that could do [censored], let alone [cens- ARE YOU KIDDING US?!?!?!]

          1. Marcelo Rodrigues
            Trollface

            Re: An SFW tale to share?

            "That reminds me that that one time.."

            At band camp?

            1. J. Cook Silver badge

              Re: An SFW tale to share?

              I wouldn't know- I never went to band camp.

  3. Giles C Silver badge

    Proliant server

    A tower Compaq ProLiant server (this was pre-2000); the front panel was off as it had to be moved to gain access for something.

    Thumb caught the power switch - the thing is these switches only actuate when the pressure is released - I sat there for about 10 minutes waiting for everyone else to shut down before I could let go of the switch - the springs on those switches get very heavy after a minute or two....

    The server was the main Novell box for the entire company.

    1. Rufus McDufus

      Re: Proliant server

      I did that with a Dec Alpha probably around the same time (late 90s). Powered off server for maintenance. Oops - wrong one - I got the mail server by mistake. Stuck for about 15 minutes with my finger pressing the button in before my colleagues came in and laughed at me.

      1. KarMann Silver badge
        Joke

        Re: Proliant server

        Username checks out. No offense, but fitting with the colleagues' reaction.

    2. big_D Silver badge

      Re: Proliant server

      Been there, done that.

      The best was a DEC engineer. We had ordered a memory upgrade for a VAX 11/785. All jobs moved to the neighbouring machine, all users logged out and onto the next machine...

      VAX shutdown, console says it is safe to power off...

      Engineer goes behind the cabinets and... Nothing, nichts, nada... Then sudden shouts, squeals and a general blue tinge to the air around the next VAX's console.

      Yep, the DEC engineer had pulled the mains isolator on the wrong machine!

    3. Anonymous Coward
      Anonymous Coward

      Re: Proliant server

      Funnily enough, this happened to me once many moons ago. Luckily it was a dev server: 5 mins of using my left hand to try to get my phone from my right trouser pocket and dial my mate to tell the devs to get off the server for 5 mins due to an emergency fix being needed!

    4. John Robson Silver badge

      Re: Proliant server

      Worked out that for many of my home machines if I released and instantly pressed the power button then the PSU would handle the very brief loss of power without breaking sweat.

      Of course that's back when AT power switches actually had the power running through them - but when you reached for the turbo button and hit the wrong one...

    5. Caver_Dave Silver badge
      Childcatcher

      Re: Power switch

      Caught the boss's son (about 20 at the time) pressing the power switch on the server circa 1990 (so a very old story, and it took a long while for things to shut down). I had the strongest sticky tape in the building holding his finger to the switch within 30 seconds. It took about 10 minutes to shut everything down and then I let the boss release his son. It didn't happen again!

    6. Ken Moorhouse Silver badge
      Headmaster

      Re: Novell box

      cough Netware cough

      (It doesn't matter to me in the slightest, but there are people who will get the flamethrower out for that).

    7. Rich 11

      Re: Proliant server

      the springs on those switches get very heavy after a minute or two

      Almost but not exactly like when you tread on an anti-personnel mine and you realise you really mustn't move, and you stand there sweating while your mate digs around under your foot with his spork, and your other mate tries to keep you calm by asking if your girl has written to you recently, and you tell him, no, she hasn't, and her last letter was about how your brother was looking after her really well while you were at the front, and then your mate stops digging around and say he's got it and when he says "Jump!" you have to jump, but your legs are stiff and you've got pins and needles and your other mate is slowly backing off, trying not to show how scared he is for you, and you can see your mate lying down dead flat with his arm stretched all the way out holding the spork in place with just two fingertips, pulling his helmet down over his head, and then he says "Jump!" and you...

      Something like that, I imagine.

    8. Anonymous Coward
      Anonymous Coward

      Re: Proliant server

      Heh... my boss did that once. We were pulling an all nighter to clean up the rats nest of cables behind the servers. I was behind the rack and I heard a click of an AT power switch, followed by "ohhhhh fuuuuuck". Then a pause, followed by "hey AC, come over here and down the HR server!"

    9. Dazed and Confused

      Re: Proliant server

      Seems a common condition.

      I arrived one morning to teach a class, to be met by one of the other guys on the class with "Sorry, XXXX can't come down at the moment. He's very carefully sitting as still as possible at his desk with his knee pressed against the power button of the 857. He's just discovered it's at knee height and the end-of-month run hasn't quite finished yet, so we can't go anywhere till the FD has his numbers. Then we can shut down and the boss will be allowed to breathe again. Until then he's sitting still."

      All said with a grin which reached from one ear to the other.

      Can't have been the only customer to make this discovery; the Nova series boxes soon started shipping with a cover plate over the switch.

  4. Potemkine! Silver badge

    Resilient, they had become. Redundant they weren't.

    How can a system be resilient without redundancy? :~

    1. Graham Dawson Silver badge

      Resilience is the ability for a node to remain standing when it encounters error conditions. Redundancy is for when a node falls over.

      1. Potemkine! Silver badge

        Resilience is the ability for a node to remain standing when it encounters error conditions.

        I see few ways for a node to be resilient when it fails. Maybe autorestart? The node is still unavailable before it happens.

        If anyone gives me a way to make a node resilient, I'll take it. TIA!

        Redundancy is for when a node falls over.

        And this makes the system more resilient.

        1. big_D Silver badge

          Power supply fails, the secondary takes over, the machine keeps running. The dead PSU can be pulled out and a new one hot-plugged (put in, without turning the machine off)

          Network port fails, secondary (or, these days, tertiary or quaternary) takes over. You pull the dead network port/adapter and replace it and it starts working again.

          Drive in RAID array fails, the rest carry on and the dead drive can be swapped out and the RAID rebuilt.

          A catastrophic failure or corrupted system drive, for example, will still cause the system to fail, but a lot of key components are duplicated and hot-swappable, meaning the parts can fail, without the system keeling over.

          After that, motherboard or memory failure and things like fire and lightning strikes are about the only things that will stop it working, other than the OS getting its knickers in a twist.

          With redundancy, not only do you have the resilience, you also have a whole other system that is shadowed and can take over when the primary fails - usually in a separate fire section and with a separate mains power feed and a separate network. Preferably in another building or on another site, if possible.

          1. J. Cook Silver badge
            Thumb Up

            I'd give you multiple upvotes if that were possible- that's the best explanation I've seen of that.

          2. Dazed and Confused

            How redundant do things need to be?

            Years back I went to do some work on a customer of a customer's site.

            They'd built themselves a cluster without much if any input from the manufacturer. They had:

            2 servers, a primary and secondary Check

            A disk array running RAID5 Check (but only one power supply)

            but then things went downhill.

            Each server only had one link to the array.

            Each server only had one network link (apparently their network guy didn't hold with a server having more than one link)

            But the pièce de résistance was that the whole cluster, that is both servers and the array, was powered from one 4-way extension lead which was plugged into a socket in the ceiling above the rack.

            So they'd managed to have two of the most expensive components and pretty much failed on everything else that could be thought of.

          3. Jou (Mxyzptlk) Silver badge

            Memory can be redundant too, either mirrored and/or hot-spare memory. But today you cannot swap memory live; you have to power down the machine. There were machines where you could swap memory live, but it is cheaper to have two or more machines running as a cluster than having one with that capability. The same applies to hot-swap PCI; I haven't seen it in a new machine for over a decade due to clustering being cheaper.

            1. Dazed and Confused

              We used to find people building HA clusters out of their fault tolerant systems. After all, if someone blows up the data centre where the FT box is, it's still dead, FT or not.

              Virtual machines with live migration allow workloads to be moved without an interruption to service for planned downtime events, which reduces some of the issues that hot plugging helped with. It's been too many years to remember how the FT version of the OS coped with double-bit memory errors; I suspect they caused a panic. But if you're getting lots of more normal single-bit errors it's easy enough to migrate away the VMs to another node and then take the HW down to swap out the suspect DIMMs.

      2. Anonymous Coward
        Anonymous Coward

        > Resilience is the ability for a node to remain standing when it encounters error conditions. Redundancy is for when a node falls over

        The question was "How can a system be resilient without redundancy?" but your answer was at the node (i.e. server) level.

        At the server level I agree with you: resilience is the ability to remain standing when a server encounters error conditions, and the most common way to achieve that is redundancy within the server e.g. twin power supplies.

        At the system level resilience has the same definition: the ability of the system to remain operational when it encounters error conditions. Again, it does this via redundancy - but this time the redundancy is at the server level for when a node falls over.

        So back to the original question: "How can a system be resilient without redundancy?" With great difficulty, I think is the answer.[1]

        [1] I'm deliberately ignoring 'serverless' systems such as Lambda as there is massive redundancy behind the scenes even if you don't have to worry about it

    2. Anonymous Coward
      Anonymous Coward

      Absolutely!

      Redundancy is the price we must beg from the beancounters in order to have more resiliency!

      1. Anonymous Coward
        Joke

        > Redundancy is the price we must beg from the beancounters in order to have more resiliency!

        Speak for yourself, but I'm sure as hell not going to the bean counters and begging to be made redundant!

      2. Dazed and Confused

        A large customer I used to deal with had been trying to get three-way mirroring for a few years and the head bean counter wouldn't sign it off. So certain key servers had a downtime window in the wee small hours for backups. The head bean counter then exploded at the IT director one day when he was in the Far East and couldn't access his email. The third set of disks was ordered that day.

        Moral of the story?

        Make sure the bean counters suffer from their own decisions.

        1. A.P. Veening Silver badge

          Make sure the bean counters are the first to suffer from their own decisions.

          FTFY ;)

          If you can work this correctly, you can write your own ticket.

          1. J. Cook Silver badge

            THIS.

            Back when I was doing the 'roving field engineer' gig, I had a few clients balk at the cost of a replacement for their failed DLT drive, which we had priced at ~$700 USD. (This was back when DLT was on the way out, and it was easier to buy a refurbished drive for some of our clients who had service contracts that covered labor, but not parts.) My response of "well, how much is your business really worth?" kind of shut them up on that one.

  5. TheGriffin

    Ohhh Yesss

    Unfortunately I have been in a not insignificant version of the same boat.

    This tale goes back to around 2004/5 ish, and it was on my return to my employer. I had what I like to call a sabbatical with a competitor for around two years. As usual the grass was not as green as one suspected.

    Me, I am in the automation industry, purveyor of DCS systems - the systems that control many oil refineries and the like. During my first stint I was on call, with onsite response times of 1 hr, and I thought I was a "seasoned pro". On return to my employer after my sabbatical I was put straight back on call, as I had not forgotten "anything" - or so I and my manager thought. It all came unravelling one night when the phone rang around 11PM - a call out: all IO had stopped working on a remote rack. The good thing was this hardware had intelligence and the plant continued to run, holding the previous setpoints and control strategies. The hardware was bespoke, but I knew it very well. Full of confidence I arrived at site, diagnosed the problem, and they had a spare card (comms card). Now this is where my confidence unravelled: I completely forgot the chassis was hot-swappable and duly powered it down. This in turn was followed by lots of hissing as valves opened or closed, and the whir of large motors slowing down. Oops... I had crash-shut the plant. Tail between my legs, I owned up, replaced the card and had to wait another 4 hours for them to get the plant running again. So what could have been a 30 min fix turned out to be a longer stint than anticipated.

    1. DJV Silver badge

      Re: Ohhh Yesss

      Great story - it should have been a full Who, Me?

    2. Pascal Monett Silver badge

      Re: Ohhh Yesss

      It's always better to own up to one's mistakes; that way you don't get a reputation as the guy who breaks everything and lets others take the blame.

  6. DailyLlama

    We've all done it...

    Gone behind server racks and pulled plugs out with our backsides, only to find that the UPS was fine when plugged in, but miraculously failed as soon as power was removed, or even better, found out that the separate PDUs were both connected to the same UPS.

    1. big_D Silver badge

      Re: We've all done it...

      At one customer, they had a UPS and it was religiously tested every week... That is, the admin for the AS/400 that was plugged into it went into the server room and pressed the test switch on the UPS, it reported success and batteries at 100%...

      Then we got a new manager and he insisted that we do a full test of the UPS - that is, the power to the UPS was shut off. Internal test carried out, OK. Batteries, OK. Power removed, AS/400 powered down within 2 seconds, as in uncontrolled, no power after 2 seconds, silent, but for the sound of disks whirring down...

      Turns out the UPS test and battery monitor weren't worth the thousands they had cost. The batteries were dead as a doornail.

      Powering the AS/400 back on didn't go to plan either. The drives had been running for years. Once powered off, the dry bearings seized and refused to spin the drives back up. A new DASD for an AS/400 back then wasn't cheap.

    2. DS999 Silver badge

      Color coding

      I saw a datacenter once that had acquired a crapton of blue power cables and had blue tape/paint on one of the PDU rails in each rack. They'd use the black power cables on the black PDUs and the blue power cables on the "blue" PDUs. The PDUs were wired up to separate UPSes, which were each served by separate utility feeds and had their own generators, so redundant PSUs were truly redundant. I think many sites only use the PSU redundancy to protect against failure of a PSU, and don't have infrastructure that splits them all the way to the transformers.

      I imagine that color coding would make it much easier to avoid confusion over what goes to what, and was presumably done because of such confusion causing a lot of pain/expense in the past. What's the point of spending all that money for fully redundant power delivery only to have some yahoo plug both PSUs into the same circuit?

      1. J. Cook Silver badge
        Go

        Re: Color coding

        Had that happen to me with a storage appliance; Turns out that Nimble storage appliances get REALLY cranky when the second shelf of three expansion shelves loses power, because some chucklehead* plugged both leads into the same PDU whose upstream breaker had tripped. Thankfully, it was only DR, so once I got the power cabling sorted out and reset the breaker, the site came back to life and started to re-sync with production.

        * That chucklehead was, in fact, Yours Truly. Whoopsie.

      2. big_D Silver badge
        Facepalm

        Re: Color coding

        Yes, very much so.

        At one site, they wanted to replace one of the UPS systems. No problem, they were redundant... So they unplugged one and one of the racks went dead...

        The admin who had installed the hardware had wired the cabinet, with redundant PSUs on each device in the rack, to a single UPS, instead of wiring up 2 sets of cables in the rack, one to each UPS and ensuring that the PSUs for each device were divided up properly... :-(

  7. Paul Crawford Silver badge
    Stop

    Big in Japan

    Not me personally, but once a colleague of mine and I were in Japan installing some equipment at a university and they gave us a table, etc., in one of their rooms. It just so happened to share space with their new supercomputer (at least in uni terms in 1995), with multiple HP boxes with dozens of CPUs in a cluster, all glued together with fibre channel storage. He leaned back in his chair and hit the EPO button.

    Suddenly the room was near silent. But not for long as a half dozen rather distraught Japanese IT folk rushed in to find out what had just gone wrong.

    I was impressed at how low he managed to slink in to the chair...

    1. nintendoeats

      Re: Big in Japan

      Their fault. It should have had a molly-guard.

  8. nintendoeats

    Now are we TOTALLY sure that he didn't twiddle this switch on purpose? Seems like his belt made a good political move.

  9. Sam not the Viking Silver badge
    Facepalm

    Testing the UPS

    Our small network had a UPS positioned near the server and in the same office as the 'book-keeper'.

    When the whole estate suffered a power cut everything went dark except for the book-keeper's PC, desk lamp, fan, radio, foot-warmer and personal phone-charger. They were connected to the 'protected' side of the UPS as opposed to the server which was not......

    A subsequent review, when the lights came back on, found no less than four daisy-chained, multi-plug extensions powering all sorts of devices: printers, fax, mail-stamping, encapsulating machine etc.

    We re-arranged things after that.

    1. Phil O'Sophical Silver badge

      Re: Testing the UPS

      We had a lightning-induced power outage of our office some years back. Server room was on UPS+genny, all backend work continued as normal, but most ordinary offices were (intentionally) not protected. While we waited for the power to come back we soon discovered that one coffee machine had been accidentally connected to the UPS. It never had to work so hard in its life!

      1. Anonymous Coward
        Anonymous Coward

        Re: Testing the UPS

        "one coffee machine had been accidentally connected to the UPS. It never had to work so hard in it's life!"

        I bet it did. One coffee machine for the whole office, during a crisis, no less.

      2. A.P. Veening Silver badge

        Re: Testing the UPS

        While we waited for the power to come back we soon discovered that one coffee machine had been accidentally connected to the UPS.

        For some reason I doubt it was entirely accidental.

      3. Anonymous Coward
        Anonymous Coward

        Re: Testing the UPS

        Of course it was. Coffee machines are critical infrastructure, required for successful troubleshooting of other equipment.

        1. J. Cook Silver badge

          Re: Testing the UPS

          ... and powering the techs putting it all back together. :D

  10. Anonymous Coward
    Angel

    'resilient' and 'redundant'

    Resilient is what sysadmins need to be when they are declared redundant after "Who Me?" incidents.

  11. jake Silver badge

    Single points of failure always do.

    I landed a contract to install two big, garage sized, Memorex tape backup robots at a large number-crunching outfit once. Before I bid on the job, the VP of operations gave me the grand tour. He was proud of all his redundancy. He had two power lines coming in to two separate rooms, with a motor-generator, a large battery consisting of dozens of telco-style lead-acid batteries, a generator, and monitoring systems for each room-full of gear. The 48 Volts was switched by a box at the corner where the two rooms met, brought into the main building via a 5" conduit, where it was switched to two separate computer rooms. Even the links between outlying offices were redundant T-1 and T-3 lines. There was a third "data center" that was dark, to be used for spares "just in case". It was designed to provide non-stop operations, and it did a pretty good job of it. Even the Halon had built-in redundancy.

    Until a semi-truck carrying some of my Memorex kit, backing into the receiving dock, went off course and cut the 5" conduit. The security cameras caught the sparks quite nicely :-)

    Two weeks after installing the tape robots, I had a proposal for a more geographically diverse version of the same thing on the VP's desk. I didn't land that contract, alas.

    1. Paul Crawford Silver badge

      Re: Single points of failure always do.

      If your redundant paths are in the same duct, they ain't resilient!

    2. A.P. Veening Silver badge

      Re: Single points of failure always do.

      Lovely story; I seem to remember reading it before.

  12. Anonymous Coward
    Anonymous Coward

    Once, when we were moving office and a large comms room, the network had to be up before the building work was finished, so the switches used to collect dust - lots of dust... One day, when using compressed air on the core chassis, the straw on the can vanished inside... It didn't appear to stop any fans spinning so I just left it in there - it probably stayed inside till the day it was decommissioned!
