My top three IT SNAFUs - and how I fixed them

Everyone's had experiences where something just inexplicably didn't work. Or apparently inexplicably, anyhow. Here are three of mine. What's yours? Please share in the comments below.

  1. deadlockvictim

    Entity Framework

    As the esteemed poster wrote: "A bit of digging showed a query that was JOINing umpteen tables for no apparent reason, and missing every index in the process. Turned out that this query was servicing the little “Your current dataset” box that appeared in the corner of every page (and whose content got refreshed every time you clicked to a new page)."

    Ah, that sounds like Entity Framework. Our application is infested with it and it makes debugging and performance optimising a real nuisance.

    1. Anonymous Coward
      Anonymous Coward

      Re: Entity Framework

      Or Java Persistant(ly-awful) Architecture... Ugh. Dear $god


    OY! Less dissing gravity...

    ... I like it, it keeps my feet on the ground.

    1. Anonymous Coward
      Anonymous Coward

      Re: OY! Less dissing gravity...

      Besides, isn't it more of a surface-tension thing?

  3. Stefan 2

    Uphill kink? Tie a simple knot instead.

    Source: A conversation with an electrical engineer about why there are knots in basement power cables.

    1. This post has been deleted by its author

    2. Preston Munchensonton

      Forget all of that nonsense. Why wasn't the cable in a conduit to protect it from the elements, wild squirrels, and other cable-destroying hazards? As though water were the only thing to be concerned with.

      1. chivo243 Silver badge

        From the story it sounds like this was a long time ago, but still, you would think that putting a grommet in the hole first would also work? Then feed the cable through, creating a seal around the cable?

        I agree about the wildlife munching on cables too... you have to see it to believe it.

        1. Cynic_999


          ... putting a grommet in the hole first would also work? Then feed the cable through? creating a seal around the cable?


          Water can trickle through or be blown into extremely small gaps. Seals have a habit of deteriorating over time, but gravity is pretty reliable and so is the preferred method of keeping water out whenever possible.

          First ensure that the entry hole is never going to be *submerged* (so not too low down on the wall, or any place where water could puddle against the hole), and drill it at a slight upward angle (as seen from the outside), as well as ensuring the cable initially exits in a downward direction even if it will ultimately be run up the wall. For good measure do the same with the cable on the inside of the wall as well, so that in the event that water gets in, it will not track all the way down the cable to the equipment (water can run *upward* around a shallow U bend, so be generous).

          Then seal the hole by all means - but not completely. Leave a small gap at the bottom so that any water that does manage to enter the hole will immediately run out again. A fully sealed hole where the seal has failed at the top forms a dam, so that water will fill up the hole. If the material of the wall is at all porous, the inside of the hole should be given a waterproof coating so that water cannot wick through and cause damp on the inside.

          1. JerseyDaveC

            Grommets and gunk squirted in do help (we've all seen horizontal rain, after all) but a bit of uphill cable works a treat to stop water running in.

        2. JerseyDaveC

          A sales engineer from Newbridge Networks (remember them?) once told me of an installation he designed for a Middle East country which had alligator-proofing as a requirement ...

          1. The McV

            Shame that - they only have Crocodiles in the Middle East....


      2. This post has been deleted by its author

        1. Anonymous Coward
          Anonymous Coward

          My university used something similar to that (back in the late 1980s) when they moved the computer centre from the edge of campus to the university's main building. The building had some huge shafts in it that were originally used for heating, but were no longer in use for anything. Someone had the halfway reasonable idea to run the network's backbone cables up this shaft, and since standard cables might be prone to damage they used these heavy armoured cables (which apparently cost a fortune at the time, virtually a special order).

          Everything was fine and dandy for about 3 months, then they started to move the Computer Science department over from its old location (also exiled at the edge of the campus) to the main building on the next floor above the new computer centre. In order to do this move they needed to do a fair amount of reworking to the internal layout of that floor; hence some walls needed to be knocked down, and other walls needed big holes knocked in them for (internal) windows.

          The builders had to get significant amounts of rubble down 4 floors. Guess what, those old airshafts looked soooo inviting ... right up to the point where a load of rubble sliced the armoured cables in half, requiring a complete recabling job that cost loads-o-money.

          I can still recall overhearing how the head of the computer centre described the builders' actions, along with their "not me, guv" response.

        2. This post has been deleted by its author

      3. JerseyDaveC

        Actually it *was* in a conduit (plastic tube in a covered concrete channel) for much of its length but the last few metres had to go up a pole and enter the building about 15 feet up. Unfortunately there was a gentle downward slope from pole to building, as the latter was downhill from the former.

  4. Anonymous Coward
    Anonymous Coward

    "Turned out that this query was servicing the little “Your current dataset” box that appeared in the corner of every page (and whose content got refreshed every time you clicked to a new page)."

    Users complained about pauses in keystroke echoes. A similar problem to the article - the "queue status" data was refreshed every time the application window gained focus. When an Exchange "you have mail" pop-up occurred the operator would usually dismiss it - but the change in focus then caused a queue refresh. That was when every user experienced slow keystroke echoes.

    Hadn't shown up in testing because it needed a slow WAN link to get temporarily overloaded with the queue data refresh - and keystroke packets were being delayed.

    In another case a help desk had to ask the caller for their post code. This then took several minutes to update the fields. The reason was that the post code look up data was on a server at the other end of a WAN. Instead of sending the post code to the server - the client application did a series of SQL look-ups on the post code database. Had worked ok on the in-house LAN tests.

    There was one occasion where a client application had been developed in Visual Basic by the user department. When it was rolled out they complained that the LAN/server were slow, as it took 30 seconds to update the client screen. It transpired that the Visual Basic code wasn't making a local copy of remote objects. Every time it queried the object it generated 30,000 requests to the server. Even with a LAN round trip of 1 millisecond, it took 30 seconds to complete.
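
    The arithmetic in that last story is worth spelling out. A back-of-the-envelope sketch (numbers taken from the anecdote; this is hypothetical illustration, not the actual VB code):

```python
# The anti-pattern: querying a remote object property-by-property,
# paying a full network round trip for every single access.
ROUND_TRIP_MS = 1        # LAN round-trip latency per request
NUM_REQUESTS = 30_000    # one request per remote property access

naive_total_s = NUM_REQUESTS * ROUND_TRIP_MS / 1000
print(naive_total_s)     # 30.0 -- the 30-second screen update

# The fix: fetch a local copy in one round trip, then read it locally.
cached_total_s = 1 * ROUND_TRIP_MS / 1000
print(cached_total_s)    # 0.001
```

    Latency, not bandwidth, is the killer here: the total is dominated by the number of round trips, so batching them is the only cure.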

    A working X-Windows client application suddenly slowed to a crawl on the LAN. It turned out that the application had been updated with a new version of a supplier's library, which now interpreted application screen changes as requiring many tiny X-Windows commands, multiplying the traffic for a screen update by a factor of several thousand.

  5. Rich 11 Silver badge

    Damp cabling

    I had a similar problem to deal with about a year or two after Dave's (as best as I can determine). My IT Centre was housed in a Portacabin, and our 4800 baud connection to the rest of the campus was a cable strung the 20 feet from one corner of the Portacabin to one corner of the nearest building. We soon noticed that whenever it rained, our connection would drop out. I borrowed a stepladder and checked the cable at each end, then noticed a blob in the middle. The cable (just a simple unshielded pair of wires) had been spliced and wrapped in insulation tape, and with this being the middle, where it sagged the most, rainwater would collect on the cable and run down to slowly penetrate the tape. It would usually dry out by the following day, but I still had to unwrap and reseal it several times before I could get the damn bodge job replaced.

  6. Matthew Smith

    I had this whinging developer once..

    There would often be problems with projects that he was involved with. But it was never his fault: the developers were all overseas, the Cisco engineer was a numpty but wouldn't take his calls, the wiring was put in wrong by someone else. So I sacked his smug face, and his successor was much more proactive in finding faults before the product went live. I've had far fewer problems since.

    1. DropBear

      Re: I had this whinging developer once..

      Unless he was either flat-out lying about those reasons or he was explicitly designated to do quality assurance I don't see how any of that was his concern. Ignoring a fault you know about is one thing, not bending over backwards to find other people's mistakes just so your employer doesn't sack you is quite another. People are employed (and paid!) to do a specific job, not to carry the weight of the universe and perform miracles routinely.

      1. dieselbug

        Re: I had this whinging developer once..

        You did read the bit about 'it's not my fault, it's overseas devs/networks etc'? EVERY dev is responsible for their own QA, so if he doesn't even look at his own code to verify it's good, he's toast in my world too.

  7. Anonymous Coward
    Anonymous Coward

    Intermittent poor client responses were a problem. It was eventually found that the problem lay in a faulty BNC connector to the LAN. Every time a plane landed or took off on the nearby runway the vibration would cause a temporary problem.

    A remote job entry terminal was on the floor above the computer room. The available lights on the modem suggested it was connected ok - but it sometimes didn't work. Then one day the people in the computer room saw it burst into life. This happened several times until it was realised that the change coincided with the cleaner doing their rounds upstairs with a feather duster. A poor joint in the modem plug was being disturbed by the cleaner's duster.

    This device was only being tested prior to moving to a building a mile away. So its modems were connected by a couple of twin pair cables - a bit like flat bell wire - through various holes and snaked across the computer room floor. One day it stopped working - but the available modem lights suggested it was connected ok and being polled. Following the cable across the floor it was noticed that it disappeared under a floor tile - then reappeared again. Someone had removed and replaced the floor tile - trapping the cable. They must have had to jump on the tile to get it flat. The sharp edge had sheared one of the cables carrying the replies from the device.

    The air conditioning in a new computer room wasn't as effective as expected - so an extra unit was supplied. A visitor to the computer room solved the problem when the operators offered him a cold beer and a slice of water melon. They lifted a tile by the air conditioning unit to show their stash blocking the cold air outlet.

    Then there were the many cases of computer crashes that were down to electrical spikes on the supposedly clean earth in the computer room. In one case it coincided with the kettle being switched on in the adjacent kitchen. A more subtle one was when the problem was caused by the ladies' toilet. It had an electrical macerator in the waste pipe.

    The list of Murphy's Law cases seems endless.

    1. Phil O'Sophical Silver badge

      cables != cables

      I had a VMS cluster once (a while ago) which would, from time to time, run like a dog for 10 minutes. There seemed to be no correlation with anything else, and the diagnostic tools we had at the time were fairly basic. The fault never lasted long enough to debug.

      Eventually I noticed a huge spike in network collisions when the problem occurred, and on lifting the floor tiles found that one piece of what looked like ordinary 50ohm 10base2 co-ax was actually 93ohm PC network cable (ARCnet, I think). The mismatch was creating collisions and retries, which mostly went unnoticed until network traffic reached a certain threshold.

      On a similar note, I had great difficulty convincing our sysadmin why it wasn't a clever idea to use RJ45 doublers to plug two PCs into a network socket, since it seemed to work just fine...

      1. Kubla Cant

        Heard about this intermittent problem in an office I once worked in. This was in the days of standalone PDP-11s, before networks and PCs. Every few months, there would be evidence of serious disk and memory corruption problems, lasting about an hour.

        The office was next to the Thames. Visiting warships would moor alongside HMS Belfast, which was just opposite. When the time came to leave, the radar operators used to run some kind of test, with the result that all nearby computers were thoroughly zapped with radio waves.

  8. gkroog

    "Fond" memories...

    ...Mac users tipping wine over their keyboards and unplugging their modems...

    ...Windows 2000/XP users mistaking the networking connection icon in the system tray for the internet connection icon (as they were identical), and failing to notice that the network connection would present the option to "disable," instead of the "disconnect" option of the internet connection...

    ...handling support calls forwarded to me through not one, but two call centres, and having to figure out for myself that "that does not mean what they think it means"...

    ...the HP support line insisting that a software upgrade would miraculously fix faulty hardware...

    1. The Quiet One

      Re: "Fond" memories...

      ...the HP support line insisting that a software upgrade would miraculously fix faulty hardware...

      The HP support line once told me that updating the BIOS on a laptop would suddenly change its shape so that it would fit in the docking station. It was clearly nothing to do with a "That'll do, pig" attitude to quality control.

  9. Matthew Glubb

    The Pie T Department

  10. ISYS

    Those were the days

    A long, long time ago, when I was cutting my teeth on an IT Helpdesk, I took a call from a user who told me all his work had disappeared.

    After a bit of questioning about how and where he saved files, I established that what he actually meant was that the Word document he was currently working on suddenly had no text in it. He had been typing all morning and now it was gone - and what was I going to do about it!

    More questioning about had he been saving as he went along (no auto-save in them days!) and what was the last thing he did.

    Eventually I wore him down and he admitted he may have changed the text colour to white. He couldn't remember how and wanted to know how to make it black again!


  11. Little Mouse

    ""never believe anyone, always see for yourself". Always this. Always always always ALWAYS!

    In general, end users only experience the symptoms of an actual problem, but they report what they think the problem is to the Service Desk, who then log what they think the problem is based on what the user told them. It's assumptions all the way down. Inaccurate, misleading, and guaranteed to send you down the wrong path if you let them.

    I think they do it on purpose.

    1. Phil O'Sophical Silver badge

      but they report what they think the problem is to the Service Desk,

      Ah yes, these are the same users that are totally incapable of reading out an error message word for word from a screen. No matter how many times you ask, you always get their interpretation of what it says.

      1. Anonymous Coward
        Anonymous Coward

        Re: Phil O'Sophical

        Once I had to try to solve a problem for a user based on the error message he was getting. I provided the fix for the simple error, but he kept insisting that it wasn't working. After a while and trying different approaches, it turned out that he was NOT reading the error message he was receiving; instead he was recalling the error the user next to him had called about and was giving me that error message instead!

        Apparently he assumed that if he gave an error message that had been fixed for another user, we would fix his error faster!

        P.S. Forgive my EngRish, it isn't my first language.

    2. Vic

      end users only experience the symptoms of an actual problem, but they report what they think the problem is to the Service Desk

      I was talking to a test pilot at Boscombe Down a few years back. He had a very nice take on reporting.

      If he reported that an aeroplane had too small a rudder, the manufacturer would supply him with an aircraft with a bigger rudder. If it turned out that that didn't fix the problem, he was on the hook for the custom build.

      If, however, he reported the problem - that the rudder had insufficient authority - it was down to the designers to fix the fault. That would usually mean a bigger rudder - but it was someone else's responsibility to make that call.


  12. Keir Snelling

    After the umpteenth desk visit to a business analyst who had a habit of calling the IT director to ask him "Is the network down again?" whenever she managed to kick the network cable out of the floor port under her desk, I decided a more permanent solution was required.

    I requested her laptop for some "deep analysis" to see if I could find the "root cause" of her network issues.

    I configured her wifi adapter, which at the time only supported 2Mbps, to connect to the corporate network. The laptop had a physical wifi on/off switch, so I invested in some superglue and fixed the switch so it was permanently switched on. Then I returned the laptop to her.

    "Yes, I found the fault. I don't think you'll have any more problems with the network".

    She continued through to retirement working with nothing better than a 2Mbps connection to the corporate LAN. Never complained to the IT director about the network again. Even praised me for finally fixing "the network issue".

    1. Anonymous Coward
      Anonymous Coward

      Re: "the network issue"



      This is basically every day of my life.

      User: "The network is really slow"

      Me: "When did this start happening"

      User: "Since we moved to the new application"

      Me: "Is everything slow?"

      User: "Just the new application but It's too slow I can't do my job like this and I've been told it must be the network because everything is isn't it and I'm emailing the IT Director and CEO"

      And yes I've been called out of hours to fix a network issue with a PC in $Important24/7ServiceArea which required the network cable being plugged back into the PC.

      Anon because ranty.

      1. Vic

        Re: "the network issue"

        And yes I've been called out of hours to fix a network issue with a PC in $Important24/7ServiceArea which required the network cable being plugged back into the PC.

        Years back, I used to support a Health Authority network. This wasn't ethernet - it was all serial comms[1], routed through statistical multiplexers in nodes (usually in hospitals). The users were generally paid piecework, so keeping the terminals running was important.

        One hospital had a particularly troublesome terminal that simply would not work. They had on-site support, who'd "tried everything", so I was called out to do a node shutdown and statmux replacement. As this was a disruptive job, it had to be done out-of-hours.

        I always turned up a little early for that sort of work - it makes it easier to ask any questions you might need of staff who are about to go home. So I was at the terminal long before shutdown, with the on-site guy by my side. I had a quick shuftie around - yes, the terminal was powered on. Yes, the brightness was turned up[2]. I then decided to ask the local guy whether or not the serial cable ought to be laid unconnected on the floor, or whether it might actually work better if plugged into the wall port...


        [1] Yes, I'm that old.

        [2] I had another callout which turned out to be the brightness control, so I always checked that.

  13. Michael Duke

    The best one was a NetWare 3.12 customer. Their server had two network cards in it to support the two network runs around the office on Thinnet, each about 150m-ish in length.

    Got a support call one day that the clients kept dropping, so I grabbed our network kit (BNC tester, 2 spare NE2000 cards, 3 spare terminators, a few T's and a couple of 5m lengths of thinnet) and headed out to the customer's site.

    Upon arriving I noticed that the server had moved to the other side of the office, and both segments were connected to ONE card.

    I asked why this was the case, and they admitted to losing one of the terminators when moving the server; of course they did not see the issue. I measured the network before I fixed it and sure enough, the total length was 327m - well over the 185m limit for a single Thinnet segment :)

    Re-terminated the second card, split the segments, and of course the network issues disappeared immediately.

    Easiest emergency callout fee we ever earned.

    1. Anonymous Coward
      Anonymous Coward

      I was once called out to a client an hour or so's drive up the motorway from our office. The client could not log on to his machine; no one else ever used that computer as it was his, so someone must have changed his password! He absolutely refused to let me try any remote debugging, or to answer any questions over the phone. I simply had to drive out there and fix it NOW!

      So, I drove out there, buzzed into their building, walked into his office and said "show me the problem".

      He typed his password and got an incorrect password error.

      I ask "so is your name Laura?"

      "no I'm Steve"

      (names may have been changed to protect the innocent)

  14. Spaceman Spiff

    Ah, the "joys" of notworking... :-) When working at Nokia Mobile Phones I had to write performance data collection and analytic tools in order to do fault detection and prediction. We ended up collecting 10 billion data points per day of SAR, SNMP, and application performance data into a Hadoop cluster running in the Amazon EC2 cloud. It allowed us to determine that in some cases of congestion (we supported 100+ million users world-wide), that network switch ports, server NIC's, or firewalls and routers were failing. Replacing the failing components would usually solve the problem. Sometimes servers themselves would start to degrade. The SAR data would help identify those. Of course, Microsoft took over my division and promptly laid off or fired about 13,000 of us, so I have no idea if this stuff is still in use and working... :-(

  15. Anonymous Coward
    Anonymous Coward

    I used to have something similar. Two buildings a couple of hundred yards away from each other, connected together over fibre. These were back in the days of good old BNC.

    Pigeon sits on wire, causes a connection drop and kills the entire network to that building! No money to get a better connection at the time either. Reboot repeater and it's all good again.

    Fun times

    1. The First Dave

      Fibre != BNC

      1. dc_m

        Good point, that was a stupid statement, must have been BNC across the buildings!


  16. Just An Engineer

    Dead Quiet Data Center

    Working in a datacenter with a break/fix engineer to repair some servers. There on the other side was an escort with a truck driver, there to remove a rolling cabinet and relocate it to another location.

    While working away, the DC went dead quiet. Looking up, I noticed the truck driver was just heading out the door. He was fairly running down the ramp to the loading dock.

    It appears the BIG RED BUTTON about 6 feet up the wall was thought to be the door release.

    So the power interrupt between the DC / Call Center and the UPS was turned off.

    From the time of the outage to my mobile ringing was about 30 seconds. After explaining what had happened, my Mgr said he would be right over to help bring things back online. Then the Director and most of the staff arrived, to do what exactly I do not know to this day. My manager and I were just sitting and waiting for power, and they could not understand why we were just sitting there.

    One problem: the company that owned the building had laid off the building engineer the previous Friday. So there was no one either authorized or with the knowledge to reset the power.

    It took just an hour to find an electrician to reset the power. While waiting for that to take place, all of our servers needed to have their power cords disconnected. Had the director and staff do this, since there were storage arrays that needed to come up first, then the servers.

    Meanwhile, alarms were starting to go off in the next room; it seems the call center had battery backup, and the batteries were only good for 45 minutes, on the assumption that a generator would kick in when the power went down.

    It only took about an hour once power was restored to get the rest of "my" stuff back online. But the BIG RED BUTTON(s) had covers installed on both sides of the door, and you had to sign in and out of the DC which was really a PITA.

    1. Anonymous Coward
      Anonymous Coward

      Re: Dead Quiet Data Center

      On site one day - the building suddenly went quiet as the air conditioning and general hum disappeared. In the computer room an operator was trying to explain to everyone, including himself, why he had had a sudden urge to press the big red button near the exit door.


      In my much younger days I took a holiday from the frenzy of computing to "get back to the soil" on a kibbutz in Israel. As it was very close to a potentially hostile border we had an induction lecture explaining what to do if the alarm sirens sounded - basically "duck and pray" while everyone else grabbed their Uzi.

      Working in the dining room there was a shift system. After a couple of weeks we were on our own for the 5:30am shift. No problem - we knew the routine by then .....except we couldn't work out how to switch the lights on.

      After a fruitless search we debated the function of a couple of very large red push buttons on the wall. Eventually we came to a decision and pressed one - and breathed a large collective sigh of relief when the lights came on.

  17. Vic

    The joys of third-party applications.

    Some years ago, I ran the web servers for a members' organisation.

    This organisation decided that it needed a CMS. I recommended one, with a few others as backups in case they didn't like that one. But a Shiny Salesman turned up, and sold them a bespoke solution.

    One fine afternoon, almost all of the CMS disappeared. The server was still up, but the pages were absent.

    I was called in to find out what had gone wrong. It was rather shocking. It turns out that PHP had[1] two ways to get hold of environment variables - with a bright red warning on the documentation page never to mix the two, as context leakage would surely ensue. And, of course, the developers of this CMS had done exactly that.

    Now, a page editor had (accidentally) included a link to his admin-area stuff, rather than the customer-side view of that page. That should have been harmless - no-one without some sort of administrative privilege should have been able to get to the admin side, so that's safe, right? Nope. This leakage meant that a user could accidentally gain administrative privilege if an admin was logged in at the same time. Guess which user did gain said privilege? An aggressive web spider, that merrily followed all the "delete page" links it found...

    I put a patch in place to prevent recurrence whilst the developers "urgently" fixed the problem, and restored the DB from a copy I had secretly stashed away. The patch was still there when the CMS was retired, and no backup strategy was ever formally implemented. The developers in question have now discarded their product and are shipping the one I'd recommended in the first place...

    So has that organisation learnt from this? Have they hell.

    A few years later, they decided they should have a CMS. I didn't even hear about the discussions until the deal was done, so by the time I asked "what about the one you've already paid for?", it was all too late. Another Shiny Salesman had done the dirty, and taken a large sack of loot away.

    And so the day of the rollout came around. One of the important parts of the new site was a Branch Finder application, that allowed users to find their nearest club. It was a Google Maps thing, and the developers[2] were very proud of it, as were my customers. So when users started reporting that it was *incredibly* slow, or didn't work at all, there was pandemonium.

    The developers, of course, blamed the server platform; I'd obviously commissioned something far too slow, and a new server was required. So I showed them the idle time graph to demonstrate just how little this server was doing; it was most certainly not a server problem. Then they decided that this was an inherent problem with the way maps work, and nothing could be done.

    All this, of course, bent the needle on my Bullshitometer. A little interaction with the users showed that it was the ones with older PCs that were having most problems - it looked like a client-side problem. So I took a look at the application. The despair has not yet left me.

    The application worked by sending the entire dataset of clubs - including data that would never make it onto the map, and probably breached the DPA - to the client, where it was filtered for proximity to the user and then displayed on the map. And the data was sent from the server in XML. Which was parsed on the client. In JavaScript.
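
    That design inverts where the filtering should happen. A minimal sketch (hypothetical branch names and a deliberately crude distance measure, not the real application) of filtering on the server and sending the client only the small JSON result it will actually plot:

```python
import json
import math

# Hypothetical branch records; the real app shipped the entire dataset
# (including fields the map never displayed) to every client as XML.
BRANCHES = [
    {"name": "North Club", "lat": 53.48, "lon": -2.24},
    {"name": "South Club", "lat": 51.45, "lon": -2.59},
    {"name": "East Club",  "lat": 52.63, "lon": 1.30},
]

def nearest_branches(lat, lon, limit=2):
    """Filter for proximity server-side; return a small JSON payload."""
    def dist_sq(b):
        # Crude equirectangular approximation -- adequate for ranking
        # nearby points, not for real geodesic distances.
        dlat = b["lat"] - lat
        dlon = (b["lon"] - lon) * math.cos(math.radians(lat))
        return dlat * dlat + dlon * dlon
    return json.dumps(sorted(BRANCHES, key=dist_sq)[:limit])

# A London-ish user receives a couple of nearby clubs, not the lot.
print(nearest_branches(51.5, -0.1))
```

    Nothing clever: the point is simply that the payload is bounded by `limit` rather than by the size of the dataset, so an old client PC never has to receive, parse, or filter the full list.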

    My initial replacement simply exchanged JSON for XML, and that went like the proverbial excrement from agricultural implement by comparison. That got rid of the "inherent problem" bullshit, and the developers resolved to have another look. By the time I left the project, they'd done nothing more than my quick hack...


    [1] I believe this is no longer the case in the current version of PHP. But I still wouldn't put any money on it.

    [2] A different bunch from the first story. No better at their jobs, though.

  18. Anonymous Coward
    Anonymous Coward

    the joys of support

    I can think of lots but here's a couple of good ones.

    User complaining she couldn't get her laptop on the LAN; a quick trip to her office found her trying to plug her modem cable into the RJ45 network wall port.

    And the best in all my 18 years in IT: a user came to tell us the toner was low on a shared laser printer which was situated in a corridor. It was an HP 4300DN, so quite a big beast, weighing in at something like 20kg. The conversation went like this.

    User "I think the toner needs changing on the 3rd floor printer its a bit faint"

    Me "ah ok we'll come and replace it, but have you shaken it first?"

    User "what the printer?"

    Me "errrrrmmmm no just the toner cartridge"

    This has happened twice! Different users.

  19. The McV

    Press any key to continue......

    Yes, it happened to me...

    Much hard work during the week - Friday morning install a new test system on the shop floor.

    Run a sample unit - fine - hand over to production & retire to the pub to celebrate.

    One extended lunch time later, we rolled back from the pub to a complete production stop.

    Not only was the new system down, but all the others (that had been working happily for weeks) were down too.

    The new operator had come to the 'Press any key to continue' prompt and, you've guessed it, couldn't find the 'ANY' key, so stopped - worried that she might do something wrong. She asked an experienced operator on one of the other systems - who suddenly became afraid that she'd been doing it wrong for the past couple of weeks, and this rippled down the whole line - stopping all the systems.

    I took a marker pen and wrote "ANY" on the side of the space bar. Problem solved.

  21. Anonymous Coward
    Anonymous Coward

    A couple of weeks ago I had a user call me to complain that they could not establish a VPN into the corporate network. The conversation went something like:

    User: "The VPN service is down, I cannot connect to it, it needs to be fixed"

    Me: "Well I have a VPN connection, so the corporate network connection, routers and servers are OK. Do you have a connection to the internet?"

    User: "Of course I do"

    Me: "Can you open a command window and enter the command 'ping' and tell me what you see"

    User: <response indicates all ping packets disappeared into cyber-hyperspace>

    Me: "Are you sure you have a connection to the internet?"

    User: "Well of course I ..... ahhh, I see the problem now"

    Me: <sigh>

  22. gisabsr
    IT Angle

    Government IT - enough said

    Biggest IT SNAFU I've been involved with? Put it this way - I used to work for the Office of the e-Envoy, responsible for UKOnline and the Government Gateway.

    Microsoft involvement meant that the portal required IE, for 'security' purposes. After a couple of months I think we managed to get it to work, mostly, in other browsers, but initial signup still had to be in IE on Windows.

    Later we discovered that the terms of the contract meant that MS got the rights to take away all the things it had learned from working with us to set up the government gateway, and implement the same thing elsewhere, including stuff we'd developed in house.

    That whole wretched episode isn't entirely deleted off my CV, but its appearance is brief...

  23. J Bourne

    After being asked to check the security of the company web site (remotely hosted), I enabled 'hot link' protection on it through the hosting admin console - a feature intended to prevent media from the site being embedded on any other sites apart from approved ones, i.e. our own. Tested the site at 3:15pm and all was working well; went home at 5pm. Arrived next morning to find that at 5:15 all the images had 'inexplicably' disappeared off the site's web pages.

    The hosting control panel could have mentioned the small fact that it takes (for some unknown reason) 2 hours to apply the setting and another 2 hours to remove it, and that it might actually break the site. *doh* - and this was just last week!
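    Hot-link protection is usually nothing more exotic than a Referer check at the web server: requests for images whose Referer names a foreign host get refused. A minimal sketch of the idea in Python - the approved host list is hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical list of hosts allowed to embed our media
APPROVED_HOSTS = {"www.example.co.uk", "example.co.uk"}

def hotlink_allowed(referer_header):
    """Refuse requests whose Referer names a foreign host.

    A missing/empty Referer must be allowed, or you also block direct
    visits and privacy-conscious browsers that strip the header.
    """
    if not referer_header:
        return True
    return urlparse(referer_header).hostname in APPROVED_HOSTS

print(hotlink_allowed("https://www.example.co.uk/about"))
print(hotlink_allowed("https://thief.example.net/blog"))
```

    The empty-Referer case is the classic footgun: lock it down and you break every visitor who types the URL in directly.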

  24. js.lanshark

    Where is your temp file at?

    Users at a remote site were complaining that a new application ran very slowly, while all the users at the main site reported no problems. Latency was below 5ms and utilization below 40% on the WAN link.

    It turned out that the desktops were configured at the main site and shipped to the remote site, so the fact that the application used a file share at the main site as its temp directory was never an issue. It worked fine if you were local, but from the remote site your data took multiple hops across the WAN before you ever got to do anything with it; and even then, it was still across the WAN.

    Changed the temp dir to use one on the workstation and all was good.
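    Assuming the application honoured the standard temp-directory environment variables, the fix amounts to repointing them at a local disk. A sketch of how that resolution works, using Python's tempfile module - LOCAL_TMP is a made-up path for illustration:

```python
import os
import tempfile

# Point the process at a local scratch directory instead of a file
# share across the WAN. (LOCAL_TMP is a hypothetical path.)
LOCAL_TMP = os.path.join(os.getcwd(), "scratch")
os.makedirs(LOCAL_TMP, exist_ok=True)

os.environ["TMPDIR"] = LOCAL_TMP   # consulted first on POSIX
os.environ["TEMP"] = LOCAL_TMP     # consulted on Windows
os.environ["TMP"] = LOCAL_TMP
tempfile.tempdir = None            # discard tempfile's cached answer

print(tempfile.gettempdir())
```

    tempfile.gettempdir() checks TMPDIR, then TEMP, then TMP before falling back to a platform default, so any of the three would have done; resetting the module-level cache matters only if the process has already asked once.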

    1. Anonymous Coward
      Anonymous Coward

      Re: Where is your temp file at?

      When the French tax office introduced online income tax forms they required a secured connection, and the first step of account setup was to create a public/private key pair. The private key was stored in C:\something. Or might have been if I hadn't been running on Linux. Back to paper.

      A later version didn't explicitly use C: but just stored the private key file in the current directory. With world-readable permissions.

      Government IT? Don't go near it, ever.
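      The world-readable key file is the easily avoidable part: create the file with owner-only permissions atomically at open time, rather than writing it out and chmod-ing afterwards. A POSIX-flavoured sketch in Python - the file name and key contents are placeholders:

```python
import os
import stat

def write_private_key(path, key_bytes):
    """Create the key file owner-read/write only (0600) from the start.

    O_EXCL refuses to clobber an existing file, and the mode is applied
    at creation, so there is no window where the key sits world-readable.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(key_bytes)

if os.path.exists("demo.key"):     # keep the sketch re-runnable
    os.remove("demo.key")
write_private_key("demo.key", b"-----BEGIN PRIVATE KEY----- ...")
mode = stat.S_IMODE(os.stat("demo.key").st_mode)
print(oct(mode))
```

      The same pattern applies in any language: pass the restrictive mode to the create call itself instead of relying on a later permissions fix-up that might never run.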

  25. SecretSonOfHG

    Documents disappearing

    Got a call from a user saying that she had lost a very important document on her laptop. She could not open it from the "Recently opened" file list of some Office app. When I asked her to look it up in the folder where it should be, she could not find it either. In fact, the folder was empty. She insisted that she had not deleted it, and that it was an important document she had been working on for days. Furthermore, she had lost a lot of work in that folder.

    I could not resist and walked over to her desk. She showed me the original document, attached to a mail message - they used Outlook at the time. Then she opened the folder where the document was. I immediately spotted the "AppData/Temporary blah blah" path and asked her why she saved anything there, as that was a special location for opening temporary files, and as such you were at the whim of Outlook as to whether it kept or deleted them. In fact, that was exactly what had happened, and there was no hope of getting the document back.

    Her answer was that she had always edited documents from there, as that was the location they were opened by Outlook when you double clicked on an attachment, and whenever they were ready, she mailed them back.

    The amazing part was that she had been doing that for years, and this was the first time she had lost something.

  26. Hazmoid

    The Data centre kill switch

    We too had the problem with the power kill switch being 1. unlabelled, 2. next to the exit door, 3. uncovered. After two power-outage incidents caused by the fact that it was uncovered (and easily bumped as you were exiting the door), this was remedied with a lift-up activation cover.

    As recently as this year, I had a problem where we moved a number of users into an existing office and their computing infrastructure into an existing comms room. Because there were no spare 15 amp sockets, the UPS for these servers was plugged into the existing UPS. Everything was working fine when we left on Saturday afternoon. Monday morning I got a call on my way to work complaining that nothing was working. When I walked into the comms room I could instantly see the issue. The extra draw on the existing UPS had caused its power breaker circuit to drop out. When the battery died, that was it; everything went down, including the phone system. After some power circuit juggling and a visit from a sparky to run two new circuits it was all sorted. The email server corruption was another issue though :(
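    A back-of-envelope load check on the Saturday would have caught the overloaded circuit before Monday. A sketch with entirely hypothetical wattages (UK 230 V mains, a 15 A breaker as in the story):

```python
def breaker_headroom(loads_watts, volts=230.0, breaker_amps=15.0):
    """Total current drawn by the kit on one circuit vs the breaker rating."""
    amps = sum(loads_watts) / volts
    return amps, amps <= breaker_amps

# Hypothetical figures: what was already on the circuit, plus the
# load of the newly daisy-chained UPS.
original = [1200, 800, 400]   # watts
extra    = [900, 700]         # watts

amps_before, ok_before = breaker_headroom(original)
amps_after,  ok_after  = breaker_headroom(original + extra)
print(f"before: {amps_before:.1f} A (ok={ok_before}), after: {amps_after:.1f} A (ok={ok_after})")
```

    With those made-up numbers the circuit sits comfortably inside 15 A until the second UPS arrives, then blows straight past it - which is roughly what the breaker discovered the hard way over the weekend.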
