When disaster strikes, proper preparation prevents poor performance

As Benjamin Franklin famously said: "An ounce of prevention is worth a pound of cure," and that's especially true when it comes to disaster recovery. Most companies with a decent-sized IT department will have an incident response plan, but that in itself is nowhere near enough. Such plans have to be constantly updated and …

  1. Persona Silver badge

    Title

    As titles go, it's taking the piss

  2. Anonymous Coward

    Efficient is not Robust

    Business disaster prevention:

    Prevention costs money. Continuous backups cost money. Redundant systems cost money. Competent employees cost money. Training costs money.

    And we don't expect a disaster this quarter.

    Joking aside, the only effective strategy is to practice regularly. Every X days or weeks you shut down a server or service and restore it from backups. You move work from one site or cloud to another, etc.

    Only then can you be confident that the backups work and that people know how to respond quickly and effectively - see the sketch further down this comment.

    But when this finally works, you cannot simply fire employees every few years, as the replacements would have to be trained all over again. So now the board fires you and hires someone who doesn't waste the stockholders' money.
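    To make "practice regularly" concrete, here is a minimal sketch of what a scheduled restore drill could look like, assuming PostgreSQL-style dump files; the backup directory, the scratch database name and the "orders" table are hypothetical stand-ins for whatever your own environment uses.

      #!/usr/bin/env python3
      """Restore-drill sketch: restore the newest backup into a scratch
      database and run a sanity query, failing loudly if anything breaks."""

      import subprocess
      import sys
      from pathlib import Path

      BACKUP_DIR = Path("/backups/nightly")  # hypothetical backup location
      SCRATCH_DB = "drill_restore_test"      # throwaway database for the drill

      def latest_backup() -> Path:
          """Pick the most recent dump file by modification time."""
          dumps = sorted(BACKUP_DIR.glob("*.dump"), key=lambda p: p.stat().st_mtime)
          if not dumps:
              sys.exit("No backups found - the drill has already failed.")
          return dumps[-1]

      def run(cmd: list[str]) -> None:
          """Run a command, aborting the drill if it fails."""
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      def main() -> None:
          dump = latest_backup()
          run(["dropdb", "--if-exists", SCRATCH_DB])   # start from a clean scratch DB
          run(["createdb", SCRATCH_DB])
          run(["pg_restore", "--dbname", SCRATCH_DB, str(dump)])
          # Trivial sanity check: can the restored data actually be queried?
          run(["psql", "-d", SCRATCH_DB, "-c", "SELECT count(*) FROM orders;"])
          print(f"Drill OK: {dump.name} restored and queried.")

      if __name__ == "__main__":
          main()

    Put something like that on a schedule every few weeks and a dead backup gets noticed during a drill rather than during the real disaster.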

    1. vogon00

      Re: Efficient is not Robust

      At last.....someone as disillusioned and cynical as me!

  3. BinkyTheMagicPaperclip Silver badge

    Not just off site

    Remember from Buncefield and other disasters that the off-site backup or the hot/cold standby systems should be an appreciable distance from your main business (which of course adds a large amount of expense).

    Too many recovery efforts have been undone by large power outages, dependencies that only later turned out to share a single point of failure, or sizeable natural disasters.

    I think some of work's data centres are around thirty miles from one another as the crow flies, which isn't perfect, but anything big enough to take out both would be such a large disaster that staying alive, rather than greater resilience, would be the only thing most of us cared about.

    1. MachDiamond Silver badge

      Re: Not just off site

      "I think some of work's data centres are around thirty miles from one another as the crow flies, which isn't perfect"

      My offsite backups are in the back of the closet in my mother's spare bedroom around 150 miles away and on a different tectonic plate.

      1. MatthewSt Silver badge

        Re: Not just off site

        My only experience of tectonic plates is that they rub against each other, so if there's a join between you and your Mum then I imagine an issue affecting one of you would also affect the other...

  4. werdsmith Silver badge

    We know that if you haven't run your process through the DR system recently with your current staff and your current SOPs, you haven't got a DR system.

    Management thinks that if you have an idea of how you might do it, and you created a runbook for it several years ago, then the box is ticked.

  5. pdh

    One day, your DR system will face a realistic test

    That's all but guaranteed. The only question is whether the test will be part of a planned exercise on your part, or the result of some unexpected external factor.

  6. Jim Whitaker

    Missing p

    As it stands, your headline is missing a p. Or are you taking the p?

  7. Anonymous Coward

    I recall in a previous job - and this brings in the bit about "outside help" - manglement had someone in from outside (without actually telling IT!) - I don't recall if this was from the insurance company or the business's US owners ...

    So manglement then throws a list of "failures" at IT, along with an instruction to sort them out. One of them was "no lockout on failed login attempts", which we were told to enforce even though the OS we were using at the time didn't support it, nor could we lock out the "dynamic" TTY lines used by Telnet clients (yes, a long time ago) - only when the fixes caused "lots of problems" were we allowed to revert. Something we could have pointed out to the clueless auditors (who had zero clue about our systems and just used generic checklists) if we'd been given the opportunity. But I digress ...

    After one such visit, our director just looked at us and instructed: "write disaster recovery plans for the IT". Well, I had some ideas, but fortunately got onto a business continuity course which was very illuminating - if you are in any way involved in business continuity (BC) or disaster recovery (DR) then I recommend getting on a good course. The main thing I learned was that you can't treat the IT as an independent entity - you need requirements from the business. Without those, you could implement something that won't support the business, or spend more than you need to on something better than the business actually needs. Needless to say, requests for this sort of involvement from manglement were met with responses along the lines of "stop being awkward and just do it".

    Oh yes, and did I mention there was no budget for anything either - so if we'd determined that the backup systems we then had weren't adequate, we didn't have a budget to improve anything!
