How NOT to f-up your security incident response

Experiencing a ransomware infection or other security breach ranks among the worst days of anyone's life — but it can still get worse. Like if you completely and utterly stuff up the incident response investigation and that snafu adds millions of dollars more in damage costs to the overall bill. In one such incident, Jake …

  1. Doctor Syntax Silver badge

    [Problems with] the subsequent forensic report stem from "a big issue of confirmation bias ... The report reads like they formed a theory about what happened, and then spent a bunch of time going and searching for evidence that supported their conclusions."

    The correct approach in forensic science is to form a theory and then look for evidence to disprove it. The harder you look and fail to find any, the more likely the theory is to be correct.

    1. Eclectic Man Silver badge
      Unhappy

      Or, as Arthur Conan Doyle has his famous detective, Sherlock Holmes, put it:

      "When you have eliminated everything that is impossible, whatever remains, however improbable, must be the truth."

      The thing is that doing an investigation means you have to look at everything, not merely what you want to examine. I cannot help feeling that in far too many disasters, 'mangelement' tries to blame 'other people' for the decisions they made that resulted in the breach / disaster / whatever.

      I did a few investigations some decades ago, and it was interesting that blaming the actual 'management' culprit was rarely an option. One merely had to specify what had occurred, and not apportion blame. So, the senior manager who left his (unencrypted) company laptop on the back seat of his company Range Rover while he went for an evening meal in a restaurant, returning 2 hours later to find a broken window and an absence of said laptop (and other sensitive company papers), was, of course, not punished in any way (a few years later he was promoted). If I had done that, I'd have been sacked.

      We suspect the laptop was wiped and later re-sold, as there was no ransom demand and the sensitive client data does not appear to have been used against the company. And, hopefully, the replacement was encrypted before it was issued to him.

    2. Mike 137 Silver badge

      In the right order

      "The correct approach in forensic science is form a theory and then look for evidence to disprove it."

      But only after gathering all possible evidence without any preconceptions. Forming a theory too soon is a pitfall for the unwary. The fundamental problem, though, is that IR 'forensics' are not laboratory forensics, let alone forensic science -- they're practical investigation primarily directed towards damage limitation rather than abstract analysis. So we're not really discussing forensics (a heavily misused term [1]) here. The forensics come later, once the incident is under control.

      [1] Strictly, forensics is the gathering and presentation of evidence "pertaining to, connected with, or used in courts of law" [OED]

      1. Outski

        Re: In the right order

        Upvote for correct definition of forensic.

  2. Mike 137 Silver badge

    "having a current incident response plan that is [...] regularly rehearsed

    Several organisations (both international corporate and UK govt.) I have consulted with conducted "rehearsals" as sit-down sessions with an external consultant who talked the executive through some elementary scenario and asked them how they might respond. In one classic case, the scenario was "how do we evacuate the building and get staff working from home when a UXB is discovered in the next street?", with no other possible incidents even being mentioned during the session, despite my attempt to make that happen (which was actually considered disruptive).

    Many IR plans I've encountered have been restricted to addressing a limited list of 'expected' incidents, and none have been actually live tested at all. One such plan, even after multiple notional reviews (as indicated by dates on the front cover), still contained an action flow chart with an infinite loop triggered by a branch early on in the decision sequence. Apparently, nobody had ever noticed this. When I suggested to this client that there should be at least an annual unannounced incident simulation, I was told that it would annoy the notional first responders to be called out at 3 AM without warning. When I gritted my teeth and further suggested that, in aid of realism, confusion should be intentionally injected into the simulation, I became seriously unpopular.

    Finally, no IR plan review panel I have encountered has included any technical staff -- it's always been the executive and senior non-technical management. So, taking all this into account, it's not surprising that incident response generally remains abysmal, as nobody seems to take it seriously until too late.

    1. Like a badger Silver badge

      Re: "having a current incident response plan that is [...] regularly rehearsed

      This is because organisations only want what they have: Sufficient pantomime to claim they have a DR/IR plan, and to claim that it has been tested.

      Very, very few companies want the disruption and pain of proper DR testing, because that will reveal things that don't work and need expensive fixing. And there's a simple test as to whether these organisations actually care: Do they keep vast amounts of rarely needed but easily pilferable data on hot servers in the first place? The answer's almost always yes.

    2. keithpeter Silver badge
      Windows

      Re: "having a current incident response plan that is [...] regularly rehearsed

      "Many IR plans I've encountered have been restricted to addressing a limited list of 'expected' incidents, and none have been actually live tested at all."

      I wonder what the risk register was like...

      [Seems like the risk register might be a starting point for rational incident response planning, but this is all above my pay grade these days]

    3. dmesg Bronze badge

      Re: "having a current incident response plan that is [...] regularly rehearsed

      "Finally, no IR plan review panel I have encountered has included any technical staff -- it's always been the executive and senior non-technical management."

      Yep. I've been the technical person in some such meetings. When you start pointing out flaws in plans and current practices/configurations, you become unpopular. Management tends to see these meetings as box-ticking exercises.

      1. Azamino

        Re: "having a current incident response plan that is [...] regularly rehearsed

        I contracted for one of the larger European banks in the noughties, and once a year we would schlepp over to their Business Continuity site on a weekend to practise failing over from the 'live' site in the City. We would make live a handful of BBG terminals, connect to DataStream and have a bunch of juniors log in to PCs and run thru' a tick list of tasks. The senior management didn't really engage, and their running joke was that it was an elaborate scheme on our part to generate some easy overtime (and to be fair it was a pretty tasty day rate).

  3. Howard Sway

    Both the CISO and CIO were fired over the security incident

    The classic executive response to an incident: find a scapegoat and CYA. Now, they may well have been incompetent and deserved their fate, but the main reason for there being no comprehensive recovery plan is nearly always an unwillingness to pay for something seen as very expensive with not enough benefit for the cost.

    1. Eclectic Man Silver badge

      Re: Both the CISO and CIO were fired over the security incident

      You will note that there was no mention of the people who hired the CIO and CISO or oversaw their work also being fired.

  4. Terry 6 Silver badge
    Flame

    Part of the usual culture

    The various organisations I've been involved with, at the various levels and roles over the decades, have always had in common an unwillingness to look at "hypotheticals". Or, more to the point, to plan for them. A mixture of hoping for the best and abject failure to contemplate the consequences of a business continuity failure. A lot of "It probably won't happen, and if it does we've got the competence to deal with it". On the two or three occasions when something big did go wrong, like a major flood or a gas leak that meant evacuating hundreds of kids, there would be nothing in place, and senior brass would be nowhere to be found, leaving it to the troops on the frontline to muddle through and cobble together a solution of sorts, in the absence of a satisfactory solution that could have been arranged fairly easily with just a little bit of advance planning.

    And if it's IT disasters, well, they have no concept of what that means. I'd lay heavy odds that these top brass think the computers are just expensive typewriters and calculators.

    Formal Business Continuity systems can even make it worse, being convoluted messes of reporting lines and structures designed, as far as I could fathom with the one I was slotted into, to avoid anyone having any direct responsibility for any kind of action, but with everyone expecting to be kept informed about what other people were doing.

  5. Anonymous Coward
    Anonymous Coward

    Another way it can be screwed up: if initial signs point to a consultant group whose boss is a friend of the IT manager being the source of the screw-up, great efforts will be made to ensure that the final answer is "it's a mystery, we know nothing."

  6. Anonymous Coward
    Anonymous Coward

    I wonder if the CISO was really just the security manager. Most CISO roles I've ever seen are nothing of the sort. Chief "Blamehound" is probably a more accurate description.

    Who authorised the non-patching of systems, blocked funds or time for an awareness training programme, and didn't allow any time for DR testing and rehearsal? Probably not the two being pushed out the door.

    Once upon a time I worked at an organisation that had a series of small ransomware attacks (disruptive, not devastating). The first one took 3 weeks to fully recover from.

    After that, I was granted a half day to run a tabletop exercise with the IT dept, where we did a very basic run-through of responding to a similar incident.

    A few weeks later we got hit again; this time, recovery took 3 days.

    I had feedback from IT where they specifically called out that the exercise contributed to the more rapid and controlled response.

    Management then refused to authorise or fund any further tabletop training.

  7. Kevin McMurtrie Silver badge

    Patelco

    • Notice a security incident
    • Shut the banking core down
    • Leave the home page broken only because it shows some cosmetic information from the banking core
    • Don't tell anyone what happened. Let rumors spread.
    • Turn off the phone systems because too many calls are coming in
    • Let tellers inform people that it was a hack
    • Wait a week so customers get really angry and start missing debt payments
    • Make a public announcement that there was an incident. Your money is safe.
    • Remember that the website is still broken. Put a banner on it saying that banking is down, but allow logins to a server error page.
    • Make a public announcement that there was an incident. Your money is safe. Wait, was this already done?
    • Let another week or two pass
    • Make a public announcement that there was an incident. Sorry, that was done. Announce that money transfers can be queued up by tellers, but banking is still offline.
    • After a month, bring banking back online. Don't give any details except assuring people that their money is safe and that the bank may cover some overdraft fees.
    • All customers pull their money out
