Ukraine's secret cyber-defense that blunts Russian attacks: Excellent backups

The Kremlin-backed cyberattack against satellite communications provider Viasat, which happened an hour before Russia invaded Ukraine, was "one of the biggest cyber events that we have seen, perhaps ever, and certainly in warfare," according to Dmitri Alperovitch, a co-founder and former CTO of CrowdStrike and chair of security- …

  1. Potemkine! Silver badge

    Insightful

    This is very interesting.

    As it will be quite impossible to thwart all the attacks, we should then install systems/methods/solutions that can take care of most of them. At the same time, we should focus on resilience and recovery of the systems. Basic efforts on protection, maximum efforts on recovery. It doesn't mean only having backups but also testing them, and it isn't always easy in a production environment.

    1. thames

      Re: Insightful

      Perhaps the production environment needs to be designed from the ground up with recovery in mind rather than it being tacked on as an afterthought. And perhaps recovery needs to be tested regularly, with objective measures of how well it worked.

      Like you said, eventually an attack will get through. The real test of your preparedness comes in terms of how fast you can recover from it.
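
      (Not part of the original comment - just a rough sketch of what "objective measures" might look like in practice: time a scripted restore drill and compare it against a recovery-time objective. The restore script name and the 60-minute RTO are made-up placeholders.)

      import subprocess
      import time

      RTO_MINUTES = 60  # hypothetical recovery-time objective

      def run_restore_drill():
          """Run a scripted restore and report how long it took versus the RTO."""
          start = time.monotonic()
          # Placeholder: swap in whatever actually rebuilds a copy of the system,
          # e.g. a pg_restore into a scratch database or a VM rebuild script.
          subprocess.run(["./restore_latest_backup.sh", "--target", "drill-env"], check=True)
          elapsed_min = (time.monotonic() - start) / 60
          verdict = "PASS" if elapsed_min <= RTO_MINUTES else "FAIL"
          print(f"Restore drill took {elapsed_min:.1f} min (RTO {RTO_MINUTES} min): {verdict}")

      if __name__ == "__main__":
          run_restore_drill()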

      1. Suragai

        Re: Insightful

        Agreed about recovery being built in from the beginning, but a lot of places have systems that have been running from before there was a perceived* need for this, and adding them is not trivial.

        Convincing the business that it needs to practice restoring from backups is always very hard - until they get owned, then you get a lot of enthusiasm for the next year or two...

        *we know the need was always there, but convincing the business to spend the money is always an uphill battle.

      2. wcpreston

        Re: Insightful

        As someone who has focused on data protection and resiliency for 30 years, it warms my heart to see you say that. Backups have always been an afterthought.

      3. Danny 2

        Re: Insightful

        @Thames

        Three minutes, that's how long it will take our other centre to resume our work if we get nuked.

        "Wait, we're likely to be nuked? Nobody mentioned that."

        Or drowned, or terrorism, whatever. The important thing is three minutes downtime maximum.

        "I don't think that is the important thing. I'm still more focussed on the nukes,"

        I also worked for a wee business that had an excellent backup strategy which they never tested. The DAT tapes were all chewed through with age. My first purchase order was for new tapes, and I was made to justify the expense. "Well, do you still want to work here next year?"

        1. teknopaul

          Re: Insightful

          I find a good approach to testing backups is to regularly use the backups for something else.

          e.g. Restore production data into performance test environments.

          You need a big dataset anyway, take the opportunity to test recovery procedures.
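
          (Illustration only, not from the comment: a minimal sketch of that idea, assuming a Postgres dump - restore the latest production backup into a perf-test database and run a trivial sanity query. The paths, connection string and 'orders' table are placeholders.)

          import subprocess

          BACKUP_FILE = "/backups/prod_latest.dump"             # placeholder path
          TEST_DB_URL = "postgresql://perf-test-host/perf_db"   # placeholder test instance

          def refresh_perf_env():
              """Load the latest production backup into the perf-test database,
              which doubles as proof that the backup can actually be restored."""
              subprocess.run(
                  ["pg_restore", "--clean", "--no-owner", "-d", TEST_DB_URL, BACKUP_FILE],
                  check=True,
              )
              # Trivial sanity check that the restore produced usable data.
              out = subprocess.run(
                  ["psql", TEST_DB_URL, "-tAc", "SELECT count(*) FROM orders"],
                  check=True, capture_output=True, text=True,
              )
              print(f"Restore OK, orders table has {out.stdout.strip()} rows")

          if __name__ == "__main__":
              refresh_perf_env()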

  2. Terry 6 Silver badge

    He said...

    "We have to spend a lot more effort on resiliency."

    He should have said:

    "We have to spend a lot more money on resiliency."

    Because effort is sort of secondary. It appears to me that it's the cash for back-up systems, training, staff, etc. that is begrudged. Beancounters don't like that sort of spending.

    1. Anonymous Coward
      Anonymous Coward

      Re: He said...

      You hit the nail on the head.

      They, and the suits they report to, are the reason we don't have resiliency, or adequate security, or trained in-house staff, or high-salary, high-experience employees.

      I can't blame them for everything - there's also the suits and the lawyers - but I've definitely got space for them up against the wall come the revolution.

  3. Dan from Chicago

    Maintaining offline backups is expensive and a lot of boring, repetitive work. But online backups can be zapped by exploits - they're part of the live system, even though some very clever approaches exist to buffer them against attack and make them pseudo read-only (one such approach is sketched below). Plenty of clever approaches also exist to stop the exploits in the first place, and they never seem to be enough.

    When this idiocy is over, Ukrainian IT people are going to be in high demand, everywhere!
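
    (Not Dan's words - just one example of the "pseudo read-only" buffering mentioned above: object-storage immutability such as S3 Object Lock, sketched with boto3. The bucket name, paths and retention period are assumptions, and the bucket must have been created with Object Lock enabled.)

    import datetime
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-backup-bucket"  # placeholder; created with Object Lock enabled

    def upload_immutable(path, key, days=30):
        """Upload a backup object that cannot be deleted (even with admin
        credentials) until the retention date passes - compliance mode."""
        retain_until = (datetime.datetime.now(datetime.timezone.utc)
                        + datetime.timedelta(days=days))
        with open(path, "rb") as f:
            s3.put_object(
                Bucket=BUCKET,
                Key=key,
                Body=f,
                ObjectLockMode="COMPLIANCE",
                ObjectLockRetainUntilDate=retain_until,
            )

    upload_immutable("/backups/prod_latest.dump", "prod/latest.dump")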

    1. wcpreston

      Or... you could have off-site backups stored in a completely different computing environment that none of your employees have admin access to. This is what you get using a SaaS data resilience solution.

      As you already alluded to, the only truly secure computer is one not connected to anything. But having backups in a completely different cloud provider, different account, different design, and different authentication mechanisms, with NO logins whatsoever to said infrastructure, makes it extremely unlikely that some hacker attacking your production network would be able to use your network to also attack your SaaS backups.

      Disclaimer: I work for such a company, Druva. I'm also an expert in backups, having written four O'Reilly books on the topic. You can get an ebook copy of my latest one (Modern Data Protection) here: https://www.druva.com/ebook.

  4. John Smith 19 Gold badge
    Coat

    "Maintaining offline backups is expensive and a lot of boring, repetitive work."

    Boring, repetitive work that needs to be done accurately?

    Gosh. Sounds just the job for one of those new-fangled, what do you call em? Computers?

    Seriously, what is it with some operations teams? They cannot seem to grasp that repetitious s**t is exactly what computers do best.
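
    (Case in point - a minimal sketch of letting the computer do the boring bit: a nightly job that makes the backup, checks the archive is readable, and ships it somewhere else. All paths, hosts and commands are placeholders.)

    import datetime
    import subprocess

    def nightly_backup():
        stamp = datetime.date.today().isoformat()
        archive = f"/backups/nightly-{stamp}.tar.gz"

        # 1. Create the backup - the repetitive bit nobody wants to do by hand.
        subprocess.run(["tar", "-czf", archive, "/srv/data"], check=True)

        # 2. Verify the archive is readable - a backup you can't read isn't a backup.
        subprocess.run(["tar", "-tzf", archive], check=True, stdout=subprocess.DEVNULL)

        # 3. Copy it somewhere that isn't the machine being backed up.
        subprocess.run(["rsync", "-a", archive, "offsite-host:/vault/"], check=True)

    if __name__ == "__main__":
        nightly_backup()  # typically kicked off by cron or a systemd timer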

    1. SImon Hobson Bronze badge

      Re: "Maintaining offline backups is expensive and a lot of boring, repetitive work."

      Yeah, a lot of it is easy to automate - but then there's that boring stuff of removing media and taking it off-site, putting the right "next set" in place, etc., etc. That's the bit that always got me back when I was doing daily (well, overnight) backups for ${day_job}. Oh yeah, and labelling the damn tapes.

      But in spite of zero budget beyond "if we don't have backups the auditors will fail us" minimum, at least in IT we were confident we could recover from total devastation - in as long as it took to get new hardware plus another day. No idea how that would fit in with the "if we don't recover in ${time_period} then don't bother because the business will be toast" figure as the business never did any analysis down that route (manglement were prompted).

      1. Trygve Henriksen

        Re: "Maintaining offline backups is expensive and a lot of boring, repetitive work."

        If you have to worry about inserting the wrong set, or labelling tapes, you don't have a large enough robot.

        In a robot, there are three sets of tapes: protected (tapes written to, and that should not be reused for a certain period), the 'Scratch tapes' (any tape not protected by a time limit) and the cleaning tapes...

        Tapes you NEED to remove for offsite can be exported, and you replace them with scratch or new tapes.

        And you can get LTO tapes already labelled from the factory.
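
        (Not how any real library firmware works - just a toy sketch of that three-pool bookkeeping, with made-up barcodes: written tapes stay protected until their retention date, and anything else that isn't cleaning media counts as scratch.)

        import datetime

        class TapeLibrary:
            def __init__(self):
                self.retention = {}    # barcode -> protected-until date
                self.cleaning = set()  # cleaning cartridges, never used for data

            def mark_written(self, barcode, keep_days):
                """Move a freshly written tape into the protected pool."""
                self.retention[barcode] = (datetime.date.today()
                                           + datetime.timedelta(days=keep_days))

            def scratch_tapes(self, inventory):
                """Any loaded tape that is neither cleaning media nor still
                inside its retention window may be overwritten."""
                today = datetime.date.today()
                return [t for t in inventory
                        if t not in self.cleaning
                        and self.retention.get(t, today) <= today]

        lib = TapeLibrary()
        lib.mark_written("LTO123L8", keep_days=35)
        print(lib.scratch_tapes(["LTO123L8", "LTO124L8"]))  # -> ['LTO124L8']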

        1. SImon Hobson Bronze badge

          Re: "Maintaining offline backups is expensive and a lot of boring, repetitive work."

          Robot? Would you be referring to one of those expensive machines for handling tapes - the sort that our manglement would never ever have considered paying for, even if it would have been of practical use to us?

          We were a fairly small outfit - so manual operations - pop into the server room, pull the tape out of the drive, pop the next one in, send the tape off to our off-site storage (our warehouse far enough down the road that if it went up along with the main site then we'd not care about trying to recover). DAT tape for the nightly backup of live data, and DLT for the weekly full system backup - with a fairly large store (OK, filing cabinet drawer) holding a couple of months of the nightly tapes, and a couple of years of the weeklies. We only got all that because tapes could be sneaked out of the op-ex budget!

          And before you ask, no we didn't have spare drives we could test read the tapes on :-( And we didn't have spare hardware so we could do useful things like test full restores :-( I can tell you, major upgrades - of the "back up, wipe the disk array & re-configure (with more/bigger disks), restore" sort were "bum twitching" 8-O

          1. Trygve Henriksen

            Re: "Maintaining offline backups is expensive and a lot of boring, repetitive work."

            They forced you to use DAT tapes for backups?

            Those... those... FIENDS!

            1. SImon Hobson Bronze badge

              Re: "Maintaining offline backups is expensive and a lot of boring, repetitive work."

              Not forced - it was the logical choice at the time in terms of cost, capacity, and performance.

              From memory back then, DLT was horrendously expensive for both drives and media, SLR (correction to above, it was SLR we used for our weeklies, not DLT) was similarly expensive. Smaller tape formats (the mini version of the SLR) didn't have the capacity or performance. DAT apparently met our mix of constraints - for a while at least - and had a conveniently small media size which helped with storage.

          2. John Smith 19 Gold badge
            Unhappy

            And we didn't have spare hardware so we could do useful things like test full restores

            This is where an IT Manager has to actually start "managing".

            "Managing" the C Suite's understanding of what will happen during an IT failure IE Basically that the company will die within days, if not hours with the systems as they stand.

            And that making restore processes effectively untestable implies that we (the dept) cannot guarantee a restore will even be possible right now.

            IT is a service. It's like air. Normally you don't notice it. But you soon would if it was removed.

            These are the areas we are weak-to-nonexistent on.

            Have an outline budget but do not give it unless asked (because the #1 rule is that while you cannot predict detailed threats, you can predict directions and work through consequences, and hence devise mitigations for classes of threats. Anyone who found that last sentence too vague and abstract probably won't succeed managing an actual IT dept :-( ).

            I've worked in companies where IT was everything from a guy who came in every few months to ones where they regularly practiced starting up a backup generator in case of power failure (and it had to start).

  5. SImon Hobson Bronze badge

    I have to admit, over the last few months I've been seriously impressed with Ukraine's ability to carry on. Not just with IT, but every update seems to include (or included - they seem to have stopped including them as "non-news" now) something along the lines of a "Russia knocked out Gas|Lecky|Internet|something, Ukraine has restored it" piece.

    And something we in the UK ought to consider, especially after the example just the other week. One of the reasons Ukraine has kept internet access working for so many people is that they have massive diversity - different providers, with separate fibre networks, etc, etc. In the UK, the majority of us are reliant on one provider - doesn't matter whose name is on the bill, the connection comes courtesy of BT OpenRetch, and one hiccup can take out millions of connections across all providers. If we suffered an attack like Ukraine did, we'd go dark in minutes.

    1. ChoHag Silver badge

      > I have to admit, over the last few months I've been seriously impressed with Ukraine's ability to carry on.

      Impressive yes, but if you knew any Ukrainians you would not be surprised.

      What surprises me is that Russia didn't know this. Perhaps they assumed Ukrainians were as inept as they are?

      Slava Ukraini.

      1. SImon Hobson Bronze badge

        What surprises me is that Russia didn't know this

        Ah, you talk of "Russia" as if it's one whole, a people all in agreement, and crucially - where people are free to tell their superiors what they think. What has become clear over the last few months is that the last bit is far from the truth - for whatever complex reasons, this is Putin's war, and his top brass were not able (or willing) to persuade him of his misunderstandings.

        At various levels, "reality" is failing to influence those at the top. Whether it's the cannon fodder on the ground realising that the "Ukrainians will welcome you as liberators with open arms" story was an outright lie, but facing a choice of either carrying on and getting killed or (if they're lucky) captured by the Ukrainians, or turning back and being shot by their own side. Or whether it's the officers up the chain who manage to change "we're getting slaughtered and getting nowhere" to "everything is going as well as we could have planned for" by the time it reaches the top.

        I reckon a good few people really did "know this" - but were unable to persuade their superiors of that, unable to penetrate the reality filter protecting each level from anything inconsistent with the "official reality".

        I don't know any Ukrainians, nor, to be honest, much about the culture of that area. But your comment that "if you knew any Ukrainians you would not be surprised" is no surprise - with a neighbour like that, you would be totally mad not to assume there'll be hostilities sooner or later.

  6. Nifty Silver badge

    Not mentioned in the article was the fact that part of the recovery effort was getting the sat modems to a service centre to be re-flashed, while distributing replacements. How the various victims, including the German wind turbines, were restored so quickly must be an interesting story.
