Everyone screams patch ASAP – but it takes most organizations a month to update their networks

The computer industry may have moved to more frequent software security updates – but the rest of the world still takes a month or longer to patch their networks. That is one of the findings in a new report by enterprise network bods at Kollective. The biz spoke to 260 IT heads in the UK and US about their systems and security …

  1. Anonymous Coward
    Childcatcher

    Patchy McPatchface

    I am a dyed-in-the-wool sysadmin who owns my own company (MD). I only have around 10 Windows and 20-odd Linux servers to worry about, on a VMware cluster with a slack handful of SANs, switches, etc., and pfSense routers.

    I can't manage to patch that lot to the Cyber Essentials standard all the time, because CE mandates that patches be applied within two weeks of release. That's a laudable aim and one to work towards, but the real world has a nasty habit of intruding.

    For example, recently (in the last two months) Mr MS unfortunately released a right old bugger's muddle of updates that broke Exchange a bit (ooh, me Transport Service has died), broke older and weirder SharePoints, and screwed Azure Sync (and the rest). I have also had RDP die on 2008R2 servers until I fixed certificate permissions and worked out which certificate to use. I really picked the wrong time to start restricting schannel settings and enabling other MS fixes via registry keys.

    I *am* the pointy haired boss and have absolute power (until my office manager kicks me into touch) and know what I am doing. I'm CREST accredited and can throw together a Gentoo box without bothering with docs. There are not enough hours in the day to patch things anymore.

    I have a few customers to worry about, and a few PCs as well.

    1. TReko

      Re: Patchy McPatchface

      MS just performs minimal testing on their patches these days before releasing them into the wild and seeing what breaks.

      1. Rich 11

        Re: Patchy McPatchface

        We're their beta testers. And we know that, so we try to wait at least 48 hours after release to see if there are any wider signs of trouble (and recommended fixes) before initiating our own testing. So in answer to the question:

        Why? What causes the delay?

        I'd say it's justified caution. By the time the weekend has come and gone and you've negotiated downtime (a common consequence) for a reboot early one morning and given the business fair warning, the live systems don't get patched until 7-10 days after Patch Tuesday.

        I was told last week that we're about to move to CE+, with as-yet unknown degrees of documentation to be thrown on top of the existing procedures. There was a sharp intake of breath from half the room.

        1. Tom Paine

          Re: Patchy McPatchface

          Pardon my ignorance - what's CE+ ? Not Windows CE surely?

          1. Rich 11

            Re: Patchy McPatchface

            what's CE+ ?

            Cyber Essentials Plus.

      2. Keith Langmead

        Re: Patchy McPatchface

        "MS just performs minimal testing on their patches these days before releasing them into the wild and seeing what breaks."

        So much this! There was a time when MS actually bothered to test updates, and issues from updates were rare occurrences. These days it's a rare month where an update doesn't break something. So is it really surprising, given the very real and demonstrable risk of an update breaking things versus the theoretical and possible risk of a compromise from not installing it, that people focus on protecting themselves against the greater and more common risk?

        1. anothercynic Silver badge
          Facepalm

          Re: Patchy McPatchface

          It's not just their patches that barely get any testing...

      3. Okole_Akamai

        Re: Patchy McPatchface

        I will just leave this right here -

        https://borncity.com/win/2018/07/14/microsofts-july-2018-patch-mess-put-update-install-on-hold/

        1. Danny 14

          Re: Patchy McPatchface

          We have a similar-sized environment. I just leave MS to be patched by WSUS, servers included. We run a Hyper-V cluster and that has Cluster-Aware Updating. All our MS stuff is as patched as can be. The updates run two days after release, on the Thursday, with the thinking that bad patches will have been pulled by the first-day beta testers by then.

          Our linux servers are very manual and take more time and notice to be patched.
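          For what it's worth, the "patch on the Thursday after Patch Tuesday" rule above boils down to a small date check. A minimal sketch in Python; the two-day grace period and the KB number are placeholders, and in practice WSUS auto-approval rules or your deployment tooling would enforce this:

            # Sketch: only approve an update once it has been public for at
            # least two days (e.g. deploy on the Thursday after Patch Tuesday),
            # on the theory that genuinely broken patches get pulled or flagged
            # by the first-day testers. KB number and window are placeholders.

            from datetime import date, timedelta

            GRACE_PERIOD = timedelta(days=2)

            def patch_tuesday(year, month):
                """Second Tuesday of the given month."""
                first = date(year, month, 1)
                first_tuesday = first + timedelta(days=(1 - first.weekday()) % 7)
                return first_tuesday + timedelta(days=7)

            def ready_to_approve(released, today):
                """True once the update has survived the grace period."""
                return today - released >= GRACE_PERIOD

            if __name__ == "__main__":
                today = date.today()
                pending = [{"kb": "KB1234567", "released": patch_tuesday(2018, 7)}]  # placeholder entry
                for update in pending:
                    verdict = "approve" if ready_to_approve(update["released"], today) else "hold"
                    print(update["kb"], "released", update["released"], "->", verdict)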

    2. FlamingDeath Silver badge

      Re: Patchy McPatchface

      Have you tried employing more staff?

      What my experience (20 years) of working in the IT industry has shown me is that company bosses don't like to invest in staff. They would rather have their four holidays a year than actually run a company with a full deck; the skeleton crew seems to be the norm these days.

      I despair too, because most of the organisations I have had the sad opportunity to experience have made me facepalm endlessly, to the point my forehead is bruised.

      1. A managed services company used a common password to access customer machines (password-01).

      2. The owner of said managed services company didn't even know what the WEEE directive was when I told him he couldn't just throw electronics into the normal bin.

      3. A technology manufacturer who developed a web app to work with their devices had a support account whose name was obviously easy to guess, and the password was the same as the username.

      4. A telecommunications company used 'folder redirection' in their domain policy, but the folder permissions were incorrectly set to Full Control for Everyone, meaning everyone in the company could read and write every other employee's Desktop, My Documents, etc. This company wanted to be ISO 27001 accredited. Bwaaahahahaha.

      5. I once started at a company and was handed a laptop that had not been wiped since its previous owner. I asked for an ISO from MSDN and was told there weren't any available, and that he had a disc at home. I asked if it was from MSDN, and he said he "thought so", but wasn't sure.

      6. Same company: I was not given a deskphone, and was told to order one from eBay and claim it on expenses!

      I could go on and on and on and on and on and on about the clusterfuck that is IT management.

  2. JohnFen

    Personally or in my workplace?

    "Hold long does it take you to update your networks? And what is your future solution to the constant nightmare of security updates?"

    In my workplace, I just leave it to the IT folks and don't pay any further attention to it (aside from being very thankful that's not my job).

    Personally, I don't rush to update things immediately. I usually do it once per month, as anything more frequent than that is unworkable for me.

  3. veti Silver badge

    Testing, testing, and more testing

    It seems to me that "network scaling issues" and "company policies" are just another way of saying "testing".

    If only we could get a provider who was willing to certify, on pain of actually, y'know, paying money by way of compensation, that a system designed in compliance with their published spec would continue to work correctly after patching...

    Ah well, I can dream.

  4. Anonymous Coward
    Anonymous Coward

    You do patching regularly and religiously once you've seen the outcome of not patching.

    A server outage due to a patch is easier to explain than a data breach lawsuit... you only have to hear about it from a friend to know you DON'T want one.

    1. yoganmahew

      "A server outage due to a patch is easier to explain than a data breach lawsuit..."

      Unfortunately it isn't. A patch is a change, and failed changes, particularly those that cause customer impact, are ITSM black death.

      Customer: "Why weren't we told you were going to patch?"

      Us: "We patch every night and have told you so"

      Cust: "Why didn't you give us the opportunity to test?"

      Us: "Because we waited weeks the last time, and you still didn't do any testing that we could observe, other than asking us to switch if off one night"

      Cust: "Why don't you use something secure?"

      Me: "You don't like our mainframe and demand something shinier"

      Cust: "Why didn't you tell us this change would break our geegaw?"

      Us: "!@##$%$#"

  5. Throatwarbler Mangrove Silver badge
    Holmes

    DUUUUUH

    This is not just a sysadmin problem. Let's say you push a patch to a commonly used development framework such as .Net or (ptui) Java. Suddenly a business-critical application falls over, so you roll the patch back and report the issue to the developers, who say they can't possibly get to testing against the new version, so you go and badger their managers, who eventually come around, if you're lucky, to realizing that having a security breach is Bad, so they repurpose the developers to update their code. Might take a week, might take longer. Obviously, the breakage should have been caught in QA, but--bad luck--the entire QA department was let go/offshored/never existed because it was not clear to the PHPTB (pointy-haired powers that be) that they added any value. Some amount of time later, the vulnerability is fully patched. Rinse, repeat as needed for every motherfucking patch that comes down the pipe across your entire estate, but if your environment gets pwned it's the fault of the sysadmin team.

    The problem is not "content distribution," and the Kollective can fuck right off. I'll go lie down now.

    1. Anonymous Coward
      Anonymous Coward

      Re: DUUUUUH

      such as .Net or (ptui) Java.

      Seriously? Someone prefers Microsoft? It takes all kinds of people.

      Suddenly a business-critical application falls over,

      And that's because these "business-critical applications" were written as if someone hired shit-flinging monkeys for the whole pyramid (more like an inverted pyramid, amirite?), from management down to the just-out-of-school "experienced developers", and then expected some viable delivery. There is no automated test code, not even unit tests. There is no documentation. There is no log. Sometimes there is no source code, design, or general idea of what's actually supposed to happen. Programs running as sysadmin, on the Internet. In JavaScript. If there is a licensed tool or library, someone finagled something so that the tool actually - perilously - performs a task for which one would *really* need to pay the next tick up the monetization ladder. For others, support has been left to expire and won't be renewed because "muh budget", and reactivating support after 5 years is top dollar. Or the original company went extinct, with the Intellectual Property ending up at Computer Associates or lost forever beyond the reach of mankind.

      "Business critical" is a forever clown show of dumb slogans and wilful ignorance pretending that it won't be mugged by sad reality.

  6. KSM-AZ
    Mushroom

    Patching Hourly

    I'd patch hourly if it didn't break shit. Unfortunately it breaks shit. Not every time. Generally when you are least expecting it, and sometimes two weeks after it was applied, ... "Every time the EOM jobs run now they puke, and it takes hours for us to clean it up... What gives?" You can pie in the sky all you want, but if you change shit, you break shit. The more shit you break the worse your reputation, and the more push-back you get the next time you want to patch. YMMV

  7. ecofeco Silver badge

    It depends

    Most places I've worked at always test the patches first. So however long that takes.

    1. Mr Dogshit

      Re: It depends

      You don't have the luxury of time to test anything. As soon as the patch is released, the bad guys can do a delta of an unpatched system and the patch and work out the vulnerability within minutes. A teenager in his bedroom in Macedonia will work out an exploit within hours.

      This isn't the 1990s. Many of your servers will be VMs, so you can take a snapshot or a clone, or, should it go monkey, restore the VM from backup. Microsoft patches are all installed via Windows Installer, so you can roll back very easily.

      Idiots argue if you patch systems, you might break something. The truth is, if you don't patch systems, something's going to get broken.

      Now and again you might break something by applying a patch. So what? Stuff breaks each and every day in IT, and each and every day we fix it. That's what we do. Stuff breaks, we fix it.

      Ultimately you have two choices: patch ASAP or get pwned.
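      For what it's worth, the snapshot-first approach a couple of paragraphs up is essentially: checkpoint, patch, verify, revert on failure. A rough sketch in Python, where snapshot_vm, apply_patches, health_check and revert_vm are hypothetical stand-ins for whatever your hypervisor API and patch tooling actually provide:

        # Sketch of a snapshot-first patch run: checkpoint, patch, verify, and
        # roll back automatically if the health check fails or the patch blows up.
        # snapshot_vm / apply_patches / health_check / revert_vm are hypothetical
        # placeholders for your hypervisor API and patch tooling.

        import logging

        log = logging.getLogger("patch-run")

        def patch_with_rollback(vm_name, snapshot_vm, apply_patches, health_check, revert_vm):
            """Patch one VM behind a snapshot; return True if the patch sticks."""
            snap_id = snapshot_vm(vm_name, label="pre-patch")
            log.info("snapshot %s taken for %s", snap_id, vm_name)
            try:
                apply_patches(vm_name)
                if health_check(vm_name):
                    log.info("%s healthy after patching, keeping changes", vm_name)
                    return True
                log.warning("%s failed its health check, reverting", vm_name)
            except Exception:
                log.exception("patching %s failed, reverting", vm_name)
            revert_vm(vm_name, snap_id)
            return False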

      1. Anonymous Coward
        Anonymous Coward

        Re: It depends

        Because your 'so what' in some organizations can result in hundreds of thousands of the currency of your choice going down the drain. I've seen it. In a lot of places, the O/S patching is determined by the application you are running, not the other way around. What's the point of having an O/S sat there all nicely patched when your app won't start! Or, worse still, it starts but subtly doesn't do what it did before.

        I am certainly not arguing against the need to patch - it is critical. But so is some level of testing, and of course buy-in from the application vendor. To patch at the drop of a hat - that way lies madness.

        1. Anonymous Coward
          Anonymous Coward

          "can result in hundreds of thousands of the currency of your choice go down the drain"

          The new GDPR requirements and fines for data breaches that lead to personal data leaks, especially if not disclosed in time, could change that approach...

          It could also lead to better-written applications that should not break when a non-breaking patch is applied (unless the patch itself has big issues, of course) - too many applications are too fragile.

          1. Anonymous Coward
            Anonymous Coward

            Re: "can result in hundreds of thousands of the currency of your choice go down the drain"

            Personally I think it will take time for GDPR to sink in, and even more time for the big corps and PHBs to sit up and take notice - probably only after the first few big fines hit the news. But it still doesn't entirely address the issue of application certification and support on patched O/Ss.

            A lot of businesses are trying to use off-the-shelf software rather than house their own IT devs (it's allegedly cheaper, right?). Once that happens, the only recourse is to badger the vendor - who more often than not is a small dev company that does not always understand the implications of running their applications in an enterprise environment (not all, I hasten to add, but enough to make this a problem). So what then? The vendor does some quick testing on a system that may or may not be at the same patch level as your production system. You get a tick in the box and...

            You are right though - too many applications are fragile, and that (I believe) is a limiting factor in being able to patch (and I don't just mean O/S patching - the same applies to databases, middleware, applications, etc.).

            1. Mike Henderson

              Re: "can result in hundreds of thousands of the currency of your choice go down the drain"

              "the vendor - who more often than not is a small dev company that do not always understand the implications of running their applications in an enterprise environment"

              So much, so often.

              The 'solution', bought and already paid for by a non-IT part of the organisation, proved to be from a two-person company - a 'sales guy' and a 'developer' - who think it's their lucky day because they've found someone who's prepared to pay six figures.

              Then

              * you say "OWASP" and get a blank look,

              * you ask them about licensing the database and find "that's not needed" and eventually "it's MS-Access"

              :(

      2. Pascal Monett Silver badge

        @ Mr Dogshit

        It is obvious that you are not in charge of ensuring that over 1,000 people can work every day.

        Neither am I, but I have rather close relationships with people who do. And I have learned from them that patching is a tightrope exercise in managing not only safety and machines, but people and expectations.

        Yes, security is obviously preferable. However, you always have Mr Performer who just can't have a minute without his server access, because he is making all the money for the company so his needs trump server downtime needs. And since he is the guy bringing in a fair chunk of revenue, his managers are on his side.

        Of course, the admin knows that if the network is breached, it will be his fault and maybe even his ass, but the divas are the ones who give the okay for downtime, not the admin.

        1. Anonymous Coward
          Anonymous Coward

          Re: @ Mr Dogshit

          " the admin knows that if the network is breached, it will be his fault and maybe even his ass "

          In these days of patch hell, if I were to return to sysadmin work, I would make sure it was in my contract that if I advised something needed to be patched ASAP and management started crying about downtime and insisted I waited, then ANY data breach between that time and patching was not my fault and not my responsibility; the blame would be directed at whoever objected to the downtime, and I would get paid a bonus for cleaning up the avoidable mess....

          I guess that with those terms it's unlikely you would get any job in IT, but I really would not want to go back to that stress again anyway. I'll stick to making a living on YouTube videos....

          1. JohnFen

            Re: @ Mr Dogshit

            I'm a dev, not an IT person, so this might not be quite the same -- but I've found that it's not necessary to put butt-covering stuff like that in the contract. All that is necessary is to keep records of what you've said and done so that if someone downstream messes up, you can prove that you did your job properly.

      3. JohnFen

        Re: It depends

        "Idiots argue if you patch systems, you might break something. The truth is, if you don't patch systems, something's going to get broken."

        The "idiots" are correct, though -- patches break stuff all the time. Your argument is solid as well. So, I guess the real message here is "give up, you're fucked no matter what you do."

      4. Juillen 1

        Re: It depends

        You don't work in a mainstream sysadmin role, do you?

        I work in the medical sector, and we have to have service packs validated by the vendors of medical systems (Linacs, CT scanners etc.), and in some cases even individual updates. If you don't have that, you're running a medical device without its CE mark.

        If you do that, you're liable for any damage or deaths caused (yes, that's what happens when medical devices fritz sometimes). Oh, and kiss goodbye to ever working in the IT field again. And maybe even get jail time for it (I've seen inquiries into medical tech where things have gone wrong, and jail time is a very, very real possibility).

        It may seem very easy to you that "something breaks, so you fix it". But what happens when the break corrupts databases and takes down other (you thought) unrelated systems (oh, to have a nice clean delineation of systems!)? It's an absolute nightmare.

        That's why you test what you can actually apply first. This can take a couple of days; in the meantime, you're basically doing a risk assessment that says "The chances of us being hacked are lower than the chances of killing people/taking the company down for an undue length of time due to untested behaviour".

        And that's the nature of a risk assessment; occasionally the risk materialises.

        If you think things are a binary "easy" evaluation, you're absolutely wrong. Especially when there are limited resources/budgets to invest in systems to keep the infrastructure ticking along properly. Even with huge budgets, there's still an element of gamble.

        Whichever way you go, you stand a chance of being damned, but by taking the test->apply cycle, there's a better chance of still having a job and a career at the end of it.
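        The risk assessment described above is, at bottom, a comparison of expected losses. A toy sketch in Python; every probability and cost figure here is invented purely for illustration:

          # Toy expected-loss comparison: delay patching only while the expected
          # cost of a rushed, untested patch exceeds the expected cost of staying
          # exposed. Every probability and cost below is invented for illustration.

          def expected_loss(probability, cost):
              return probability * cost

          p_exploited_this_week = 0.02         # illustrative guess
          cost_of_breach = 500_000             # illustrative
          p_untested_patch_breaks = 0.10       # illustrative guess
          cost_of_clinical_outage = 2_000_000  # illustrative

          wait_and_test = expected_loss(p_exploited_this_week, cost_of_breach)
          patch_untested = expected_loss(p_untested_patch_breaks, cost_of_clinical_outage)

          print("expected loss if we wait and test:", wait_and_test)
          print("expected loss if we patch untested:", patch_untested)
          print("decision:", "test first" if patch_untested > wait_and_test else "patch now")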

  8. Anonymous Coward
    Anonymous Coward

    I dunno

    We have had the rollout of the latest patched version of 7zip on the "TODO" list for about 3 months now.

    1. robidy

      Re: I dunno

      If you understand the issue and you're not vulnerable or have mitigation in place that's fine.

      If you don't know what the vulnerability is then you shouldn't be responsible for patching.

  9. Denarius
    Meh

    and in big End of town

    Unskilled, uncaring, unaccountable socialised psychopaths often take great delight in denying change requests randomly to stroke their egos. Such power, and no responsibility for consequences. It is always the techies who get blamed. The other usual KPIs have been covered above - usually summed up as overworked staff.

    1. yoganmahew

      Re: and in big End of town

      @Denarius

      Absolutely! Change approval is the homeplace of charlatans and idiots with a god complex. The poor techie has to fill in the PIR and the RCA and be subject to enhanced scrutiny for every other change for the next month. CI/CD is a dream of children along with pink fluffy unicorns. For most of us, change is hell. Putting in somebody else's change is inhabiting somebody else's hell.

      1. Anonymous Coward
        Anonymous Coward

        Re: and in big End of town

        Putting in somebody else's change is inhabiting somebody else's hell.

        "I have no change approval and I must patch": A Harlan Ellison short story.

  10. onefang

    For some complex software, it takes a long time to figure out "what did they break this time", and you can almost guarantee that something got broken. I'm looking at you, OpenSim.

  11. vmistery

    I doubt the figures are anywhere near that. Everywhere I’ve worked has had at least a few machines that only get patched every 6 months or so due to their needing to be up 24/7. Sounds terrible, but you can’t blame the techies who maintain them; it’s always a lack of quality project management at the beginning that fails to consider it. Microsoft could do a lot to improve the patching experience by not requiring a reboot each time, that’d speed up server patching.

    1. Wensleydale Cheese

      "Microsoft could do a lot to improve the patching experience by not requiring a reboot each time, that’d speed up server patching."

      This.

      1. SAdams

        I think the restart is to help keep the server working. I’ve known Unix boxes to stay up 10 years without a reboot, but Windows servers are more reliable when they get the monthly restart. I suspect the patch that MS re-releases each month is mainly there to ensure there is a restart.

  12. Potemkine! Silver badge

    Do you prefer being burnt at the stake or quartered?

    Do you prefer to leave your network vulnerable by not patching ASAP, or to take it down with a faulty patch if you do?

    Looks like IT is in the following situation: "If the stone falls on the pot, woe to the pot; if the pot falls on the stone, woe to the pot; in either case, woe to the pot."

    1. hplasm
      Unhappy

      Re: Do you prefer being burnt at the stake or quartered?

      And now I have my new Sig.

      Woe to the pot.

  13. Anonymous Coward
    Anonymous Coward

    Depends on the customer's cycle

    My primary account of 4000+ Windows boxes takes roughly 21-28 days to patch completely. This covers Dev, Test (UAT, SIT etc), Prod and DR systems in roughly that order. Patching on Dev starts on the evening of Patch Tuesday after customer has assessed and given their own rating on criticality.

    Our limitations are based on agreed change windows with the customer and requirements from the customer for x amount of time between Dev, Test, Prod and DR changes. We also have lead time requirements for changes depending on the environment so changes have to be raised with enough lead time to be approved before the deployment.

    No two environments for an application can be targeted on the same night - no patching Dev and Test together, regardless of application complexity - even though the same teams do the post-implementation validation for both environments.

    Mission-critical systems' DR is usually left for seven days after Production, to allow for any possible issues that may not be detected earlier (occasionally used functionality, etc.).

    Of course, if it's critical enough then it becomes a case of trying to patch as much as can be done (with more limited testing/timeframes) in a single night, without overloading the patching infrastructure and while keeping enough people to cover day-to-day operations.

    Other accounts I've heard only patch once every 3 months.
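    For what it's worth, that staggered Dev -> Test -> Prod -> DR calendar reduces to a handful of date offsets from Patch Tuesday. A toy sketch in Python; only the "DR lags Prod by seven days" rule comes from the description above, the other offsets are made up for illustration:

      # Toy schedule generator for a staged rollout: Dev on Patch Tuesday
      # evening, then Test, then Prod, then DR seven days after Prod.
      # The Test/Prod offsets are made-up examples; only the "DR lags Prod
      # by a week" rule is taken from the description above.

      from datetime import date, timedelta

      def patch_tuesday(year, month):
          first = date(year, month, 1)
          first_tuesday = first + timedelta(days=(1 - first.weekday()) % 7)
          return first_tuesday + timedelta(days=7)

      STAGE_OFFSETS = {       # days after Patch Tuesday
          "Dev": 0,           # same evening
          "Test": 7,          # illustrative
          "Prod": 14,         # illustrative
          "DR": 14 + 7,       # seven days after Prod
      }

      def rollout_schedule(year, month):
          base = patch_tuesday(year, month)
          return {stage: base + timedelta(days=off) for stage, off in STAGE_OFFSETS.items()}

      if __name__ == "__main__":
          for stage, day in rollout_schedule(2018, 8).items():
              print(stage, "->", day)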

  14. LucreLout

    Oh FFS

    More than a third of IT managers – 37 per cent – view the slow installation of software updates as the biggest security threat they face; more even than idiot end-users choosing bad passwords (33 per cent).

    More than a third of IT managers – 37 per cent – are idiots, said LucreLout.

    Patching is important, very important. There, I said it. But your biggest risk is a slow patch? Erm, no, no it isn't. It's your developers.

    The last bank I worked for made the mistake of giving every dev read access to everything in version control. And some devs had checked in secrets, such as connection strings, user names and passwords for service accounts, systems access, mail servers etc etc. I could have done anything I wanted to their systems and they would have had absolutely no way of tracking it back to me.

    And that's before we get to things like prod access for builds and releases - allowing me to make users do stuff on other systems while interacting with my own desktop app (I did a demo of this for management using development environments, whereby I had them checking their risk exposure on my system while it secretly executed trades on another, due to their use of cookies/remember me). They still didn't fix the problems...

    Your developers should be some of the smartest people in your organisation, and they should have a very good handle on exploiting software (hard to do defensive coding if you don't know what to defend against).

    So when they think a slow patch is their biggest risk, I think they don't have a clue what they are doing and should vacate their roles immediately. Your biggest risk isn't that your admin applies a patch 4 weeks late, your biggest risk is your developers. Try to remember that when the temptation to screw them over inevitably rears its head - hopefully they know shitting on the admins isn't a good idea already.

    1. Tom Paine

      Re: Oh FFS

      I could have done anything I wanted to their systems and they would have had absolutely no way of tracking it back to me.

      I worked in the SOC of a well-known multinational megabank, and I think you may be mistaken.

      1. Anonymous Coward
        Anonymous Coward

        Re: Oh FFS

        But you forget, not all banks are created equal.

  15. Herring`

    In an ideal world

    You would have a pre-prod environment which exactly mirrored prod, and a comprehensive set of automated tests that could verify 100% (or as near as you can get) that everything works. Patch that, run the tests, and if all is shiny then patch prod.

    I haven't encountered many (OK, any) outfits that want to put in the investment to do this. Which is daft as it would also benefit your dev process hugely. Meanwhile, IT still gets the kicking when stuff is rolled into production and things break due to inadequate testing in an inadequate test environment.
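    The gate itself is simple enough to sketch: patch the pre-prod mirror, run the regression suite, and only touch prod on a clean pass. A bare-bones Python sketch, with the shell commands as placeholders for whatever your environment actually uses:

      # Sketch of a "patch pre-prod, run the tests, only then patch prod" gate.
      # The commands are placeholders; substitute your real patching and
      # regression-suite invocations.

      import subprocess
      import sys

      PREPROD_PATCH_CMD = ["./patch-environment.sh", "preprod"]    # placeholder
      REGRESSION_SUITE = ["./run-regression-tests.sh", "preprod"]  # placeholder
      PROD_PATCH_CMD = ["./patch-environment.sh", "prod"]          # placeholder

      def run(cmd):
          """Run a command and report success."""
          return subprocess.run(cmd).returncode == 0

      def main():
          if not run(PREPROD_PATCH_CMD):
              print("pre-prod patching failed; stopping")
              return 1
          if not run(REGRESSION_SUITE):
              print("regression suite failed on patched pre-prod; prod left alone")
              return 1
          if not run(PROD_PATCH_CMD):
              print("prod patching failed; investigate and roll back")
              return 1
          print("prod patched after a clean pre-prod run")
          return 0

      if __name__ == "__main__":
          sys.exit(main())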

  16. Anonymous Coward
    Anonymous Coward

    There are other considerations in how often you patch. If you have the tools and protections in place to stop most nasties or hackers, then you don't have to be so strict. NSX is a prime example: this server only talks to that server on these ports, so why would it need a critical IE patch or Meltdown fix when there is no attack vector open?
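    That trade-off is essentially exposure-based triage: a critical patch for a service nothing can reach is less urgent than a lesser one on an exposed service. A hedged sketch in Python; the severity labels and reachability flags are placeholders that would really come from the vendor advisory and your firewall/NSX policy:

      # Sketch of exposure-based triage: a critical fix for a service nothing
      # can reach is less urgent than a lesser fix on an exposed service.
      # Severity labels and reachability flags are illustrative placeholders;
      # in practice they come from the advisory and the firewall/NSX policy.

      from dataclasses import dataclass

      SEVERITY_SCORE = {"low": 1, "moderate": 2, "important": 3, "critical": 4}

      @dataclass
      class PendingPatch:
          name: str
          severity: str
          service_reachable: bool   # is the vulnerable component exposed at all?

      def urgency(p):
          score = SEVERITY_SCORE[p.severity]
          # Unreachable services still get patched, just not at panic priority.
          return score * 2 if p.service_reachable else score

      patches = [
          PendingPatch("IE cumulative update", "critical", service_reachable=False),
          PendingPatch("RDP fix on jump host", "important", service_reachable=True),
      ]

      for p in sorted(patches, key=urgency, reverse=True):
          print(p.name, "-> urgency", urgency(p))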

  17. Multivac

    Try patching an Exadata once a month LOL

    It takes almost a month to get an Oracle Exadata back up and running after Oracle themselves patch it; if we patched it every month we'd never get to use it!

  18. chivo243 Silver badge
    Holmes

    Avoid the Bleeding Edge

    After a few deep and nasty cuts from the "OMG, gotta patch now" dogma, I have learned to let others get cut on the bleeding edge or burned on the hot kettle... I'll wait and see if the patch really works as it says on the tin...

  19. Anonymous Coward
    Anonymous Coward

    Quarterly

    ...that is, at my employer discussions are underway about the practicality of moving to a quarterly cycle at some point. Don't ask what it's been historically.

    location: City of London

    sector: financial services

  20. Sixtysix
    Unhappy

    Prioritise carefully

    I won't allow patching without testing... except very occasionally on Internet-connected devices/servers.

    Everything else gets a test cycle.

    That can be 1 day, more usually two weeks, sometimes longer.

    We have a LOT of legacy systems and applications that really rely on a cobbled-together patchwork - and that means some patches do get rejected.

    About to find out what that means for Cyber Essentials Plus - but whatever the outcome, business operation trumps potential risk.

    I'm not for changing!

  21. 0laf

    Patching is like throwing rocks off a cliff. You probably want to look at what's below before you roll a big one.

    We have over 20k endpoints and probably 2k different and significant applications. Many are old, fragile and badly written; many are involved in life-and-death services.

    We have to be very careful when rolling out our patches to ensure we don't wreck anything significant in the process. MS's current practices make this very difficult.

  22. Anonymous Coward
    Anonymous Coward

    "Windows 10 will automatically update your system"

    WSUS and proper group policies have achieved this since XP/2003 at least. I have groups that are automatically updated when critical patches are issued - if something bad happens, we'll roll back; we assessed that the risk and cost of an unpatched system was higher than the cost of rolling back if needed.

    Others may need manual approval, depending on what happens if a patch has issues.
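    A tiny sketch of that split, assuming the policy can be written down as a table of computer groups and the update classifications they auto-approve (the group names and classifications here are illustrative):

      # Sketch: per-group approval policy - some computer groups get security
      # fixes approved automatically, others always wait for a human.
      # Group names and classifications are illustrative placeholders.

      AUTO_APPROVE = {
          "kiosks": {"Critical Updates", "Security Updates"},
          "office-pcs": {"Critical Updates", "Security Updates"},
          "line-of-business": set(),   # everything here is manually approved
      }

      def decision(group, classification):
          allowed = AUTO_APPROVE.get(group, set())
          return "auto-approve" if classification in allowed else "manual review"

      for group in AUTO_APPROVE:
          print(group, "->", decision(group, "Security Updates"))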

    The biggest issue under Windows is software that has to be updated manually because it requires a specific procedure that is often only partially automated.

    Other systems have patch issues too - e.g. Python virtual environments: if the base Python they were created from is patched, the venv may not be...
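    That venv drift is easy to check for, because a venv created with the standard-library venv module records the base interpreter version in its pyvenv.cfg. A small sketch that compares the recorded version against the interpreter running the script (which it assumes is the patched base Python):

      # Sketch: detect a venv whose recorded base Python version no longer
      # matches the interpreter it now runs against. Assumes the venv was made
      # with the standard-library venv module, which writes "version = X.Y.Z"
      # into pyvenv.cfg, and that this script runs on the patched base Python.

      import platform
      import sys
      from pathlib import Path

      def venv_recorded_version(venv_dir):
          cfg = venv_dir / "pyvenv.cfg"
          if not cfg.is_file():
              return None
          for line in cfg.read_text().splitlines():
              key, _, value = line.partition("=")
              if key.strip() == "version":
                  return value.strip()
          return None

      if __name__ == "__main__":
          venv_dir = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".venv")
          recorded = venv_recorded_version(venv_dir)
          current = platform.python_version()
          if recorded is None:
              print(f"{venv_dir}: no pyvenv.cfg version found")
          elif recorded != current:
              print(f"{venv_dir}: built against {recorded}, base is now {current} - consider recreating")
          else:
              print(f"{venv_dir}: up to date ({recorded})")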

  23. Pascal Monett Silver badge

    These posts are very interesting

    Most of them point to patching issues with large or very large user bases.

    I am self-employed. If a patch borks my system, I am good for a full day of reinstalling everything to be able to work again. Who's going to pay me for that time? Nobody.

    If my system is bricked, then I am good for an emergency trip to the nearest quality hardware dealer and a hefty ticket price to get a new machine, which I then have to spend the day cleaning, removing stupid vendor-installed cruft I couldn't care less about, and getting the stuff I need to start working again. So a day lost again, and a big expense that I have not budgeted for. Who's going to pay me for that? Nobody.

    If my system should be hacked (has never happened), then at worst, I'm good for losing a day reinstalling.

    So my threat profile tells me that I can wait a while before patching, to see if there are any howls of pain from the latest batch of Windows updates. If I don't hear anything for a few weeks, then I put Windows Update back into Auto and patch, reboot, finish the patches and reboot again. Then WU goes back to Disabled, where it belongs.

    1. Anonymous Coward
      Anonymous Coward

      "Then WU goes back to Disabled, where it belongs."

      You can set it to simply warn you when patches are available, maybe download them, but not install until you tell it to (of course, if you're not using Windows 10).

      No need to disable it fully - some critical patches may be released outside Patch Tuesday. Not all patches are system patches either - patching IE or Office is far less risky than patching the Windows kernel - and they can still ensure protection against dodgy content (IE could still be used by applications that embed it, even if you don't use it yourself).

      For example, I easily postponed the Meltdown patches for a while because there were issues with some Asus motherboard management tools.

      Patches that totally bork a system are luckily not that frequent, and may bork only specific systems, and there are often still ways to roll back without a total reinstall. Whereas ransomware hitting you could be far worse - a borked OS disk is usually still readable...

  24. Drew Scriver
    FAIL

    Fear and pride...

    Management is commonly driven by (mainly) two factors: fear and pride.

    Apply that combination to any project or service and the chances of success are greatly diminished.

    Pride drives hasty releases ("Watch me meet deadlines!"), a preference for the latest-and-greatest ("I'm hip and modern"), jumping on the latest bandwagon ("Always ahead of my golfing buddies"), cutting costs ("See me stay under budget!") - you name it.

    Fear drives hasty releases ("If I miss the deadline I'll be in trouble"), avoids patching ("I'm not going to be the one who causes stuff to break"), doesn't enforce standards and requirements ("The best conflict is the one avoided"), and so forth.

    Many companies have therefore created an environment where patching is all but impossible. Rather than making compliant applications a requirement, all app owners, vendors, and the like have to test and sign off - and each has the power to halt patches, even if only one application out of hundreds might break if, let's say, SSL3 is disabled...

    Of course, if they don't have the time (or knowledge...) to test their application they won't be able to sign off on the required patch, and fear then drives the decision to forego patching.

  25. mutin

    Patching? Actually it should be a Vulnerability Management cycle

    Auto-patching systems may fail. I've seen that. What was particularly funny (sorry) was that I had explained to the IT director that vulnerability scanning is the standard way to check whether patching works. He was not an idiot, but ... So, when they hired an IT boss on top of him and we resumed scanning, we found that 30% of computers were not patched while the system reported they were. Then a virus outbreak happened on top of the multiple vulnerabilities.

    So, the patching we discuss here IS NOT THE GOAL. There should always be vulnerability scanning afterwards.
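    A minimal sketch of that reconciliation, assuming the patch system and the scanner can each export a flat list of hostnames; the file names and one-host-per-line format are made up for illustration:

      # Sketch: cross-check what the patch system claims against what the
      # vulnerability scanner actually sees. File names and the one-hostname-
      # per-line format are placeholders for whatever your tools can export.

      from pathlib import Path

      def read_hosts(path):
          return {line.strip().lower() for line in Path(path).read_text().splitlines() if line.strip()}

      if __name__ == "__main__":
          claimed_patched = read_hosts("patch_system_reports_patched.txt")   # placeholder export
          still_flagged = read_hosts("scanner_reports_vulnerable.txt")       # placeholder export

          liars = claimed_patched & still_flagged
          print(len(liars), "of", len(claimed_patched), "'patched' hosts still show vulnerabilities:")
          for host in sorted(liars):
              print("  ", host)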

    Is it possible to do within a month? Very hard. Almost impossible considering complex IT systems. The only success story was when I did VM for a Navy installation of 4,000 computers ten years back. Somehow the IT guys managed to patch and I was able to do my scanning. Since then I have seen only sad stories.

    The chaotic result we have now was created by IT giants rushing for profit no matter what.

    They created an "IT jungle" environment where we - the food - and the predators, aka "hackers", will coexist forever. The only way to limit your risks is to limit your Internet connections. Pack your bag, forget your computer and go south. Bingo.

  26. Paul Hovnanian Silver badge

    Is it a patch?

    Or are you trying to shove Windows 10 onto my machine?

  27. Chris Evans

    Please enlighten me

    When patches are issued they normally give quite detailed information about the problem they are fixing. Why is so much detail given at this stage?

    Many of the patches I read about address flaws that have not yet been exploited, so why give the hackers many of the details they need?

    I would have thought those details would not be released for, say, a month, though reading this article makes me think three months would be better!

    If the cat is out of the bag and the issue is being exploited already then I understand there is no point in delaying things.

    1. It's just me

      Re: Please enlighten me

      Because even though you may not be able to apply the patch immediately, if you understand the details of the vulnerability you may be able to determine that your configuration doesn't have that vulnerability or doesn't expose it to attackers, or that there is a mitigation you can quickly put in place to protect you until you can apply the full patch.

      Even if they don't give details, that doesn't stop the hackers from finding out what was fixed, for example, using binary diffs and disassembly.

      MS has stopped giving details on their patches and just pushes out a few big ones that may contain dozens of fixes. That hasn't stopped the criminals from releasing exploits soon after.

  28. JeffyPoooh
    Pint

    Blatantly obvious solution...

    When Microsoft (or whoever) issues a new patch, the IT staff should instantly roll it out to their selected several victims. Then, on the odd occasion when the freshly-patched systems malfunction, the IT staff can wander over, blame and ridicule the hapless users for their misuse of the systems and surfing illegal dodgy websites causing such crashes, all while quietly uninstalling the recent patches and rolling back the system. After a few days of this not happening, they can start rolling out the patches to more and more users - all within about a week at most.

    This should be the de facto approach.

  29. MAH

    Microsoft has made patching a complete cluster with these stupid monthly rollup patches.

    Seems every month they break something, but you really can't exclude the one broken component out of 20, so you have to skip the whole patch. The next month they fix that broken component, but now a different component is broken, so you wonder which broken component is worse.

    Look at the July rollup... 41 serious issues with it... including the .NET one which broke SharePoint, Exchange, etc. If Microsoft can't even develop a patch that doesn't screw up their own in-house applications (which should be really simple for them to test, right?), who trusts them not to screw up every other vendor's applications?

    It comes down to this: Microsoft has completely lost everyone's trust when it comes to patching because they don't bother to test at all (which is obvious with the July .NET patch), so no one wants to just set auto-updates and go...

  30. Anonymous Coward
    Anonymous Coward

    In theory, and in practice

    Currently, I'm the sysadmin for the firm I work for, and for a number of our customers.

    Regardless of the network setup, bandwidth, automation, best practice and all the money/time in the world thrown at it, MS still struggle to release TESTED patches.

    Scenario: Patch Tuesday comes around, MS release 60 patches for Office, 20 for Servers 2008 -> 2016 (applicable to x64 infra), and Adobe throw in some for good measure. These get approved when they appear on WSUS.

    End result: patches get rolled out, finance complains because the scripts that integrate the financial system with Excel for reporting don't work anymore, and web services stop working because you're running them against .NET 3.6 but .NET 4.2 got rolled out and your other software suppliers haven't yet updated their applications to work with it.

    Sysadmin spends a couple of hours patching, several hours unpatching, meanwhile getting blamed for being out of scope/GDPR compliance on system security and integrity.

    OR

    Everything goes through fine, no problems reported, but all of your clients stop reporting to WSUS because the patch wasn't tested with WSUS deployment (no, not everyone wants to use Intune/SCCM!), and now you have to manually patch all the clients to get them reporting again.

    Oh, and don't forget: auto-deployment of the six-monthly Windows updates is bad for on-call users. Best to do that manually.

    As above, sysadmins seem to get a lot of the blame and responsibility when we are only responsible for the maintenance and upkeep - not actually developing these patches.

    And don't get me started on testing procedures in small companies....

  31. Roland6 Silver badge

    Kollective get top marks for the misuse of survey results

    There is a distinct lack of clarity about just what is being talked about. I see no real connection between "a critical remote-execution bug in Apache Struts 2" (i.e. 'datacentre' systems) and the way an organisation may go about fixing it, and relying "on employees updating their own systems" (i.e. end-user systems).

  32. Okole_Akamai

    I've done patch management for 15 years now for organizations with offices around the globe. Here's what I've learned:

    1. Don't break anything.

    2. It is a collaborative effort to distribute patches in an Enterprise environment. More importantly you need Executive support to do it right.

    3. Test the patches, document results, compare results, establish a schedule, obtain buy-in to move forward. If something isn't right or breaks, speak up.

    4. Communicate results, ensure you have buy-in to distribute patches and risk is accepted by delegated authority.

    5. Don't be the person everyone glares at when you come into the office the morning after a patch distribution.

  33. Lorribot

    What causes delays? Systems designed to finish a project, not to be managed. Websites that fall over when their database server reboots. Having servers in the hundreds or thousands, so the only option is to patch automatically, but applications need to be shut down gracefully before patching and that can't be scripted. Applications that need to be logged on as a specific account and run a specific application on start-up (yes, really, in 2018 they still exist). SQL Servers that have multiple databases on them, where patching the apps in the right order and sorting it all out is beyond five minutes' work. There is much more.

  34. SAdams

    To do fast patching reliably, you really need to have (the equivalent of) WinRunner and LoadRunner scripts set up on your pre-production environment, constantly maintained for all critical applications, and then a full team to manage all these scripts and update them each time an application, OS or piece of middleware is updated. However, now that most companies use VMs on replicated storage, as long as the storage has snapshots (and there is some failover mechanism), security should really take precedence when rollback is an option.

    I suspect most companies with *nix patch less than monthly?
