You weren't hacked because you lacked space-age network defenses. Nor because cyber-gurus picked on you. It's far simpler than that

The continued inability of organizations to patch security vulnerabilities in a timely manner, combined with guessable passwords and the spread of automated hacking tools, is making it pretty easy for miscreants, professionals, and thrill-seekers to break into corporate networks. This is according to the penetration-testing …

  1. jake Silver badge

    It can hardly be called hacking ...

    ... if the system has a huge sign on it that says "C'mon in!", now can it ... And especially not when there's a key under the welcome mat, the back doors are unlocked, and all the Windows are wide open.

    1. Mike 137 Silver badge

      Re: It can hardly be called hacking ...

      This has been the norm for decades now. As a consultant, I have rarely found any serious attempt at continuous management of security. "Policies" are written but neither verified for efficacy nor followed, and ISO 27001 certification is often obtained on the basis of an ISMS that exists only on paper or as electrons somewhere. So far, most corporate cyber security has consisted of a mission statement and pure luck.

      1. Anonymous Coward
        Facepalm

        Re: It can hardly be called hacking ...

        And the report didn't even include the fourth P: phishing.

      2. Anonymous Coward
        Anonymous Coward

        Re: It can hardly be called hacking ...

        Don't turn your hand to government then, for there are only committees and publicity. Metrics, measurements, tests, verifications, reviews, and improvements are either scams or didn't happen. Authorisations are concealed, and only claims and counter-claims are allowed to fight it out in the commercial media.

        No wonder Gov+IT<=0

  2. Pete 2 Silver badge

    Too hard, too frequent, too unreliable

    > About 60 per cent of the web application holes used were deemed critical

    It seems to me that the basic problem is that systems are not designed with upgrades and patches in mind. You can play the IT support conversation in your head.

    Hi CIO, I.T. here. We need to take the whole corporate internet presence offline to perform a vital security patch

    CIO But you did that last week!

    No, that was the office system and that was because of a bug in the email server

    CIO And the week before that?

    That one was the database

    CIO Dang! So how long will we be offline?

    It's hard to say: in theory only 30 minutes, but more likely an hour. If things go pear-shaped, maybe a week.

    CIO I'm not signing off on that. Can't you check the patches on a non-critical computer first?

    We tried, but it failed. Our web suite is running version 4.10.122b and the test systems are at 4.10.121a, so (obviously) it didn't work.

    CIO No, you'll have to wait until there are more bug fixes, then we will take the system down and install all of them

    Don't you remember last November when we tried that and it took 2 weeks to restore the service?

    CIO My mind is made up. We were sold this package on the basis of 99.999% uptime. That's a 25-second outage per month. You lot in IT blow through an entire year's worth of downtime every week. Find another solution <click>
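
    For what it's worth, the CIO's arithmetic roughly checks out. A back-of-the-envelope sketch of the downtime budgets, assuming a 30-day month:

    SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 seconds in a 30-day month

    for label, availability in [("two nines", 0.99),
                                ("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        budget = SECONDS_PER_MONTH * (1 - availability)
        print(f"{label}: {budget:,.0f} seconds of allowed downtime per month")

    # five nines: ~26 seconds/month. A single "30 minute" patch window
    # burns through roughly 69 months' worth (nearly six years) of budget.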

    1. Evil Auditor Silver badge

      Re: Too hard, too frequent, too unreliable

      While I partially agree, how about updating the test system to 4.10.122b first?

      I wouldn't be too happy either to sign off a patch that hasn't been run on a test system.

      1. Flywheel

        Re: Too hard, too frequent, too unreliable

        Update the test system?

        Hmmm... that could involve money for an updated licence, so probably no.

        1. Anonymous Coward
          Devil

          Re: Too hard, too frequent, too unreliable

          After all, it's not like it's something important like updating our company's websites to reflect our design consultant's recommendations.

    2. Doctor Syntax Silver badge

      Re: Too hard, too frequent, too unreliable

      " We were sold this package on the basis of 99.999% uptime."

      And there's the flaw in the thinking. Start thinking in terms of useful availability and downtime. You're trying to manage for minimum loss of useful availability due to downtime. If downtime isn't planned - you had a hardware failure, you got hacked, whatever - then there's no guarantee of it falling into a time of minimum usage. Planned downtime can be arranged fro when it will have minimum impact and its purpose is to minimise the risk of unplanned downtime. But risk is harder to measure than uptime.

      1. Stumpy

        Re: Too hard, too frequent, too unreliable

        Furthermore, if five nines (or even four nines) uptime is so important, you should have your systems configured for High Availability so you can take part of it down for the upgrade without affecting the remainder, then flip to do the other half once you've verified the upgrade works.

        Frankly, having a complete outage on such a critical system (for performing updates anyway) is inexcusable.

        1. Sabot
          Thumb Up

          Re: Too hard, too frequent, too unreliable

          Indeed, if it isn't set up with high availability, apparently it isn't business critical.

          1. Byham

            Re: Too hard, too frequent, too unreliable

            It was _sold_ to the board as uptime of 99.9999%; it is not really business critical, although failures would best not happen in normal working hours. But the board got this warm fuzzy from the salesman that it was a really high-availability system. You will get the same response: 'what, again?' With the cheapness of hardware and comms these days, there is no reason why all businesses do not have an identical shadow system, ideally in a different location, on different power, etc.

        2. Anonymous Coward
          IT Angle

          Re: Too hard, too frequent, too unreliable

          This is what I was about to post in this conversation. "Wait: if you need 99.9999% uptime, wouldn't you have a DR/business continuity protocol in place, with full data replication? Can't you just plan a switchover from the primary datacenter(s) to the backup centers, update the primary centers, then switch back and update the backup centers?"

        3. Rich 11

          Re: Too hard, too frequent, too unreliable

          you should have your systems configured for High Availability

          Remember when you forcefully recommended that right at the beginning of the design phase, but it got nixed by the beancounters, and the project manager said there was no way the extra time could be allocated because the business-critical deadline had already been decided, and your boss said there was no chance he could clear someone from another project to help with QA, and the consultant said the guy at his office who was the HA expert had just that morning resigned after calling their boss a lying shitbiscuit? So of course it's your fault for not being able to provide the expected uptime.

        4. MrNigel

          Re: Too hard, too frequent, too unreliable

          Reminds me of a project about 10 years ago, from the early days of BPOS. One of the first actions was to clean up on-prem AD before implementing the schema updates. Reported back to the CTO: "Do you realize your PDC failed over 11 months ago?" His reply: "Speak to IT to find out why!"

      2. Giles C Silver badge

        Re: Too hard, too frequent, too unreliable

        The 5 nines uptime figure is all about availability, not downtime.

        So if you have 2 sets of machines where either can handle the full load, you take set 1 down and patch/test.

        Then bring that into production and repeat for set 2.

        I have seen services run at 99.999% when components have been down for a couple of hours that week for patching, because this was planned maintenance work.
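
        A minimal sketch of that pattern in Python - the drain/patch/check/restore steps are hypothetical placeholders for whatever your load balancer and tooling actually provide:

        import time

        def drain(node_set):
            # Hypothetical: tell the load balancer to stop routing to this set.
            print(f"draining {node_set}")

        def patch(node_set):
            # Hypothetical: apply outstanding security patches to every node.
            print(f"patching {node_set}")

        def healthy(node_set):
            # Hypothetical smoke test; a real one would exercise the service.
            print(f"checking {node_set}")
            return True

        def restore(node_set):
            # Hypothetical: put the set back into the load balancer pool.
            print(f"restoring {node_set}")

        # Patch one set at a time so the other keeps serving the full load.
        for node_set in ["set-1", "set-2"]:
            drain(node_set)
            patch(node_set)
            while not healthy(node_set):  # gate on the check, never on hope
                time.sleep(30)
            restore(node_set)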

        1. Kientha

          Re: Too hard, too frequent, too unreliable

          One org I worked in had a highly available setup... and decided to patch both systems simultaneously, which broke them and caused a 2-day outage.

          1. jake Silver badge

            Re: Too hard, too frequent, too unreliable

            That would be a management issue, not a system issue.

          2. Anonymous Coward
            Anonymous Coward

            Re: Too hard, too frequent, too unreliable

            Updating both systems at the same time, even for something that appears very routine, flies in the face of the principles of "high availability".

          3. Anonymous Coward
            Anonymous Coward

            Re: Too hard, too frequent, too unreliable

            Am I the only one that read outage as outrage?

        2. Stuart Castle Silver badge

          Re: Too hard, too frequent, too unreliable

          2 (or more) sets of machines is a good idea. The trouble comes when you need to upgrade or replace one set and the beancounters notice that the system only switched to that set once or twice in the year.

          You can bet if that happens, they’ll be asking why you need the machines.

          You can usually persuade them by telling a horror story about what would happen if the only machine left running the system failed.

          1. Aitor 1

            Re: Too hard, too frequent, too unreliable

            Then they outsource the system "to the cloud", which promises 99.999% but delivers it only "on average", not to you specifically, and has no SLA indemnity.

            Oh, and your job goes too.

            Of course they are going with a worse service for too much money, but that still will not bring your job back.

      3. Claptrap314 Silver badge

        Re: Too hard, too frequent, too unreliable

        Yeah, the SRE-educated burst into laughter at that point. They might have been SOLD five nines, but clearly only two were delivered.

        1. Snapper

          Re: Too hard, too frequent, too unreliable

          The bean-counter in chief is probably wondering if a nine-fives service would be fine if the IT department has two lots of kit and a different connection.

  3. Pascal Monett Silver badge

    Ah, now I get it

    All those major companies that get hacked, proclaim that customer data security is their #1 priority, and claim to have installed "top level" security measures - they're just installing the patches now.

    Well, if you need to be hacked to get the idea, so be it.

    1. jake Silver badge
      Pint

      Re: Ah, now I get it

      You only just figured that out?

      It's been a rather sad industry standard for decades.

      Probably the primary reason for --> that icon. Have one on me ...

  4. jezza99

    In a corporate environment this is a hard problem to solve.

    At home I patch now and ask questions later, as is best practice.

    But my last employer was dependent on software from vendors who did not get computer security at all. And some of them are big names in the field. We were forced to run versions of macOS and other software that we knew were insecure as a result.

    Then there’s Windows and Active Directory. Do they support dictionary checking passwords out of the box now? If not, why not?

    1. Anonymous Coward
      Anonymous Coward

      re: dictionary and AD

      No, they don't, but there are companies that sell software (probably in the form of a password proxy) that will do this for you.

      1. storner

        Re: re: dictionary and AD

        As I understand it, you can enable this kind of checking: https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-password-protection-is-now-generally-available/ba-p/377487

        Yes, it says "Azure AD", but it offers the same for an on-premises (or at least hybrid) environment.
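
        Conceptually the check is simple. A sketch of the idea in Python (the banned list and the substitution table below are illustrative, not Microsoft's actual lists):

        # Normalise a candidate password and test it against banned words,
        # the idea behind Azure AD Password Protection's banned-password list.
        BANNED = {"password", "letmein", "qwerty", "companyname"}
        SUBS = str.maketrans("40315$", "aoeiss")  # undo common leetspeak swaps

        def is_banned(candidate: str) -> bool:
            normalised = candidate.lower().translate(SUBS)
            # Reject if any banned word appears inside the normalised password.
            return any(word in normalised for word in BANNED)

        print(is_banned("P4ssw0rd2020!"))  # True: normalises to contain "password"
        print(is_banned("correct horse battery staple"))  # False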

      2. Anonymous Coward
        Anonymous Coward

        Re: re: dictionary and AD

        There are password filters, but do you want to send even a hash of a password to the cloud for checking?
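
        For what it's worth, the Pwned Passwords API sidesteps that with k-anonymity: only the first five hex characters of the SHA-1 hash leave your machine, and the match happens locally. A sketch against the public range endpoint:

        import hashlib
        import urllib.request

        def pwned_count(password: str) -> int:
            digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
            prefix, suffix = digest[:5], digest[5:]
            # Only the 5-character prefix is ever sent over the wire.
            url = f"https://api.pwnedpasswords.com/range/{prefix}"
            with urllib.request.urlopen(url) as resp:
                body = resp.read().decode("utf-8")
            # Each response line is "HASH_SUFFIX:COUNT"; compare locally.
            for line in body.splitlines():
                rest, _, count = line.partition(":")
                if rest == suffix:
                    return int(count)
            return 0

        print(pwned_count("hunter2"))  # a depressingly large number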

        1. Robert Carnegie Silver badge

          Re: re: dictionary and AD

          If you're talking about password strength rules, their only effect on me is that after I carefully and securely make up an utterly random letter password, the rule rejects it.

          The other day I hit a system which rejects any password that uses the same letter twice in a row. So mrmxy zpplx isn't random enough for that. (And because it's nearly the name of an arch-enemy of Superman in the comics, but never mind that.)

          Also, my employer recently subscribed me to an online security tutorial. So I made up a password for that, but no! My password is too long!

    2. EnviableOne

      AD does; there is a hook into the password reset process ...

      Oh, and there's LAPS and bastion domains, and all that.

      TBF there is a lot Windows can do that people aren't using.

  5. Anonymous Coward
    Anonymous Coward

    Security/Privacy?

    The problem is that even if you follow security best practices yourself, someone else will most likely leak your details.

    I was just looking through the list of major data breaches on Wikipedia and counted almost a dozen where either the companies themselves informed me that they had lost my details, or my details were listed on Have I Been Pwned.

    (And these are just the ones I am aware of)

    https://en.wikipedia.org/wiki/List_of_data_breaches

    https://haveibeenpwned.com/

  6. Ashto5

    Facepalm

    This is an easy solve

    Ok

    Google / Firefox / Edge & Safari

    You guys really need to sort this **** out

    Enforce encryption, scan for CDN JavaScript files and block them

    IF you need jQuery or any other framework, YOU should be hosting it, and it should be encrypted
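
    Better still, pin whatever you serve with Subresource Integrity, so the browser rejects any tampered copy. The integrity hash is trivial to generate; a sketch in Python, with the file name as a placeholder:

    import base64
    import hashlib

    # Compute the SRI value for a locally hosted copy of a library.
    with open("jquery.min.js", "rb") as f:  # placeholder file name
        digest = hashlib.sha384(f.read()).digest()

    print("sha384-" + base64.b64encode(digest).decode("ascii"))
    # Goes into the script tag, e.g.
    #   <script src="/js/jquery.min.js"
    #           integrity="sha384-..." crossorigin="anonymous"></script>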

    It seems everyone knows the patches exist but they don’t apply them

    Well, if the big 4 browsers put a block on sites using older, unpatched versions of libraries, then the CLIENT would take it more seriously.

    Either take it seriously, or remove the obligation on financial institutions to refund lost money and make the person responsible.

    So many years of the same issues

    1. hoola Silver badge

      Re: This is an easy solve

      No it is not, when you are running thousands of applications, many of which are older for all sorts of genuine and spurious reasons. It is all well and good to try to enforce client access blocking in the way you suggest, until something breaks. If the system that cannot be accessed does not have a patch, then you have no alternative but to roll back all the clients. That actually increases the risk, as you now have a far larger number of devices, in the hands of users, that are exposed.

      Patching is important and, as we have seen, all too often it is not maintained in a rigorous way. Where patches are available they should be applied in a timely manner. There is no excuse for shirking that responsibility, but it is simply not possible to have everything at the very latest patch level within weeks of release. Maybe in years to come regulation may start to enforce that, but it will come at a huge financial cost. And this brings us full circle: often the reason there are unpatched applications in use is that the organisations using them cannot afford to upgrade to the latest version, or they are linked to other systems/machinery that itself does not support the latest and greatest.

  7. EnviableOne

    KISS

    Basically, if it's broke, fix it.

    You don't need to outrun the lion, you just need to outrun the other guy ...
