Equifax IT staff had to rerun hackers' database queries to work out what was nicked – audit

Equifax was so unsure how much data had been stolen during its 2017 mega-hack that its IT staff spent weeks rerunning the hackers' database queries on a test system to find out. That's just one intriguing info-nugget from the US Government Accountability Office's (GAO) report, Actions Taken by Equifax and Federal Agencies in …

  1. Pascal Monett Silver badge

    Impressive consequences

    I'm not an admin, but it seems curious to me that something was being monitored while the cert was good, and when the cert expired nothing special happened; it just stopped monitoring that equipment. Feels like a monitoring tool should notice that it can no longer monitor something and complain.

    On the other hand, I am most impressed by the level of competence of the scumbag(s) that found the flaw and exploited it. It seems to me that there is a highly qualified someone out there who would be a 1st class BOFH. Shame he chose the dark side.
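    The "notice that it can no longer monitor something and complain" idea above can be sketched in a few lines. This is a minimal illustration in Python, where `check` and `alert` are hypothetical stand-ins for whatever probe and alerting hooks a real monitoring stack provides:

    ```python
    def run_checks(targets, check, alert):
        """Run a health check against each target and, crucially, treat
        'the check itself failed' as something to alert on. A monitor
        that silently stops monitoring is the failure mode under
        discussion here."""
        results = {}
        for target in targets:
            try:
                healthy = check(target)
            except Exception as exc:
                # We could not even perform the check: complain loudly
                # instead of quietly dropping the target from coverage.
                alert(target, f"monitoring failed: {exc}")
                results[target] = "unmonitorable"
                continue
            if not healthy:
                alert(target, "check reported unhealthy")
            results[target] = "healthy" if healthy else "unhealthy"
        return results
    ```

    The point is the `except` branch: an expired cert that breaks the probe should surface as an alert, not as silence.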

    1. goldcd

      Re: Impressive consequences

      Ah - but if nobody chose the dark-side, there'd be no jobs on the white.

      1. cbars

        Re: Impressive consequences

        "no jobs on the white"

        hmm... your white is [country/orgs] black

        so long as there are humans, there will be opposing sides

      2. This post has been deleted by its author

    2. Doctor Syntax Silver badge

      Re: Impressive consequences

      "Feels like a monitoring tool should notice that it can no longer monitor something and complain."

      That depends on there being somebody monitoring the complaint and deciding it's their job to respond.

      1. Anonymous Coward
        Anonymous Coward

        Re: Impressive consequences

        "That depends on there being somebody monitoring the complaint and deciding it's their job to respond."

        Seems a bit 1980s, surely? Easy enough to code the monitoring software to block all traffic if there are errors in any parameters. The business would soon squeal when all traffic is blocked, but an embarrassing four-to-ten-hour global interruption in service would have been a lot cheaper than the mess they've got themselves into, which is now forecast to cost $439m.

        I note that the words used were "mis-configuration", and perhaps that failure to block all traffic when in doubt is exactly what happened?

        1. Julz
          Big Brother

          Re: Impressive consequences

          Maybe I'm being over-conspiratorial, but it seems that the data breach couldn't have happened (well, it would have been much more likely to have been spotted) if the monitoring system had been operational, and that is just too much of a coincidence to be ignored.

          1. Joe W Silver badge

            Re: Impressive consequences

            Re: conspiratorial

            That's selection bias. Had it been blocked we would not have heard about it...

            (good / obvious question, though)

          2. Destroy All Monsters Silver badge

            Re: Impressive consequences

            "that is just too much of a coincidence to be ignored."

            That's 9/11-tier coincidence, to be sure.

            Like finding a trout in your milk.

            1. Ivan Headache

              "Like finding a trout in your milk."

              I did once find a toy soldier in my cornflakes.

            2. Robert Helpmann??
              Paris Hilton

              Re: Impressive consequences

              "Like finding a trout in your milk."

              You've had that happen too? Good to know I'm not alone after all.

        2. FuzzyWuzzys

          Re: Impressive consequences

          "Easy enough to code the monitoring software to block all traffic if there's errors in any parameters."

          You can code and buy all the monitoring software in the world, spend as much as you like, but in the end someone still has to respond and take responsibility for its upkeep. I love how all the security vendors will boast about how great their software is, and I'm sure it's bloody good, but they always leave one vital thing off the list: the "fleshbag" who must be competent enough to take responsibility for it at the end of the chain. Getting the right person is not something you can simply pay money for; you have to find someone sharp and willing to do a good job. Sadly, in my 30-odd years of working in IT I still see a lot of people in IT because it's a well-paid laugh playing with computers; they don't have the passion for tech that will ensure they do a good job.

          The smarter the software seems to get, the lazier we think we can be, and we put lazier people in charge of it. It should be the opposite: the more complex and clever the software gets, the more vigilant the "guards" should be.

    3. Anonymous Coward
      Anonymous Coward

      Re: Impressive consequences

      "Feels like a monitoring tool should notice that it can no longer monitor something and complain."

      Also implies a critical dependence upon a single piece of software. If I were designing data security architecture I'd be looking at the consequences of failure of each element of security. That needn't mean multiples of everything and vast duplication, but there are some activities where you certainly would want different defences running concurrently to cover the same risk, and this appears to be one.

      Of course, I'm never going to be in that situation, given my total absence of relevant qualifications, and only a personal interest in the matter.

    4. anothercynic Silver badge

      Re: Impressive consequences

      This is where Monitis and other services come in handy... they monitor from outside the perimeter (i.e. 'run a bad query and see if we get a response; if so, something's wrong'). And internally, yes, being paranoid about anything certificate-related would be good. It amazes me that there are no double checks (both a positive and a negative check) in data-critical infrastructure like Equifax's!

    5. Primus Secundus Tertius Silver badge

      Re: Impressive consequences

      Two possible reasons why it was not noticed.

      1. It was noticed but managers and beancounters ignored it.

      2. The original person responsible had moved on, and management were simply unaware of these things.

    6. steviebuk Silver badge

      Re: Impressive consequences

      They probably chose the dark side due to getting screwed by management all the time, and had had enough. Not an excuse, but I can see why they went down that path, when you have companies like Equifax that no doubt won't pay for the talent and would rather do everything on the cheap. You get what you pay for, as the saying goes.

    7. Jon 37

      Re: Impressive consequences

      There were 2 problems:

      1) Some bug that let them get hacked

      2) The monitoring software that eventually detected the intrusion was broken due to the expired certificate.

      It's very easy for a PHB to refuse to fund the certificate renewal for (2), or for it to get tied up in the budget/purchasing process. After all, it's only monitoring software, it's easy to claim it's not critical.

      1. Yet Another Anonymous coward Silver badge

        Re: Impressive consequences

        >After all, it's only monitoring software, it's easy to claim it's not critical.

        It's not like security is a core part of their business

    8. Michael Wojcik Silver badge

      Re: Impressive consequences

      "I am most impressed by the level of competence of the scumbag(s) that found the flaw and exploited it."

      Why? I don't see anything particularly out of the ordinary in this case.

      1. A Struts vulnerability is published.

      2. Attacker scans for vulnerable systems. No doubt many people did so.

      3. Attacker happened to find Equifax was available, and broke in.

      4. Monitoring system was down (because of incompetence on someone else's part, but that's irrelevant to this question), so the attack wasn't discovered for a long time.

      5. Attacker continued to exploit the hole because it remained open.

      The broken monitoring was simply a lucky coincidence. There were probably plenty of sites with the unpatched Struts vulnerability that either didn't monitor properly, or didn't even try; they just weren't as valuable and interesting as Equifax.

      As attacks go, this was barely more than script-kiddie work, at least based on what's in the article. Perhaps there's evidence of something more impressive in the full report.

    9. RickyRickyWrecked

      Re: Impressive consequences

      This wasn't a genius; it was a simple Struts attack that was identified by basic vulnerability scanners for 30+ days ahead of this attack. I saw hundreds of these hit my data center for weeks before Equifax got hit.

      Basic IDPS systems had signatures for this for 30+ days before Equifax got hit. This is security 101 basic stuff. The enemy here is nothing more than a simple script.

  2. Anonymous Coward
    Anonymous Coward

    Shocked, I'm shocked to the core I tell you!

    There is a "US Government Accountability Office"; could someone tell me what they actually do?

    1. Doctor Syntax Silver badge

      "could someone tell me what they actually do?"

      Get ignored?

    2. anothercynic Silver badge

      They do what the National Audit Office does in the UK... check whether there's value for money in what people do, carry out forensic auditing of things like these pesky data bureaux (like Equifax), etc etc etc...

      1. Yet Another Anonymous coward Silver badge

        And release reports five years later saying that the Millennium Dome or some PFI hospital deal didn't turn out to be so wonderful - they could easily be replaced by a subscription to Private Eye and a rubber stamp marked "Doh"

    3. a_yank_lurker

      @AC - Mostly bloviate and show their ignorance.

  3. Anonymous Coward
    Anonymous Coward

    No-one at the Exec level gives a shit

    Money was saved, their bonuses were paid, people forget and the share price recovered.

    Only people were screwed over, and nobody gives a fuck about them.

    Avoidance was simple of course. Do the IT basics properly - like patch all software not just the o/s. And build software with the assumption it's insecure and breakable so monitor for malice and then react when it appears.

    1. Anonymous Coward
      Anonymous Coward

      Re: No-one at the Exec level gives a shit

      "Do the IT basics properly"

      Absolutely, but don't overlook that good data governance extends well above the cheaper basics of patching software and keeping certificates up to date. You need highly competent (and expensive) people to look at your data governance, you need a well resourced ITSec team who are continually monitoring the external threats, continually poking around in system logs, you need a willingness to undertake expensive pen-testing, and you need people able and willing to force through measures that will be unpopular with the business and senior managers, and possibly very expensive.

      Looking at the lack (as reported here) of data silos and firewalls, it would seem that ENABLING easy access across the entire data set was part of Equifax's operating model - I assume their management specifically had it set up this way for operational convenience, and perhaps lower cost, and anyone who said "is that really a good idea?" got patted on the head and told to shut up or find themselves another job.

      1. Doctor Syntax Silver badge

        Re: No-one at the Exec level gives a shit

        "told to shut up or find themselves another job."

        I'm not suggesting that whoever committed the breach was such a person but that sort of managerial response, which I'm sure most of us won't find improbable, just piles one risk on top of another.

        1. Come to the Dark Side

          Re: No-one at the Exec level gives a shit

          Probably went unnoticed for so long because it didn't change a number in an Excel spreadsheet. Substituting "Excel" for "Exec" tends to give a more realistic idea of how these businesses are run...

  4. EnviableOne

    I'd like to know

    How does a certificate being expired prevent a security tool from working,

    and if this is the case, why wasn't it picked up?

    Or does their IT team have alert fatigue, seeing as all these unpatched, uncertificated services will be flinging alerts at them?

    1. sweh

      Re: I'd like to know

      If the cert was being used for passive TLS decryption (a common technique for Data Loss Prevention) then an expired cert may not trigger alarms (the device manufacturer may consider that a normal case; certs do expire, especially if the cert store can handle multiple ones) but the TLS decryption would fail (also a normal scenario).

      Since, in this scenario, it's passive no traffic gets blocked and data is no longer inspected.

      Cert management needs to be proactive, not reactive.
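      As a rough illustration of "proactive, not reactive" cert management: a sketch in Python that turns the `notAfter` string as returned by the standard library's `ssl.SSLSocket.getpeercert()` into days remaining, so expiry can be alarmed on weeks ahead. The 30-day threshold and the commented live-use snippet are assumptions for illustration, not anything from the report:

      ```python
      from datetime import datetime, timezone

      def days_until_expiry(not_after, now=None):
          """Days remaining on a certificate, given the 'notAfter' field in
          the format ssl.getpeercert() returns, e.g. 'Jun  1 12:00:00 2031 GMT'.
          Negative means it has already expired."""
          expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
          expiry = expiry.replace(tzinfo=timezone.utc)
          now = now or datetime.now(timezone.utc)
          return (expiry - now).total_seconds() / 86400

      # Live use (sketch): fetch the cert off a host and warn well ahead.
      # import ssl, socket
      # ctx = ssl.create_default_context()
      # with socket.create_connection(("example.com", 443), timeout=5) as sock:
      #     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
      #         remaining = days_until_expiry(tls.getpeercert()["notAfter"])
      #         if remaining < 30:
      #             print(f"cert expires in {remaining:.0f} days -- renew now")
      ```

      Run daily against every cert in the estate, this is a check that complains before expiry rather than after decryption has silently stopped.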

    2. Martin Summers Silver badge

      Re: I'd like to know

      If they can't man in the middle encrypted traffic, by decrypting it on the monitoring equipment and then re-encrypting with their own certificate to send it on to its destination, then they can't see what's in it. The bad guys used an encrypted connection to carry out their nefarious activities.

    3. stiine Silver badge

      Re: I'd like to know

      Assuming that their production systems had current, non-expired certificates, then the copies of the old certificates on the monitoring system wouldn't have allowed the monitoring system to actually decrypt the data, and as was pointed out above, their system was configured to fail-open, instead of fail-closed.
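      The fail-open versus fail-closed distinction above comes down to a single branch. A minimal sketch in Python, where `inspect` is a hypothetical stand-in for the TLS-decrypting inspection step:

      ```python
      def filter_packet(packet, inspect, fail_open=False):
          """Decide what happens to traffic when the inspection step itself
          is broken. fail_open=True passes traffic uninspected (the outcome
          described above); fail_open=False drops it, which hurts
          availability but guarantees somebody notices the broken monitor."""
          try:
              looks_clean = inspect(packet)
          except Exception:
              # Inspection itself failed (e.g. cert expired, decryption
              # impossible): the fail-open/fail-closed policy decides.
              return "pass" if fail_open else "drop"
          return "pass" if looks_clean else "drop"
      ```

      The trade-off is exactly the one debated upthread: fail-closed turns a broken monitor into a loud outage instead of a silent blind spot.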

  5. monty75

    Sounds like a future article for Who?Me?

    1. stiine Silver badge

      You owe me a new keyboard...

      Somehow, I don't see the guilty party writing it up as you seem to think they would.

  6. DCFusor

    Sounds like a cover story

    Someone with that access couldn't have diddled the logs and the DB, and used some exfiltration merely to cover their tracks, so there'd be a way to save face and keep on with biz as usual?

    I'm not a black hat, but if it were me - I'd have more imagination than just to steal credentials. How about creating some? Make your associates effectively rich, fix bad credit, and so on.

    If you were in OPM, why not give your spies good background checks and even security clearances?

    It's pretty short-sighted and crass to merely exfiltrate or erase for ransom - which tends to get you caught, as someone knows *which* money to follow. How about making so many false trails no one can find the one you used? Huge amounts of activity are normal in these DBs - look at all the articles here about storage and other products to help make them work under load - it'd be trivial to sneak a few fakes in. And very hard to separate them from the legit stuff without an extremely laborious, slow and expensive comparison with a paper trail, which I've not heard of anyone doing since the '70s.

    Is it that both sides of the equation are 100% - no exceptions - that dumb, or is it that I'm the smartest guy in the room? I find the latter rather hard to believe. My previous posts prove I'm not!

    Either it happens and no one notices, or they do and keep mum. Just copying a database is the silliest thing you can do with it. Other people call that a backup.

  7. Hawkeye Pierce

    Monitoring isn't monitoring...

    Just as a backup isn't a backup if you don't (at least periodically) prove you can restore it, a monitoring system isn't monitoring if you don't periodically test that it's working as desired.

    Put another way, if a software system test doesn't throw up any bugs, my first instinct is to question how thorough the testing was. Likewise, if I don't get any alerts in a given period from a system designed to raise alerts, I need to question whether it's working!
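    One common way to periodically prove a monitoring pipeline is alive is a canary: inject a synthetic known-bad event and alarm if the pipeline stays quiet. A minimal sketch, with `inject_event` and `current_alerts` as hypothetical hooks into whatever alerting stack is in use:

    ```python
    import time
    import uuid

    def canary_test(inject_event, current_alerts, timeout=5.0, poll=0.1):
        """Exercise a monitoring pipeline end to end: inject a synthetic
        'known bad' event with a unique marker, then fail loudly if no
        alert containing that marker surfaces within the timeout."""
        marker = f"canary-{uuid.uuid4()}"
        inject_event(marker)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if any(marker in alert for alert in current_alerts()):
                return marker  # pipeline saw the canary: monitoring works
            time.sleep(poll)
        raise RuntimeError(f"monitoring never alerted on canary {marker}")
    ```

    Had something like this run on a schedule, a cert-dead inspection box would have failed the canary within hours instead of months.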

  8. Irongut

    "attackers to execute approximately 9,000 such queries – many more than would be needed for normal operations"

    What total bullshit. At the very least a time period is needed to put those 9k queries into perspective, and preferably an idea of the normal rate to make any sense out of that statement. 9k queries in 6 months is nothing; 9k queries in 6 minutes could be a lot, but I'd hazard a guess that Amazon or Google wouldn't think so.
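    For what it's worth, the missing context the commenter is asking for is exactly what a baseline provides. A toy sketch that flags a per-interval query count only relative to history (the z-score threshold is an arbitrary illustration, not anything Equifax or the GAO describe):

    ```python
    from statistics import mean, stdev

    def is_query_rate_anomalous(per_interval_history, current_count,
                                z_threshold=3.0):
        """Judge a query count against a historical per-interval baseline:
        the bare figure '9,000 queries' means nothing without knowing
        whether that is 9k per six months or 9k per six minutes."""
        if len(per_interval_history) < 2:
            return False  # not enough baseline to judge either way
        mu = mean(per_interval_history)
        sigma = stdev(per_interval_history)
        if sigma == 0:
            return current_count > mu  # flat baseline: any excess stands out
        return (current_count - mu) / sigma > z_threshold
    ```

    The same 9,000 is anomalous against a baseline of ~100 queries an hour and invisible against millions a day, which is the commenter's point.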

  9. Anonymous Coward
    Anonymous Coward

    Kit with Expired Cert?

    "Ironically, the security breach was only picked up when someone updated an expired certificate on a piece of kit that was supposed to be monitoring outbound encrypted traffic"

    I wonder what "kit" was being used to inspect encrypted traffic.

    It is not mentioned in the report nor is the certificate.

    Depending on where this "kit" was placed in the route of the internet traffic it may have only required a single self-signed cert to decrypt ALL the encrypted traffic.

    More info is needed and would be critical info for the report.

  10. Tomato Krill

    Also is it common practice to encrypt data as you steal it, or did the miscreants just happen to choose this breach to do it, where by pure coincidence it happened to allow them to evade detection?

  11. Paul Johnston

    The bit I found interesting

    Reading the report, one of the problems was that the mailing list of sysadmins was out of date, so when they emailed people to warn them of the vulnerability, not everyone got it. I thought that if you looked after something, part of the job was to keep an eye out for announcements for said systems and, if necessary, pass that information upwards. I understand you cannot go patching stuff as and when you feel like it, but at least try and keep them safe! Oh, what do I know?

  12. Aodhhan


    I can tell the author doesn't have a lot of experience in InfoSec. Nor do many of the commenters. I've been penetration testing for over 15 years, so I've noticed many security cock-ups, poor risk management, etc. What I see more of, though, are people making comments without thinking them through.

    First-- Reworking and following the exact steps a hacker took on your system is commonplace. It's often necessary to ensure you find everything. This is particularly important with databases, where there is a lot of information: usually too much for the hacker to scrape and copy in full, so you need to figure out exactly what was copied, removed and/or changed. NOT REWALKING THE STEPS is considered negligent. Making fun of it, like this author does, is ridiculously stupid.

    Second--ANYONE who thinks their system is so secure because they do everything right is a moron. Not ignorant, but a moron. This includes certificate management. I'm willing to bet I can find a bad cert somewhere in your network. I find them about 70% of the time I look. Or I find they aren't bound correctly, etc. Chances are, your network has at least one, and the system using it doesn't fail because of it.

    Third--While Equifax no doubt messed up on this, if you don't get why a system doesn't quit working due to an expired certificate, then you haven't worked with really large networks. Also remember, this type of risk is often accepted. Probably on your network as well.

    Fourth--Speaking of risk acceptance. Chances are your CIO has accepted some risks, and at first glance (since you're ignorant and don't get the entire picture) you would think he's crazy to do so. ALL NETWORKS HAVE ACCEPTED RISKS.

    Fifth-- Struts was a particularly nasty beast: an easy (even for you script kiddies) remote exploit which was being actively exploited the same day it was published. Many companies decided to wait until Monday to patch it and became victims of it. Many more would have become victims, but were saved by proxy systems being correctly configured to stop outbound traffic. Heck, the system you work on may have been hit and exploited, but saved because of an outbound setting. So... be careful what you gripe about.

    So before you begin to throw stones (and nobody in InfoSec should), look at your company's network to see how many exceptions to policy and larger network accepted risks there are.

    Also, anyone in InfoSec who believes their network is completely secure from malicious activity should give up this career field, because you don't have what it takes to think forward enough to do the job correctly. All large networks are vulnerable in one way or another... ALL OF THEM. The key is how you respond and gracefully recover from an attack... not just how you work to stop it.

    1. EnviableOne

      Re: Ignorance

      I am fully aware of risk-based security, but if, as you said, this was seen as so small a risk it could be accepted, then their risk manager needs to be shot as well, cos they let this happen.

      As others have said, if it's core to your system, it should be maintained, and from the details coming out, Equifax was a hive of poor oversight, poor practice and poor security. If this system is core to their monitoring, it should have been reporting on expiring certificates, and someone should have had the job of making sure something was done about it.

      I am not saying I'm perfect, but I am pretty sure I know where the holes are, and have multiple layers on the important stuff.

  13. Potemkine! Silver badge

    Hang 'Em High

    - An exposed Struts server not patched

    - A 10-month-old expired certificate in production

    - At least one database with the infamous 'admin/admin' credential

    - Unencrypted names and passwords in the database.

    Equifax's IT has a lot to explain.
