Because no one has ever stolen records from a filing cabinet before.
Although to upgrade security they could always hang a sign on it saying "Beware of the Leopard" or something.
A ransomware infection has cast the Alaskan borough of Matanuska-Susitna (Mat-Su) back to the dark ages. The malware was activated in mid-July, infecting 60 of the borough's Windows 7 PCs. As the IT department tried to clean the infection and reset passwords using a script, the malware started "attacking back", spreading to …
The attackers gained Active Directory admin access
Only criminal negligence, or deliberate criminal intent of an insider, could allow that to happen, surely.
This doesn't sound like a happenstance ransomware or malware infection, but a deliberately targeted attempt to destroy the borough's IT.
Yep, blame the victim, that always helps.
Finding the root cause of a problem usually does. Attempting to gloss over it or 'move on' means less chance of anyone learning from the mistake. Absolutely it shouldn't be a witch hunt, and no-one should lose their job over it unless criminal intent or utter incompetence is discovered. But those responsible need to be made aware of what they did wrong so that they can work out how to stop it happening again.
That's why I dislike the term 'car accident'. By dismissing such events as 'accidental' you're implying that there's nothing anyone could have done differently, and therefore no reason for anyone to change the way they drive.
Things go wrong. Mistakes get made. People shouldn't be vilified over them, but people who make mistakes should be told, then helped to avoid repeating them.
It seems that in our language and culture the definition of the word "victim" keeps getting ever more inclusive. If blaming the victim is taboo, it becomes vitally important to qualify as a victim. In this way the negligent and foolish become immune to responsibility and critique.
Given that their recovery plan involves using backups, some of them up to a year old, it seems at least possible that they may have pinned the target to their own forehead.
They may have also gone that far back in time to make sure that they weren't restoring the trojan. I guess only the people doing the work know for certain though.
It sounds like they MIGHT be able to eventually very carefully recover their data from the infected backups. Personally, I'd look into using a unix to do so in order to minimize the chance of propagating their old infection back into their system once they get it decontaminated and running again.
The barrier is not the infection, which can be controlled and contained, but the encryption.
A good trapdoor function will leave you with no chance of getting the data back before the heat death of the universe... unless someone comes up with much better cracking tools in a hundred or a thousand years.
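For a sense of the scale involved, here's a back-of-the-envelope Python sketch; the 128-bit key size and the guess rate are made-up illustrative numbers, not anything specific to this ransomware.

```python
# Back-of-the-envelope: expected time to brute-force a symmetric key.
# Both numbers below are purely illustrative: a 128-bit key and an
# (absurdly generous) 10^12 guesses per second.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

key_bits = 128
guesses_per_second = 1e12

expected_guesses = (2 ** key_bits) / 2        # on average you search half the keyspace
expected_years = expected_guesses / guesses_per_second / SECONDS_PER_YEAR

print(f"roughly {expected_years:.1e} years")  # ~5.4e18 years, vastly longer than the
                                              # ~1.4e10 years the universe has existed
```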
the story says that the backup infrastructure was also infected
It doesn't. It says their "disaster recovery" systems were infected. That's unfortunate, but it should simply have meant they couldn't recover instantly; they should still have been able to recover to a recent point in history in reasonable time.
If you're having to go back a year to find backups that aren't infected then you either didn't notice for 12 months that you were infected or your backup process is not worthy of the description. Copies of files sitting on active systems that are capable of being infected are not backups, they're hostages to fortune.
Seriously? For most people having a recovery plan that involves using backups is not only normal, it is part of best practices.
The fact that some of the backups are a year old isn't abnormal either. If the source code of a software package, the artwork for logos, etc. hasn't changed in years, then why NOT use a years-old backup that you know is safe?
When restoring a backup in this situation you want the OLDEST backups that have the data you need, not the newest.
They had "disaster recovery" servers. I read that as hot spares with automatically replicated data. Unfortunately, automatically replicated data means a lack of air-gap, so they got infected with everything else because they didn't consider this type of "disaster". How do you recover from that? Well, you bust out your second-tier recovery solution which is generally archived backups.
Yes, this "security event" was enabled by insecure policies and practices. Most likely some administrator had decided that a network-wide share that housed executables needed to be read-write (or the applications used demanded it), and/or one or more people with admin access used their admin account daily instead of having a second account. Those two situations - found in the MAJORITY of small networks - cause this type of problem to go from "annoyance" to "major catastrophe".
"a network-wide share that housed executables needed to be read-write (or the applications used demanded it) "
Ack.
I've griped at Micro-shaft MANY TIMES before about putting WRITABLE files *anywhere* within the 'C:\Program Files' tree. At one time, they were doing this with SQL Server, keeping actual database files within that directory tree. The problem of writable 'executable file' directories goes right back up to the source, at Micro-shaft, where they had DESIGNED IT THIS WAY.
In any case, that kind of hindsight won't fix the specific problem at hand (the ransomware encrypting things and spreading itself) nor get the data back. And if the machines hosting the various services are compromised, then malware with admin-access could simply do 'whatever' and not be stopped. So even with proper practice of "nothing writable in directories with executables in it" the admin-level access by the malware would overwrite things anyway and bypass all of that.
It doesn't stop me from figuring that maybe, JUST maybe, the original vector _WAS_ something so simple like user-writable executable file directories. There was an 'outlook express' virus/trojan that did something like that, a while back, now wasn't there? And MSN Messenger (on by default) spread the thing, as I recall...
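If you want a quick first-pass check for that sort of thing on your own machines, a rough Python sketch is below. It's an illustration, not an audit: run as a standard (non-admin) user, it just tries to create and delete a throwaway probe file in each directory under the usual Program Files paths and reports anything it could actually write to.

```python
# Rough first pass: which directories under Program Files can the CURRENT account
# actually create files in?  Run it as a standard, non-admin user.  It tries to
# create and immediately delete a uniquely named probe file in each directory;
# anything that succeeds is effectively user-writable and worth a closer look.
# (For a proper audit you'd inspect the NTFS ACLs, e.g. with icacls.)
import os
import uuid

ROOTS = [r"C:\Program Files", r"C:\Program Files (x86)"]

def probe_writable(directory):
    probe = os.path.join(directory, ".write_probe_" + uuid.uuid4().hex)
    try:
        with open(probe, "x"):
            pass
    except OSError:
        return False
    os.remove(probe)
    return True

for root in ROOTS:
    for dirpath, _dirs, _files in os.walk(root, onerror=lambda err: None):
        if probe_writable(dirpath):
            print("user-writable:", dirpath)
```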
Hardening Active Directory to make attacks like this difficult (not impossible) requires significant investment: you need third-party tools and highly competent (i.e. expensive) staff. Chances are this organisation didn't have the budget to hire such staff, or to do so in sufficient numbers to manage and monitor a network of this size.
If not suitably hardened, Active Directory is extremely easy to compromise, and since it's often tied into everything, whoever compromises it gains control of the entire organisation and is extremely difficult to remove.
Only criminal negligence, or deliberate criminal intent of an insider, could allow that to happen, surely.
------------------------------------------------------------------------------------------------------------------------------------
Not really.
Once a remote root exploit is achieved against a relevant target then a technically adept attacker can bootstrap that into almost any level of access to anything on the network, including active directory servers, anti-malware servers, intrusion detection systems, and the like.
It takes some skill and patience, but it is easily within the realm of the possible.
The initial compromise does not need to be within the corporate network. A compromised offsite computer used for remote administration and tech support, for example, can yield all the information needed to gain administrative control over key servers and services. A keylogger and patience will eventually get you everything.
For that matter, if our hypothetical administrator were to use a USB key to transfer data from a compromised computer (any kind) to machine(s) inside the network, you might not have to wait for the right logins to be captured on the first machine.
Given the large number of zero day remote exploits, a persistent attack will eventually succeed.
Also, once you are inside a corporate network, machines are often running some older (more easily compromised) software for compatibility purposes. There are still applications around - often legacy customised applications that would take ages and piles of cash to re-implement or replace - that insist on talking only to Internet Explorer, for example.
Couple that with the disinclination of many executives for spending more than the minimum on security, redundancy, and testing, and this state of vulnerability is not surprising, nor is it often seen as an issue until it becomes a disaster. Humans are remarkably poor at risk estimation. Think of how many people are scared of terrorism or flying, but think nothing of riding in a car, eating rare hamburgers, or going skiing.
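To put rough numbers on "a persistent attack will eventually succeed", here's a toy Python illustration assuming independent attempts with a small, entirely made-up per-attempt success probability.

```python
# "A persistent attack will eventually succeed", illustrated with a toy model:
# independent attempts, each with a (hypothetical) 0.1% chance of success.
p = 0.001

for attempts in (100, 1_000, 10_000):
    at_least_one = 1 - (1 - p) ** attempts
    print(f"{attempts:>6} attempts -> {at_least_one:.1%} chance of at least one success")
# 100 -> ~9.5%, 1,000 -> ~63.2%, 10,000 -> ~99.995%
```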
Insecure POS.
Apart from that, the 7 infected PCs must still have been connected to the network for it to spread, even though they were known to be in that state.
You need a second alarm system alongside the fire alarm so people switch off their PCs when it goes off; shite like this spreads faster than a fire.
When I learnt computing, the first thing we were taught was that when you implement a new IT system, you also document the manual procedures for carrying on working if those systems go down.
It looks like they managed to cope reasonably well, given the circumstances, although I doubt the manual procedures were defined in the disaster recovery plan.
The attackers gained Active Directory admin access, compromising the controller to reconfigure its security settings.
Ouch!
The borough is now reimaging its systems using backups, some of them up to a year old. However, a lot of data such as email has been lost.
I can imagine desktop/laptop images being out of date by a year, but losing data on servers - how did that happen? What was the backup process? Surely they weren't just doing disk-to-disk backup.
DR backups were also infected. It's all well and good having these swanky connected backup systems but as said they are connected and thus can also be infected.
I assume they were diligent and had offline backups (though not that diligent if they are a year old) and these were the ones being restored.
What are the odds that one/some of the admins used Domain Admin creds on their normal day-to-day account? You know - the one they open their email and browse the web with.
Obviously I can't say that this is definitely what happened, but plenty of us have done it in the past, and have only been lucky enough to get out of the habit before something like this kicked off...
Yeah, we can all pretty much interpret/know that the ransomware 'ran under windows'. However, it's worth pointing out that if you use a utility (one like rsync) to back up files to a Linux box, FROM WITHIN the Linux box - so that it reads files from remote systems but does NOT allow those remote systems to write TO it - and does so in a manner that can restore files to a 'point in time' (i.e. the July 12th version of that particular file, before it got encrypted by malware), then having live systems doing daily backups and keeping them "on all of the time" isn't so much of a security risk.
However, I suspect in THIS case that such backup/recovery/disaster systems were, in fact, ALSO running windows...
So yeah, the basic model here would be for a Linux box to use standard utilities - maybe Samba, maybe rsync, or maybe some 3rd-party backup software - such that the backup server PULLS the data [and does NOT get data PUSHED to it], and then LOCALLY files it someplace in a manner that allows getting back "the state of things on a particular date/time". Anyway, that's my $.10 on it: a Linux server running those backups under its own security context could help prevent network-wide malware from infecting the disaster recovery backups.
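For what it's worth, a simplified sketch of that pull model is below; the hostname, paths and ssh/rsync setup are made up for illustration. Run daily on the Linux backup box, it pulls with rsync and keeps dated, hard-linked snapshots, so the Windows clients never get write access to the backup store.

```python
#!/usr/bin/env python3
# Sketch of a pull-style backup, run on the Linux backup box itself.
# The hostname, paths and ssh/rsync setup are hypothetical; the point is only that
# the backup server PULLS with rsync (clients never write to it) and keeps dated,
# hard-linked snapshots so you can restore "the state of things on a particular date".
import datetime
import os
import subprocess

SOURCE = "backupuser@fileserver.example.local:/data/"   # hypothetical source share
DEST_ROOT = "/srv/backups/fileserver"                    # local snapshot store

os.makedirs(DEST_ROOT, exist_ok=True)
today = datetime.date.today().isoformat()                # e.g. "2018-07-12"
dest = os.path.join(DEST_ROOT, today)
latest = os.path.join(DEST_ROOT, "latest")               # symlink to newest snapshot

cmd = ["rsync", "-a", "--delete"]
if os.path.isdir(latest):
    # Unchanged files become hard links into the previous snapshot, so every dated
    # directory is a full point-in-time tree but costs little extra disk.
    cmd.append("--link-dest=" + os.path.realpath(latest))
cmd += [SOURCE, dest + "/"]

subprocess.run(cmd, check=True)

# Atomically repoint "latest" at the snapshot we just made.
tmp_link = latest + ".tmp"
if os.path.lexists(tmp_link):
    os.remove(tmp_link)
os.symlink(dest, tmp_link)
os.replace(tmp_link, latest)
```

Getting back "the July 12th version of that particular file" is then just a copy out of that date's snapshot directory, and because the backup box only ever reads from the clients, ransomware on them can't reach into the backup store.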
"The attack is notable not only for the way it dismantled an entire organisation's computer infrastructure, but the remarkable honesty of the victims."
While I was living and working in Alaska, it was always refreshing that the IT staff I worked with didn't bother to waste anyone's time trying to hide mistakes. There wasn't really any need to. We just didn't have the money to hire the expertise that would have prevented this kind of attack, let alone the software and hardware. Besides, it helps with post mortem troubleshooting when you can step through mistakes without having to worry about whether a job is on the line. People are a lot more honest when they see that owning a mistake results in trying to figure out how not to make it again rather than finger pointing and disciplinary actions.