OMFG
Yet again we have to ask why mission critical systems are on the same network as vulnerable e-mail clients, and why said clients don't enforce ruthless discrimination against unidentified attachments and links.
World-leading Papworth Hospital has escaped a full-on zero-day crypto ransomware attack thanks to the "very, very lucky" timing of its daily backup. It's believed that an on-duty nurse at the heart and lung hospital in Cambridgeshire, UK, unwittingly clicked on something in an infected email, activating the attack at about …
I'll get downvoted for this but the simple answer is that clinicians wouldn't stand for it. Easy access to everything allows them to get on with treating patients and every clinician loves their e-mail.
Apart from anything else network controls should limit damage malware can do, that's assuming it can run in the first place which is something many NHS trusts/boards/CCGs are managing to block using application whitelisting, sandboxing etc.
"As for ransomware, how do you stop it when you have a shoestring budget preventing a proper prevention strategy and the most likely zero point is over your head?"
I'm on a shoestring budget. Here's how I stop them.
1) Write a list of what programs your users use that are not installed in the Program Files directory. These are typically programs that "run" from the network. Create a list of network paths that need to be executable.
1A) Check the security permissions of the executables on network shares. Do the users *need* to be able to overwrite your executables, potentially with a Trojan that everybody then accesses via a shortcut when they try to log in to $program in the morning? This is allowed by default, but the default is stupid. Consider changing it if you don't require users to do this.
2) Create a new Group Policy Object. Go to the Software Restriction Policies options. Change the default level from "unrestricted" to "disallowed". Whereas before any program would run anywhere with no questions asked, now programs by default will pop up a message saying essentially "Sorry Dave, I can't let you do that" unless the program is on the exceptions list. Your list already includes %ProgramFiles% by default, so you only need to add network paths where people need to be able to execute .exe files. Add the paths that you gathered at step one.
3) You need to download some additional GPO templates. There are GPO add-ons for Office and Adobe. You want the Office ones to prevent macros from running, and the Adobe one to disable scripting. Have a look at EMET 5.5 while you're thinking about security. Nice freebie, but probably too much effort for most people to install since it needs doing on each computer.
4) Assign the new policy object to your desktop, and check you haven't prevented things from running. When you're sure that you haven't, roll this out to a handful of users, then to wider groups, and then to everybody, having added the programs that you forgot about that only two users use.
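Steps 2 and 4 amount to a default-deny rule: nothing executes unless its path falls under an approved directory. Here is a minimal Python sketch of that decision logic; the directory names are illustrative (`\\fileserver\apps` is a made-up share), and real enforcement of course lives in the GPO, not in code like this:

```python
from pathlib import PureWindowsPath

# Illustrative allow-list: Program Files plus the network share
# paths gathered in step 1.
ALLOWED_DIRS = [
    PureWindowsPath(r"C:\Program Files"),
    PureWindowsPath(r"C:\Program Files (x86)"),
    PureWindowsPath(r"\\fileserver\apps"),  # hypothetical share
]

def may_execute(exe_path: str) -> bool:
    """Default level 'disallowed': run only if under an allowed directory."""
    p = PureWindowsPath(exe_path)
    return any(p.is_relative_to(d) for d in ALLOWED_DIRS)
```

Anything dropped into a user profile or temp directory fails the check, which is exactly what the "Sorry Dave" message reports. (Requires Python 3.9+ for `is_relative_to`.)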
Money required from management, £0.00. All that is required is your time. It's now impossible to:-
1) Run any file that is vaguely executable (.exe, .pif, .bat, .scr, etc.) which is not in Program Files or in an authorised directory.
2) Have macros in MS Office ruin your day (or, if macros are in use, digitally sign them and run only macros signed by you).
3) Have PDF documents running scripting on your computer instead of just displaying a document.
With a spend of zero, your network is now hardened to the point of being impervious to your users running trojans. Your users are unaffected by the security precautions that you have just taken, since those only become apparent when they try to do something like running a program attached to an email or on external media.
The only real downside is that executable autoruns on CDs will now fail. You could enable these by putting in path exceptions for your CD drives, if your organisation's security policy allows running random potential zero-day viruses from external media sent from outside the company. Given the option, our top management went for complete paranoia, disabling any potential threat.
Ultimately though, if you're being given solutions (even good ones) to implement then you're doing it wrong. What you should do is what I did to get to where I am, which is sit down with pen and paper and consider what threats you face, and how you can mitigate against each of those entry vectors/threats.
"your network is now hardened to the point of being impervious to your users running trojans. Your users are unaffected by the security precautions that you have just taken, since those only become apparent when they try to do something like running a program attached to an email or on external media."
OK so far. Now what about the rest?
How, exactly, does your suggestion help protect against the apparently never ending stream of exploits that use authorised (non-blacklisted) executables and just require the user to access a specially crafted JPG, PDF, web page, or whatever. In the case of a web page, it might even be served up by a whitelisted website, especially if the whitelisted website serves 3rd party adverts and isn't accessed with the aid of an ad-blocker or script blocker. Then how does your strategy help?
"consider what threats you face, and how you can mitigate against each of those entry vectors/threats."
Exactly. Consider what you do about approved applications with as-yet unknown vulnerabilities which can be exploited by permissible (and often necessary) data formats. What's your mitigation there?
Start again, please.
It's not rocket science, but apparently to some people it might as well be.
"I'm on a shoestring budget. Here's how I stop them."
How on earth does that stop them?
You haven't even identified different groups of users with different sets of access rights to different sets of data. Y'know, a scheme where a user's access to data depends on their role within the organisation, and where their activities can be authenticated, authorised, and logged accordingly.
You definitely haven't deployed an OS with a securely implemented scheme for rights management and object access control and trustworthy audit logging facilities.
Is that too much to ask for?
All you've done is provide a false sense of security, which sadly seems to work well for lots of IT people.
"Easy access to everything allows them to get on with treating patients and every clinician loves their e-mail."
Yes, but seriously it's not a conflict between "easy access" and security. It's a conflict between stupidity and security. If you can just stop people from being stupid you'll have solved most of the problem.
Just like there are basic safety standards for things like light fixtures, the NHS could enforce those for the software they use. Since software security doesn't really cost money (only features) that should be easy to do.
"If you can just stop people from being stupid you'll have solved most of the problem."
OK, I'm getting really, really tired of this meme, so let me give you the benefit of close to 3 decades of experience in dealing with security, people and protecting organisations including military and governments (hence anon).
Stop treating people as a variable you can just tweak to your satisfaction. You should accept they are your reality, so stop blaming them and start working on ways to incorporate that established reality into your work, otherwise you are quite frankly busy finding excuses for not doing your job.
Working on any model of protection starts with accepting current realities. Yes, there are things you can change but, trust me on this, people are not part of that helpful set. You can tweak them a bit, but you will find that over time they will meander back to that mean value - in short, seeking to change them is wasting time and resources for frankly appallingly little benefit.
Now, ransomware. Assume that your users WILL open these emails, irrespective of how much advertising, training and awareness campaigns you throw at them. Start with wondering how it is possible that they can run unauthorised executables and work from there.
Just stop the excuses already.
You can't work that way because Murphy means you MUST assume EVERYONE is a Darwin Award candidate with Domino Effect potential. The one you ignore or are forced to overlook WILL be the one that destroys you.
As for ransomware, how do you stop it when you have a shoestring budget preventing a proper prevention strategy and the most likely zero point is over your head?
Put it this way. Try stopping the Black Death with nothing but a net.
"Start with wondering how it is possible that they can run unauthorised executables "
Actually in quite a few places I'm familiar with, it might also be interesting to wonder *why* people think they *need* to run "unauthorised" code to do their jobs (non-work-related stuff is a different matter). The answers might be interesting inside and outside the IT department.
But eliminating unauthorised executables still leaves a zillion vulnerabilities (known and unknown) in the authorised executable list. You want to blacklist Outlook or Word or even Acrobat, good luck with that. But if that's not done, the vulnerabilities still exist, whether the IT Department like it or not.
Well, even if you cannot change people themselves, you can easily influence their behaviour. How many people do you know who got electrocuted by their household appliances? With all those appliances around, that must be a high number, mustn't it? The reason this number is rather low is that household appliances are designed to prevent you from doing stupid things. You cannot simply touch any conductors inside, because they are encased in plastic.
However, in computing there is no such sense of safety. Yes, we tell people not to execute code from the Internet, yet when you click on a link to download an executable in your browser, it'll actually offer to execute it right away. That's a stupid thing that should never have been offered. The same goes for all kinds of app containers like APK or Flatpak. If clicking on a link can make your system install software, that's a _really_ bad thing.
Instead you can make stupid things hard and provide safer alternatives. This will then influence people into not doing stupid things. Also make sure that the things they actually need to do (e.g. opening PDF files) are as safe as possible (e.g. by not using a feature-complete PDF reader).
BTW the stupidity doesn't always just lie on the end user side, often it's also in the IT departments. Just think of the many computers that have office software installed without needing it, or Acrobat Reader when a more secure PDF reader would be good enough.
"household appliances are designed to prevent you from doing stupid things."
That's a very 20th century remark, which seems increasingly inapplicable to various modern pieces of consumer electronics and even white goods (and other stuff).
Expect increasing accident rates as the years go by and appliances are designed more by the "industrial design" people and less by actual clued up engineers.
This applies particularly to software-based systems in recent decades; far too much software is not fit for purpose, defective by design, and yet it's extremely rare for anyone to be held accountable for providing or procuring stuff that isn't fit for purpose.
"Expect increasing accident rates as the years go by and appliances are designed more by the "industrial design" people and less by actual clued up engineers.
This applies particularly to software-based systems in recent decades; far too much software is not fit for purpose, defective by design, and yet it's extremely rare for anyone to be held accountable for providing or procuring stuff that isn't fit for purpose."
Yes, a lot of poor software is and will continue to be created, but very little of it has any safety impact.
I would be surprised if the SW in a home appliance was required to operate correctly to ensure safety. It is still engineers who are responsible for making things work and pass the appropriate regulatory requirements. Demonstrating that the requirements for functional safety of SW are met is rightly very demanding, which is why designers avoid it unless absolutely necessary. I do not know what you have against industrial designers, but in the real world they work in a team with the engineers. My prediction is that, despite an ever-increasing number of devices, overall accident rates will decline due to the slow accumulation of knowledge and experience and the improvement of regulations and design.
Much as I hate the "why have a dog and bark yourself" attitude, this is very true.
<massgeneralisation>
It's our job to design, build and secure IT environments.
It's their job to help people with very serious heart conditions.
Trying to get either side to do the other's job isn't going to end well.
</massgeneralisation>
"Start with wondering how it is possible that they can run unauthorised executables and work from there."
They probably aren't "running executables" at all. It's far more likely that they are opening an EMail attachment that contains an image or other binary file that has been specially crafted to take advantage of a bug in the software on the PC that displays that kind of file, and by exploiting the bug the attachment gains execution access, possibly with elevated privileges.
Short of blocking all attachments, the only defence against this sort of attack is to run only bug-free software, and we all know how hard that is to come by!
I've found it relatively easy to set up secure systems as demanded by management - the same management that demands they be taken down because they are too bloody stupid and lazy to follow their own rules. However, I'd still imagine that 80% or more of the files could easily be made un-encryptable, as there is no reason to modify most data.
As someone working within the NHS I can honestly say it varies massively. My own NHS trust doesn't; it's highly restricted but still reliant on the NHS Mail system, which itself was letting through a ton of ransomware e-mails at the tail end of last year - thankfully it's better now.
The bigger problem tends to be access to personal e-mail, required by students and typically it's attachments from those which cause the problem. However with proper network controls the damage ransomware can do should be extremely limited and quickly rectified - that's assuming it doesn't just start uploading that information - which is the nightmare scenario for many of us.
Maybe they shouldn't be using the same email system for both communication between fellow clinicians and the general public. There should be an internal and an external one.
The internal one requires that secure messaging standards (S/MIME or OpenPGP; the former might be somewhat easier for the NHS) be used: on sensitive machines, they only have access to the internal system.
Then they can email their colleagues internally just as they ordinarily could… digitally signed so they know the email hasn't been spoofed, and possibly encrypted for patient privacy.
The external one talks to the outside world, and is only accessed from suitably protected endpoints. Those same systems could have access to the internal one so you can forward information to people on the internal network after manual checking.
At least then, the malware has limited reach.
Then again, paper does not have these problems… the other solution would be that they walk around with pencil and paper and record things the old way. Sure, use the computer system to transfer the data from place to place, but hold onto some paper records for as long as it takes to ensure the digital copy is safely backed up.
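The "digitally signed so they know the email hasn't been spoofed" idea boils down to verify-before-trust. Below is a toy sketch of that flow; note that S/MIME and OpenPGP use per-sender public-key certificates, whereas the shared secret here (a hypothetical name) is a deliberate simplification just to keep the example short:

```python
import hashlib
import hmac

# Toy integrity check standing in for S/MIME's public-key signatures.
# A shared KEY is NOT how S/MIME works -- it's only the simplest way
# to demonstrate the verify-before-trust flow.
KEY = b"internal-gateway-secret"  # hypothetical secret

def sign(body: bytes) -> str:
    """Gateway attaches this tag to outgoing internal mail."""
    return hmac.new(KEY, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Client refuses to trust mail whose tag doesn't verify."""
    return hmac.compare_digest(sign(body), signature)
```

A spoofed message (or a tampered body) simply fails verification, so the client can refuse to render attachments from it.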
Our local NHS Trust uses Cisco IronPorts for everything outbound; not sure about the rest.
ANY email that's sent out is encrypted by default. If you want to send non-sensitive e-mails you have to place [DONOTENCRYPT] anywhere in the subject. They also had DLP keyword triggers which placed the email in a queue if it matched any number of patterns (e.g. NHS patient numbers, and so on).
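An NHS-number trigger is a nice concrete DLP example, because the 10-digit NHS number carries a Modulus 11 check digit that cuts false positives. A rough sketch; the regex and trigger logic are illustrative, not the actual IronPort configuration:

```python
import re

def valid_nhs_number(s: str) -> bool:
    """10-digit NHS number validated with its Modulus 11 check digit."""
    t = re.sub(r"[ -]", "", s)
    if not (t.isdigit() and len(t) == 10):
        return False
    digits = [int(c) for c in t]
    # Weights 10 down to 2 over the first nine digits.
    total = sum(d * w for d, w in zip(digits[:9], range(10, 1, -1)))
    check = 11 - (total % 11)
    if check == 11:
        check = 0
    return check != 10 and check == digits[9]

def contains_nhs_number(text: str) -> bool:
    """Crude DLP trigger: flag any 3-3-4 digit run passing the checksum."""
    candidates = re.findall(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b", text)
    return any(valid_nhs_number(c) for c in candidates)
```

The checksum means a random phone-number-shaped string usually won't trip the filter, though a real DLP rule would combine this with other patterns.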
Nor does a properly designed and run computer system, especially one with fewer privilege escalation vulnerabilities than the legacy OS (and applications) in widespread use in the NHS.
As noted elsewhere, why does Joe/Joanna Random User have any access rights to change other people's data? Maybe they didn't, if an unauthorised privilege escalation is in the picture.
This stuff shouldn't be rocket science. What happened here shouldn't be acceptable either, and any costs should be charged back to the people who provided the system (and to those who signed it off as acceptable). That might start to change behaviours a bit.
"The bigger problem tends to be access to personal e-mail, required by students and typically it's attachments from those which cause the problem."
Would blocking gmail, hotmail etc. whilst whitelisting *.ac.uk, *.edu and other universities'* email servers work in this scenario?
* I know this gets a bit complicated in places like France or Switzerland, but should be doable.
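The suffix whitelist plus a short exception list for the awkward countries could look something like this; the entries in `EXTRA_ALLOWED` are hypothetical examples, not a vetted list:

```python
# Suffix whitelist for university webmail, plus explicit exceptions
# for countries whose universities don't share a tidy suffix.
ALLOWED_SUFFIXES = (".ac.uk", ".edu")
EXTRA_ALLOWED = {"ethz.ch", "u-paris.fr"}  # hypothetical examples

def webmail_allowed(domain: str) -> bool:
    """Allow a mail domain if it matches a suffix or a named exception."""
    d = domain.lower().rstrip(".")
    return d.endswith(ALLOWED_SUFFIXES) or d in EXTRA_ALLOWED
```

Gmail, Hotmail and friends fall through both checks and get blocked, while university addresses keep working.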
"Just how did a nurse's account have permission to cause any significant damage?"
Depends who they are. A senior nurse with many years of training and experience can easily outrank some doctors. And don't forget, the article described it as a "zero day" so maybe it was a privilege escalation attack.
A friend's daughter is a nurse. She has two degrees. She's a senior Sister and orders doctors around every day. That's part of her job.
I just read your comment, saw the highlighted letter, and thought (in order)
But that IS how you spell it.
Has El Reg made a typo ?
OMG, perhaps the g wasn't there in the first place.
That would make, erm, Goole. That's a place!
I'll go back and check the article!
Have an upvote
Indeed there is such a place - I used to travel through it a lot on the train in the 1980s on my way to university in Hull, and always had a titter at the brick water tower. All it needs is a couple of goolies at the bottom.
http://www.goole.com/wp-content/uploads/2015/01/water-towers.jpg
Lots of wards are running XP. Even if the accounts are setup correctly, when you're running something that old there are going to be cracks criminals can creep through.
BTW those systems have email because it's how the different departments sent each other results e.g. get an x-ray and the image is emailed to the consultant before you can walk back to their office.
I will take a small wager that whatever hit them wasn't able to do privilege escalation on a patched remote server just because it happened to be running on XP. Even if it managed to do local privilege escalation.
...and if the servers were running XP then there are even more questions to be asked.
We have 8 XP machines, all on a separate LAN with no connection to anything else. The rest of the desktops/laptops are Windows 7 or 10.
Ward PCs are all Windows 7. We demanded clinical system suppliers ensured compatibility before the deadline for support on XP. The remaining 8 XP machines are there due to specific lab equipment not being compatible with Windows 7.
Hey Terry, I know what you mean (every tech here does). Finance people do not walk through reality; they live in an alternate reality.
I've been trying to push in a system for disaster -prevention- into our setups, but how do you prove to them how much you -will- save? They don't care. It won't be them up all night fixing broken stuff. And broken stuff will be replaced out of a different budget. Sigh.
"I've been trying to push in a system for disaster -prevention- into our setups, but how do you prove to them how much you -will- save? They don't care. It won't be them up all night fixing broken stuff. And broken stuff will be replaced out of a different budget. Sigh."
That's why we use BCM (Business Continuity Management) scenarios. BCM planning is based on estimating the cost of certain scenarios, which then drives decisions about business risk mitigation. In other words, you calculate the cost of things going AWOL and then decide if you engineer a solution, insure it or mark a budget for taking a hit. The fun part is that especially in a business with shareholders there is a duty to protect a business, so once you have a doomsday scenario where security fails you have a financial basis on which to base decisions - and board members who are responsible for making it happen. It moves the pain of failure to where the money decisions are taken which rather helps.
BTW, if someone talks about BCM being an IT activity, send him or her back to school. This plays at board level. IT is an important part, but it's BUSINESS continuity, not IT continuity :).
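One common way to put numbers on such a scenario is annualised loss expectancy: expected incidents per year times cost per incident, compared against the cost of mitigation. The figures below are invented purely for illustration:

```python
def ale(single_loss_cost: float, incidents_per_year: float) -> float:
    """Annualised loss expectancy: expected yearly cost of a scenario."""
    return single_loss_cost * incidents_per_year

# Invented figures: a ransomware outage costs ~£120k per incident,
# expected once every two years unmitigated, once a decade with controls.
before = ale(120_000, 0.5)   # unmitigated expected loss per year
after = ale(120_000, 0.1)    # residual expected loss with controls
mitigation_budget = 30_000   # annualised cost of the controls

# The board-level question: does the risk reduction beat the spend?
worth_it = (before - after) > mitigation_budget
```

Crude, but it turns "we should do backups" into a number the board is legally obliged to take a position on.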
"finance people only like to plan for what's actually going to happen,"
Finance people should never, ever be allowed to run a business; advise those who do - fine, but beyond that... no.
Quite apart from that an attack on a corporate IT system is going to happen; the only unknown is when. A hospital IT system is simply too tempting a target for a hacker to ignore.
"We've got some fairly ancient application architecture so we've got some file-shares, and actually that's what happened to us – a crypto attack went through our file-shares and encrypted the data."
"Thank God for that full backup, then," she added.
Correction: Thank our IT staff for that full backup.
"Correction: Thank our IT staff for that full backup."
You mean Jane Berezynskyj, ICT director? She just might have something to do with the IT staff. Like, maybe, the final say on the use of budget and where it gets spent. Such as on robust backup systems and mitigation strategies due to, again as per the article, previous attacks on their systems.
Dunno, for me it parses just fine.
Digital backups come in different flavors: disk-based, tape-based, optical media, online, offline, very offline aka vaulted. Some of those can be easily targeted by cryptoworms, some not.
For some bits of information there are paper printouts that are obviously non-digital.
Dear sub-editors,
Please learn precision when converting between units[1]. That conversion was obviously originally in Euros and to the nearest ten thousand; the Sterling equivalent should have been £80,000-£160,000.
Thank you.
[1] Unless you're converting to Reg Units, of course, in which case long fractions are mandatory.
"One of our key weaknesses is our people and user behaviour,"
We all know that we can do a lot to restrict the scope of the mistakes which our beloved users might make, but it really needs to be stressed that users do make a shedload of stupid mistakes, training programmes notwithstanding.
Hands up everyone who has had training programmes cut and mitigation measures go unfunded because senior management don't understand what the consequences will be until they personally are struck by someone's mistake (especially their own!).
We run our own training programs and always educate our users on each visit. That and a small peppering of 'you don't wanna be that guy/girl' has raised the level of awareness at our company.
I also quarantine ALL doc/x files and zip attachments with impunity, and encourage the use of secure file sharing applications. There's a small management overhead, yes, but it's a damn sight better than dealing with an outbreak, which has not happened yet.
Email is supposed to be text; it came to support HTML because advertisers like to push stuff at users and companies like Microsoft just can't stop obliging them. So they not only support links in mail traffic but also allow execution of programs via those links.
You'd have thought they would have learned their lesson in the 90s. Obviously not. For now, set your mail client to 'plain text only'... nothing is that important that you have to click on a link.
As a consultant, I often work in banking environments.
In one of those, Internet access was not allowed from the desktop, but you could launch an Internet Explorer session which connected to a VM that was allowed to go on the Internet - except you could download nothing, because the VM had no access to your PC. It seems to me that this is the solution to that problem.
This solution is probably not easy to implement, I have no idea since I'm just a lowly programmer and not a sysadmin, but dammit somebody has found the solution, so it is possible. And knowing the bank in question, it likely did not cost an arm and a leg to set up.
So let's get cracking. Forbid everything from the Internet, create a sandbox environment that can access Internet, and this kind of problem is gone.
If the scope were to be confined solely to active fileshares (e.g. all backup provision is the same, system is only used for file sharing) and there are no "maxed-out" issues (e.g. no spare rackspace, no more UPS'd power) then a shared FS of up to 50TB could be made highly ransom (and user cockup) resistant for under £50k; project duration (excluding authorization and procurement) max 10 working days and perhaps 2 hours of outage.
The problem is that it looks like an unnecessary expense until disaster strikes. As usual, my principal complaint about bean counters is that they often neglect the more actuarial aspects of their roles and focus too much on day-to-day and short-term accounting.
Gosh aren't they clever.
It's at least five years since my other half's hairdresser moved into the 21st century with a computerised booking+billing system.
It too had hourly backups.
Nice to know those in charge of NHS IT (are those letters in the right order?) have caught up with the advanced world of hairdressing and the NHF.
'Hourly backup' does not mean very much. Backups are easy, but good backups are surprisingly hard. A well-tested nightly backup is sometimes worth more than an untested one from 5 minutes ago.
Without knowing how they're done we cannot really judge.
That's where it gets tricky. Disk snapshots, clever backup software that can deal with open files, lots of additional procedures and lots of administrator work. And it's still bloody hard to get consistent data. Backing up something is easy, but getting any levels of confidence is hard.
"Backing up something is easy, "
Absolutely.
It's only when it doesn't usefully restore that you find out how good the backup is, e.g. about issues of data synchronisation and application quiescing and such.
"clever backup software that can deal with open files,"
Even if it can deal with open files, who's to say that what's on the filesystem at the time the snapshot is taken is something that will be useful when restored.
These are very valid concerns.
Usual way of getting a good snapshot of a database is to quiesce its disk I/O for a moment, issue snapshot commands on the storage system, and resume I/O. Takes about 5 seconds or so. After that it's sensible to mount snapshot volume on a separate server for doing consistency checks & backing data to the tape.
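For engines that expose it directly, the quiesce/snapshot/resume dance is built in. As one concrete example, SQLite's online backup API (reachable from Python's stdlib `sqlite3` since 3.7) produces a consistent point-in-time copy of a live database, and the read-back at the end is the "test your backups" step the thread keeps stressing:

```python
import sqlite3

# A live database standing in for the clinical system.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE results (patient TEXT, value REAL)")
src.execute("INSERT INTO results VALUES ('anon-001', 4.2)")
src.commit()

# Online backup: SQLite coordinates with in-flight writers so the
# destination is a consistent point-in-time copy, with no downtime.
dst = sqlite3.connect(":memory:")
src.backup(dst)

# Read the copy back -- the verification that turns "a backup"
# into "a backup you can trust".
rows = dst.execute("SELECT patient, value FROM results").fetchall()
```

Bigger engines have their own equivalents (hot backup modes, log shipping); the principle of engine-coordinated consistency plus a restore test is the same.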
"what if your database is very-high-activity, such that even a momentary pause can be costly?"
Then use the right tool for the job. One size does not always fit all. If the integrity of backups is important, and near-continuous uptime is also important, some investment in time, expertise and maybe even in product may be called for.
Or people can carry on using cheap approaches which are worth every penny, and carry on relying on faith (rather than testing) for assurance that everything will be OK. And quite often it mostly will be OK. But not always. Then what?
"Or people can carry on using cheap approaches which are worth every penny, and carry on relying on faith (rather than testing) for assurance that everything will be OK. And quite often it mostly will be OK. But not always. Then what?"
That may be all you have to work with if you have a high-activity server you MUST back up but only a shoestring budget with which to do it. It's like being told to set up a communications network with nothing but a few dented tin cans and a wet noodle.
We've evolved well past the time when viruses were written for fun or to show off. IMHO, if a ransomware attack disrupted (or disrupts in future) any surgeries or other healthcare operations, with a deleterious effect on patients, then the scum who distribute such malware should be tried for assault, up to and including homicide (perhaps the law already provides for this).