"collosal hospital hack"
Please. Colossal.
Spool chucker on strike?
Security researchers have exploited notoriously porous hospital networks to gain access to, and tamper with, critical medical equipment in attacks they say could put lives in danger. In tests, hospital hackers from the Independent Security Evaluators research team popped patient monitors, making them display false readings …
Having managed the IT in a hospital in the South West of the UK, I have some first-hand experience of this.
We had a number of PCs on wards and in A&E, all of which were public spaces. A quick audit one day showed that the PCs were left logged in, usually with a doctor's credentials, and unlocked.
When I proposed having the PCs automatically lock themselves after a period of inactivity, I was told that this would compromise patient safety, as the doctors would have to spend a few seconds unlocking the PC or logging back on when they needed to use the terminal.
A bit of shroud-waving later, senior management told me not to implement that, or any other security process that might 'inconvenience' the doctors.
I have made a point of avoiding that particular hospital trust since then.
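For what it's worth, the idle-lock behaviour proposed above is only a few lines of policy logic. Here is a minimal Python sketch; the timeout value and the Windows lock command are illustrative assumptions, not anything the trust actually ran:

```python
import subprocess
import time

IDLE_TIMEOUT = 120  # illustrative: seconds of inactivity before the session locks


def should_lock(idle_seconds: float, timeout: float = IDLE_TIMEOUT) -> bool:
    """Decide whether the session has been idle long enough to lock."""
    return idle_seconds >= timeout


def lock_workstation() -> None:
    """Lock the current session (Windows; the rundll32 call is one example)."""
    subprocess.run(["rundll32.exe", "user32.dll,LockWorkStation"], check=False)


def watchdog(get_idle_seconds, poll_interval: float = 5.0) -> None:
    """Poll the idle timer and lock the workstation once the threshold passes."""
    while True:
        if should_lock(get_idle_seconds()):
            lock_workstation()
        time.sleep(poll_interval)
```

Unlocking costs a doctor a few seconds; a terminal left logged in with their credentials in a public corridor costs rather more.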
Having also supported medical networks, I can second this wholeheartedly. Doctors are picky down to the number of mouse clicks it takes to perform an action, and will complain if something now takes five clicks instead of four (because a "verified" box has to be ticked for Meaningful Use). Also, money is not spent on IT if it can be avoided or reduced. Need a new switch? "No, not Cisco, try one of these 'ZyXEL' switches I read about in SkyMall."
The perennial lure of USB as bait works too. The team dropped 18 sticks around hospitals loaded with malware that executed on nursing stations - terminals that are something of a gold mine for attackers because they retain harvestable credentials for nurses and physicians who log in.
And this is one of the really sad parts of this report.
The wherewithal to eliminate this attack vector has existed for years. It's been known, solved, and sold to the NHS in the past. But as one of the other commentards remarked, "Thou shalt not piss off the clinical staff" was a maxim the technical staff had to live by, day in and day out.
I wonder how big a data loss will have to happen before the balance is tipped in favour of security over convenience.
I don't think it will be the volume of the data loss but the type of data lost.
The trust I worked in had a system specifically set up to manage 'Sexual Health' cases, and yes, that was one of the systems I found logged into on an unlocked PC.
All it would take is the loss of, say, the 'Sexual Health' records of an MP, and I am sure security might be given a higher priority.
And yes, we did buy some software that would lock down USB ports, but we had to unlock them again as the doctors kept complaining that if they couldn't store records on their personal USB sticks then it would 'compromise patient safety'. Once again senior management, all ex-clinicians themselves, decided that the doctors knew best.
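Port-lockdown software of this sort typically works from a whitelist rather than an outright block, which would have let the trust issue approved encrypted sticks instead of unlocking everything. A rough sketch of the policy check; the vendor/product IDs below are made up for illustration:

```python
# Whitelist of (vendor_id, product_id) pairs for approved USB storage.
# These IDs are hypothetical; a real list would come from the trust's
# asset register of issued, encrypted sticks.
APPROVED_DEVICES = {
    ("0951", "1666"),
    ("13fe", "4200"),
}


def is_allowed(vendor_id: str, product_id: str) -> bool:
    """Permit only explicitly approved USB storage devices."""
    return (vendor_id.lower(), product_id.lower()) in APPROVED_DEVICES
```

Personal sticks simply never make the list, while the trust-issued ones keep working, so the 'patient safety' objection never arises.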
I'm just glad I got out when I did; I really don't want to be involved when the inevitable happens.
"Once again senior management, all ex-clinicians themselves, decided that the doctors knew best."
This reads to me as
"Once again I failed to properly explain to management why security is important, and because of my complete lack of communication skills they overruled me thinking I was just some ranting IT nerd."
These people are not idiots; they are highly educated professionals. Given a reasonable argument, sufficient information and, more importantly, a workable solution more detailed than "must block ports", it is unlikely they would refuse to make the necessary changes.
Yes, I have worked within the NHS in several trusts and two countries. Not once have I been ignored or overruled.
I would suggest the logging-off issue could have been solved with access cards, which instantly restore the holder's session on any terminal without cumbersome password authentication every time. Thin-client computing also removes the requirement for USB drives entirely, since the session, and therefore the data, follows the user. This would also allow home access, giving the doctors a better work/life balance where appropriate.
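The tap-in/tap-out model amounts to keeping one live session per user and reattaching it wherever the card is presented. A toy Python sketch of that bookkeeping, with invented names purely for illustration:

```python
# card_id -> the terminal currently showing that user's session.
# A real deployment would hold this state on the thin-client broker,
# not in a script; this just illustrates the roaming logic.
sessions: dict[str, str] = {}


def tap(card_id: str, terminal: str) -> str:
    """Handle a card tap: move the user's one live session to this terminal."""
    previous = sessions.get(card_id)
    sessions[card_id] = terminal  # detach from the old terminal, attach here
    if previous and previous != terminal:
        return f"session moved from {previous} to {terminal}"
    return f"session opened on {terminal}"
```

The key property is that tapping in at a new terminal implicitly locks the old one, so nothing is ever left logged in behind the user.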
"Once again I failed to properly explain to management why security is important, and because of my complete lack of communication skills they overruled me thinking I was just some ranting IT nerd."
Actually, I spent quite a few hours in various meetings with them and representatives of the doctors, explaining in both simple and detailed terms the need for security, even bringing in somebody from another trust who had successfully implemented the sort of things I was suggesting.
They may not be 'idiots', and I never described them as such, but in the trust I was working for they did feel that their clinical concerns and convenience were of overriding importance.
I suggested using access cards, but that was rejected on cost grounds, as was thin computing. And the point about USB drives was that they wanted to be able to take the data off site, so even having data follow them around the hospital wasn't going to cut it.
The bottom line was that the doctors put their convenience above everything else and they were supported in this by the senior management. It isn't that they couldn't understand what was being explained to them; they just didn't want to.
Doctors also need to do low-security work alongside higher-risk work: reading and downloading scientific literature from libraries and paywalled websites where the institution has subscriptions, and writing and uploading information. My institution is blocking access to webmail, and there are strict rules against storing material on the hospital system, so how do we manage without external storage? And how do we secure our non-patient work with backups? Providing air-gapped systems with the necessary privileges would cost a lot less than all the new leadership, but it would give power to, and save time for, the doctors, which does not have priority in the system.
IT personnel and doctors should really be in alliance. In my experience too many of the IT people have allied themselves with the managers. We need more resources, not more restrictions.
(anon for a reason)
"My institution is blocking access to webmail, and there are strict rules against storing material on the hospital system"
Two networks: a secure one and one with internet access. I've worked on such a site. It wasn't medical, but the production network had data that had to be secured and the office network didn't.
"The bottom line was that the doctors put their convenience above everything else and they were supported in this by the senior management."
And what about the letter that you asked them to sign? The one that started "I confirm that I have been warned that..." and ended "I have evaluated the risks and have taken the deliberate decision not to act. I accept complete and personal responsibility for the consequences of any IT security breaches including, but not confined to, temporary or permanent damage to equipment, breaches of the Data Protection Acts and patient privacy, and injury to or loss of life of patients. I agree that this letter may be produced in evidence to any subsequent inquiry, including Coroners' courts." The one you offered along with the draft of an affidavit that you'd made an appointment with your solicitor to swear out if they didn't sign.
In at least one place where I've worked, behaviour like that would be classed as "challenging" (at best), and lead to a place in the "further salary not required" queue.
Even if it was entirely justified and correct, pointing out the risks or errors of management decisions is often not a career-enhancing move.
"Those of us able to properly communicate with management are quite happy to disagree with them and get our point across without worrying about future prospects."
Hmm. Tell that to the engineers at NASA whose concerns were overruled, which led to the loss of the Shuttle and the deaths of all on board.
Actually, in this week of all weeks, why not tell it to Savile's victims. People at the BBC knew what he was up to. Brave people raised the topic with management. They were ignored. And now, decades later, the truth starts to come out into the daylight. Truth which includes management's suppression of the inconvenient facts.
It's not just a public-sector thing either. I've seen valid concerns (not mine) ignored in widely used safety-critical systems (hello John, got a Toyota?).
A widespread cult of managerialism. Management by spreadsheet. Not universal (I'm pleased to hear your employers haven't caught it yet), but there's a helluva lot of it about. It's dangerous. It ought to stop. But will it?
At the moment, a colleague and I are trying to get the management of a large medical school* to realise that students using their personal Gmail/Hotmail etc accounts to send case studies is not secure, that this should be made clear to them, and that it should be backed by disciplinary action. We are both excellent communicators, but because a) we are not doctors and b) we actually know something they don't and have had the temerity to point it out, we are getting nowhere. I have tackled this from all angles: benefits to students, benefits to patients, the picture of what is going to happen if one of these things leaks. But no; it is as if we were pointing out that some of the doors don't shut quietly.
Communication only works if the other person wants to hear, regardless of the skill of the speaker.
Oh, and yes - we would be at the head of the next redundancy list if we went the "sign this" route.
*An organisation where lots of doctors who fancy taking time off from patients fail to effectively teach new doctors, because they all think being a doctor means they are perfect teachers too!
Try quoting the law to them. Even to me, your argument sounds optional. There is no "benefit" here; it's simply compliance with regulatory requirements. If you fully understood the reasons they shouldn't use Hotmail to send personal data, you'd be able to explain them onwards and get your point across, but it seems you just think it's obvious that it's better not to. In business, this attitude is ignored because you're asking for costs without showing business benefit. Cost for compliance with the law is always understood. Cost (even loss of time) because you *think* it would be more secure will always be ignored, and for good reason.
Know-it-all doctors? The nerve! That's our* gig!
There is a little story I have told each and every doctor who has laid hands on me over the last 30 years or so:
On the first day at university there was an introductory event for us newbies. One of the profs said something along the lines of: "Ladies and gentlemen, you will be engineers. That means that at least some of you will work in jobs where mistakes can kill actual people. One of the things you can do to avoid this is to make sure anything you write down is clear and legible."
Usual response from doctors: embarrassed/sheepish grin.
*engineers (real ones)
From the article, quoting the research paper:
"This attack would have been possible against all medical devices … likely [...]"
The paper in question actually says "possible against all medical devices [identified in the previous step of our tests]" (p 36, paraphrased very slightly). Which may or may not have the same meaning as the extract.
Readers might want to read the whole paper rather than a selection of extracts which omit important words. I happily admit that I couldn't think of an easy and better way of summarising that chunk of the text, but then no one's paying me a journalist's wage.
I don't think that health organizations (or almost any company) will ever take security seriously until the manglement (and in this case, the doctors too) get hit, and hit hard, by the miscreants. As long as it's just patient or customer info, no worries for them. The minute they get up some morning and find that the company has been attacked and their bank account cleaned out, then maybe... just maybe security will be taken to heart. The sad thing is, not many companies have taken to heart the attacks that have already happened.
"Security design issues are those decisions made by the organization which, if followed, should best protect the organization’s interests .. Arguably the most detrimental issue with hospital security is the lack of funding available to both design and implement good security."
Have they tried not running their critical medical equipment on Microsoft Windows? Or how about running their critical medical software on embedded hardware that cannot be altered without the presence of a security dongle and a maintenance techie entering a security code? Both the dongle and the techie would need to be present to upgrade the equipment.
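The upgrade gate described here is just two-factor logic: both the physical dongle and the techie's code have to check out before any alteration is allowed. A minimal Python sketch, with the function name and code format invented for illustration:

```python
import hmac


def may_update_firmware(dongle_present: bool,
                        entered_code: str,
                        expected_code: str) -> bool:
    """Require both factors: the physical dongle AND the technician's code.

    hmac.compare_digest gives a constant-time string comparison, so the
    check doesn't leak how much of the code was correct.
    """
    code_ok = hmac.compare_digest(entered_code, expected_code)
    return dongle_present and code_ok
```

Either factor alone fails: a stolen code is useless without the dongle, and a stolen dongle is useless without the code.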
Our client BIO-Key International (BKYI) is a developer of software and hardware fingerprint biometric solutions that address security issues such as this. I am posting below the gist of what our head of sales said about how BKYI could, in concept, have helped prevent these attacks (given the complexity of the situation and the limited detail provided, it's always hard to know exactly the situation and what could have prevented it).
“First, these malware hacks intercept keystrokes, enabling them to steal typed credentials like usernames and passwords. Our patented WEB-key technology can be implemented to secure the biometric authentication process away from any software on the client or even on the server. That would significantly raise the bar in preventing a hack like this.
The Internet of Things (IoT) element comes in as a software opportunity for BIO-key once the accessing PCs have fingerprint scanners. In this scenario, the devices and systems hacked were likely allowing remote admin or user login via web browsers, like your home cable modem or router, using usernames and passwords; so once malware was on a shared workstation or Computer on Wheels (CoW) access point, all credentials typed to sign into other systems from that access point could be stolen.
If those other systems used our WEB-key technology to authenticate (specifically, if they required you to log in against a central WEB-key service at the hospital), users would have to use a local fingerprint scanner to sign in remotely, that credential would be checked against a central database using WEB-key's triple-encrypted transmission technology, and malware would not be able to access those credentials.
Note that Aesynt (spun off by McKesson and now part of OmniCell), which makes medical drug dispensary systems, secures them using BIO-key technology for this very reason.
While it has taken decades, the world is awakening to the inherent weakness in the current mainstays of security: they cannot positively confirm that the person using a security credential (a password, a security ID card or a token), in person or in a digital environment, is actually the person that credential asserts them to be.
Watching people pass through security gates into office buildings the other day, it became clear that despite this attempt at security, all you really needed to get into the building was an ID card that would let you through the turnstile. Neither the guards nor the turnstile could ensure that the person being granted access was the actual person who was intended to receive it.
In the online realm, there are no security guards and all of the ID methods except for biometrics can be used by someone else if they gain possession of the password or token that is required.
Biometrics are the only way to positively identify the individual in question, and they require access to the individual's fingerprint, which is much harder to obtain from a distant land than other methods of ID. These powerful differences are starting to be recognized as playing a critical role in delivering an improved method of security.
Now, while there have been a few reported hacks on biometrics, where a reader has supposedly been beaten by an effort to recreate a fingerprint using gelatin or some other substance, such examples are quite rare and very hard to replicate.
And even if this is possible for some, it represents a much more challenging effort that would substantially reduce the potential for a breach compared with the failed systems currently in use. Let's face it: security is never 100% foolproof, so the goal is to deploy the methods that have proven most effective while also considering the user-friendliness and efficiency of any security approach. On this front, fingerprints become even more compelling.
Lastly, to reduce the potential for spoofing a fingerprint reader, these readers are becoming increasingly sophisticated, relying on larger, higher-resolution scans along with a growing base of "liveness" detection that assesses the fingerprint for the qualities of a live finger, which just cannot be replicated in Jell-O! And because fingerprints are unique to each individual, they can also solve a problem not currently addressed by other methods: fraud prevention and protection.
If you utilize fingerprints for authentication, you can also cross-reference them to see if a set of prints is already being used by another account or identity. This will ensure that the benefits you deserve to receive are made available to just you, and with the potential to save billions in fraud, this capability will shut down those who use multiple identities to scam business or government subsidies, etc.
The stakes are very high, and we are putting more and more at stake in our electronic world. This escalation of risk must drive the adoption of better methods of security. We feel biometrics are the obvious choice, and fingerprints are the most widely deployed, tested and cost-effective means to accomplish this goal.
One of the things management seems to get stuck on is the inconvenience of security. They also lose sight of the liability they are accruing by not having security in place, and of an added cost they may want to avoid. This is one of the reasons the U.S. has HIPAA: an attempt to force a minimum level of security on medical care providers. The best approach I've seen for convincing reluctant managers to implement security is to point out the liability they are creating by not having it. If you can demonstrate how much disruption an incident would cause (potentially the entire shutdown of hospital IT systems in the event of a crypto virus being installed), they start to push for the needed security. Sometimes it takes articles like this to make them aware of how vulnerable they are. Sometimes it takes a real-life incident on their watch. But make sure you're covered by keeping paper copies of all related communications, maybe with your lawyer included in the loop, because the first response to an incident will be to blame IT for not preventing it.
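One concrete way to make that liability argument is simple expected-loss arithmetic: if the yearly chance of an incident times its cost exceeds the price of the control, the control pays for itself. A back-of-envelope sketch, with all figures hypothetical:

```python
def expected_annual_loss(incident_probability: float, impact_cost: float) -> float:
    """Annualised loss expectancy: yearly chance of an incident times its cost."""
    return incident_probability * impact_cost


def control_pays_off(incident_probability: float,
                     impact_cost: float,
                     control_cost: float) -> bool:
    """True when the security control costs less than the loss it averts."""
    return control_cost < expected_annual_loss(incident_probability, impact_cost)


# Hypothetical numbers: a 20% yearly chance of a crypto-virus shutdown
# costing 1,000,000 weighed against a 50,000 security programme.
# control_pays_off(0.2, 1_000_000, 50_000) comes out True.
```

Put in those terms (their money, not your mouse clicks), the conversation with management tends to go rather differently.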