Oh dear...
Epic FAIL.
Perhaps what they need is some more money... yes, give them more taxpayers' money...
'Piss up' and 'brewery' spring to mind.
Paris, cos even she could arrange the aforementioned drinks.
The database that stores vital medical information on millions of NHS patients crashed last week. Outsourcing giant CSC, which won a £973m contract to run part of the Care Records system in 2003 and has picked up more contracts since, was forced to invoke disaster recovery when hospitals and local surgeries were unable to …
...yet there was no impact to patient care?
So CSC are saying that no GP surgeries in their territory needed Choose and Book, and no one needed to give patients the results of biopsies, blood tests, etc.?
Then again, it's no surprise. Only last summer at the InterSystems Symposium CSC were saying their system is 100% reliable, with so many backup systems that it would be impossible for it to be unavailable.
Yarr, pirate icon because you don't have a highwayman icon - because what CSC are charging for a glorified data centre is highway robbery.
That under the 'design' of the care records system, there would be no records on a hospital site. Nothing. All done in a remote data centre.
If you lost your network connection, then the hospital would have no access to records. Its own, or anyone else's.
And when there is any fault in the data centres, all sites are taken out, instead of just one.
And somewhere in those millions of records, some poor sod's record for back pain probably gets corrupted, and now he's down as having cancer or some other ailment.
I wonder if individual records have a CRC/Hash checksum against them?
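For what it's worth, a per-record hash is a few lines of Python. This is purely a sketch with made-up field names, nothing to do with how the real system stores its data:

# Purely illustrative: write a checksum alongside each record so silent
# corruption shows up the next time the record is read.
import hashlib
import json

def record_digest(record: dict) -> str:
    """Hash a canonical serialisation of the record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(record: dict, stored_digest: str) -> bool:
    """True if the record still matches the checksum written when it was saved."""
    return record_digest(record) == stored_digest

patient = {"nhs_number": "0000000000", "condition": "back pain"}   # made-up data
digest = record_digest(patient)

patient["condition"] = "cancer"        # simulated corruption
print(verify(patient, digest))         # False -- the corrupt record gets flagged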
I recently had cause to get a letter from the government telling me I had no NI contributions for 2008. Yet they had sent me a computer-generated P60 showing that I had.
And they want to give us computer-held IDs....
But this was not 'disaster recovery' - nobody said anything about fire/flood/dead servers or the like. The system was just unavailable for an extended period of time, which a hot standby would have coped with seamlessly. And even if it had been a disaster, the same rules apply. If this database is so critical - which it apparently is - the backup should kick in without any fuss. Sadly it would appear that no such backup is available. Fail on every level.
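For the avoidance of doubt, "kick in without any fuss" isn't exotic. From the client side it looks roughly like this (hostnames made up, and you'd obviously need replication behind the standby so it actually holds the data):

# Sketch of transparent failover from the application's point of view.
import socket

ENDPOINTS = [("records-primary.example", 5432),
             ("records-standby.example", 5432)]

def connect_with_failover(timeout: float = 3.0) -> socket.socket:
    """Try the primary first; if it's unreachable, use the hot standby."""
    last_error = None
    for host, port in ENDPOINTS:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err              # primary down, fall through to the standby
    raise ConnectionError("all endpoints down") from last_error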
Do you have reading problems?
"There was a temporary loss of services to a small number of Trusts within our region on 10th February 2009"
Loss of services means they were unable to access the patient data on the system in several Trusts - that's quite a lot of people whose records were unavailable when they attended an appointment.
How does that not jeopardise patient care?
Well, yes, that's how it's supposed to work. The shared database is a way for the different clinicians involved in the care of a particular patient to share data in electronic form, rather than via letter or fax. Today, when you leave hospital, your GP is sent a letter describing the outcome of your hospital stay (what was done, medication given, etc.). The shared database means that he can now receive that data electronically. This doesn't prevent the hospital system or the GP practice from keeping its own electronic record, as it has always done.
So, if the connection to the shared database goes down, patient care is not severely affected, it just means that some information generated by another system will only be available once the connection is back up. And if some communication needs to happen in the meantime, they can always revert to good old manual methods, such as picking up the phone.
It'd be nice if you stopped spreading FUD over that programme. I won't deny that it's grossly late and over-budget but it's also been designed so that if one component fails it has minimal impact on patient care. And obviously, the SLA around any system takes into account how critical that system is to said patient care.
I mean, at El Reg you do have email, don't you? If your email server fails and is down for some time, it's inconvenient, but it doesn't prevent you from doing your job, does it? That's an over-simplification, but you get the point.
AC for obvious reasons
I have a hot backup for our reports system at work, and they can't be bothered to have a hot backup for a system that provides half the country access to individuals' health records?
What if someone with an allergy to penicillin had gone in and, because the staff couldn't check their health records, been given penicillin?
No impact on patient care my arse!!!
There's no muppets icon, damn!
Quite.
What makes this even worse is that there are actually bits of the NHS that have a clue (or know who to call when they need a clue).
For example, if the systems in charge of the UK's national blood transfusion service were to fail for more than a few minutes, what would happen? What if they were unable to cope with the load at a time of particularly high demand? What would happen if three previously-independent regional databases had to be merged into one, with minimal disruption to service?
Give it some thought, and then read:
http://www.availabilitydigest.com/public_articles/0310/uknbs.pdf
If someone with an allergy to penicillin had gone in, they would probably know about it, either by asking the patient or by looking at their notes, which they still have and will for years to come (do you have even the faintest idea how much paper there is in the NHS?).
Alternatively being doctors with training and common sense and professionalism and all that lovely stuff, if there's a doubt they wouldn't take the risk, just like it's always been.
As a result of all this paper, and the fact that it often gets stored off site, for the foreseeable future the IT systems can go down for short periods of time with absolutely no impact on patient care, as all the relevant paperwork will have been collated days in advance.
Nice bit of self awareness re the muppet icon by the way :-)
No, no one is telling you there were no backups, because there were, and they kicked in, and the DC failed over, hence no interruptions.
Come on, El Reg, I knew you wouldn't want to post my previous comment, but this scaremongering is way, way below you.
I don't see the point in jumping on the public-purse commissions' 'good time' bandwagon by spinning every indifferent event with a negative slant; all they're doing is trying to justify their existence. Leave them to it, you don't have to join in.
I'd love to know your sources for this crap...
When I worked on the development of National Unemployment Benefit System 2 back in the early 90s (a project run and delivered by ITSA, the Information Technology Services Agency, which was a government department, not an external corporation), backup and redundancy were paramount. The entire system had 4 sites around the country for redundancy, and it would require all 4 of the sites to go down at once for the system to fail.
Sadly the site where I used to work is now occupied by EDS, and it seems redundancy has become a thing of the past for government IT systems. NUBS2 was by no means perfect and was replaced by Jobseeker's Allowance, but it seems to me that things back then (when they were run by civil servants) made a lot more sense from a development perspective than they do now.
"That NHS could break the most reliable system .. Why do I have the feeling that if if the NHS was running Unix they would be the first major organization to be wiped out by Unix virus ."
If? Cerner Millennium for the southern cluster, currently provided by Fujitsu (until someone else takes it on), runs on a UNIX OS. I assume BT, who are also using Cerner Millennium for the London cluster, use a UNIX OS.
Don't blame the NHS. At least not all by itself.
After all, this is all being outsourced to other companies, who should have the necessary knowledge to secure their servers effectively.
That said, yeah, if the government runs a system entirely on Unix it will only be a matter of time until they get screwed somehow.
I can't believe that CSC... no, I'll start again: I suppose I can believe that CSC rely on manual DNS changes for DR (there's a sketch of the automated alternative below). Accenture were by no means perfect, but once they handed over to CSC, all of the systems were "downgraded" as far as they could be, to start bringing in profit as soon as possible.
I'm afraid that NPfIT, or CfH, or whatever they call themselves this week, are complicit with CSC, as they do an appalling job of checking that delivery is to schedule. Oh, and by the way, security is one of the things cut to the bone; you might want to opt out of the PCR system now.
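For anyone wondering what replaces a manual DNS change, it's basically a health check plus an API call. Sketch only - every name here is hypothetical and update_dns() is a stub for whatever DNS provider is actually in use:

# Illustrative only: flip DNS to the standby when the primary stops answering.
import urllib.request

PRIMARY_HEALTH_URL = "https://records-primary.example/healthz"   # hypothetical
STANDBY_IP = "192.0.2.10"      # documentation-range address, not a real host

def primary_healthy(timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def update_dns(name: str, ip: str) -> None:
    """Stub: repoint `name` at `ip` via the DNS provider's API."""
    print(f"would repoint {name} -> {ip}")

if not primary_healthy():
    update_dns("records.example", STANDBY_IP)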
My recollection is that a second server, situated geographically x (quite a large number) kilometres away, was in the OBS upon which the contracts were based?
If so, does anyone know whether automatic roll-over to the second server was also part of the OBS?
I suspect the contracts (commercially confidential) do not hold any such requirement - seeing as this is not the first time CSC servers have failed without automatic transfer to the backup servers.
I take it that if there was no impact on patient care, the server which went down held no clinical or mission critical information and was not holding any GP or Community TPP SystmOne records? These are only held on central servers - and, to be fair, I believe TPP has had two servers for about 18 months now.
How will the situation change if Lorenzo is ever delivered and *all* medical records of all sorts are held off site on CSC servers?
Could I exercise my Choice - as a patient - to receive treatment at a site on a different system with, at the minimum, on-site backup?
(living in NME: maybe I should emigrate?)
I worked for an investment bank that employed redundancy, with one database server in the UK and the backup in the USA.
If one failed I could switch to the other in less than 10 minutes. Both databases were always up to date with the same set of records.
And they were holding 100 million records.
Rocket science it ain't.
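The switchover itself is basically a short script: wait for the standby to catch up, then repoint the application at it. Something like this, with both helpers left as stubs (this is obviously not the bank's actual code):

# Rough sketch of a scripted database failover.
import time

def standby_lag_seconds() -> float:
    """Stub: ask the standby how far it is behind the primary."""
    return 0.0

def repoint_application(dsn: str) -> None:
    """Stub: swap the connection string the application uses."""
    print(f"application now using {dsn}")

def fail_over(standby_dsn: str, max_lag: float = 1.0, deadline: float = 600.0) -> None:
    """Promote the standby well inside a ten-minute window."""
    start = time.monotonic()
    while standby_lag_seconds() > max_lag:
        if time.monotonic() - start > deadline:
            raise TimeoutError("standby never caught up")
        time.sleep(1)
    repoint_application(standby_dsn)

fail_over("db-standby.example")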