Fujitsu are either putting the wrong people in prison (Post Office) or letting them go (police)....
UK leader Boris Johnson has admitted he does not know how many live legal cases "will be frustrated" by the loss of 400,000+ records on the Police National Computer and cannot say when the data will be reinstated. In a testy Prime Minister's Questions, Johnson explained that 213,000 offense records, 175,000 arrest records, and …
The article suggests the problem has been there since November. I guess that means they'd have to restore to that point in time, replay the transactions up to now, and finally delete the records that shouldn't be there.
I guess that's why it's taking so long to sort.
This may not be feasible. If the script is designed to delete records the police have no legal right to hold then it should be a good-and-proper permanent deletion. Retaining things they have no right to hold - even in their transaction logs - should not be something they're doing. Same goes for backups.
Now in theory you might pre-emptively soft-delete things early, or pre-emptively age out records to archival storage so they can be retrieved if the proverbial hits the fan, but the fact this is a story indicates strongly the police are actively holding things for as long as they can and deleting them at the last moment. This would eliminate all the "normal" recovery procedures you could follow in a typical professional context. Once it's gone it really, really should be gone.
It was my impression that they would restore the database to its pre-erasure state and then run the transaction logs accumulated since then to update it with the new and revised records. At which point they can correct as many of the mis-coded records as possible, take a new backup, run the revised deletion script, and express horror at the records that are now being incorrectly removed.
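For what it's worth, that replay approach fits in a few lines. This is purely illustrative; the record ids, log format and function names here are invented, not anything from the actual PNC:

```python
# Illustrative point-in-time recovery sketch -- hypothetical data shapes,
# not the PNC's real system.
def recover(backup, log, erroneous_deletes):
    """Start from the pre-erasure backup, replay every logged change,
    but skip the deletions issued by the faulty script."""
    db = dict(backup)
    for entry in log:
        op, rid = entry["op"], entry["id"]
        if op in ("insert", "update"):
            db[rid] = entry["record"]
        elif op == "delete" and rid not in erroneous_deletes:
            db.pop(rid, None)  # legitimate deletion: re-apply it
    return db
```

The hard part, of course, is working out which deletions were erroneous in the first place, which is presumably why it's taking so long.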
What you say they should do "in theory" is what they really should do. It wouldn't take much of a law change, either: if something must be deleted after three years, make it legal to keep it in backups for 14 more days, and illegal to restore that data unless the original deletion was made in error. So if someone is suspected of a crime three years and a day after his fingerprints were taken, it would be illegal to restore the data.
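That proposed rule is simple enough to write down. A sketch with invented names; the three-year and 14-day figures are just the ones from the comment above:

```python
# Hypothetical retention rule: records may linger in backups for a short
# grace period after their legal retention expires, and restoring them is
# only lawful to undo a mistaken deletion. Periods are illustrative.
from datetime import date, timedelta

RETENTION = timedelta(days=3 * 365)   # e.g. fingerprints: three years
BACKUP_GRACE = timedelta(days=14)     # extra window in backups only

def may_restore(taken_on: date, today: date, deleted_in_error: bool) -> bool:
    expiry = taken_on + RETENTION
    if today <= expiry:
        return True                   # still within lawful retention
    if today <= expiry + BACKUP_GRACE:
        return deleted_in_error       # only to undo a mistaken deletion
    return False                      # gone for good, by design
```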
"this is a story indicates strongly the police are actively holding things for as long as they can and deleting them at the last moment."
Are you sure they're even deleting them then?
I've seen a number of instances where UK police have claimed that items have been deleted or are no longer recoverable, which they've then happily brought to court when it suited them to do so.
Once such evidence is introduced, and it emerges that the police had previously declined to provide it to the lawyers involved, claiming the material was deleted, judges can become rather testy. Withholding evidence is a serious matter.
Private Eye saw this coming. When they reported last year on the scandalous prosecution of sub-postmasters due to Fujitsu cock-ups, they remarked that it didn't bode well for the police database.
Remember: Fujitsu did not disclose pertinent information, and so innocent people went to jail, had their businesses and reputations wrecked, and in some cases were driven to suicide.
"None of this will be any relief to Johnson, who seemed pleased when, during PMQs, the opposition leader changed the subject to COVID-19 – indeed, he seemed to criticise Starmer for focusing on the PNC while the pandemic raged.
Considering the UK's coronavirus death rate (deaths per million), confirmed on Monday as the highest in the world, and the British government's neck-wrenching U-turns on lockdown rules, Johnson's preference to talk about COVID rather than the PNC might offer a sense of how likely the Home Office is to find the missing records."
As IT practitioners we all know that sometimes shit like the records being deleted happens. If robust procedures are in place then when things go wrong, they can be recovered.
The people who run the PNC have had the instructions and budgets from numerous governments going back to 1974 (https://en.wikipedia.org/wiki/Police_National_Computer#History) to do it properly, with no mistakes like this, or at least the ability to recover from issues like this.
The fact that the issue had to be reported up the chain of command, and the PM had to speak about it, shows just how seriously wrong this record deletion has gone and how hard that data is proving to recover; otherwise we would never have heard about it. There is a very strong likelihood that a lot of that data is unrecoverable; otherwise, again, we would not have heard about it.
There is no individual to blame, but Fujitsu has to bear much of the responsibility, as they are the ones instructed by the NPIA (or whatever they are called now) to run it and do the work on it.
There is no individual to blame, but Fujitsu has to bear much of the responsibility, as they are the ones instructed by the NPIA (or whatever they are called now) to run it and do the work on it...... Anonymous Coward
Oh please, you cannot be serious? Is corporate machinery now to be highlighted in the frame and to be held responsible for all human fcuk ups? How very convenient.
I'm absolutely certain that won't be horrendously abused and misused.
I can just hear the bleatings from the dock now ..... "It was the machine which did it, m'lud."
There is no individual to blame
That's even more stupid than anything Trump ever said!
Of course individuals are to blame:
1) whoever authorised the deletion of the data (or let it happen)
2) whoever didn't keep proper backups
3) whoever is in charge of IT operations at the PNC
4) whoever is in charge of the PNC
5) whoever signs the cheques to Fujitsu
6) whoever issued the contract to Fujitsu to run the PNC
7) the politicians and officials at the Home Office who oversee this omnishambles
The politicians and officials aren't to blame for the actual data deletion, of course. They are, however, to blame for the circumstances (outsourcing, crappy compliance, unclear accountability, etc.) which caused it.
The missing data didn't spontaneously disappear. Somebody made that happen. Others were/are in charge of the processes and procedures which caused it. Those processes and procedures didn't materialise out of nowhere. Somebody created them.
On the covid thing....
BoJo doesn't run the NHS, and hasn't for the last umpteen years decided where the money is spent.
More worrying than the number of dead is the fact that 2.7% of the NHS's cases have died; this is the 5th highest rate in the world. Now you can say the NHS is underfunded, but it is NOT underfunded compared to Colombia, Argentina, or Brazil, all of whom are curing more patients!
I brought this up way back in the first months of this over-exaggerated crisis and still no one has answered why the NHS is so clearly unable to learn from countries who are curing more patients. It surely can't be that damned hard to find out what doctors in Germany, France, Poland, and Spain are doing that makes them so much better at curing people than our doctors. It's what I would expect any professional to do. Yet instead we hear about returning doctors needing to go on training on 'how to treat ethnic minorities' (and, I would guess, all those of relatively rare sexual persuasions).
The first tranche of deaths included a lot of people "who would have been dead within 12 months anyway", and in the long-term figures ("deaths in excess of 5-10 year averages") this resolves itself.
HOWEVER, there are a lot of people who do not fit "dead within 12 months anyway" who are no longer with us, and my local ICU is currently stuffed to the gills with people under the age of 40.
We're doubling down on this by insisting on vaccinating "vulnerable people" without paying any heed to nailing down the vectors (schoolchildren and key workers(*)) that keep passing this disease around faster than we can vaccinate the people most likely to die of it.
(*) These people might not be particularly likely to die of covid, but the people they pass their infections onto ARE. Breaking the transmission chain is crucial and it makes sense to nail down your vectors rather than entirely concentrating on the people they might infect
I brought this up way back in the first months of this over-exaggerated crisis and still no one has answered why the NHS is so clearly unable to learn from countries who are curing more patients. It surely can't be that damned hard to find out what doctors in Germany, France, Poland, and Spain are doing that makes them so much better at curing people than our doctors.
It's not an over-exaggerated crisis. Ask the relatives of the 100,000 or so who have died of COVID-19 in the UK. Or the 400,000+ who have died in the USA. Or.....
The reason the NHS is unable to learn from other countries is that it's being led by incompetent and quite probably criminally negligent fuckwits. Do you remember that Boris took NO precautions when he visited COVID-19 patients just before he got infected?
Our politicians keep changing policy, almost on a daily basis. They're throwing money at their cronies to spunk up against the wall. Anyone remember the promises of "world beating" test and trace (brought to you by Typhoid Dildo and Deloitte's consultants at zillions/day)? How about ordering planeloads of useless PPE? Or wasting millions on the unused and largely unstaffed Nightingale hospitals? Or our political overlords facing no consequences at all for ignoring the lockdown rules, like going for eye tests in Barnard Castle?
The doctors in those other countries are probably no better or worse than ours at curing COVID-19 patients. They are, however, better resourced and co-ordinated. Germany, for instance, has pro-rata more ICU beds and nurses than we do. Testing and tracing there is done bottom-up rather than top-down, which means it's done by people who know what they are doing, i.e. not Dildo Harding. Lots of other countries have had far clearer messaging on the public health aspects too, so proportionately fewer people get infected or spread the virus. Which means their health professionals get more time than ours do to care for patients.
'but Fujitsu has to bear much of the responsibility as they are the ones instructed by NPIA'
Struggling to see how you reached this conclusion as there is no mention of it in the article. It is stated that Fujitsu provide the hardware (and presumably the OS) but aside from that....? If you are referencing some other source, feel free to share it.
I'm all for giving negligent or poor service providers a good shoeing when need be but it helps to get the facts straight before the blamestorming starts.
Police forces are paying the Home Office for centrally managed IT systems, such as the Police National Computer, for which they are bearing a recent increase of just under 10%, which is scandalous. Based on this event, rightly described by the PM as "outrageous", forces, as they try to keep the public safe, should be receiving a reduced cost to reflect the extra work they will have to do to try to mitigate the effects of this disastrous deletion of records from the PNC.
Depending on how you look at it, the story keeps getting better... I think when the facts are fully known it will be worse. I think in the end this will be surrounded with that tacky yellow and black police-line ribbon that says "crime scene, do not enter", and nobody will enter; it will be swept under the rug and forgotten.
'"The software which triggers these automatic deletions contained defective coding and inadvertently deleted records that it should not have, and indeed had not deleted some records which should have been deleted"'
Once again (oh dear, I'll have to get a rubber stamp made) the software didn't work properly because, clearly, the developers didn't fully understand the problem they were tasked with solving: the requirement to automate deletion of the correct records.
Until software development becomes an engineering discipline, with formal standards and recognised practices that ensure conformity with those standards, no application will be trustworthy. But that requirement extends way beyond 'coding'. It encompasses concept, where what is required is thrashed out and verified; design, where how it will be implemented is formulated and verified; and implementation, where the presentation is defined and verified, before any coding takes place. At every stage, conformity with standards that have themselves been verified as sufficient must be a baseline for proceeding to the next stage.
Oh dear, this sounds like 'waterfall' doesn't it. But leaping from loosely expressed concept to coding as is now commonly the case ensures an application will be riddled with flaws (and not just security bugs). Post-coding testing is necessary but can never be a substitute for controlled development.
It doesn't matter if the code is verified if, despite all the thrashings, the standard it's verified against misunderstands the problem or is incorrect. An oversight or omission in the specification means the verification won't be worth the bold font it's printed in.
Come on, we've all been round the block enough to see code perfectly implement a spec that contains a flaw no one has recognised. It happens even in standards that have been thrashed out by field leaders in international standards bodies.
And, also, what you're asking for requires management to respect IT and give them the time.
@Brewster's Angle Grinder
"It doesn't matter if the code is verified if, despite all the thrashings, the standard it's verified against misunderstands the problem or is incorrect."
That's why I said "At every stage, conformity with standards that have been verified themselves as sufficient must be a baseline for proceeding to the next stage."
Back in the old days programmers understood databases and file formats at a byte-by-byte level. Code was written to read and write the data at a basic file level; that's the way that FORTRAN, C, Pascal, etc. work. I work with a public domain file format that has been used for 30 years, and these days I keep running into people who think that if they can read the file by writing an algorithm in some modern app language, then they can easily write the file with a function and update the format themselves, adding blocks that change the file at a basic level with no thought for the consequences. So when a user opens the "updated" file they start losing data.
Still doesn't answer the points.
First off, the standards have flaws and are usually riddled with must, should, could... so which "shoulds" get done? Which "coulds"?
Second, no one creates an application purely to a standard, verified or not. These guys were creating an application to delete records, according to some given criteria, from a police database. There is no standard in existence for that.
For us to fully understand where the fault actually lies we need to understand (and we will never be told) what was deleted that shouldn't have been. For example, if there were 70,000 records and everything above 65,535 was deleted by accident, we can start to guess at an out-of-range error; if the first 4,500 or so were also deleted, we might hazard a wrap-round guess. If all records for people over 6'5" went, it might be an issue with a faulty requirement; if all those with birthdays prior to 1970, we might wonder about some date algorithm. Pretty much all of those would have been caught by some decent testing, of course, but the deletion of people over 6'5" might well have passed the tests because the requirement itself was garbage.
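The wrap-round guess is easy to illustrate. Toy code, nothing to do with the real PNC: if record ids pass through an unsigned 16-bit field anywhere in the pipeline, id 70,000 becomes indistinguishable from id 4,464, and a deletion aimed at one sweeps up the other:

```python
# Toy illustration of a 16-bit wrap-round bug -- hypothetical, not the PNC.
def as_uint16(record_id: int) -> int:
    """What storing an id in an unsigned 16-bit field silently does."""
    return record_id & 0xFFFF  # e.g. 70000 -> 4464

def ids_hit_by_deletion(flagged_ids, all_ids):
    """Delete by truncated id: every record whose 16-bit id collides
    with a flagged record's 16-bit id gets deleted as well."""
    truncated = {as_uint16(i) for i in flagged_ids}
    return sorted(i for i in all_ids if as_uint16(i) in truncated)
```

Flagging only record 70,000 for deletion would also take out record 4,464, which is exactly the kind of pattern testing with realistic id ranges would catch.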
@Dave 15 "First off the standards have flaws and are usually riddled with must, should, could... so what shoulds are done. what coulds?"
I didn't say ISO standards. I work on these, and while some are excellent, others leave quite a lot to be desired. They are all developed using a consensus on current practice, so the adequacy of current practice determines the validity of the standards.
What I did say was "standards that have been verified themselves as sufficient". Not necessarily international standards (though that would be nice); internal standards, if they're adequate, would do fine. However, in most commercial environments where I've been called in to evaluate development practices I've found little evidence of any formal standards in use; indeed, in many cases, not even recognition of OWASP.
BTW, it's perfectly possible to modularise development so it's agile while still ensuring staged verification of output. Conceptually, agile is not antithetical to established engineering practice; it simply doesn't often make use of it at present. One possibility might be to make each sprint a little waterfall.
> That's why I said "At every stage, conformity with standards that have been verified themselves as sufficient must be a baseline for proceeding to the next stage."
I wrote an essay in the 1990s which pointed out that you could easily write an ISO 9000-standardised way of flying a 747 into Angel Falls or the World Trade Centre and have it signed off, but having something "standards approved" didn't mean it was a sensible thing to do or that anyone would in fact have done it.
After a certain day in 2001, most copies of that essay disappeared from the net as being in "bad taste"
Spare me. The issue is most likely to have been a faulty specification of which records should be deleted. Yes, software engineers don't always follow the right processes (e.g. test-driven development, automated code-review tools, coding in a language that's understandable, such as C, and not some woolly-headed pile of confusion like C++, or doing a run-through on a copy first and checking the results instead of leaping in on an irreparable original), but it doesn't mean all of us are shonky.
As an aside, I wonder who wrote this deletion code... it wouldn't, just as an example, have been outsourced to our old friends in France at 1500 sobs a man-day, to be parcelled out to India to an engineer on 1500 a month if they're lucky, who was probably working 12 or 18 hours a day, spending 4 or more hours in a traffic jam (been to Bangalore???), and couldn't hear what was being said on the crappy phone line (but just says yes to anything, because they are never known to say no, even when they don't understand, haven't tested, and didn't get a clear picture of what they needed to do, but did have their boss over their shoulder gagging to get his bonus)?
And as for the rant about the wonders of waterfall: remember very well that this has been tried for decades, and what we end up with is late, not-fit-for-any-purpose crap instead of something the user wants.
It is NOT the agile principle of small increments to deliver what the customer requires that's at fault, but the failure to do things like test-driven development. To work out what to test you have to first understand the requirement. To make the code pass the test it has to satisfy the requirement. How many waterfall projects do you know where the requirements were changed (with a complex change-control process... which at some point involved asking an overworked engineer who gave an off-the-cuff response because he was already late) and dev ended up eating into the test time because the developers hadn't understood the reams of requirement docs spread out all over heaven only knew where? Because EVERY waterfall project I ever saw was exactly that mess. And NO, engineering teams in other disciplines do NOT do better. Look at the recalls on cars; look at the wobbly suspension bridge they put up in London; look at Brunel trying to launch the Great Eastern; look at the beloved Morris Minor and then try to change the brake cylinder; or better still, try to change the headlamp bulbs in any of the cars you can see sitting shiny and new in a car showroom.
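To make the test-first point concrete, here's a sketch against a made-up requirement ("delete arrest records older than three years unless linked to a live case"). The requirement, the function name and the figures are all invented for illustration; the point is that the assertions get written first and double as the specification:

```python
# Hypothetical requirement, test-first: the assertions below are written
# before the implementation and pin the requirement down.
def should_delete(age_days: int, live_case: bool) -> bool:
    """Delete records older than three years, unless a case is live."""
    return age_days > 3 * 365 and not live_case

# The tests ARE the requirement:
assert should_delete(1200, live_case=False) is True
assert should_delete(1200, live_case=True) is False   # live case: keep
assert should_delete(100, live_case=False) is False   # too young: keep
```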
Mind you, I would love to know who DIDN'T test the system settings program on Ubuntu. If you are going to let me have a screen size of 640x480 (which is still useful, especially to us old farts who need big letters), then why for fuck's sake didn't anyone actually set the screen to that size and realise that when you want to use the 'Apply' button, it's bloody helpful if it is actually available on the screen! I mean, how long has this bug been around?
That would imply the Titanic wasn't made to correct standards... or that the bouncing Thames footbridge wasn't made to spec, or that the Potters Bar rail disaster was because of rules not being followed.
Sometimes things are just an unknown, unknown. Sometimes testing doesn't cover enough edge cases.
The sign of well designed and managed system is the ability to recover from disaster however it occurs (because it can and will happen outside of the imagination/budget/timescale of man).
" the developers didn't fully understand the problem they were tasked with solving - the requirement to automate deletion of the correct records"
Yup, and you can pretty firmly drop this in the lap of management who didn't brief them properly
I just spent 18 hours chasing and fixing a non-existent fault in a piece of software which was entirely unrelated to the issue the user was originally complaining about, thanks to "creative interpretation" resulting in what was reported to me being entirely different to what the user actually said and described
What the user described was readily replicated and resulted in a one line fix in a configuration file, vs a very long and twisty rabbithole of false clues and red herrings
I had a case once of a client providing a set of tests to perform before accepting delivery of a CD full of scanned books.
The scanned pages needed to comply with rules about size, document information, and so on.
The interesting part was these 2 rules:
a- if a book is OK, it must be accepted into the production system (meaning that the CD is added to a jukebox-like system)
b- if a CD contains a book that has been rejected, the full CD is rejected
Of course these 2 rules were not in the same part of the specifications document.
Being old school, I read all the specs before sending them to my team of developers.
When I reached the end, something was nagging me so I read everything again before spotting the discrepancy.
It took 2 weeks for the customer to decide what to do in such a case...
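The clash is easy to show once the two rules are put side by side (hypothetical code; the book and CD structures are invented):

```python
# Hypothetical sketch of the two conflicting acceptance rules.
def accepted_books(cd):
    """Rule (b): any rejected book rejects the whole CD.
    Rule (a): every OK book must be accepted into production."""
    ok = [book for book in cd if book["ok"]]
    if len(ok) < len(cd):
        return []   # rule (b) wins: the full CD is rejected
    return ok       # rule (a): all OK books accepted

# A CD mixing one OK and one rejected book satisfies neither reading:
mixed = [{"title": "A", "ok": True}, {"title": "B", "ok": False}]
```

For `mixed`, rule (b) says nothing is accepted, while rule (a) insists book "A" must go into production; no implementation can do both, hence the two weeks the customer needed to decide.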
We all know you take the money for being in charge; you take the praise and bonuses when it goes well, and when it breaks it's always someone else's fault... just ask the guys at VW about the diesel scandal that wasn't their fault, or those in charge in our social services who weren't to blame when Baby Peter (or whoever) was killed when they should have been in care.... they NEVER take the blame, not ever.
How tedious. Blaming the PM for this is, well, so stupid I can't even think of an appropriate metaphor ..... Disgusted Of Tunbridge Wells
Where does the buck then rightly stop, Disgusted Of Tunbridge Wells? And then what will be done to the unfortunate soul[s]?
Hung, drawn and quartered? An immediate sacking? Or nothing really of any significant consequence?
What will be tedious is to hear a Priti Patel-like drone trot out that old faithful ....... "Lessons will be learned" ..... whenever it is bleeding obvious that they never ever are ..... for there is always a constant new stream of inexperienced hopefuls doing tasks they are never fully prepared for and, in all too many cases, totally unsuited and booted for.
'Tis just the crazy way things always are and/or have been with humans who think they be in charge with the pulling and pushing of levers/pimping and pumping and dumping of information with commands and controls to follow with media presentations.
* I suppose that is still too alien a notion for mainstreaming just yet, but there's nothing stopping it anywhere/everywhere else though.