that's what happens
When you slash a workforce, heads should roll. But they won't; of course, "lessons will be learnt"!
Can customers charge 30 quid a day for this? Must be worth a shot
On the fourth day of an IT systems choke-up that has left customers unable to access money and in some cases unable to buy food or travel, NatWest and RBS – which both belong to the RBS group – still have no idea when the issues will be fixed. A spokesperson said the banking group had been working overnight to fix the problems …
> of course "lessons will be learnt"
Generally the lesson that is learnt is that the bank in question can futz around for this particular length of time without anything bad happening to its senior staff's employment prospects, the bank's long-term reputation, or its standing with shareholders.
No doubt when RBS carry out a post-mortem, they won't actually find the root cause of the problem (it's the network, stoopid!) but will blame some third party: outsourcer, software supplier or infrastructure provider. They will then issue a suitably smug yet contrite press release about how they've "taken steps to make sure this never happens again", award themselves large bonuses for the successful cost-savings, take the regulator out for a very good lunch and prepare their CVs to move on and stick it to the next financial institution on the list.
nice thumbs down from the PR troll at RBS
Perhaps if you useless cunts invested in making your systems work, instead of trolling forums looking for negative comments about the monumental goat fuck perpetrated by the bean-counting fuckwits at the top of the steaming heap that is your dying business, you might not be ankle-deep in shit whilst doing a headstand.
Hurry up and just die. You only got bailed out because Gordon Brown is a Jock, and we all know how good he was with the economy. He should have let your toxic, debt-ridden whore of a bank go bust at the time.
"The problem is that IT systems have become vastly more complex. Delivering an e-banking service could be reliant on 20 different IT systems. If even a small change is made to one of these systems, it can cause major problems for the whole banking service, which could be what's happened at NatWest. Finding the root cause of the problem is probably something NatWest is struggling with because of the complexity of the IT systems in any bank."
This is why out-sourcing IT is bad. You fire the permanent staff who knew all the quirks of the system and would have pinpointed the problem in no time at all.
And now the decision to fire IT staff has come back to bite them in the arse.
"This is why out-sourcing IT is bad. You fire the permanent staff who knew all the quirks of the system and would have pinpointed the problem in no time at all."
I'm sure that the project manager made absolutely sure that there was adequate time to complete all documentation, and that each quirk and bug was accurately recorded so that the next guy to come along would have a fighting chance.
Do I need to add /sarcasm to the end of that post? :)
Good documentation helps prevent future disasters, and speeds up the resolution of problems. Good documentation should save you time in the future. The problem is that few people seem to realise the importance of good documentation until they don't have any.
[quote]Good documentation helps prevent future disasters, and speeds up the resolution of problems. Good documentation should save you time in the future. The problem is that few people seem to realise the importance of good documentation until they don't have any.[/quote]
Haha.. 3..2..1 you're back in the room :)
Honestly, you have never worked in a real IT department then. People want to do the minimum in terms of effort and time, and documentation (updating it, managing it, reviewing it) will be the first thing that gets chopped and dropped when deadlines approach or the project finishes. IT Departments ignore the need for decent documentation or its control because it is expensive to manage.
You can probably rely on the original spec being right, but any subsequent updates will always be poorly documented. In fact a decent programmer using a decent language should produce decent self-documenting code anyway.
I worked in application support and trust me the last place I would look when fixing things was the documentation. Better to see what the source code is actually doing.
It has not always been like this. I've been working in Data Processing (remember that!) for over 30 years, and there was a time when good practice, BS5750 and its follow-ups like ISO9001 were actually valued. But this was back in the days when computers were expensive, and it was seen as worthwhile to invest in people and process to get the maximum value from your high outlay.
Of course, everybody bitched about having to write the documentation, but at least the management bought in to the overall need for it, and factored time into the project plans, because these standards said it had to be done. Sometimes the docs were junk, but often they contained useful information. And the more documentation you wrote, the better at writing it you became.
Nowadays it's all about trimming the fat, over and over again, and if the managers complain, they get trimmed themselves and replaced by others who are happy to comply. This means that the barest minimum is done to get a service kicked over the wall to support, the support teams have no way of pushing back against a poor service, and then this happens.........
I'm now seen as a boring old fart, locked in the past, so I'll just go and get my Snorkel Parka and go.
I like writing code comments, ensuring system/unit test coverage and getting basic things like Interface Communication Documents put together.
Sure I will avoid writing a user guide, but the first thing I do when I join a project is find out about their build process. I then work to automate that process (projects rarely do until you point out the wasted time).
Then, once I have Jenkins/Cruise Control up and running, I add tools like Cobertura (unit coverage), JavaNCSS (code commenting), Checkstyle, JSLint, PMD, FXCop, StyleCop, Lint, etc... (build scripts tend to be written in ANT or Maven and I have done it so many times I can add this over a lunch).
Then all you have to do is show the system to your team/technical lead and wait. I've yet to see one who can resist all of those lovely build metrics. Once you have their buy-in, most of the rest of the documentation comes as a matter of course.
Good documentation frequently requires the benefit of hindsight. A person with experience of a system will often quickly work out what's gone wrong with it after the problem arises. His knowledge is implicit, not explicit, and couldn't have gone in the documentation.
Outsourcing, bad. Check. All other forms of de-skilling, likewise. Monkey see, monkey do, doesn't work for what the monkey's never seen before.
"Good documentation helps prevent future disasters, and speeds up the resolution of problems. Good documentation should save you time in the future. The problem is that few people seem to realise the importance of good documentation until they don't have any."
Completely true. Please can you be on my next programme board.
Unfortunately, when explaining to your project/programme sponsors, before the disaster, that you need an analysis team with the following skill sets (x..z) to go over the complex and byzantine structured delivery environment to assess the risks and any project "tuning" that needs to be done, you get shot at for delay and for bumping up the cost of the delivery. A decision based upon their profound understanding of IT and system development issues, as demonstrated by their ability to use Twitter!
After the disaster, the sponsors then all forget the warnings and advice you gave, which they over-ruled, and then blame the PM and his delivery team.
Yep...got the T-Shirt
Think you guys are thinking of an SDM - Service Delivery Manager. It's not the PM's issue: once the deliverables have been produced as the SDM required, and it's been agreed and signed off as part of going into live service, it's not the project team's responsibility any more.
Otherwise you would have projects that never end! Now if the SDM had no idea about the service they are responsible for delivering, and didn't think about the training required and protection of said service, well that's a different story.
Enough of the Alan Sugar mentality please.. thanks
Besides outsourcing, that other over-used IT fad of the moment, "agile development", could be in play here. Whilst it's a good technique for most greenfield sites and systems, I have come across it being used in complex multi-system environments, where it struck me as something similar to playing Russian roulette, especially when carried out by teams that did not have a deep knowledge of the overall delivery environment.
Sacking 1.8k permanent IT workers, and replacing them with 800 offshore workers was always going to end like this. Roughly 9 months after they did that, this happened.
I expect that until now, they have simply been managing the existing systems, and now they have put some changes live, utterly breaking everything, and no-one left has a clue how to fix it.
The golden rule: when you shed staff like this and cut wages, you lose the good people - who will always be able to find a job - and keep the dross.
Maybe the 800 Indians can't read German.....
Alles touristen und non-technischen looken peepers! Das computermachine ist nicht fuer gefingerpoken und mittengrabben. Ist easy schnappen der springenwerk, blowenfusen und poppencorken mit spitzensparken. Ist nicht fuer gewerken bei das dumpkopfen. Das rubbernecken sichtseeren keepen das cotten-pickenen hans in das pockets muss; relaxen und watchen das blinkenlights.
Also shows that documentation and config management have potentially been allowed to get out of date because of lack of staff. You can't just sweep in contractors and say "here, read this, sort sh*t out".
Hopefully the former permies are all being contacted and offered awesome rates to go and help out.
"The Queen has been unaffected by the glitches,.."
Phew, what a relief, haven't slept a wink worrying how she was going to manage.
As a Natwest customer I can say that, at the time of writing this, I've had money credited to my account but am unable to transfer/make payments online. Plus direct debits that should have gone out today, the 22nd, have not been paid.
I took 50 quid out of the cash machine yesterday from NatWest. My salary wasn't showing on the balance though. Oh, and they were unable to give me a mortgage quote or start an application, but weirdly were able to do their internal credit check and decide I was probably a better risk than the Greek government.
A customer who sent us confirmation that BACS payments had been made on Tuesday has still not had those payments arrive in our account from their Natwest account. It's ridiculous.
All the press is about people not receiving money into their accounts with Natwest, but there must be a huge number of people affected by payments not going out from Natwest accounts too.
I wonder if some of the poor staff they laid off a few months ago will come back in as consultants to fix the issue at something like £10000 a day rates :-)
Still really feel bad for the poor folks who are Natwest customers, I guess screw ups like this could happen to any bank too. :-s
Rob
They would be offering a free £100 1 month overdraft to anyone with a Natwest debit card who switches their account in the next week.
Although I suspect most CIOs are just thinking "there but for the grace of God go I". That's even if they have a CIO and he's not been gobbled up by the CFO's office.
lol - nearly fell off my chair laughing at this. Sooo true. Just the other day I was in a server room and the IT manager said, "need to shut the door quickly, the fans annoy the marketing girls".
Still, part of me thinks it's a nice deployment of the server room, so that the lonely IT crew get to see the other parts of the business.......
Don't joke about it. At the first company I worked for, the CEO did similar, as he felt the aircon cost too much money to run .....
Fortunately I don't have to go AC for this one as the company is long gone (it was bought out rather than killed by senior management stupidity - the CEO actually used to keep profits back (rather than awarding them to himself) so he could keep the company going without laying staff off during lean times).
They did the same at my college. The old Apple room was fully air-conned and security protected (smoke system, alarm etc). Ran like a dream, except that the admin tools crashed the PCs. :(
But then they decided to get a fancy PC room next door with 15+ top-of-the-range PCs. Oh, but they blew the budget on the PCs only. The room had one wall of full windows, facing the sun, and they barely opened. I think one or two PCs blew each week. But at least the replacement boards came in quick. :P
To me, there are two trains of thought that exist in management.
Train A goes something like: We employ 1,000 IT staff and, because of this, all our systems run smoothly.
Train B, however, goes: All our systems run smoothly, why do we need 1,000 IT staff?
Unfortunately, it's standing room only on Train B, whereas Train A has been cancelled due to lack of demand and a bus replacement service is now in operation.
Whilst an hour or two's outage would be dismissed by the courts as "reasonable" within the bank's T&Cs (which will try to remove any liability from the bank for any loss due to its actions), I suspect a 4-day outage would be considered unreasonable, and consequential losses *will* be awarded.
Just the thought of the hundreds of claims through the courts, and having to argue each one on merit should keep the lawyers on the toilets for the next decade.
Once the final liability has been costed, I wonder if IT staff will seem quite so expendable on a risk/return assessment. We need a bewigged icon with the caption "wait till the lawyers get there".
Suppose a business has been unable to access its account and pay a supplier, and as a result they go bankrupt?
Suppose, thanks to their incompetence, I am unable to pay for my car's service, and therefore can't get to a business meeting?
As I said, T&Cs can be used to protect you from consequential losses in reasonable circumstances. A 4-day, unplanned, unannounced outage is not reasonable in my book, nor, I suspect, would the High Court find it so either.
I am taking your comment at face value. I apologise if you were being ironic.
I worked for a building society on mortgage systems, and in a decade we had one serious outage, where the system went down for nearly a whole day because of a disk crash. Our overnight update was built up of a series of jobs for different purposes, each creating daily transaction records, with one program at the end that applied them all. The benefit was that if someone buggered up one of the overnight update programs, the rest would still get through, and the program that applied the transactions almost never changed.
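For anyone who hasn't seen that pattern, here's a minimal sketch of the shape of it - invented job names and figures, plain Java rather than the building society's actual batch language - showing why one buggered-up feeder doesn't take out the whole overnight run:
[code]
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the overnight design described above: several independent feeder jobs each
// produce their own transaction records, and one rarely-changed applier posts whatever
// completed successfully. Job names and figures are invented for illustration.
public class OvernightUpdateSketch {

    record Txn(String account, BigDecimal amount) {}

    // One feeder job: if it blows up, it simply contributes nothing tonight.
    static List<Txn> runFeeder(String jobName, boolean buggyTonight) {
        if (buggyTonight) {
            System.out.println(jobName + " abended - its transactions wait for tomorrow");
            return List.of();
        }
        return List.of(new Txn("CUST-001", new BigDecimal("100.00")));
    }

    public static void main(String[] args) {
        Map<String, BigDecimal> balances = new LinkedHashMap<>();
        balances.put("CUST-001", new BigDecimal("50.00"));

        List<Txn> tonight = new ArrayList<>();
        tonight.addAll(runFeeder("INTEREST_FEED", false));
        tonight.addAll(runFeeder("PAYMENTS_FEED", true));   // someone buggered this one up tonight
        tonight.addAll(runFeeder("CHARGES_FEED", false));

        // The applier almost never changes: it just posts whatever the feeders produced.
        for (Txn t : tonight) balances.merge(t.account(), t.amount(), BigDecimal::add);
        System.out.println("Posted balances: " + balances);
    }
}
[/code]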
A relative once had to pay a day's interest on the price of a house because the payments transfer system caused his completion to fail. The bank wriggled out of its liability citing T&Cs. The glitch was only a few tens of minutes, late in the banking day. Mercifully the person whose house he was buying did let him move in, even though he hadn't yet paid for it.
But I doubt that the bank could escape liability for *several* days' failure and interest. If they try, the lawyers sure as hell *will* be involved. Wonder if it affects hundred-M completions as well as hundred-k completions?
They won't start. BPO and ITO vendors write and sign contracts every week. Buyers sign such contracts once in a flood. As a result the vendors run rings round the buyers, and there's no chance that the bunglers of RBS would be able to pin something on the ITO.
And of course, are RBS management inconvenienced? Nope, they've got big credit balances and multiple accounts. Are the 1% inconvenienced? Nope, they bank with Coutts. It's the plebs who spend most of their salary or pension each month who are in the crap. And I can assure you that RBS won't go fighting for justice for the likes of you and I.
I thought I heard NatWest (the helpful bank) say somewhere
'no one will be out of pocket because of this'
Does anyone with a NatWest account want to lend me a tenner at a bazigilliion percent interest and a trimegazillion quid administration fee until monday and we'll split the compensation when it comes through?
I guess I'm paying the compensation
*always lose*
No suitcase of money. A well-stuffed wallet will suffice while people still think it's just a glitch. Only gold will be accepted after it becomes clear that TS really has hit TF. And be very careful who you let know that you've got gold, because of the knives and the guns.
Pray it never happens.
Are you really saying that you don't have enough savings anywhere to tide you over for day to day expenses for a couple of days? We're not even talking about the big non-cash outgoings - your mortgage, car repayments, insurance etc - those can be sorted out later if need be. It isn't as if this particular snafu has not been extensively publicised. Hell, when I've worked anywhere that involves payment processing all user memos get sent round for issues far less significant than this.
Your level of fail at understanding systemic failure and the importance of banks is breathtaking.
But you've clearly either not bothered to read about it and learn, or have chosen to ignore what you've read so there is no point in rehearsing the arguments here for you now.
The fools that don't grasp the problem with bank failures, even "technical" ones like this, should read this story http://www.bbc.co.uk/news/business-18547149 and explain to me how workers on strike, unpaid wages and customers not having their orders fulfilled is not systemic failure.
In RBS parlance (I used to call it RBSBS), a GNEP is a Group Notifiable Event Process
"A notifiable event is when something happens that could cause damage to the Group by adversely affecting our customers, our finances or our reputation."
I'd hate to be the poor soul that had to raise this GNEP...
The experience of outsourcing is frequently terrible, for the simple reason that custom software isn't like business cards or the car fleet. For one thing, car production is a repeated action, so you get economies of scale. Custom software is like custom anything, it doesn't scale.
On top of that, there's often confusion about the value that internal people bring, which is long-term experience of the software rather than experience of the technology. Someone who's been looking after a DB for 10 years doesn't need to look things up. You ask them how to find out the customer's name and address, and they'll tell you the service to use or the database fields to use.
What consultancies do is to always look to reduce costs. They'll often start you off with really good, experienced people, then once the consultancy has its feet under the table, they start putting graduate trainees on. You won't know this, or see any discount in your fees. You'll only find out when they screw things up. But this also means that they're moving staff off and on quite frequently.
The best run IT functions I've seen have been in-house with mostly homegrown systems and a few packages where the package was clearly a good fit. The words "we'll just get SAP and adapt it for our needs" are a sure sign that you have morons running your company.
"The words "we'll just get SAP and adapt it for our needs" are a sure sign that you have morons running your company."
Been there, done that, have the scars to prove it. Thank God, I am out of here. So sick of the absolute bullshit being spouted by the consultants that know f*** all about either the business processes used or the software they're supposed to be experts in.
Really pissed off with managers that refuse to use the correct SAP processes, then complain that things don't work, or expect me to instantly fix their cockups when it suddenly stops working because of something that they have done, but refuse to discuss what it was they did.
Really, really, sick and tired of the marketing bilge pumped out that portrays SAP as anything other than a massive scheme to screw every last penny out of c-level execs that think they are IT experts because they can send an email from their iPhones.
Utterly gobsmacked that anyone can believe it makes sense to outsource the servers when the original costs quoted by the hosting company are higher than the internal IT budget.
So long, and thanks for all the stress
Well, they launched some new payments facility on mobile, and if those payments share some infrastructure with the batch or other payments, they could have a knock-on effect.
For example perhaps there is a much higher volume than predicted. Or those mobile payments result in a much higher CPU cost or transaction time through the shared payments processing engine. Or connections to external interfaces e.g. for faster payments.
I don't have any inside information though; these are just guesses.
Ah - if I recall RBS correctly, then the sequence of processing will be something like this (admittedly speculative, things may have changed).
RBS have a system called Accounting Interface. It applies various accounting "rules" which reconcile the path of monetary transactions from, say, a cash withdrawal from an ATM (say, a Barclays one) back to the original customer account from which the money is debited. These transactions are then fed into the main batch account update program, and everything should reconcile at the end of exercise.
So a mobile payment would result in (possibly):
passage through some gateway to be added to a list of mobile transactions, which would then end up in a transaction file fed into their batch systems (plenty of scope for bog-up here).
In batch, these transactions are typically expanded by other generated transactions such as:
a) a debit from a holding account for the new mobile app
b) a credit back to that holding account from the customer account
c) a debit from the customer account
d) a credit to another holding account for transactions to the target of the payment
e) a debit from that holding account when the payment is transferred to the bank of the payee
and so on...
If the accounting rules governing each of these transactions are bogged-up in some way, the main batch account system (which updates EVERY account) will not reconcile properly, and panic will ensue. To fix this, the transactions via mobile would have to be corrected and everything re-run from that point. And then re-tested quite a few times to make sure they are correct this time.
Anyway, fun to speculate exactly what went wrong (I don't bank with RBS BTW).
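To make the reconciliation point concrete, here's a toy sketch (invented class and account names in plain Java; nothing to do with RBS's real Accounting Interface) of how a single mobile payment might be expanded into balanced holding-account movements, and how one mis-specified rule shows up as a run that doesn't reconcile:
[code]
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only: expand one mobile payment into the kind of
// holding-account movements described above, then check the run reconciles.
public class BatchReconciliationSketch {

    record Posting(String account, BigDecimal amount) {} // positive = credit, negative = debit

    static List<Posting> expandMobilePayment(String customer, String payeeBank, BigDecimal amt) {
        List<Posting> p = new ArrayList<>();
        p.add(new Posting("MOBILE_HOLDING", amt.negate()));   // a) debit the mobile holding account
        p.add(new Posting("MOBILE_HOLDING", amt));            // b) credit it back from the customer...
        p.add(new Posting(customer, amt.negate()));           // c) ...by debiting the customer account
        p.add(new Posting("OUTBOUND_HOLDING", amt));          // d) credit the outbound holding account
        p.add(new Posting("OUTBOUND_HOLDING", amt.negate())); // e) debit it when paid away to the payee's bank
        p.add(new Posting(payeeBank, amt));                   //    credit the payee's bank
        return p;
    }

    public static void main(String[] args) {
        List<Posting> run = expandMobilePayment("CUST-001", "OTHERBANK", new BigDecimal("25.00"));
        BigDecimal total = run.stream().map(Posting::amount).reduce(BigDecimal.ZERO, BigDecimal::add);
        if (total.signum() != 0) {
            // One mis-specified rule and the WHOLE overnight account update fails to reconcile.
            System.out.println("Batch does not reconcile, out by " + total + " - panic ensues");
        } else {
            System.out.println("Batch reconciles; everyone goes home");
        }
    }
}
[/code]
In the real thing the rules are data-driven and the volumes enormous, which is presumably why finding the one bad rule takes days rather than minutes.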
"how does it have a domino effect?"
Because the mobile app is not simply accessing a web front end to existing services; it is accessing the payment transactions back end, which will need to have been modified to handle the different types of transactions, and this potentially requires changes to the entire CAUSTIC transaction processing service.
On the plus-side, all RBS Group employees are forced to have an RBS/Natwest account into which their salary will be paid. This happens around the 24th of each month.
Yep, you guessed it: RBS and Natwest probably won't be able to pay their staff until they fix the problem, so they will be feeling their customers' pain.
Things appear to be much more serious than is being admitted, if you can believe what you read in the newspapers ...... http://www.independent.ie/national-news/ulster-bank-thousands-without-cash-as-bank-fails-to-fix-it-crash-3146374.html .... and on fora you get a feel for the inconvenience caused, but fortunately everyone will be compensated should they suffer unwarranted charges caused by the glitch? ........ http://www.boards.ie/vbulletin/showthread.php?t=2056676772
It never rains but it pours, and it is lashing down here today.
"" The words "we'll just get SAP and adapt it for our needs" are a sure sign that you have morons running your company. ""
Hmmm... I've been on the frontline of implementing and managing SAP projects for end clients for 12 years now and I've never heard those words mentioned once. In my not inconsiderable experience, the morons on SAP implementations are normally the leaders/management of the 3rd-party SIs such as Wipro, Accenture, Capita etc., rather than the end clients themselves, who generally go through quite detailed business cases and product selection processes before they choose ERP systems or related products.
I'm not sure what your experience is with SAP/ERP, but I would like to point out that I do agree with the first 3 paragraphs of your comment though.
Unless it has changed drastically, IIRC RBS (and Natwest; they migrated Natwest customers to the RBS system after the takeover) update main customer accounts in batch (on an IBM mainframe, natch.) overnight via a number of "feeder systems" (BACS, Accounting Interface, etc.), in a number of "streams" through their main account update system (can't remember the exact name it had, Sceptre? - something like that) which cover a range of branches. These originally reflected the distance of the branch from Edinburgh, so stream 'A' was branches in the far distant north and was run first, allowing the van with the printouts to leave earliest as it had the furthest to go. Seem to recall that Natwest started at stream 'L'.
The actual definitive customer account updates were carried out by a number of programs written in assembly language dating back to about 1969-70, and updated since then. These were also chock-full of obscure business rules ("magic" cheque numbers triggered specific processing) and I do not believe anyone there really knew how it all worked anymore, even back in 2001. I remember sitting in a meeting discussing how the charge for using an ATM which charged for withdrawals could be added as a separate item to a customer statement, and waving a couple of pages of printouts of the source of one of them. The universal reaction was "wow, can you actually read that?".

I can't see them having mucked about with that one too much, since good assembler people are like hen's teeth and it was decidedly non-trivial to make any changes to it, but the accounting rules did change frequently in the feeder systems.

My bet is that some change has resulted in discrepancies in the eventual output from these systems, and a combination of retirement and redundancies has left them with very few people who know how it is all supposed to work, and who are therefore able to identify the cause of the error and fix it. Or they might have just blown the size limit on one of the output files from the various feeder systems (VSAM IIRC, limit was 4GB, unless they moved to the extended types), or possibly the FICON cable just dropped out of the back of whatever device some of the disk volumes live on.
Anyway, very complex stuff which all has to just work together. Of course, the moral is, complex mainframe systems require staff with the skills, and in this case, the specific system knowledge to keep things smooth. The fewer of these you have, the more difficult it is to recover from problems like this.
My $0.02. Perhaps someone with more recent knowledge will care to rebut?
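On the "blown the size limit" theory, the kind of pre-flight sanity check a batch team might run before kicking off the main update looks something like this - purely illustrative, with invented file names, the 4GB figure taken from the comment above, and plain Java standing in for whatever the site actually uses:
[code]
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Illustrative pre-flight check: are all the feeder outputs present and within the
// hard size limit before the main account update is allowed to start?
public class FeederPreflightCheck {

    static final long VSAM_LIMIT_BYTES = 4L * 1024 * 1024 * 1024; // classic 4GB VSAM ceiling

    public static void main(String[] args) throws IOException {
        List<Path> feederOutputs = List.of(
                Path.of("bacs.trans"),                    // hypothetical feeder output files
                Path.of("accounting_interface.trans"),
                Path.of("mobile_payments.trans"));

        boolean safeToRun = true;
        for (Path p : feederOutputs) {
            long size = Files.exists(p) ? Files.size(p) : -1;
            if (size < 0) {
                System.out.println("MISSING feeder output: " + p + " - do not start the main update");
                safeToRun = false;
            } else if (size >= VSAM_LIMIT_BYTES) {
                System.out.println("OVERSIZED feeder output: " + p + " (" + size + " bytes)");
                safeToRun = false;
            }
        }
        System.out.println(safeToRun ? "All feeder outputs present and within limits"
                                     : "Hold the batch and page someone who remembers how this works");
    }
}
[/code]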
My partner worked at Halifax/HBOS and their experience sounds exactly the same as yours. Complex spaghetti code with stupid naming rules that made the complex software even harder to understand, and no concept of testing or proper debugging. Debugging at Halifax is pretty much the use of printf-style statements scattered throughout the code rather than proper tools. Halifax even threw out Leeds' system, which was Unisys and a high-level language, in favour of IBM and assembler, because it wasn't invented here.
I understand that your description of the RBS mainframe-based batch update process is fairly accurate. The source of the problem was a software update to the batch scheduling suite CA-7. The upgrade went so well that now there is no schedule to run all of those thousands of batch jobs to receive and make BACS payments, update balances, schedule printouts, etc.
I am sure the problem with the CA-7 upgrade and the unfortunate misplacing of the batch schedule has absolutely nothing to do with the last UK-based technicians leaving recently. The guys in India are of course perfectly able to cope and fix their mistake. I'm sure they understand how the thousands of jobs in the schedule need to be ordered to make sure there is no data corruption or loss. After all, the problem happened on Tuesday and it's only Friday.
I wonder how many ex-RBS staff have received very lucrative short term contracts in the last few days......
I'm AC for very obvious reasons, having been one of the recent 1000+ to find their roles now being done from Chennai. However, I have been speaking to a few ex-colleagues who are still there and can confirm that they say the same as the above poster: a CA-7 upgrade was done, went horribly wrong, and was then backed out (which will have been done in typical RBS style - 12 hours of conference calls before letting the techie do what they suggested at the very start).
My understanding is that most if not all of the batch team were let go and replaced with people from India, and I do remember them complaining that they were having to pass 10-20+ years' worth of mainframe knowledge on to people who'd never heard of a mainframe outside of a museum. The Indians were keen and willing to try and learn, but without the years of previous experience they will now be deep in the smelly stuff.
The only good thing is that, it being the batch and overnight processing that failed, all the data will still be in the system awaiting processing, so no one should find their money going missing as a result of this incident.
I hope they will indeed be able to execute all transactions and make things balance.
But a few years ago I heard privately of a FTSE 250 company whose accounts got corrupted. When they turned to the backups, either they were already corrupt or they were screwed up during the "restore". So they lost everything, and had to write piteous letters to suppliers and customers.
I scanned the financial press for months afterwards, looking for a public report, but saw nothing. Full marks to the cover-up team, then!
If it was a software update to CA-7 and they corrupted (or otherwise lost) the various VSAM datasets which hold the schedule database, then I think that backing out and restoring should have been a fairly simple exercise, and the complete failure of an entire overnight batch run is something they would have noticed pretty quickly. Assuming that they are even slightly competent.
Well, if as the previous poster says, it takes about 12 hours of conference calls to get anything done, then I guess that they held over until the subsequent night's run to try to re-run everything. Of course, unless things are staged very carefully, they then have to process twice the transaction volume, and there may just be some hard limits on the feeder system dataset sizes which are now too small, or the batch runs now take too long, so the on-line daytime stuff cannot start. And undoing the problems which cascade from there is where you really, really want your experienced system people.
Which it seems as if they no longer have.
Based on my experience at a large bank that outsourced support to India, I'm sure the guys in there will work diligently to meet their contracted SLA.
Probably by sending an email requesting further information 15 seconds before going off shift and then not returning to the office for three days.
It is pretty much disastrous. In RBS world, there are many interconnected systems, some of which can maintain a view of an account for some time, but eventually all transactions need to be reconciled via the main overnight mainframe batch. If this is not done, the account info maintained by these satellite systems (ATM, card purchases, etc) will become stale, and increasingly risky from the bank's point of view. So the CA-7 failure seems entirely plausible. It leaving them in the shit for 4 days, however, is not a situation one would expect a competent mainframe site to find itself in. If this is a consequence of "off-shoring" support, then someone has made a very bad judgement on an essential component of the bank's ability to stay in business and heads need to roll over this.
"The source of the problem was a software update to Batch scheduling suite CA7. The upgrade when so well that now there is no schedule to run all of those thousands of batch jobs to receive and make BACS payments, update balance, schedule printouts, etc."
Yep. Had the same problem with CA batch scheduling software in the late 1990s.
It's all a bit hazy now, but there were ways for someone who had seen it before to recover from that situation.
Ah, in the time it took me to type the above paragraph, it has come back to me that one of my first tasks in a new job in 1998 was to write a bit of code to export then reimport CA scheduler jobs, just in case it all went titsup.
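For the curious, the idea was roughly this - though note the sketch below uses an invented pipe-delimited format, made-up job names and plain Java, not real CA-7 commands, datasets or utilities:
[code]
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

// Generic sketch of "export then reimport the scheduler jobs": keep a flat-file copy
// of the job definitions and their ordering under your own control, so the schedule
// can be rebuilt if the scheduling product's own database is corrupted or lost.
public class SchedulerDumpSketch {

    // jobName -> predecessor job it must run after (empty string = no predecessor)
    static void export(Map<String, String> schedule, Path out) throws IOException {
        StringBuilder sb = new StringBuilder();
        schedule.forEach((job, pred) -> sb.append(job).append('|').append(pred).append('\n'));
        Files.writeString(out, sb.toString());
    }

    static Map<String, String> reimport(Path in) throws IOException {
        Map<String, String> schedule = new LinkedHashMap<>();
        for (String line : Files.readAllLines(in)) {
            String[] parts = line.split("\\|", -1);
            schedule.put(parts[0], parts[1]);
        }
        return schedule;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> schedule = new LinkedHashMap<>();
        schedule.put("BACS_IN", "");
        schedule.put("ACCT_UPDATE", "BACS_IN");          // main update runs after the feeders
        schedule.put("STATEMENT_PRINT", "ACCT_UPDATE");

        Path backup = Path.of("schedule-backup.txt");
        export(schedule, backup);                        // taken regularly, "just in case it all went titsup"
        Map<String, String> restored = reimport(backup);
        System.out.println("Restored " + restored.size() + " job definitions in order: " + restored.keySet());
    }
}
[/code]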
This is RBS, remember. ISTR them cutting their batch schedule from ~7 hours to 1 1/2 hours by the simple expedient of remembering that they no longer had IBM 2314's and changing sequential datasets from LRECL=80 BLKSIZE=800 to half-track blocking on 3390's. Don't get me started on METACOBOL.
I transferred a couple of grand from HSBC to my natwest account last night. It uses the faster payments system. It worked fine, money was there in minutes. Now I think all the people having issues are waiting for salaries/benefits etc, which I believe use the old BACS system. So my guess is RBS's BACS gateway has gone tits up. Unacceptable though
You're missing something. Your transfer may show up in whatever webby system which shows incoming transactions. Your real account balance is updated overnight in batch on a mainframe.
In other words, RBS/NatWest do not actually know what balance any of their customers actually have in their account, have not done so since this failure occurred, and will not know again until the problem is fixed. So good luck getting your money back out, since they do not know how much you actually have.
A Maximum Fail icon is required for this.
Christ, that was a bit harsh. I'm just using basic logic to hypothesise what the issue could possibly be. My natwest balance is completely accurate, and cash withdrawals/card payments are working fine. People who have been paid via BACS don't have an accurate balance on the other hand. Think the maximum fail might be with you son
The systems which tell you your account balance via ATMs and handle that side of cash withdrawal run on a "snapshot" balance which is eventually updated to maintain your "real" balance via batch systems on a mainframe. So they can maintain that view, for some time at least. This "snapshot" will not be updated with your salary, say, until that is processed into your "real" balance, and therefore your salary will not be available to you via ATM or card payments until this problem is fixed.
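Roughly speaking (invented classes, not RBS's actual design), the split looks like this, which is why a BACS salary stays invisible at the ATM until the overnight batch actually runs:
[code]
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: why a salary paid by BACS doesn't show at the ATM until the
// overnight batch posts it to the "real" balance.
public class SnapshotBalanceSketch {

    BigDecimal postedBalance = new BigDecimal("120.00");              // the "real" balance, updated only in batch
    final List<BigDecimal> pendingBatchCredits = new ArrayList<>();   // BACS items waiting for the overnight run

    BigDecimal atmSnapshotBalance() {
        // The ATM/card systems work off the last posted balance, not the pending queue.
        return postedBalance;
    }

    void receiveBacsCredit(BigDecimal amount) {
        pendingBatchCredits.add(amount); // queued, invisible to the ATM for now
    }

    void runOvernightBatch() {
        for (BigDecimal c : pendingBatchCredits) postedBalance = postedBalance.add(c);
        pendingBatchCredits.clear();
    }

    public static void main(String[] args) {
        SnapshotBalanceSketch acct = new SnapshotBalanceSketch();
        acct.receiveBacsCredit(new BigDecimal("1500.00"));              // salary arrives via BACS
        System.out.println("ATM says: " + acct.atmSnapshotBalance());   // still 120.00 - batch hasn't run
        acct.runOvernightBatch();                                       // and if the batch never runs, it never will
        System.out.println("ATM says: " + acct.atmSnapshotBalance());   // 1620.00 once the batch posts it
    }
}
[/code]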
"Think the maximum fail might be with you son"
Unless you have worked on the systems in question (and I have), try to avoid comments like this.
Makes you appear to be a bit of a tool.
I've found that sometimes, you just need cash. Relying on banks (and me tbh) to not fuck up or me not losing my card or whatever other reason is going to catch you out eventually, so I keep a couple of hundred quid hidden around my flat. I appreciate that not everyone can afford to do this though..
It is kind of ironic, given that Natwest have wall-to-wall TV adverts declaring "a better way" and promoting features to allow you to get access to your money even when you lose your card.
You sir win word prize of the day! That word has been repeating in my head as I read the background and think of the number of IT mates struggling for work.
Maybe this will be the tipping point, like the one that happened with off-shored call centres: a change of attitude towards it. One can hope, now that it's actually hurt a corporate so obviously.
nK
"No customers will be permanently out of pocket as a result" - Natwest.
Personally, I wouldn't trust them as far as I could kick them. I highly doubt that they will compensate people for the time they wasted as a consequence of this mess, for starters. This is the same bank that promised that it wouldn't close the last bank in town, then closed Farsley Branch (which was the last bank in town - see http://www.bbc.co.uk/news/uk-england-leeds-17041251).
Right now, I only have one thing to say: "There is another way, you know".
NatWest used to have adverts making fun of the fact that some bank branches had been closed and turned into trendy wine bars.
Well ..... the Standing Order pub on Iron Gate, Derby is a Wetherspoons house -- not exactly a trendy wine bar, but ..... guess which bank it used to be a branch of?
You beat me to it! I'm not sure if they use in house/out sourced software. But it's much easier for them to actually start their "contingency plan" and get everything keyed in manually or on paper ledgers. No idea how many accounts they have, but there cannot be many millionaires left right now. :(
"The problem is that few people seem to realise the importance of good documentation until they don't have any."
Or in this case, with the major cull and outsourcing, the outgoing guys may have known exactly how important documentation would be to their cheaper replacements ... if there was any.
"Yes, yes, of course I'll be delighted to spend the remaining 37 hours of my contracted time documenting how Sanjeet can do my job perfectly while I'm in the dole queue. What's that? The spam box needs emptying? ... oh dear, seem to be out of time. Documentation: 'try turning the mainframe off and on again at the wall, a couple of times, quickly. if that doesn't work, pee on the socket.' Bye, enjoy the 'savings' from downsizing me..."
If someone out there knows what's gone wrong, I'd be putting another zero on the day rate, and expecting a massive completion bonus of at least five figures on top, seeing as you'd be saving RBS's global reputation.
You'd get paid what you're worth, and after three or four days now, that rate has increased significantly...!
Steven R
My girlfriend has just left Natwest as a business manager, mainly due to the horror of the back-end IT systems - they are numerous, broken and useless - which means you can't reliably (real examples): send out chequebooks (businesses still use them), close accounts, take signatories off accounts (which led to massive fraud), change addresses(!), or do many other normal things.
Her advice to everyone, whether business or normal customer, is to leave Natwest/RBS. She has just seen far too many people screwed over by the outsourced control centre's actions or inactions. It even took her 6 months and 4 tries to get her own name changed back from her old married name.
My favourite was that she was locked out of her bank computer account - and couldn't get anyone to unlock her account since the "two secretaries who still know how and have permissions to unlock accounts (across all of England) were on holiday". So she couldn't do anything for 4 days until one of them came back.
The IT systems have been slowly breaking, system by system for a couple of years now.
Wow.
Sorry, calling BS on this one. As an ex-RBS employee who was caught up in the off-shoring to India I have a right to bitch about them, but for a locked-out account a simple call to the help desk and they would have just got her to get a full-time employee who could log on to fill out an electronic form and then give a second person's name for confirmation of who she was. These two would get the two halves of her new temp password.
The longest myself or any of my colleagues were locked out was only a couple of hours. Normally it would take just 10-20 minutes.
Wow. I'm not sure if it's "2 secretaries for the whole UK", but it may be a local problem, i.e. if the manager is on holiday, you cannot get their sign-off on a password reset/IT assistance. :D
I know someone in an insurance company who did similar to get himself 2 weeks off work.
Am I alone in being very suspicious that this is actually a security compromise? I don't doubt the IT teams are working hard to fix it and won't allow credits to go ahead until they are sure the system is safe.
I had to tell our workforce today, wage day, that if they bank with Natwest (as I do) they won't be getting paid today as usual. Not fun.
I would be very surprised if it was a security issue; there are many more ways to mess up a batch and corrupt a database. However most of these are quickly recoverable within a few hours and happen more often than banks would admit. As long as mainframe ops and support do their job, no one knows they do it at all.
latest news.
The problem (as I have stated before) is they will have to field legions of claims where their snafu has cost people money. There was a story on the BBC of a guy who hadn't received his wages, and couldn't afford to pay for a headstone for his stillborn daughter. I wish them the best of luck trying to argue that down in the "court of public opinion"(c)
(c) Harriet Harman 2009.
I opened an account with Williams and Glyns in 1981, and they were excellent; after a while it transformed into The Royal Bank of Scotland before truncating itself into RBS, with the emphasis on the last two letters. Even after [Sir Fred's] Mr Goodwin's departure the branch network was still solid and, from a user's point of view, the online services were good.
Then I heard that RBS England was going to be passed over to Santander and alarm bells rang good 'n' proper - they'd done me over years before when they took over/rebranded Abbey [National].
I moved (hence the coat icon) over to Co-op - on the basis that they seemed the least worst and are owned, at least notionally, by their customers - and completed the transfer of D/Ds, S/Os, funds and notified anyone paying in last month. Just in time, it seems. Now maintaining the accounts with zero balances just to piss off RBS/Santander, though I must keep an eye on sneaky account charges being introduced.
In all my dealings with the phone banking staff over the years I unfailingly received good, polite, friendly service and it saddens me to leave. Good luck guys.
And I work for an organisation that itself is still worshipping the false god of outsourcing, so my sympathies go out to all those former RBS IT staff who have been dumped.
PS, listening to an RBS spokeswoman, Susan Allen, on the PM programme. Apparently this failure wasn't expected. That's OK then...
Strangely enough only last week Lloyds Bank signed a 'no regrets' contract with Wipro to replace skilled Open Systems contractors with off-shored staff.
I guess those contractors might be able to amble over to NatWest now, and then amble back to Lloyds when it goes pear-shaped there.
I wonder if NatWest has got around to ringing-up the newly-redundant skilled staff and offering them lots of dosh to come into the office this weekend?
I worked for RBS during and after the merger with Natwest; I left their Global Financial Markets department in 2004 after a 5-year stint. They had already moved some IT functions to India at that point and have continued to do so year on year since. The numbers some people are quoting (1600/800) are possibly the more recent figures; the total is way, way beyond this.
The comments on documentation are comical, as if a document is the thing you turn to at a time of crisis.
The fact is, when you work closely with systems and the business users, you understand not only the quirks of the systems, but the risks and consequences of failure. You work with those users on the work around solutions that will get the banking day complete.
They haven't just outsourced the IT staff, but the very experienced and valuable back office / operations staff that would work with IT staff to solve the serious issues. I believe these guys are mostly posted out in Singapore, and have probably never met the IT staff in India. The unseen cost of outsourcing is a compounding loss of shared experience and commitment, which becomes acutely apparent when the sh!t hits the ... cash machines
The chaps I trained out in India were nice enough, but they simply lacked the knowledge and experience of Financial Markets trading, trade and settlement processing, Swift messaging blah blah and the risks involved.
I'll be drinking with a bunch of ex RBS/Natwesties soon enough, where we'll all be saying.....
"WE TOLD YOU SO!!!!!!!"
I doubt any senior manager in RBS/NW is going to say the outsourcing was to blame and change anything significant.
Over the next 10 years, I would expect this sort of failure to happen to virtually all the big UK banks. Hopefully it will become such an inconvenience to the UK public that the government will have to step in and regulate service expectations, a kind of SLA for the public......... then maybe the management will be forced to value the kind of 'Support Culture' I enjoy providing. Then maybe.... the jobs will start drifting back to the UK once more.
The comments on documentation are comical, as if a document is the thing you turn to at a time of crisis.
Well said sir, you can tell on this thread that there are some experienced devs.
Documentation is _the_ last thing I would go near at a time like that. In fact, I'd go as far as to keep _all_ my devs well away from documentation at such a time. Why? Because I have no guarantee of the accuracy of any given document. You can't unit test a document, it wasn't compiled, it likely wasn't reviewed, was it even finished? Was it updated when changes came along? I don't know.
Inaccurate documentation leads to bum steers: it tells you to look in the wrong place, it describes how things work when it's not the way they work at all. Code, on the other hand, is harder to read but tells no lies. Commented code is more valuable in times of troubleshooting than documentation because it's concise and granular.
I've been a dev for over 25 years and I firmly believe that bad documentation is far worse than no documentation at all.
If the devs in this case were frantically grasping for the documentation, I'd be very worried indeed.
I used to run NatWest IT 20 years ago and I suspect many of the back end systems are similar to then, but with a multitude of front end systems added for online and mobile banking etc. For all the comments on documentation and outsourcing I'd have to say:
This feels like a major batch scheduling cock up and/or Db corruption to me. Given the length of time it is taking it would seem like they've also screwed up what might have been a straightforward rollback or recovery process. However good your documentation or ITIL processes or scheduling automation it's still possible for a production control issue to happen, but you do need people who understand how it fits together not to make it worse... In my experience that's not the people that are outsourced, at least, I hope they kept a few people who know which batch job needs to go after which...
So I think this is less about app programming and more about production control and operations...
In fact, outsourcing the ordinary application programming makes sense and it's naive to blame this on outsourcing. It's often the case that the so-called 'experts' that have been around forever are the worst at documentation or training others because, hey, they wouldn't be the 'guru' anymore. I usually get rid of the gurus as they are often the blockers in getting the applications updated and written properly.
Well I meant 'ordinary' as opposed to 'a bit special'. There is a difference - for a company with the diversity and scale of NatWest it makes sense to differentiate what it decides to outsource. I might decide to have different level of expertise and experience coding my fraud and risk algorithms than I do doing some simple customer service screens. Outsourcing simple stuff is obvious, outsourcing core critical stuff is riskier. Anyone that says outsource nothing is being as naive as someone that outsources everything.
..." 'ordinary' as opposed to 'a bit special' ".
And there you have a neat encapsulation of the outsourcing problem. It's the 'ordinary', less experienced, programmers and developers of today who, with experience and training and identified talent, become the 'special' programmers, maintainers and developers of tomorrow. By cutting that link you might make immediate savings but you've bolloxed yourself for the future.
But what do I know, after all I only work for an organisation that has done exactly that and then finds itself over a barrel when it does need people to jump through hoops in double quick time.
"It makes sense to outsource", eh?
Mmmm. Tell that to the FD of TUI Travel plc, who was shown the door when they cocked up their IT and Finance after they outsourced and offshored it all. I very much doubt they saved the £117m they had to write off after over-stating their results.
Offshore activity at minimum cost is absolutely fine when the "value at risk" is a T-shirt, or a cheap plastic toy. When it is the integrity of a company's core financial systems and data, maybe the relatively modest cost savings need to be placed against the reputational damage, and the costs of rectification and compensation.

Look at the Cable & Wireless/IBM spat back in, when was it, 2001? C&W thought they'd save money, and in fact they found that they were getting a worse service and paying (IIRC) about £120m a year more than they were before - it all went to the high court, and on the day of the case IBM came up with an out of court settlement.

My own employers have outsourced IT to HP and the like, and the like-for-like operating costs have gone up; we've cut the projects budget to keep costs within budget, and as a result we get less for more. Meanwhile the directors think that the outsource has been a huge success because total costs have gone down - they're too far from reality to understand that we pay more for the basics, and now have less being invested in new systems and enhancements than before.
In terms of "ordinary application development", that's what British Gas did with Accenture for a utility billing system, resulting in a huge project failure, regulatory intervention, and a £180m dispute that went all the way to the high court, and resulted in BG cleaning the floor with Accenture. Was that the sort of "ordinary application development" that you think should be outsourced?
I'm sure you could come back with many examples that have worked, but the risks undoubtedly are far greater than the clowns signing the deals will recognise, and the savings more modest.
I'm a security consultant who works for a rival but banks with Natwest. We all know that IT systems fail from time to time and sometimes these make the news but the duration of this is extremely worrying to me.
I've been impressed with the Natwest mobile app and thought that they must have some top-notch security if they were offering 3rd-party payments and cardless cash withdrawals. So I decided to take a look, and it took me 20 minutes to reverse engineer it and patch it so that it captured my passcode as I entered it! I was shocked that it was so sloppy and had no discernible security measures at all; this can't have been pen-tested by anybody remotely competent. This is bordering on the criminally negligent, so I've sent my findings to the FSA and needless to say I'll be moving all of my accounts on Monday.
Obviously they have nobody skilled in risk and security left either.
OK, let's say this patch captured the GUID, Passcode, TWK, IssuerName (terms meaningless to you or me, but the people that wrote the app would recognise them as everything that is required for logon). Now let's say it then transmitted these details to a criminal's phone; the criminal now has complete takeover of your account, including the facility to take cash out without a debit card. To easily install my patch I need to be on a rooted (Android) or jailbroken (iOS) phone, so the first line of defence would be to ensure that the app doesn't install or run on either of those. No such detection, so I can install my 'malware' by hiding it inside a Cydia or Android app, or even with a well-crafted drive-by download via a well-known web exploit.
There are then extra levels of defence that I would expect any competent banking app to deploy but this has none - sheer negligence.
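For what it's worth, the first line of defence being described is only a few lines of code. The sketch below is illustrative plain Java (common su-binary paths; a real Android app would also check build tags and installed packages and back it with server-side controls), not a claim about what the NatWest app does or should ship:
[code]
import java.io.File;

// Illustrative sketch of a basic "is this device rooted?" first-line check of the
// kind described above. Paths are common su binary / superuser app locations; this
// is necessarily incomplete and only one layer of a proper defence in depth.
public class RootCheckSketch {

    private static final String[] SU_PATHS = {
            "/system/bin/su", "/system/xbin/su", "/sbin/su", "/system/app/Superuser.apk"
    };

    static boolean looksRooted() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true; // an su binary or superuser app is present
            }
        }
        return false;
    }

    public static void main(String[] args) {
        if (looksRooted()) {
            // A banking app would typically refuse to run, or at least refuse to store credentials.
            System.out.println("Device appears rooted - refuse to run / wipe cached credentials");
        } else {
            System.out.println("No obvious signs of rooting (necessary but not sufficient)");
        }
    }
}
[/code]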
So pleased I left them. Now with First Direct, i.e. HSBC. Would not touch any of the nationalised banks with a barge pole. This is on a BlackBerry scale in terms of screw-ups. I used to support payment processing systems at a mobile operator but now look after mobile data. No glacial batch processes to worry about.
This is either down to a process similar to what is detailed here:
http://thedailywtf.com/Articles/Slaves-to-The-Process.aspx
Causing a massive tailspin when it collided with the next process and the dominoes ended up in a heap on the floor.
Or
Mahoooooosive fraud attempt in the BACS or CHAPS process, which is why faster payments are working but all else isn't, meaning that all BACS/CHAPS batches are having to be manually checked with real eyeballs in triplicate before being allowed to pass into reality.
FirstDirect is the home of my money thank (insert deity of your choice), but my employer uses GnatWest so I fear for my salary arriving there this month.
Unhappy RBS/NatWest customer? Vote. Feet. Now! Plenty of competition. Many banks now iron-clad by national governments.
Unhappy creditor? Meet and commiserate with fellow creditors, then serve a class action suit.
Unhappy debtor? Wheee! RBS/NatWest will cover all your accidental failed transactions and penalty charges. Go spend.
Unhappy RBS/NatWest staff? Whinge about rejected DR plans. Find new job. Now.
I bank with NatWest and went to look at the update on their website (http://www.natwest.com/personal.ashx). My eye was caught by this comment from another customer:
"You bunch of s**theads , I want my wages now , I'm fed up waiting . Need my money. And no I didn't choose to be paid by natwest sue from south of England
by Pi**edrightoff from Kent on June 22, 2012 at 9:59 pm "
I have the screenshot :-)
Nice, but the following looked more proactive....
"I'm going to take a poo on your doorstep you inbred mugs. If I don't get my money in my account within 24 hours I'm going to chain myself to your qc naked and play with my todger. I think your employes should work naked for compensation #SayNoMore
by I hate natwest from Hmp Bristol on June 22, 2012 at 11:12 pm 0 comments "
(also have screenshot :-)
You know I honestly thought they were bad until I read about this. I will never rip their systems again.
Surely RBS/Natwest have the snapshot balance to fall back on; from there they can process the BACS payments and incoming payments using a secondary I/O system or mainframe? I should point out I do not know RBS/Natwest systems, only HSBC's. I assume they would be roughly the same but with different processes and hardware? Please tell me they have an auxiliary standby processing system.
My assumption could be wrong though. It seems to be taking an awfully long time to correct; if they have outsourced and offshored then I would be a little worried. The HSBC bods usually run DR preps and "what if" scenarios, primarily "what if everything went down the swanny". Guess RBS/Natwest don't, and/or maybe have not done any dry/wet runs of their DR procedure.
Been having some very terse dealings with RBS since 14 June regarding something inexplicable that happened to one of my accounts. I haven't been able to fathom it, and RBS cannot explain it.
I have a view of my account online that the customer service advisers cannot see. I can see transactions that they can't but they see the same balance as me. And because they cannot see all the transactions they cannot answer any queries about the erroneous transactions that I think are on there.
When this blew up this week I realised the problems I started seeing late last week are most likely part of it. In years nothing like this has happened before. I ended up closing an account last night because a customer service person became irate when I pushed them on the causes of the problems. They don't know naff all to be fair to them. Looks like a semi-meltdown. Textbook IT disaster case study.
My Business Banks with Coutts.
Coutts did a massive software upgrade at the end of March. It was a shambles. We (all) lost access for several days, huge delays in seeing incoming payments, inability to make outgoing payments, unreliable balances (and if you logged on several times, you would see a different balance each time), numerous other "bugs" and glitches. In fact many of the problems I have seen reported in the current RBS / Natwest debacle sound awfully familiar. As far as I can tell the Coutts staff had little or no practical ability to do anything useful either.
It did improve but was a shambles all over again at the end of April.
The end of May was more normal.
Coutts does basically seem to be "up", but outgoing payments seem to be going through considerably more slowly right now.
We're now bricking-it for the end of June.
Yes. It has been established. It has been posted here and elsewhere that the problem was with the CA-7 scheduling software. This is one of the roles that was transferred to India. Look at this link (as posted above) for a job advert for a CA-7 technician in Hyderabad.
http://hyderabad.quikr.com/Batch-Admin-with-CA7-tool-W0QQAdIdZ81783921
I worked in banking IT (networking) a while back and one thing that surprises me is that this problem surfaced midweek. In my day no changes were made except at weekends, to allow plenty of time for testing/backout.
This suggests to me something like a hardware failure rather than an update induced problem. Inexperienced and far away staff then perhaps screwed up the recovery causing further damage.
Cue dainty enfeebled citizen, shivering outside bank, lamenting:-
"My bank's turned into a rather expensive random number generator"
So glad I left them - and glad to find out I'm not the only one who's only keeping an account open with them to 'annoy' them - every little helps LOL
According to the BBC News website
"The problem began on Thursday.. It is understood to have arisen after staff tried to install a software update on RBS's payment processing system, but ended up corrupting it."
Erm ... system? Payment processing system ... singular? Shouldn't anything like that exist in duplicate or triplicate, with off-site disaster recovery systems on standby?
Typically there is redundant hardware, and sometimes OS and other system software too. But above that there is a single logical version of the application and data. You can have as much tin as you like, but however many copies of the data you have, they are all the same: you screw up one, you screw up them all. A payment comes in from BACS or FP into the payments engine and is replicated at disk or app level.
Redundant systems developed and deployed in isolation to a common set of requirements are an extremely expensive option only available to mission-critical systems - which excludes our retail banks, apparently.
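A toy illustration of that point, assuming nothing about how RBS actually builds things: because replication sits below the application, a corrupt logical write lands on every copy, however many copies you paid for.

    # Redundant copies don't protect you from a bad change, because the same
    # (corrupt) logical write goes to every replica. Entirely hypothetical
    # structures - not how any real payments engine is built.

    class ReplicatedLedger:
        def __init__(self, replica_count):
            # each replica starts as an identical empty copy of the ledger
            self.replicas = [{} for _ in range(replica_count)]

        def apply(self, account, amount):
            # replication happens below the application: every replica gets
            # the same write, whether it is correct or corrupt
            for replica in self.replicas:
                replica[account] = replica.get(account, 0) + amount

    ledger = ReplicatedLedger(replica_count=3)
    ledger.apply("12345678", 100_00)       # a good payment
    ledger.apply("12345678", -999_999_99)  # a corrupted update...
    print([r["12345678"] for r in ledger.replicas])  # ...and all three copies agree on the wrong answer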
CTRL-Z
Works for me every time.
What? That's not how a mainframe works?
;-)
I am an old mainframer and I'm caught between thinking "there but for the grace of God go I" and "you got what you deserved, you bunch of non-technical beancounters, for outsourcing something you had no clue about".
I have not used CA-7 or anything like it, god I would hate to be left alone on a mainframe.
But after a bit of googling, I assume you don't want people like this either: http://ibmmainframeforum.com/viewforum.php?f=38
"cann any one please explain the commands like PF(page forward),PB (page backward) and SR(save Replace)
Also TSO CA7cLIST"
I" wonder who ended up taking this job:
http://hyderabad.quikr.com/Batch-Admin-with-CA7-tool-W0QQAdIdZ81783921"
Followed the link for the job in India (just out of interest you understand) - it's based in Amberpet.
What has Wikipedia (yes I know) got to say about Amberpet ?
" It is the biggest slum of hyderabad and lack development in many areas. Amberpet was never the part of hyderabad and comes under Ranga Reddy District. Amberpet was largely famous for the spread-over open markets sprouted all over the area catering to the household needs of people from all the nearby places at very affordable prices. Now, these have been replaced by many supermarkets and other allied dairy & poultry establishments."
Nice!
Bet they get the best candidates there!
Risk Management - we've heard of it - RBS senior management, they really are worth their bonuses, eh!
AC
...all UK banking institutions are similarly exposed to this kind of problem emerging. I have not worked for any organisation that does not - at any given point in time - have at least one piece of critical infrastructure that is unstable, or an identified single point of failure with no hard mitigation plan.
Sadly the impact of legacy and resource constraints mean that we operate in an imperfect environment with incomplete information. The skills we have to mitigate against and handle this kind of event are the reason we can invoice for hundreds of pounds per day without blushing - that's our value.
The core payments and ledger systems at RBS are designed to be resilient and recoverable to the N-1th degree. I have no doubt at all that, to the level observable by the customer, normality will be restored - the speed at which it is restored dependent on the nature of the problem. At the levels deeper than that observable by the customer there could be a number of infrastructure, development and reconciliation projects required to fix the original problem, prevent it happening again and tidy up behind any quick-and-dirty bodges that proved necessary to get services restored.
I'm looking forward to hearing the final accounting of this incident. Coming at a time when RBS is performing poorly in the market and introducing significant operational risk through its Group Technology strategy (in the opinion of many who occupy the cubicles), a salutary lesson could be on offer.
The facts are that RBS's computer systems are ancient in computing terms, many dating back to the 1960s and using obsolete technologies.
RBS has also been directed to cut costs sharply, and in banking that means staff cuts, particularly of expensive, older IT staff. So many, if not most, of RBS's experienced staff have left recently, leaving inexperienced on-shore youngsters and off-shore outsourcing companies to fill the breach.
Combine the two, ancient software and very few experienced staff left to look after it, and you have a ticking time bomb that finally went off this past week.
A pity so many customers were impacted by the result of decades of poor management by the RBS Group.
... but also a testing issue.
Development can screw up, but surely the "update" had been tested prior to being put on the live system.
Regardless of how old the system is, testing should have taken place (and arguably more testing for an elderly system) in both development and a system test environment.
Oh, and must not forget the project/programme/release managers who say "There is no need to test that, it's not changed."
Those involved in testing the update should not rely only on what the developers say they have changed, but should look at what is actually being changed and identify potential areas where problems may arise.
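For what it's worth, the "test the bits they say they didn't change" point can be made mechanical with something as crude as a before/after diff of the whole batch output, not just the modules the developers claim to have touched. This is only a sketch; the directory names, file layout and .out convention are all invented here.

    # Minimal regression check: compare every output file from the old release
    # against the candidate release, and flag *any* difference - including in
    # jobs that supposedly weren't changed. File names and layout are hypothetical.

    import difflib
    from pathlib import Path

    def regression_diff(baseline_dir, candidate_dir):
        """Compare every output file from the old release against the new one."""
        problems = {}
        for baseline in Path(baseline_dir).glob("*.out"):
            candidate = Path(candidate_dir) / baseline.name
            old = baseline.read_text().splitlines()
            new = candidate.read_text().splitlines() if candidate.exists() else []
            delta = list(difflib.unified_diff(old, new, lineterm=""))
            if delta:                      # any difference, even in "untouched" jobs, gets flagged
                problems[baseline.name] = delta
        return problems

    # Usage: fail the release if anything changed that wasn't expected to change, e.g.
    # unexpected = regression_diff("run_previous_release", "run_candidate_release")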
Having worked with RBS group in the past, my experience was that the change management process was extremely bureaucratic and it was very difficult to alter anything. I was told that Fred's right hand man (who managed the Natwest/RBS IT merger) was partly responsible for this, because he had a very "robust" management approach, demanded 100% server availability (!) and left everyone terrified of making changes.
I was responsible for implementing a minor change to production, and it all had to be tested, documented and supported by backout plans. On the night, I had support chasing me by phone to ensure I was sticking to the implementation schedule, and imposing a fixed deadline for successful implementation or full roll-back.
Because the same process was applied to all changes (regardless of the scale and impact of the change) there was a suggestion that it was crippling the ability of the business to make timely changes (e.g. even changing some copy on the website could involve a full change board review and take weeks).
Looks like things have loosened up a bit now though...
We have to spell out the rollback procedure for a proposed system change in order for the change to be authorised. Even our plumber left our old ball valve in his van, so when the new one he'd fitted started whistling like a steam train he could put the old one back until he'd found a better replacement.
Basic stuff.
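The plumber's rule, in sketch form (the paths and the health check below are placeholders I've made up, not anyone's real deployment tooling):

    # Never deploy a change without keeping the old version to hand and a
    # tested way to put it back. Hypothetical paths and health check.

    import shutil
    from pathlib import Path

    def deploy_with_backout(current, candidate, health_check):
        """Swap in the candidate, but restore the original if the check fails."""
        backup = Path(str(current) + ".backout")
        shutil.copy2(current, backup)       # old ball valve goes in the van
        shutil.copy2(candidate, current)    # fit the new one
        if health_check():
            return "deployed"
        shutil.copy2(backup, current)       # whistling like a steam train: put the old one back
        return "rolled back"

The point is less the five lines of code than having the backout written down, rehearsed and authorised before anyone touches the live system.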
I personally have a huge amount of respect for RBS. They've sorted themselves out just in time for the completion of my house purchase today, so all the hassle that goes with delaying the removals firm, deliveries to the new address arranged for tomorrow, Wednesday and Thursday, etc., is eliminated just by them sorting the problem over the weekend.
Hooray. Just need to call at the shop on the way home and get beers for the fridge.
This was an accident waiting to happen. The IT outsourcing strategy in RBS is corporate suicide - an incident of this nature has been predicted by those working in IT for a very long time. RBS have dodged several bullets recently; their luck has now run out.
As always, the management team are running around trying to sweep the outsourcing issue under the carpet. Mr Hester, a word of advice: challenge and challenge again your IT managers for the real incident review, not a fabricated one that covers up their fatally flawed outsourcing approach.
Also speak directly to the administrators to get a real understanding of the outsourcing issues and the risk this puts on your business. Sadly this will likely be the first of many incidents on this scale. RBS, get your head out of the sand, or watch this space for further business-impacting carnage caused by inadequate outsourced staff.
At a bank I once worked for we did the maths for a major system outage. Suppose your systems are near to fully committed (more likely with global 24-hour banks than, say, a UK-only bank with bulk load maybe 8:00 to early evening for real-time processes, then up to midnight for less time-critical stuff like statement production). You might have something like 20 hours a day of heavy system usage with 4 hours of "slack", so if you lose a day's processing the catch-up time to clear the backlog is 5 days. In our case we calculated that more than 2 days' total systems loss would mean we'd not be a bank any more.
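The arithmetic in that post, spelled out (the 20-hours/4-hours figures are the poster's illustration, not real RBS numbers):

    # Back-of-envelope catch-up sum: a lost day's processing can only be
    # cleared using the spare hours left over each following day.

    def catchup_days(daily_load_hours, hours_in_day=24, outage_days=1):
        slack_per_day = hours_in_day - daily_load_hours   # spare capacity each day
        backlog_hours = outage_days * daily_load_hours    # work that didn't get done
        return backlog_hours / slack_per_day

    print(catchup_days(20))                  # 5.0  -> one lost day takes five days of slack to clear
    print(catchup_days(20, outage_days=2))   # 10.0 -> two lost days and you may not be a bank any more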
Outsource and you lose the guys with a 20-year career on the same system, who knew it inside out and could home in on a problem in minutes. At the same time the bean counters are looking at the spare mainframe capacity and regarding it as "a waste", so they trim down to insufficient spare capacity to withstand a disaster and too little expertise to know how to fix it quickly. The outsourcing provider has its own bean counters and regards testing as a bit of a luxury, doing as little as it can so that the "tested" box gets ticked - its worst-case scenario is that it might lose a customer; the bank's worst-case scenario is total meltdown. The triple whammy is when the major problem falls at month end, when transactions peak with monthly salaries and consequent bill payments (and you'd not wish a major failure on your worst enemy when it's year end too).
The IT guys sound all the warning signals, but the bean counters win, and when everything goes tits up it's the IT guys' fault.