
"main online transaction processing server"
lol, that twitter guy doesn't have a clue how mainframes work, does he.
More likely human error where the "cleaner/electrician/technician" "unplugged" it in error.
The National Australia Bank has been sharply criticised after a seven-hour outage on Saturday that took down its ATMs, EFTPOS, Internet banking, mobile banking services, and call centre operations. The bank has pointed to a problem in its Melbourne mainframe system as the cause of the outage. By knocking out payment …
It makes perfect sense if both were running off the same power supply and they didn't have a backup power supply. Before they brought up the mainframe they'd need to shut down the ancillary servers and flush their queues. Bringing the whole network back online will take time - which is why you have backup power, and regularly test it.
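Just to illustrate that ordering (a made-up sketch in Python, not anything NAB actually runs, and every service name in it is invented), the drain-then-restart sequence amounts to something like:

    # Hypothetical sketch only: drain the ancillary servers' queues, bring the
    # core system up first, then restart everything else. Service names and the
    # flush/start helpers are invented for illustration.
    import time

    ANCILLARY = ["eftpos-gateway", "atm-switch", "netbank-frontend"]

    def flush_queue(service):
        # Stand-in for whatever queue-drain mechanism the real stack uses.
        print(f"flushing queued transactions for {service}")

    def start(service):
        print(f"starting {service}")
        time.sleep(1)  # crude stand-in for waiting on a health check

    def restart_after_outage():
        # 1. Drain the ancillary servers so nothing replays against a cold core.
        for svc in ANCILLARY:
            flush_queue(svc)
        # 2. Bring the core OLTP system (the mainframe) back first.
        start("core-oltp-mainframe")
        # 3. Only then bring the rest of the network back online.
        for svc in ANCILLARY:
            start(svc)

    restart_after_outage()

None of that is quick when you have a whole bank's worth of ancillary boxes to walk through, which is the point about seven hours not being implausible.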
The more important question is how does El Reg determine if a system goes TOESUP or TITSUP? I blame the GDPR.
Yeah, there really isn't any excuse for a Core system to have no power. There should be layers of redundant power: most data centres have DRUPS these days, and if that fails they have diesel generators, which you hope fire up because you test them regularly. So if all the systems have power, you would have to assume it's some other System Cockup that's been the issue.
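Purely as a sketch of that layering (all names and availability flags below are made up to illustrate the fallback order, not any real data-centre control system), mains-then-DRUPS-then-diesel looks like:

    # Hedged sketch of the redundancy chain above: mains, then DRUPS, then the
    # diesel generators. Names and status flags are invented for illustration.
    POWER_CHAIN = ["mains", "drups", "diesel-generator"]

    def select_power_source(status):
        """Walk the chain and return the first layer that reports healthy."""
        for source in POWER_CHAIN:
            if status.get(source, False):
                return source
        raise RuntimeError("total power failure: every layer is down")

    # Example: mains is out and the DRUPS battery is flat, but the generators
    # actually fired up because someone tested them regularly.
    print(select_power_source({"mains": False, "drups": False, "diesel-generator": True}))

Which is the whole point of testing: the bottom layer is only worth anything if it actually fires when the layers above it are already gone.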
NAB has 6 diesel generators at the Knox data centre IIRC. So IMHO it wasn't a power outage at the data centre. Unless they've moved the mainframe to the new (to me) data centre which was commissioned yonks after I left. Rumour has it that it was related to switches dying, but I dunno.
Sadly, I fear you may be right.
And I'd imagine it'll be a while before they get round to a heavily-lawyered response that'll need signing to confirm the account owner has agreed to their offer of nowt and has no intention of suing them (now or ever), just in order to maintain an account with them.
Yesterday I thought I heard a NAB spokesweasel on radio (yes, big radio) explaining the assessment that will be made. Something along the lines of requiring documentation for each "lost" sale. I also note the other routine Oz disaster: NBN have cut off local businesses when connecting them to the NBN cables. Phone dead for 4 days now as the copper was cut somewhere. No network as usual. Who would be a small businessperson with all the predators charging?
About time a few CEOs and boards had their assets stripped under Proceeds of Crime legislation for making false representations, thus winding up doing a few years' hard labour instead of hiding behind a shell company.
Sadly, these outages seem to be an annual event. If not NAB, then one of the others. Luckily I'm not a business-owner. But as a consumer I have a second, pre-paid card with me at all times for just such an eventuality. I suppose cash would be a better backup, but I'd just blow that on beer and crisps.
Much of the IT involved is mangled by a certain other large corporation, frequently mentioned here on El Reg for their apparently never-ending slide to oblivion. 3 letters, 1 guess.
NAB is actively trying to get away from them as fast as is possible - which may be near impossible for mainframes.
AC because... also, 1 guess.
Much of the IT involved is mangled by a certain other large corporation, frequently mentioned here on El Reg for their apparently never-ending slide to oblivion. 3 letters
I was about to say that only IBM could make a total cluster-f*ck like this. And I was just making an uneducated guess. I wasn't expecting that it actually was them.
Our NAB Customer Manager has told us we'll be compensated for lost margins.
No they won't.
I do not bank with NAB but two of three transactions I attempted on Saturday were nonetheless forced to cash-only because the merchant was with NAB. I went to an ATM and withdrew cash. Not a big deal for me but I imagine a lot of people did this generating an abnormal run on ATMs. Restocking cash in unusually depleted ATMs will cost the ATM operators (other banks and private enterprises) something that they will not get back from NAB.
NAB best not charge businesses for depositing the cash they took on Saturday.
I’m pretty sure there’s more to the story. Mainframe going down is a pretty rare occurrence, even today. A data center without redundant power? Where? That’s extremely rare—even in locations where the equipment is being housed internally.
Just seems fishy to me.
A month ago the Canadian Stock Exchange went down. Screwed trading badly. They're still trying to recover. It was a big deal. The only one to report it was CNBC. The outage was due to a "storage failure". Rumor has it that the storage was Pure. Not a single word about this from the Register.
My point is: more info. Cover it in depth, not just high-level overviews.
To be fair you are reading The Register..
They can only afford 1.5 full-time journalists because you readers won't pay them a subscription! They are forced to just regurgitate press releases for most stories.
Thank God for the BOFH...
It would be nice, once in a while, to find out what really went on and who the culprits were...
Not new at all. When I worked for IBM in the mid-2000s we had two outages at its major Sydney DC that impacted at least 2 airlines, multiple banks and many, many other businesses, fortunately around midnight or so both times.
The first involved some wanker driving his car into the local power substation, which caused the DC to switch to UPS, whose batteries hadn't been checked for far too long, causing the entire DC to go dark before the generator could kick in.
The second was when, during testing of the newly installed batteries, it turned out some other wanker had forgotten to turn the diesel supply back on to the generator, with the expected results.
IBM manglement has not had a clue for years; ignoring tech advice from tech staff is just the start of it...
The more things change the more they stay the same.