
What? Monday again? Didn't we just do one of those last week? Oh well, if we must, dear reader, we must. Welcome once again to Who, Me? in which Reg readers regale us with reports of righteous wrongness. This week, meet "Leopoldo" who some decades ago landed a stint as a database admin with a certain unspecified national …
I wonder whether that airline would survive a reservation of our lil' bobby tables of the '); drop table passengers;-- family.
Luggage In Another Terminal.
My experience of them wasn't so great. My luggage did stay behind in Trinidad (IIRC), whilst I continued on to Grenada.
Not quite as bad as it was for one passenger when they had miscounted. Having condensed two flights into one, they didn't have enough seats for him, so they kicked him off again after having boarded us all!
Add in "But Will It Arrive" to the alternative nicknames too.
Is that the airline from the same country that gave Powergen a bit of an amusing problem with their local domain name?
On that topic, I find it fascinating that it was so vehemently denied by Powergen that even Snopes declared it False. I would have written it off as a hoax myself, except that at the time I immediately ran a WHOIS on the domain name to see whether it was true. No, I don't believe everything I hear any more either, not since I found out who Santa Claus really was.
When I did the lookup, the domain owner was indeed Powergen, so someone high up must have really put the fear of God into people to get that cleansed so well.
Also, having worked for large corporates, I can imagine perfectly how this went down. Marketing dreams up the scheme; some guy in IT gets the order and fairly quickly arrives at this, um, more interesting result in Italy. He or she sends an email back asking whether the scheme is to be implemented as described, naturally while carefully omitting why exactly the confirmation is requested. Several msec after receiving a snotty reply from Marketing to just do as they're told, our hero implements it in all, er, 'innocence', and then finds an excuse to be unavailable for a while as it blows up...
Anyway, thanks for making me remember that one :).
Eh, if I were a company in that situation, I too would buy the domain name, if only to keep it under control.
There was also a recent kerfuffle around a popular videogame called Inkulinati. Very unfortunate association there.
From what I hear, though, this kind of problem happens in all languages. I'd expect marketing departments to have some kind of automated software to deal with this by now.
Don't cry for Leopoldo, though. When he returned to work the next day – fully expecting his stint with the airline to be over before it had begun – the manager told him that he understood it wasn't his fault. He blamed the Senior DBA for poor supervision – not to mention gross dereliction of the backups – and told Leo to get on with the job.
I bet he was in quite a flap the next day...
Forty years ago, I heard about a senior operator training some junior staff. He typed the command "PURGE SYSTEM ALL" on the production console, saying this is the one command you must never, ever use. Then, as he had done a thousand times a day, he pressed the enter/return key.
One outcome of the incident review was that they tightened up the authorisation checks for all commands - and many of us lost what few permissions we had.
A colleague did a customer audit/review and recommended that the production console background be a different colour from the others. A year or so later, someone issued a command on the wrong system. My colleague went along to do a root cause analysis, and found that none of his recommendations had been implemented. When the feedback got to board level, they found that there was an action "Get audit/review of system". Tick, this had been done. There was no action "Implement feedback", so nothing was done. Heads rolled, and several people were redeployed the next day.
SQL Server Manglement Studio allows for different coloured highlighting by server, but defaults to MS-preferred bland. It would be better for it to automatically allocate a different colour scheme for each server it connects to, with the usual options to change it manually and perhaps one to disable automatic colour selection to keep everybody happy.
There’s nothing like a bright red colour scheme to remind you that this is the !!!!!!LIVE!!!!!! server, and the yellow one isn’t.
Nah, this is the one remaining instance where the <blink> tag may be used:
1 - even if you're totally colour blind, you can get enough colour combinations going to ensure you see your prompt is blinking
2 - it's so furiously annoying that you won't stay at high permission/DBA/root level for long :).
Anything else I can solve for you?
:)
"it's so furiously annoying that you won't stay in high permisson/DBA/root level for long"
Some years ago, I interviewed for a PFY position at a regional wireless company. Working on the console of their main switch required setting up a stepladder to reach it, as it was suspended above the middle of an aisle. This did a few things:
1) The added effort of fetching a ladder meant you only went to the console if you really had to.
2) The uncomfortable location meant you only stayed as long as absolutely necessary.
3) If you had no business using the console, it would be quickly noticed.
4) While working at the console, any coworkers knew that it was not a good time to distract you with unrelated questions or idle chatter.
he typed the command "PURGE SYSTEM ALL" on the production console, saying this is the one command you must never, ever use. Then, as he had done a thousand times a day, he pressed the enter/return key
Catastrophic commands like that should really have at least one level of "are you sure you want to do that?" confirmation. Years ago, when IBM PCs were new, mine had a disk formatting program that asked "are you sure (y/N)?" first and then had a second level of confirmation "honest (H)?" that meant you couldn't just hit 'y' twice and lose the drive contents.
The best disk partition program I used back in the day had a second level of confirmation that required you to produce a 4-digit code which was buried in the "are you sure you want to X" details, meaning that you actually had to read the confirmation with some degree of attention.
Two or three decades on, I still think it's the best bit of human interaction design that I've seen on a PC; somebody actually understood that the typical human behaviour, when presented with a box saying "do you want to do what you've just commanded me to do", is to select "yes" every time without reading the alert.
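Out of curiosity I sketched how that two-step trick might look on the database side. T-SQL, and every name in it (dbo.PendingConfirmations, dbo.PurgeAuditLog, dbo.AuditLog) is invented; it's just the shape of the idea, not anything any vendor actually ships:

    -- All object names here are hypothetical. The first call issues a
    -- short-lived 4-digit code; only a second call quoting that code
    -- actually does the deed.
    CREATE TABLE dbo.PendingConfirmations (
        Token     int      NOT NULL,
        Operation sysname  NOT NULL,
        IssuedAt  datetime NOT NULL DEFAULT GETDATE()
    );
    GO
    CREATE PROCEDURE dbo.PurgeAuditLog
        @Confirm int = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Expire stale codes so one can't be reused hours later.
        DELETE dbo.PendingConfirmations
        WHERE Operation = 'PurgeAuditLog'
          AND IssuedAt < DATEADD(MINUTE, -5, GETDATE());

        IF @Confirm IS NULL
        BEGIN
            -- First call: hand out a fresh code and stop there.
            DECLARE @Token int = ABS(CHECKSUM(NEWID())) % 9000 + 1000;
            INSERT dbo.PendingConfirmations (Token, Operation)
            VALUES (@Token, 'PurgeAuditLog');
            RAISERROR('This will empty dbo.AuditLog. Re-run with @Confirm = %d within 5 minutes.', 16, 1, @Token);
            RETURN;
        END

        IF NOT EXISTS (SELECT 1 FROM dbo.PendingConfirmations
                       WHERE Token = @Confirm AND Operation = 'PurgeAuditLog')
        BEGIN
            RAISERROR('Wrong or expired code; nothing done.', 16, 1);
            RETURN;
        END

        DELETE dbo.PendingConfirmations WHERE Operation = 'PurgeAuditLog';
        TRUNCATE TABLE dbo.AuditLog;  -- the actual destructive step
    END

Having to read the code out of the error message forces the same moment of attention as that old partitioner did.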
Reminds me of the time when I was trying to explain the difference between mapped and physical drives, and how some things didn't work quite the same, to a client who thought that he knew more than the Novell consultant (me) he hired. So I typed "format f: /y" and pressed enter to demonstrate the point and he nearly had a heart attack. Nearly 30 years later, he is still a client.
> recommended that the production console background be a different colour from the others.
This is worth doing when you have a few workstations where the Admin login is identical and you have to nest remote desktops to do updates; the one with the maroon background is running the hypervisor…
Yes, you can also paste the system name onto the wallpaper, but a colour change helps prompt you to check such details.
I had to intentionally delete a large number of records from a production db, and very nearly messed up. How? Stay with me…
Inherited a system where a new database was created for each new client, each one linked back to a master db. One client reported problems; an investigation showed the daily data file that they supply to us to load into the client db was the cause, and had been wrong for a week or so. Fixing that was pretty straightforward with a few minutes faffing in UltraEdit, correcting it in the db was a bit more involved. Rolling it back would put it out of step with the master; rolling that back would put all other clients’ dbs out of step. A carefully crafted delete across this one client db and the relevant data in its master db would fix it though, so I set about it.
The delete statement in the test db was fine. Dropping that into the live system and swapping delete for select to eyeball it and confirm that it was good showed all was well, as did wrapping it inside a transaction to roll it back on completion. It was safe, so instead of diving in I went for a coffee first, knowing that it just needed one last sanity check before running it.

Which I did, but as I reached across the keyboard to hit F5 in SQL Server Management Studio my wrist caught the mouse and did a perfect click-drag of the cursor across the delete statement, omitting the Where clause. As any fule know, SSMS runs whatever's highlighted, and it did, happily setting off to delete everything instead of the required few hundred thousand records from the client db. I clocked it almost immediately and hit stop, then silently prayed that SQL Server would live up to its atomicity promise of all or nothing. After an age of watching the rolling back message, it succeeded. Blimey.
I've tried to recreate that mishap but could never get a syntactically correct statement: Where what? Invalid table name, incomplete where clause, and so on. Good job there was enough data there to require a long enough execution time.
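For anyone who hasn't built that ritual into muscle memory yet, here's roughly the sequence being described, as a T-SQL sketch; dbo.ClientData and the predicate are stand-ins, not the real system:

    -- Step 1: SELECT with the exact predicate the DELETE will use,
    -- and eyeball the rows it matches.
    SELECT *
    FROM dbo.ClientData
    WHERE ClientId = 42
      AND LoadDate >= '2009-03-01' AND LoadDate < '2009-03-08';

    -- Step 2: dry-run the DELETE inside a transaction and roll it
    -- back, checking the count matches what the SELECT showed.
    BEGIN TRAN;
    DELETE FROM dbo.ClientData
    WHERE ClientId = 42
      AND LoadDate >= '2009-03-01' AND LoadDate < '2009-03-08';
    SELECT @@ROWCOUNT AS RowsThatWouldGo;
    ROLLBACK TRAN;

    -- Step 3: only when the counts agree, run the DELETE for real,
    -- as one saved statement, so there is nothing left to highlight
    -- or edit at execution time.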
Depends on what DBMS you're using. For me it's generally MySQL. I have a rule that if I ever run something in production and expect it to delete a single row, I start by typing LIMIT 1; and then move my cursor backwards to type in the DELETE query, like the sketch below.
That way if I accidentally hit Enter the worst thing that'll happen is it'll delete 1 row of data.
It gets more tricky if you don't have a "guaranteed" number. But for simple 1 row deletes this has saved me on more than one occasion. You can also use this with things such as UPDATE queries.
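To make the typing order concrete, the habit looks something like this in MySQL (orders and order_id are made-up names, obviously):

    -- Type the safety net first:
    LIMIT 1;
    -- ...then cursor back to the start of the line and fill in the
    -- dangerous part, ending up with:
    DELETE FROM orders WHERE order_id = 123456 LIMIT 1;

    -- The same trick works for updates:
    UPDATE orders SET status = 'cancelled' WHERE order_id = 123456 LIMIT 1;

The half-typed LIMIT 1; on its own is an incomplete statement, so a premature Enter gets you a syntax error rather than a table wipe.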
Never underestimate your muscle memory that just presses Enter, in any environment ;)
I managed to do the same on another airline system when about a month into my current job (I'm currently at 11+ years) - I emptied a table on a production database by highlighting and running a SQL delete command, without realising I had not highlighted the WHERE clause!
Luckily we had a log shipped copy of the database for reporting purposes, which log shipped every 15 minutes - we switched the log shipping off before it caught up with the delete, then inserted the data back from the copy, before switching the log shipping back on.
My punishment? I had to buy cakes for the whoe office!
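For the curious, "inserted the data back from the copy" probably looked something like the sketch below; all the names are invented, and the real fix would have had to worry about identity columns and triggers too:

    -- Log shipping paused: the reporting copy still holds the rows
    -- the runaway DELETE removed from prod, so put back what's missing.
    INSERT INTO ProdDB.dbo.Bookings (BookingId, FlightNo, PassengerName)
    SELECT c.BookingId, c.FlightNo, c.PassengerName
    FROM ReportServer.ReportDB.dbo.Bookings AS c  -- linked-server name, assumed
    WHERE NOT EXISTS (SELECT 1
                      FROM ProdDB.dbo.Bookings AS p
                      WHERE p.BookingId = c.BookingId);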
Sounds similar to my experience - see the post a few above yours. Like another comment says, paranoia is a useful character attribute in these situations, hence me running it with Select instead of Delete first to check that it's doing what it should, then running it inside a txn and rolling it back to check that it's good to go, then leaving it - coffee or whatever - and coming back to it with just one thing left to do: run it in full. One job, one action. And have it all ready to go. Personally, having learned by experience, it's almost always better to know that you don't have to do any other specific action, such as highlighting all of the statement, to get it right. A manager waiting for confirmation that it was all resolved, and a client already pissed off that their data had been found wanting, were distraction enough; I preferred to know that it was tested, checked and ready, and that there was nothing left to go wrong.
Neat that you worked somewhere that had a whoe office. That could excuse any distractions.
We used to run a nightly clone of prod to UAT BEFORE the main backup was taken of the prod clone, which was the only backup (you know where this is going already!).
One night something goes wrong. One of our beloved contractors drops in, mounts the wrong backup filesystem, and wipes out prod by cloning the previous day's UAT into prod, and we lose the last 24 hours of live transactions. Let's just say that by the time we all woke up and arrived at the office, we were informed our contract colleague had been fired at 3am that morning.
No, it wasn't me. I was a lowly junior DBA/sysadmin at the time, and it was made abundantly clear to me that you ALWAYS, ALWAYS check what you're doing, back off for 30 seconds, check again, and then hit RETURN.
At one job I had a while back, around six months in I still didn't have a proper chair to sit on. I had a cheap $10 office chair from Staples or some other nondescript office supply store, with a broken gimbal. I called it the tilt-a-whirl chair because unless you managed to balance just perfectly, you'd get thrown to one of the 360 degrees that make up a circle.
I'd been working as a senior BI consultant but was made redundant. I decided to live and work in my home city (pre remote-working days), and could only find jobs as a DBA (semi-senior because of my age)... I got a job and assumed I'd be joining a DBA team, so I was confident I could cope with the new role. But what I found was that the previous DBA had almost died in a car accident, and his replacement was anxious to leave the position, so after my first two days he went on vacation and never returned. I was left alone with all the duties, and of course nobody in IT knew a thing about DBs except installing them... The first thing I tried to discover was how the heck the backup system worked, and whether I had a way to restore anything in case of a disaster.
"of course nobody in IT knew a thing about DBs except installing them"
Plausible deniability again? No sane sysadmin is likely to admit to any more than that (and most of the not-quite-totally-demented ones too). System and network administration present quite enough opportunities to completely stuff up even without the whole new dimension that an RDBMS adds.
RDBMSes that used raw disk partitions for their tablespace always seemed to me an accident waiting to happen. Some PFY sees an "unused" partition and thinks: I can makefs that for my collection of ...
Back when PCs had "turbo" and "reset" buttons and writing on paper happened quite a bit... I was unfortunate enough to have a machine where the reset button was at just the right height that pushing the keyboard across the desk (in order to use pen and paper) usually caused me to say "gosh, I must not do that again".
That the airline had a single DBA, and that when he announced he was leaving, the airline hired another single DBA to replace him? The story mentions no others, no one else following the two of them around and also learning.
Was Leopoldo starting a job where he would not be allowed to take vacation, call out sick, or step in front of any buses? And would be on call 24x7 for the rest of his life until he got fed up like the previous DBA and quit?
Sadly I don't believe a word of this. The piece doesn't say how many decades ago this is alleged to have taken place but we can assume it goes back to the era when most large airlines ran their inventory/reservations/ticketing/check-in systems in house. Those systems were IBM or Sperry Univac mainframes and didn't run database software in the modern sense. If, as speculated here, it was Alitalia, they ran an IBM system. Any database that could behave as described here would likely be an extract from the production environment used for back office purposes. Even at Alitalia it could not have impacted front line airport operations.
After reading all these comments, I am flabbergasted nobody seems to be using transactions. A simple rollback would undo the damage.
Anyone using autocommit should not be allowed near a production environment.
Any database not supporting transactions, has no business being a production environment.
Any company not using transactions, should not be in business.
That was my first reaction. But you still need appropriate self-preservation instincts to remember to do that "begin tran", and to properly read the output before automatically typing "commit".
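The interactive version of that instinct, for anyone still collecting habits (SQL Server syntax, and the UPDATE is just a stand-in for whatever change you're actually making):

    BEGIN TRAN;

    UPDATE dbo.Fares SET Price = Price * 1.02 WHERE Route = 'FCO-LHR';

    -- Read before you leap: is this the number of rows you expected?
    SELECT @@ROWCOUNT AS RowsTouched;

    -- Then, and only then, type one of these by hand:
    -- COMMIT TRAN;    -- the count was what you expected
    -- ROLLBACK TRAN;  -- it wasn't

Leaving the transaction open while you stare at the output is the whole point; just don't wander off for a coffee while it's holding locks.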
I did once reboot a production DB server thinking I was logged into the dev server... so all the transactions in the world wouldn't have saved me. Luckily it was just used as an audit trail for getting the in-memory state back up and running in case the application server crashed, so we survived until the closing bell. I think the back office guys had a late night, however. Lesson learned.
See my comments about paranoia and Dunning-Kruger. Stuff happens.
One of the most frightening was the gig where the actual command scripts for some of the overnight jobs were composed on the fly by permanent Tcl/expect scripts driving vi. They worked, but no way was I going to touch any of those.
What vintage? We don't know. I mean, 20 years ago was 2003, and it's not that unrealistic to think that a major airline was using a database to power a lot of their functions. Thirty years ago would be 1993, and it's still not THAT unreasonable to think an airline could be doing that.
One of the things about getting old is how time seems to kind of slip by. You're so busy with other things, it feels like something happened only yesterday, but it was actually years ago. I remember when Alan Cox retired from Linux kernel work. At one point I made a comment about how it was only a few months ago, and a friend mentioned that it had actually been a couple years. I thought how surely that can't be right, but when I looked it up, it was. It's not fun, but as I said elsewhere in this story discussion page: Lord Chronos doesn't really care if it makes us feel old.