As soon as...
..I saw the line "my application first deleted all files and directories (recursively)" I knew we were in for some "fun"!
With the weekend gone, like the contents of a file share after a hasty execution of a seemingly innocuous script, pause for a second before tackling the week's shenanigans with another Reg reader Who, Me? moment. Our reader, Regomised as Dave, told us of his time working in the IT department of a bank "at the end of the …
"and nowadays you don't even think about giving anyone access to your environment."
Even in those days I would have opened an account with appropriate privileges for the visiting interloper to muck about in[0]. What's the point of having separate accounts if you share them willy-nilly?
[0] With logging. Lots of logging. All of the logging. Paranoid? Me? Not yet, but I was getting there ...
He was working at a financial institution. You couldn't just go creating user accounts willy-nilly.
Getting a user account set up in that kind of institution was a hugely bureaucratic process involving lots of forms which had to be filled in, passed around, and signed off by lots of people. It took about two weeks minimum.
Not so sure about that. When I started working for an investment bank in the early noughties, I recall users lamenting how IT had lost the freedom to do things like in the "old times" (i.e., the early-to-mid '90s), when they would sit alongside the developer, who would hammer changes directly into the production environment, and the users would test along until everything was OK (with appropriate changes to accounting tables, etc., as needed to correct any previous fumble). So, yeah, a new user account might take two weeks of paperwork, but really dangerous stuff was readily available.
There is a UK bank where the business side told IT they didn't want a new system to update reference tables, because it would be slower and require logging of changes. They quite liked being able to go in and amend production data themselves, with direct access to the data warehouse.
This is in 2020.
There is a big difference between "I can do that" and "I will do that".
Ain't there just. I exercise that on a daily basis. Plenty of "I could, but..." and also a lot of my favourite: "What is it that you are actually trying to accomplish?" Quite often it turns out they don't need what they're asking for at all.
Lots of places still take weeks to action new users.
In many of the places I've worked that happened because neither the hiring manager nor HR bothered to tell IT about the new person, even if it was three months between job acceptance and arrival. The first IT knew about a new person was when their manager appeared with them in tow and said "meet X, our new hire, where is his/her machine?". "New machine" usually meant a Unix workstation in those days, so not something to easily get delivered the next day.
That happens far too often, and is usually down to utterly incompetent HR or management and a lack of even the most basic HR processes, such as New Starters, or a failure to follow them should they happen to have them.
Occasionally it's worth being slow just to prove a point so the offenders remember to follow the processes that they often had a hand in writing.
On the other hand, I've had compliments that new starters have everything in place on the IT front with plenty of time to spare and no drama. It's almost as if they expect these things to be painful - they shouldn't be as new staff starting and staff leaving is a standard part of any organisation.
First of all, you are ignoring the fact that the narrator of the story gave the visitor from Corporate the narrator's own root access to a live system. That is a big no-no, and has been since the 1950s, or perhaps earlier.
Secondly, yes, getting a user account can be difficult because bureaucracy. However, a systems administrator providing a tool for a Corporate troubleshooter isn't all that difficult. The dialogue goes something like this:
Me: This is $guest_admin from Corporate. I need you to sign this paperwork, authorizing me to create him an account with the permissions listed.
$BOSS: Excuse me? $CHANNELS!!! Weeks & months!!! It'sHowWeDoIt[tm]. G'way, I'm busy ::shifts gaze back to porn he thinks I don't know he's viewing::
Me: I'll send him back to corporate, then, with your name & number as the contact to explain the refusal to let him do his job.
$BOSS: ::sputter:: Excuse me?
Me: You heard me.
$BOSS: Where do I sign?
There is always a way around corporate roadblocks to getting things done. It's my job to know them ... and to have the cojones to implement them as necessary. That's what they pay me for.
I remember being in the machine room late one night, chatting to one of the bored operators. He was showing off his knowledge and what he could do with his all-powerful userid. He said, as he typed, "this is one command you never want to issue: PURGE SYSTEM ALL", then automatically pressed Enter!
Whoops.
Next morning when I came in there was a logon message for all users "Due to a technical fault - all spool files were lost last night".
Over the next few weeks they implemented very granular command security, so it could not happen again.
Remember the days when programs would be run from network drives to save the space on the local hard disks?
I'll never _ing forget... As a fresh-faced PFY I had a problem with Office on my local PC, and it was suggested that I uninstall and reinstall.
Unfortunately, with my rights, it decided to completely uninstall Office, including the networked version that everyone was using.
Fortunately it was quickly fixed with the /admin install back to the network location, with a lesson well learned that could have gone a LOT worse.
That wouldn't have helped the guy in the story.
He'd have seen what he expected to see at the last folder in the mapped drive's path.
Then his software deleted from W:\ (as opposed to W:)
[edit]
Although if he'd actually gone to w: in cmd he might have noticed the full path, as noted in the story.
Thanks @Jake.
It was actually a gentle reminder that not everyone uses dir to list the contents of a folder / directory / inode / whatever terminology you favour*.
My life in IT doesn't personally date back that far, but I believe the use of ls predates dir by some decades, originating in Multics (the predecessor to Unix and all modern *nixes). dir, I believe, originates in CP/M, and became widely used with the advent of MS-DOS. Wikipedia assures me that Multics predates CP/M by a decade or so, and MS-DOS by almost two. The history of such things is a bit more twisty-turny than this, but that is it in a nutshell.
The vast majority of command-line jockeys certainly used to use *nix-type systems in preference to DOS-type ones (I don't believe CP/M ever really became a widely adopted OS), so really what I was insinuating is that those who think those file-containing structures should be called directories, just because that's how the DIR command is named, should perhaps broaden their horizons a little.
FWIW, the first OS I grew up with used CAT as its command, to CATalogue the contents of a file system. On a C90 cassette.
*My opinion here is that in computer science, the terminology is secondary to the actual concept. As long as others can understand what you are talking about, it doesn't matter. See also the pointless column / field debate in database nomenclature.
While at uni, I was doing something where I had to FTP some files to or from one of the Unix servers. After typing in the server name to the FTP client GUI, I spotted the server name already having a configuration defined. With the username "root". With a password filled in. I looked at my mate. He looked at me. I hit "connect" and promptly got logged into the server, as root. Rather naively, I tested the access by downloading /etc/passwd, deleting it and recreating it (in retrospect, this was incredibly dangerous. I didn't know Unix that well in those days...). Evidently, the shared FTP client config had been used by the lecturer at some point and he'd saved the root password for ease of use, not realising it was available to all the students. We reported it to the lecturers and it was removed from the config soon after that...
Worth noting this was about '97 when security processes weren't as strict as they are these days. FTP as root? *shudder*
"Worth noting this was about '97 when security processes weren't as strict as they are these days."
Excuse me? Security these days is extremely lax compared to what it was in the 1990s. Shall I start with BYOD? Open access to Facebook, Twitter, Instagram and whathaveyou? To say nothing of "clouds" losing corporate data all over the place ... Security? I'm not all that certain that the kids running corporations these days have even heard of it!
Do you not read ElReg?
I know of more than one person here who has run a SQL DELETE command and forgotten to add the WHERE clause. With client data.
This is why, when working with anything remotely like a live database, the series of steps goes something like this:
1) BEGIN TRANSACTION
2) SELECT * INTO Affected_table_backup_{today's date} FROM Affected_table (probably best to have backed the database up first too if possible)
3) SELECT * FROM Affected_table WHERE {delete criteria} (to make sure you are getting the right records)
4) DELETE FROM Affected_table WHERE {delete criteria} (by amending the SELECT statement to make sure the WHERE clause doesn't get lost)
5) COMMIT TRANSACTION
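For anyone who wants to see those five steps end to end, here's a minimal sketch using Python's sqlite3 in place of T-SQL (the table, column, and backup-table names are all invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
cur.executemany("INSERT INTO orders (status) VALUES (?)",
                [("stale",), ("live",), ("stale",)])
conn.commit()

# 1) BEGIN TRANSACTION (Python's sqlite3 opens one implicitly on the first write)
# 2) Back up the affected rows into a dated copy of the table
cur.execute("CREATE TABLE orders_backup_20240101 AS "
            "SELECT * FROM orders WHERE status = 'stale'")
# 3) SELECT first, to eyeball exactly which rows the criteria match
rows = cur.execute("SELECT * FROM orders WHERE status = 'stale'").fetchall()
print(rows)  # confirm these are the rows you mean to lose
# 4) Reuse the exact same WHERE clause for the DELETE, so it can't get lost
cur.execute("DELETE FROM orders WHERE status = 'stale'")
# 5) COMMIT only once steps 2-4 look right
conn.commit()
```

Same idea, just with sqlite's CREATE TABLE ... AS SELECT standing in for SELECT ... INTO.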
I got downright paranoid about the use of BEGIN TRANSACTION ; SELECT * INTO table_backup ... when my job was to fix data in the production database.
One day I had a conversation with my sysadmin along the following lines:
"If you did not have to get involved, it did not happen, correct?"
"Yep, that's right."
Could have had it more than once...
It's just a shame that MS-SQL's interpretation of SQL transactions is so broken.
Transactions in Transact-SQL are not broken (disclaimer: I'm thinking Sybase here, but that is where MS SQL Server has evolved from). Sure you may get bitten by unchained mode being the default where you have to either set chained mode or explicitly begin a transaction.
I actually like the flexibility of unchained mode; you just have to be aware of what you are doing, as it doesn't mollycoddle you like, say, Oracle.
It's just a shame that MS-SQL's interpretation of SQL transactions is so broken.
Broken in what way? It certainly doesn't break ACID.
Nested transactions can be a bit of an oddity (only the outer transaction ever gets committed, which is arguably correct), but if you are using those... why?
If you've got time to get through all that, it's not a "live" database.
If I tried that at my old job, before I was halfway through making the backup, users would be on the line demanding to know why the system wasn't responding. (Because the table was locked by my transaction.)
Use SELECT * FROM table WHERE condition to identify the data you want. Then draft another SELECT for a test sample to include both some of the records you want to delete, and some of those you don't.
Once you're satisfied with both of those:
BEGIN TRAN
SELECT * INTO backup_table FROM table WHERE condition (backup only the records you want to delete)
DELETE t FROM table t JOIN backup_table bt on t.id_column = bt.id_column (converse of previous step - delete only records that have been backed up)
SELECT * FROM table WHERE test_sample_condition
ROLLBACK
Check the results returned by the test sample, and take a look at the backup_table as well. When you're satisfied with those, change the ROLLBACK to COMMIT and run it for real.
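That dry-run pattern translates fairly directly to other databases. Here's a rough sketch in Python's sqlite3 (T-SQL's DELETE ... JOIN becomes a WHERE id IN (...) subquery, and all the names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage BEGIN/ROLLBACK/COMMIT by hand
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, flag TEXT)")
cur.executemany("INSERT INTO t (flag) VALUES (?)",
                [("old",), ("new",), ("old",)])

def run(final_statement):
    cur.execute("BEGIN")
    # Back up only the rows slated for deletion
    cur.execute("DROP TABLE IF EXISTS t_backup")
    cur.execute("CREATE TABLE t_backup AS SELECT * FROM t WHERE flag = 'old'")
    # Converse of the backup step: delete only rows that made it into the backup
    cur.execute("DELETE FROM t WHERE id IN (SELECT id FROM t_backup)")
    remaining = cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    cur.execute(final_statement)     # ROLLBACK for the dry run, COMMIT for real
    return remaining

print(run("ROLLBACK"))   # dry run: see what would remain, change nothing
print(run("COMMIT"))     # happy with the dry run, so do it for real
```

SQLite's DDL is transactional, so even the backup table vanishes on ROLLBACK; on engines where DDL auto-commits, the backup step would need to sit outside the transaction.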
If I tried that at my old job, before I was halfway through making the backup, users would be on the line demanding to know why the system wasn't responding. (Because the table was locked by my transaction.)
You forgot the (NOLOCK) clause; a little dangerous in itself, as this may include records from an uncommitted transaction another user has open (i.e. so called "ghost records"), but you are making a backup for the sake of safety, so including those records isn't the biggest issue if you need to access that backup to unfuck the data because you didn't follow the other steps properly.
Reminds me of the time I went for a network engineer class
We did the whole thing - made a coax network from scratch (when coax ruled), and did a netware 3.12 install on a file server
Everything went swimmingly well, BUT we all had superuser rights everywhere
Then somebody inserted a Very Naughty Floppy Containing A Fun Virus into his workstation (DOS at that time). IIRC it was one which infects .COM and .EXE files; I can't even remember the bastardly virus's name.
Said virus took everything down except the Novell NLM binaries. Luckily it was a fairly new install, so we wiped everything, did a full reinstall of Novell, and set permissions accordingly. No more issues.
Sounds like the perfect setup to learn of NetWare's execute-only flag. Seems like the thing to do, until you find out execute only means strictly execute only. This was the era when patches were, in effect, diffs written to files. Writing to the file is verboten when execute only is set, so no patching by viruses or updates. Any other action was prohibited on an execute-only file too. There was a secret incantation which could make such an execute-only file mortal again; I just don't remember it.
I've only used it for real once. There was a graphics application on Sun workstations (remember those?) called SunAlis. It was awful. It had to be installed with root privileges, so every print job was sent as if from root, rather than a standard user (who could not remove it from the print queue, as it had root authority). It caused no end of trouble, not least because when files got 'large', around 2MB, the app froze and lost it all. It was a good day when I logged in as root, went to the SunAlis directory and typed
rm -r *.*
and watched it all slowly disappear.
In my company, developers do not have any access whatsoever to any live servers or databases (apart from the server used by the development department itself, of course). All new code gets put onto a test server first. Only after the technical director himself has done a final test of the code on the test server does he, personally, put it onto the live server(s). If someone trashes data on the test server, a new test server is simply cloned from the relevant live server (although sensitive databases will be replaced with dummy databases).
I would have thought that a bank would have a similar process.
We do exactly that to test our stuff. Obviously.
Worst case scenario, you forgot you were in the test database, and everything you did the whole day gets wiped along with the development database.
Luckily I had detailed instructions and copies of everything I was meant to do that day... and Groundhog Day ensued the following morning.
Oh yes they do, and have done for more than 25 years. The tale was "at the end of the eighties or the early nineties"...
Now they have DTAP environments and strict tests & procedures to get anything into Prod, and still it goes wrong from time to time. Complexity has advanced as well...
Years ago, I worked with a proprietary insurance claims app that used a file-based database consisting of data (.dat) and index (.idx) files. When we had corruption at a client's site we would make a copy of the data file and rebuild the index. It was common to copy the data file to .bak. One day, a client was running low on space and instead of doing del *.bak I did del *dat. That command ran fast and purged all the data in seconds. That is the worst thing I've done. The second was in a system that was getting close to, but not yet in, production. As part of a refresh for an Oracle schema I would drop all objects and run my conversion and supplemental scripts. One of these scripts dropped all sequence objects. Well, I was logged in as the sys user, and when you drop those you may as well get ready to reinstall the software, because you can no longer log in.
After that, I changed how I connect to any database. For Oracle, I tweak the glogin file to display connection info and if I change user I completely close the session and start a new one. For SQL Server, I never connect even if the last connection prompt is what I want. I close that and select the desired server from the registered server list. This forces me to think about what I'm doing.
I worked in SWIFT and they are pretty unhackable. In my place they had one internet-connected PC, not connected to any other internal computer, with its drives removed and its ports literally cut out. Plus it was on CCTV, as were most places there. And they analysed your toilet material for drugs or whatnot. And body scanners at the door checked you weren't taking or bringing CDs or drives.
Every security scare about SWIFT is always about the banks that connect to it; SWIFT itself is pretty much the perfect panopticon.