Probably a bit silly but....
He should have remembered that no (intended) good deed goes unpunished!
With the use of personal email accounts seemingly never out of the political headlines, we present a cautionary tale of their career-shortening possibilities in another edition of Who, Me? Our reader-supplied story concerns the reader’s colleague: a fellow Regomised as “Lucas”. Lucas and our reader worked for a mid-sized …
who was really at fault here?
Lucas, for multiple reasons. A "potentially suspicious" email could still have contained business-related confidential or personal data; no emails should be automatically forwarded out of the company mail system. Also, if he was taking a day off then he should have either arranged for his colleagues to cover him, or let them know he wasn't really off; anything else could lead to duplicated work (as it did).
In the early days, around the time Exchange was first being launched (and allegedly there were only a handful of people in the UK who knew how to configure it), we'd moved from having modems on individual PCs to an ISDN-based proprietary box, which one day suffered a similar mail storm. Tracking tools were limited, but by a process of elimination I tracked it down to the engineering manager. He'd sent a perfectly valid email to a supplier rep, which elicited an out-of-office message that requested a read receipt!
As mentioned in the article, things weren't sophisticated back then, so the read receipt prompted another out-of-office message, which also requested a read receipt...
I suspect the rep at the other end got a dressing down over it from his employer but it took a while to manage to break the chain at our end.
You misunderstand. The rep wasn't _sending_ read receipts - his OOO was _requesting_ them. All would have been well if the employee at the other end had set read receipts to be denied or to be prompted for (if those were options back in the day), but instead they had gone the path of least resistance and set them to be sent automatically. All would also have been well if the OOO was configured to be sent a maximum of once to each addressee - but unfortunately, it wasn't a true OOO and instead was an automatic reply _which was triggered by incoming read receipts_.
Maybe it's not a read receipt request causing the trouble, but a delivery receipt request. Did you know that was a thing? I don't know if it still is, but the idea is that you're notified (maybe) when your e-mail successfully reaches someone's inbox. Which apparently IBM is having trouble with, this month.
Around the same time I managed to bring down two mail servers. I set an out-of-office response, the sending server replied with a "this email address is not monitored" email; cue email loop. How come no-one spotted this possibility when creating automated OoO messages!? Surely the second thing you'd think of is "what if two people have OoO set"? Mailing lists were a thing back then.
Around the same time (or possibly a bit earlier), I built an out-of-office system based on procmail.
Loops were indeed known about and the advice was to include an X-loop header that can be filtered on.
Later supplanted by logging replied messages and only sending one reply per address per day.
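The two loop guards described above can be sketched in a few lines. This is a hypothetical Python illustration of the procmail-era logic, not the poster's actual setup: the tag address, the in-memory log, and the message text are all invented for the example.

```python
# Sketch of the two classic autoresponder loop guards: an X-Loop header
# check, and a log that allows at most one reply per address per day.
# (Illustrative only; the tag and addresses are made up.)
from email.message import EmailMessage
from datetime import date

X_LOOP_TAG = "vacation-me@example.com"  # procmail convention: X-Loop: <your address>

replied_log: dict = {}  # sender -> date of last auto-reply

def should_auto_reply(msg: EmailMessage) -> bool:
    sender = str(msg.get("From", ""))
    # Guard 1: never reply to a message that already carries our loop tag.
    if X_LOOP_TAG in msg.get_all("X-Loop", []):
        return False
    # Guard 2: at most one reply per address per day.
    if replied_log.get(sender) == date.today():
        return False
    replied_log[sender] = date.today()
    return True

def build_reply(msg: EmailMessage) -> EmailMessage:
    reply = EmailMessage()
    reply["To"] = str(msg.get("From", ""))
    reply["Subject"] = "Re: " + str(msg.get("Subject", ""))
    # Stamp our own tag so a bounced or echoed copy trips guard 1.
    reply["X-Loop"] = X_LOOP_TAG
    reply.set_content("I am out of the office.")
    return reply
```

Guard 1 stops a direct echo immediately; guard 2 is the belt-and-braces fallback the comment mentions, which catches the case where the other end strips headers.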
I worked for a senior manager. A project was in trouble, so the senior manager sent out an email to the managers clearly laying out the situation and the action plan. 30 seconds later he realised that it should have said "do not forward this email", so he rushed into my office and asked if I could go to IT support and kill it.
I rushed down two flights of stairs and into IT support.
"Too late," they said; they could see it had already been forwarded to over 100 non-managers.
I went back to the senior manager and said "sorry - too late". He said not to worry, that it was good news. He had received some emails from the grunts (sorry, the professionals) with words like "Thank you for a clear statement, it looks like someone is finally doing something about the problems". So overall he came out of it well.
To be honest nothing irks me more (as a grunt) than managers NOT acknowledging the brown stuff hit the air moving device and giving platitudes about how everything will be fine and just keep your nose to the grindstone. If the grunts are the ones expected to do the work, let them know the status of that work and how it fits in the bigger picture.
Which is a bit like the way that Virgin Media seems to work. Front-line L1 tech support staff responding to customer calls (and the people who update the "service status" web page) are not told the status of the system - i.e. that an area/region/country has a widespread problem. So they'll be fielding a rush of calls from customers complaining that they can't get their emails/do their shopping etc. And they'll run through the damned script with each and every one - up to the "It must be a problem in your house, we'll book in an engineer..."
Usually said customer will then discover that things magically come back on, or read about the problem in the news. Once, only once, a call handler said to me... "Funny, I'm getting a lot of calls like this tonight, could you hold on....." Went and checked with a manager, who checked... etc. And then came back to me and said, "Sorry, there's a problem at our end".
I used a home-brew mail system once at my university that let you unsend a message -- but only if it was still unread. As soon as the recipient opened a given message, it stopped being retractable.
(Not network mail; this was strictly intra-mainframe.)
I believe the implementation was fine -- no corrupt mailboxes, missing messages, or whatever.
As for the policy -- well, I'm curious to hear what people think of the idea in principle (leaving aside the practical issue that implementing it in a network context would be highly infeasible.)
That story triggered a memory of a similar thing happening to me just over 10 years back. The customer used Office and Exchange 2007. We were suddenly faced with a mail storm of our own: the log disk of one database just kept flooding with streams and streams of logfiles, gigs at a time. Those disks weren't all that big, so they filled up to capacity in a matter of hours. We were frantically trying to figure out what was happening, and even involved Microsoft to try and find the root cause (all the while manually deleting logfiles, which had been sanctioned as a one-time workaround).

After several hours of searching we finally got to the root cause. A user was transferring to a new department and thought it was a good idea to zip up all his work and email it to himself as a sort of archive. His mailbox was about 1.2 GB when he performed the action; the file was 800 MB. In itself this would not be a problem, but unfortunately we used Exchange caching, and that's where everything fell apart. In Outlook 2007 an OST could not exceed 2 GB; it simply stopped working after that. The user with the 800 MB attachment (they had no limit; the customer had enforced that) sent his mailbox to 2.8 GB, filling the OST up to the 2 GB mark, crapping out and restarting the process, with the massive log flood as a result.
And we've learned lots of things through lots of errors and surprises, but I think the one bit of email functionality that has to have caused the most trouble is the out of office auto-reply.
It is rather useful, generally speaking, but it can still cause havoc even now. Thankfully, email servers have since been taught not to auto-reply to auto-replies.
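The convention that taught servers that lesson is, as far as I know, the Auto-Submitted header from RFC 3834: automatic mail gets stamped, and a well-behaved responder stays silent when it sees the stamp. A minimal sketch (addresses and wording invented for illustration):

```python
# Sketch of the RFC 3834 convention that stops auto-replies answering
# each other: automatic mail carries an Auto-Submitted header, and a
# responder declines to reply to anything that has one.
from email.message import EmailMessage
from typing import Optional

def is_auto_generated(msg: EmailMessage) -> bool:
    # Per RFC 3834, any value other than "no" marks automatic mail
    # (e.g. "auto-replied", "auto-generated").
    return str(msg.get("Auto-Submitted", "no")).lower() != "no"

def make_auto_reply(msg: EmailMessage) -> Optional[EmailMessage]:
    if is_auto_generated(msg):
        return None  # never auto-reply to an auto-reply
    reply = EmailMessage()
    reply["To"] = str(msg.get("From", ""))
    reply["Subject"] = "Auto: " + str(msg.get("Subject", ""))
    reply["Auto-Submitted"] = "auto-replied"  # mark our own reply as automatic
    reply.set_content("I am away until Monday.")
    return reply
```

Two such responders facing each other exchange at most one message each, because each one's reply carries the stamp that silences the other.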
I was about to say the same. Being let go a few weeks or even months later might make that the probable cause, but a year later? I'm guessing that this email gaffe was one of a handful of similar mistakes. After so many of these, even the politest, most caring employer is likely to turn around and say "Look, you clearly mean well but perhaps this is not the career for you."
My thoughts exactly. While this may have been part of a sequence of problems that ultimately led to him being let go, I don't think it was the only cause. If it had been, he would have been let go a few days after the incident, if not immediately. After all, a lot of companies would not be happy with someone sending thousands of potentially confidential emails to an outside email address.
Don't get me wrong. I'm not saying in the above post that I think he's done anything malicious (he might have, but I wouldn't know and the article doesn't say), but he may have just been a little enthusiastic. I've met plenty of young technicians who are enthusiastic and sometimes go way over the top in an effort to help. This might be just that.
Agreed. Personal and corporate should never be intermingled for any reason.
However, Lucas was a junior sysadmin. That means he can't necessarily be expected to know that at the time he was hired. It's his employer's responsibility to make that policy known to employees. If you're going to hire junior staff, you must have a plan in place to educate/mentor them in their field and train them as to the employer's way of doing things. And even if you hire only senior staff, you still need to write down your policies somewhere (trusting in that case that they will know to read them). Something as simple as a restriction on the target domains allowed for mail forwarding should be enforceable technically as well; I've no idea whether Exchange supports it, but if not, maybe use something else that does: inability to implement corporate policy is a failure to satisfy requirements.
So yes, Lucas should not have done this. However, it's not clear from the story whether he should have known that.
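The technical control suggested above is simple to express: a forwarding rule is only accepted if its target lands in an approved domain list. A hypothetical sketch (the domain names are invented; a real deployment would hang this check off whatever the mail system exposes for rule validation):

```python
# Sketch of a forwarding-target allowlist: corporate policy says mail may
# only be auto-forwarded to approved domains, so any rule pointing
# elsewhere is rejected before it can take effect.
# (Domains below are placeholders for illustration.)
ALLOWED_FORWARD_DOMAINS = {"example.com", "partner.example.net"}

def forward_target_allowed(address: str) -> bool:
    # Reject anything that isn't a plain user@domain address.
    if address.count("@") != 1:
        return False
    domain = address.split("@", 1)[1].lower()
    return domain in ALLOWED_FORWARD_DOMAINS
```

With that in place, a junior admin's well-meant rule pointing at a personal webmail account simply fails to save, which is exactly the "enforceable technically" point: policy the software refuses to violate doesn't depend on training having stuck.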
While we're in agreement that he should have known, I'm not sure we agree on how he would have acquired that knowledge. Is it:
(A) His employer, recognising that it had hired someone with little experience (and the accompanying benefit it received in not having to pay him much), assigned someone to teach him the trade, including things like reading and understanding corporate policies on information security and change control; as part of this on-the-job training, he learned not to make these kinds of changes to production systems without approval, and until he had that training was not given the access required to do so.
(B) At birth, because along with knowing how to use a nipple, humans are born with a complete understanding of production change control processes.
I suggest he should have known because (A). I've found no evidence to support (B). We aren't told in the story, but if the employer didn't do (A), the employer bears most if not all the blame.
Yes, but Lucas didn't have forty years' experience. He may have only had two months'. And depending on his personality going in, how good the company was at disseminating information, and how good his manager was, he may not have considered it to be iffy at the time. You certainly get a feel for what is and isn't iffy after a while, but it takes experience, which it doesn't sound like he had.
And even if he was initially the sort to ask his manager about everything, there comes a point where a manager has to say "you need to stop asking me so much and walk for yourself; just be honest with me and I'll support you". When it comes to asking his (same-level) colleagues, some people get very irascible very quickly when asked questions, and that can scare people out of asking the question that would have prevented a situation like this.
When I hear stories such as these, of the over-willing digging themselves into holes, I do, in my cynical way, wonder to what extent a recruitment process has selected for the naive, over-eager sort. Because those are the types that HR/managers think are good for the company. Willing cannon fodder are more popular with them than wily old soldiers.
There is a definite minority of junior sysadmins / developers with that kind of trait. I spent about a year training my junior colleague out of that kind of behaviour. He was genuinely just trying to help... but he often caused more problems than he solved whenever he had a bright idea and hared off through the code-base making minor text / css changes without telling anyone what he was doing. At best, he often just wasted time on petty stuff when something more urgent needed doing.
This attitude wasn't helped by a middle manager who understood nothing of what she managed, who would just turn to him and tell him that something needed done. The first I'd hear about it was when something inevitably fell over.
Several quiet talks later with both manager and junior eventually sorted that issue. Happily, junior is now getting to the point where he actually does have a good idea of his capabilities and can be trusted to see work through to a proper conclusion. Or to seek help before he gets out of his depth.
A colleague once triggered a mail flood between his mailbox and mine.
When I or the other senior tech on my team went on vacation, we'd set up a regular OoO and, for a select number of addresses, a supplemental forwarding rule to the other senior, to make sure requests from higher-ups were properly followed up.
Well, I went on holiday and set the rules, but my colleague had to take a day off and did the usual. Unfortunately the forwarding rules weren't set to forward only mail from those contacts, but also mail on which they were CC'd... so when the first email from someone on the list was sent to his address (or mine, I really don't remember who first triggered the glitch), mails started being forwarded from one account to the other until it froze the Exchange server.
I think he got a bit of an earful the next day, but nothing of consequence (except learning not to do it again)
Sometime around 2005/6 we were banned from automatically forwarding emails to personal accounts (no idea why it was allowed in the first place). Someone had set up forwarding of everything to a Hotmail account; one day an email sent the Hotmail account over its limit, so a bounce came back including the offending email, which promptly got forwarded to Hotmail, growing a bit each time. I am sure you can work out which came off worse, a Lotus Notes server or Hotmail, especially as Notes was storing everything.
Well yes, it was designed in the era of dial-up, where sites had a local Domino server, so older installs would often have separate Notes networks for each server to a hub/central server, normally configured to hold a certain number of messages before transmitting them.
Not many people seemed to know that having them on the same Notes network would mean the Domino servers would send direct rather than routing through a "central" server. Obviously, when most companies replaced dial-up with dedicated links, it no longer made sense to route via a hub; and as bandwidth increased you might as well move Domino back centrally.
Many years ago (>25) one of our UK engineers was seconded to the US for a month. Before going, he was set up with a local email account in the US and diligently set an autoforward from his UK account to his new US one; that way, he could ensure any ongoing UK work wouldn't be held up due to his absence. At the end of his secondment, he set an autoforward on his US account to his UK one; that way, his return wouldn't hold up the US project he'd been helping with.
Yep - no smart filtering, and before he'd left the US, he received an email at one of the accounts, which autoforwarded; the receiving account then autoforwarded it back, which was autoforwarded... within minutes, the company's global email system ground to a halt!
At a place I worked at, there was one really annoying person, the Finance Director. Obviously she was in charge of IT, despite knowing nothing about IT, and she was one of those people who requested a read receipt for every single mail regardless of the content. So, simply delete one of these messages without "reading" it (isn't the Preview pane handy?!?) and, at the end of the night, empty your Deleted Items. Cue an e-mail to her saying that I hadn't read her mail, as per Exchange's design.
Next day, recover that particular item from the black hole but leave it in Deleted Items. When Deleted Items are emptied, another e-mail is sent to her saying it wasn't read. Next day, recover that message... and so on.
It took her weeks to question me about it, I "investigated" it, found an "issue" which I then "corrected" and the problem disappeared.
Petty? Absolutely, but great fun.
A colleague once managed to send an email, complaining about how things within the organisation were a shambles, as he put it, to the entire organisation rather than the couple of people it was intended for. We used Novell Groupwise at the time and he accidentally put a * in the recipient list, which automatically causes your email to be sent to every address in the global address list. Oops!
Well, no, it was making a change in production without telling anyone.
And how was this guy going to test this effectively? The error only got triggered because the destination bounced it too. What if a series of messages he tested with didn't meet the spam criteria at the other end until the 100th message?
At one place I was working an intern lost her coffee cup.
Solution: send an e-mail with a picture of the missing cup to everybody in the address book, by clicking on the first entry, then Ctrl+Shift+End, To, then Send.
Unfortunately she did not realise the address book was for the whole global corporation (about 120,000 email addresses), so the email was huge just from all the addresses; the picture of the cup was the icing on the cake.
Result: the mail system ground to a halt for hours. Not helped by her realising her mistake and trying to fix it by recalling the e-mail (three times, as it didn't seem to be working), effectively quadrupling the load on the servers.
I don't know if she got her cup back.
I believe that Alan Turing, now featuring on the UK's new £50 note, so frequently lost his coffee cup that at Bletchley Park he literally chained it to the radiator in their relaxation area. (See Andrew Hodges' excellent biography, 'Turing: The Enigma of Intelligence'.)
We had a sales guy who had his email on Pipex.
One day his office based subordinate was going on holiday, and set their email to forward to him.
All went well till a virus-infected email our system didn't detect came in. It was forwarded to Pipex, who rejected it due to the content.
The NDR was duly forwarded, and rejected.
Cue 1 email loop around and around, slowly getting larger and larger.
It was about 3meg when we found it, on the n-hundredth loop.
It could have been worse, but the slowness of our single ISDN channel kept the loop from going any faster!
One of the most important lessons that I learned early in my (software) Engineering career is:
"Just because you can do some thing, doesn't mean you should do that thing".
This applies to so many things in life.
It is so applicable to technology in the world. Unfortunately, I've yet to encounter a graduate of a university where they taught this.
KISS, when possible.
Make it only as complex as needed, and no more.
Needless to say, I'm not a fan of:
Now that brings back the memories of the advertisement for a piece of software called Sideways, which'll allow you to output the contents of a spreadsheet (SuperCalc5/Lotus1-2-3 etc etc) sideways on a printer.
They had a fancy Rube Goldberg contraption to print sideways in their advert.
An archaeological dig of old PC magazines from the 1980s will probably unearth this classic.
Brings back memories of having almost stand-up shouting arguments with other organisations over the configuration of their systems killing our systems. "Your system is at fault, change it to stop borking our system". "No, your system is at fault, change it to cope with our system", "The RFCs state this:", "They are just Comments, that's what the 'C' stands for, they're not requirements"
"They are just Comments, that's what the 'C' stands for, they're not requirements"
Unfortunately this was the attitude of a number of mail software authors, invariably leading to their systems being trivially borkable or buriable (Novell mail would go down in a screaming spiral of doom if mail from Postmaster to postmaster (no @) was fired at it, as one example).
It's not a surprise that the DNS relaying blacklists mostly contained sendmail 8.6 systems, simply because of its ubiquity and the inability to find someone responsible for the ancient systems it was running on - but the vast majority of the rest of the entries were brokenware of some sort or another that should never have been left exposed to the world.
I remember when e-mail applications first got an automated Out-Of-Office autoreply facility. It did not take long for it to be amended so that only one OOO reply would be sent to any e-mail address, after two people going on leave at the same time each set an out-of-office message, and one, just before leaving at the end of the day, sent the other an e-mail ...
Academics generally were very early adopters of email; if you're collaborating with people around the world, the benefits are pretty obvious!
As a result, I think I've seen every single one of the error cases that people have reported; OoO loops, emails sent to everyone in the organization; you name it, it's happened to every large academic organization! I hasten to say that the worst I've done is send a message to the "wrong" email group resulting in something reaching people it shouldn't.
Back in the late 90s I was working in a small publishing department.
Every so often our mail server would go down because someone, normally in advertising sales, had emailed a client's incoming artwork to the whole production team rather than popping it on the server and sending a link out. Multiple copies of what was, at the time, large artwork quickly jammed up the servers. This was before email servers referenced forwarded files rather than duplicating them.
Me, too! Me, three! And thus the Exchange server for the Exchange team was brought to its knees, and was face down for three DAYS while the queue cleared.
Someone was testing distribution lists and made up some lists with lots of names on them. Then someone decided to mail a whole list, asking, "What is this list for? Why am I on it?" And then things went down from there, with all the other idiots on the list also replying with something stupid.
I've seen three mail storms like that at Microsoft. And for some strange reason, nobody got fired.
I remember two situations at work when this happened.
Once we were trialling a new helpdesk system that would send an acknowledgement upon a new ticket being generated. A server error notification raised a ticket, so the acknowledgement went to the originating server, to which the server dutifully replied that the email address didn't exist (Lotus Notes was fun for that), and the helpdesk said "Oh, new ticket" and sent a reply back.......
The other was an email accidentally sent to all users who had open helpdesk tickets, with the recipients in the TO field, not the BCC field, so of course people started hitting Reply to All, you can guess what happened then.
Reminds me of the time, in the early 2000s, that a previous employer sent an all-staff message from one of the bosses or the CEO, with a large attachment. I think it was a TIFF file or something that should have been a JPG or a more optimal format. It was a multi-site company with hundreds of staff, and the WAN link at the head office, where the mail server lived, was only 4 megabit or so. It took a long time for everyone to download that message.
I wouldn't trust an admin who hasn't done (or won't admit to having done) something catastrophically silly on a prod system. That sinking feeling, and more importantly how you deal with the balls-up, can't be taught.
Doing it repeatedly, on the other hand, just shows the lesson hasn't been learnt, and desktop imaging and toner replacement are your future.
I'll admit to the classic "SQL update statement without a WHERE clause".
One afternoon of hurriedly digging out a backup and rebuilding the corrupted records from other systems later...
I think I mentioned in another thread that I very quickly learned the habit of typing UPDATE x WHERE y first, then going back to insert the SET clause.
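The difference the habit makes can be shown with an in-memory SQLite database (table and values invented for the demo): the statement without a WHERE clause rewrites every row, while the where-first version can only ever touch the row you named.

```python
# Demonstration of the classic "UPDATE without a WHERE clause" mistake,
# using an in-memory SQLite database with made-up data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts (id, balance) VALUES (?, ?)",
                 [(1, 100), (2, 200), (3, 300)])

# The mistake: no WHERE clause, so all three rows are rewritten.
oops = conn.execute("UPDATE accounts SET balance = 0")

# The habit described above: the WHERE clause is written before the SET
# values are filled in, so a premature Enter can only hit the intended row.
fix = conn.execute("UPDATE accounts SET balance = 150 WHERE id = ?", (2,))
```

Checking `cursor.rowcount` after each statement makes the blast radius visible: three rows for the first UPDATE, one for the second, which is exactly why some shops wrap ad-hoc updates in a transaction and inspect the count before committing.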
The only email cock-up I made was actually at home. In the early days of dial-up and Windows for Workgroups, I'd managed to get hold of five cheap PCs. So me, my wife, and all our three daughters had our own PC each on a thin ethernet network. I realized that I could set up a mail server, so that we could all send emails to each other without going out to the outside world - which everyone thought was great. It was set up to dial out when a message for the outside world was waiting and then dropped the connection when there were no more to be sent.
Then the heady days of ADSL arrived, with 256K downloads, and always-on - incredible, it seemed at the time. So I was an early adopter for my ISP and this was even more exciting. Until the second day, when my ADSL was switched off by the ISP end and I got a frantic phone call from them, saying "You're running a mail server and you've misconfigured it as an open relay..."
When I looked at the mail server, there were several thousand spam messages advertising Viagra and various other dubious offerings...
Oops. Taught me a salutary lesson.
Not just Exchange. Back in the day, 1998/99, I was working for a marine research lab with around a couple of hundred users. We were running NetWare 4.11 and Griefwise (GroupWise) 4.x I think, but it might have been 5.x. Anyhoo, a boffin had left to work at a uni and so set a forward to his new uni email account and, for reasons unknown, a forward back to his lab email account, with hilarious consequences. The resulting mail loop took down the box running Griefwise, which also happened to be the box running NDS; utter sh1t show for several days whilst the queue was purged and we could get back up and running again.
Biting the hand that feeds IT © 1998–2021