Friday ...
... I mean really? You did this on a Friday?
RFC1925 should have an extension to outlaw all PROD changes on a Friday.
Lol.
By misunderstanding how a single word was being used, I caused a boo-boo that counts as "really stepped in it this time". After a lot of research and testing, I thought that months of "the spam filter is crap, make all the spam go away" warring with "the spam filter is too restrictive because $client can't send me his …
Do it on a weekday and they'll have your hide if anything goes wrong. Do it on a weekend and there's not enough traffic to make it go 'ping'. Do it on a Friday, right after EOB and you have a few good hours of decent incoming traffic flow, a handful of folks who work late and are used to minor changes and an entire weekend to fix things if you bork them really badly.
I try to discourage people working late, or on weekends. I have few enough maintenance windows as it is. If you work during off hours, well, I have no sympathy. There isn't a 24/7 global team of nerds to implement changes and patch things. So we have to sleep some time. If I have to be up for the 9-5 grind, then I'm not waiting until 3am to patch.
Besides, some folks start getting in at 4am...
Lucky for some; all of the companies I have worked for over the last 15 years were 7 days a week. The current one is 24x7 but without the budget for fully redundant online systems, which makes patching, updating and reconfiguring more than a little bit difficult.
We have a comms room that really needs taking down, re-wiring and racks re-stacking, but there is never going to be a good time to do it as it will take about 8 hours.
"I try to discourage people working late, or on weekends"
You make it sound as if people have a choice... The times I have been at work on a Friday night beating a deadline were not my choice, and I think this counts for most. Yes, we all want to get home at a reasonable time and get some sleep, so there's no good solution here, but some respect please.
"RFC1925 should have an extension to outlaw all PROD changes on a Friday."
As a programmer I have a general rule not to make anything 'live' after mid-afternoon and by preference do it at start of day. I sometimes work with colleagues in the USA and it's not nice to go home and inadvertently leave them with a pile of poo to resolve. It pisses them off and embarrasses me.
It's much nicer to push things at start of day so you know it's been exercised before you quit for the day. I hate that feeling of impending doom when something has gone live and no-one else has tried to use it yet.
For a full "this is live and will stay that way", I agree. For a pre-permanent, data-gathering exercise that needs to run on live...this I prefer on the Friday EOB. Remember, the goal here was not a permanent run, just a very brief test on live with just enough traffic to find bugs.
Found one.
Unless he is using his Gmail account to sign up for anything and everything, I would like to know how he gets them, as I have had Gmail for slightly longer than you have and never see those emails.
One way I avoid that shit is to have a burner account like Hotmail. That is all I use on sites which want my email information.
In my organization making even a minor change on Friday would get you canned outright. Thursday was highly despised, Wednesday not so hot, Monday a poor choice because that is new hire day and the affected customer might not be too sure of what caused the changes.
Tuesday right after lunch . . . most preferential.
Your biggest mistake was flipping the switch and walking out the door leaving it unattended all weekend. For something so critical, that was a huge fail.
"A bit of testing" never works out the same once the users start clattering it. For something as mission-critical as email, it's something to do and stick around to watch for a good while.
Of course I'm preaching to the choir, but your big fail was in the testing phase: you simply fed it some 'live' data when you really should have taken some time to look at the different types of data the system would encounter (ie. emails from multiple sources, destinations, mailing lists, attachments, hyperlinks, the works!), and then put together some suitable tests that simulated as much of those varied cases as possible. And that's before you start considering any form of peak load testing.
There is an attachment to the idea of Outlook + Exchange + Public Folders that no force in the universe is ever going to dislodge.
Microsoft is working very hard on this.
1) They've been steadily deprecating Public Folders with every release of Exchange and Outlook (including refusing to fix decade-old bugs) in favour of... SharePoint.
2) It's cloud, cloud, cloud, cloud all the way. Or, more accurately, subscription services under Microsoft's control.
Luckily, Microsoft hasn't been entirely successful in killing off Public Folders yet.
We have plenty of clients which are very attached to public folders, and many of them have been successfully moved to 365. You can now convert your on-premises public folders to public folders in 365. There are a couple of caveats around sizing, which can mean they get split across multiple shared mailboxes. If the client is used to Exchange, why move them to largely unfamiliar Google Apps? Just move them to 365 and have unlimited mailbox and public folders. The users are largely unaware of the change if the project is done properly.
I have yet to find a client that can run Exchange on-premises cheaper than the 5.25 monthly fee for unlimited mailboxes and full retained backups (legal hold) which the 365 Exchange 2 plan costs.
I didn't stop reading at that point, but my opinion of the author shot through the floor. Hope he has good insurance cover.
Presuming he is in the UK/EU, you could land in hot water if you recommend that. Depending on your client's business, you could be contravening EU data protection laws, as Google Apps, Office 365 etc. are owned by US companies, so the data, even if it's on an EU-hosted server, is covered by the Patriot Act.
Bit of leaked data, and be sure someone will point the finger and say "He said to do it".
I think there are some negotiations going on to circumvent the issue, if anyone in the know could provide more info…..?
Must confess that Mr. Pott took a bit of a credibility hit there with me as well. Gmail may be easy; but putting business comms through a foreign advertising company doesn't strike me as particularly bright; even before you get to any legal and data-belonging-to-clients-of-the-company issues.
Some of my clients do do it; and that's down to them...but I would never recommend it as a course of action. Quite the reverse, in fact.
Google is also a foreign advertising company from Canada's viewpoint. My personal view is that there is no good reason for using Google for email in any circumstances; and that goes triple for business use. I'll freely admit that I'm a bit religious on the subject; but you have no idea what they're doing with the data now; let alone what will happen in the future. The facts of the case are that Google is a business and they are there to make money...not to be nice to people.
You may be happy exchanging a loss of privacy for ease and convenience; but I am not. Everybody's mileage varies. I wouldn't do it for my own email -trivial or not- and I absolutely would not recommend it for any class of business user.
Re: Data from Europe held in the US, for all apart from very specific data sets, there's no issue holding personal data on US servers if the company has signed up to Safe Harbor.
See point 6 on http://ico.org.uk/for_organisations/data_protection/the_guide/principle_8
Because of an expressed preference and opinion, one that doesn't even relate to the rest of the article? Must be too much effort to use that crowbar to open your mind in the morning.
No, not at all. It was "Ahh, fuck it, it would be easier if I could just dump it on somebody else."
If anything goes wrong, it's not my fault.
I'm a bit confused as you seem to imply that the client should really be looking to host their email system in the "cloud" yet seem to insist on trying to homebrew your own anti-spam system.
There are any number of really decent hosted AS systems out there that cost no more than a few dollars per mailbox per year. We currently run about 2000 mailboxes (across a number of clients) on a couple of different providers and I would no more think of trying to homebrew AS than I would AV to be honest, life's too short :)
Unless your client insists on AS in-house? In which case you need to get your sales person's hat on!
Hosted AS is ultimately where I want to go. The history is as follows:
1) Until recently, hosted AS was along the lines of "a few dollars per user per month" not "a few dollars per user per year." Which is more than the client would pay.
2) Until recently, relatively simple in-house open source AS systems worked just fine.
3) Having used the simple open source AS systems for so long transitioning away from them takes time. The existing system, for example, injects [SPAM ASSASSIN DETECTED SPAM] into the subject, rather than adding X-SPAM-STATUS
My goal is to get them using an in-house AS system that uses X-SPAM-STATUS for the rest of the year and then have them transition to a hosted AS system at the end of the year. This will be possible because both the system I'm trying to deploy for the in-house option and virtually all hosted AS systems use X-SPAM-STATUS.
Now, getting them to accept hosted AS will require getting them to accept paying a subscription for an AS service when they're used to using free in-house stuff AND getting them to overcome their innate paranoia regarding having their e-mail hit servers in the States. I honestly don't know if I can "sell" that...and I'm pretty sure I don't care enough to try.
What I can do is get them migrated to a solution that uses X-SPAM-STATUS instead of subject injection, which will make the transition to a proper hosted AS a heck of a lot easier in that mythical future when they decide to just pay the tithe like everyone else.
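For what it's worth, filtering on that header at the delivery end is a one-liner in Sieve. This is just an illustrative sketch, not anyone's actual config: the folder name is an assumption, and SpamAssassin writes the header as X-Spam-Status with a value beginning "Yes" or "No".

```
# Hypothetical Sieve rule: file anything SpamAssassin flagged as spam
# (X-Spam-Status starts with "Yes, score=...") into a Junk folder,
# instead of matching on a rewritten subject line.
require ["fileinto"];

if header :matches "X-Spam-Status" "Yes*" {
    fileinto "Junk";
    stop;
}
```

Matching with `:matches "Yes*"` anchors at the start of the value; a plain `:contains "Yes"` would also fire on ham whose test list includes names like BAYES_00.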
That's the goal, anyways...
How much of your time in hours per annum is spent on 'homebrewed AS' vs how much the outsourced option costs - that should make your argument pretty easy, as in "why didn't we do this yesterday?"
Also, make sure not to underestimate how much time you actually spend on this; generally I find that when you sit down and look at it, what you might write off as a couple of hours quickly adds up over the year (with unexpected work like outages included).
"The existing system, for example, injects [SPAM ASSASSIN DETECTED SPAM] into the subject, rather than adding X-SPAM-STATUS"
That's trivially implemented by using clear_headers to remove the X-Spam-Status header and rewrite_header to inject a tag into the subject.
Whether or not that is a good idea[1] is up to you...
Vic.
[1] It isn't.
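For reference, the two directives Vic mentions live in SpamAssassin's local.cf and would look something like the following sketch (the tag text is the one quoted in the thread; treat the rest as illustrative, not a recommended config):

```
# local.cf sketch: reproduce the old behaviour by stripping SpamAssassin's
# default X-Spam-* headers (including X-Spam-Status) and tagging the
# subject of detected spam instead. As Vic notes, keeping X-Spam-Status
# is the better idea.
clear_headers
rewrite_header Subject [SPAM ASSASSIN DETECTED SPAM]
```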
They have the same issue as an outsourced mail server - all your company emails are routed through an external system you have to trust more or less blindly. An AS appliance on-premises could be better if you don't have the resources and expertise to set up your own AS pipeline.
At least what is good with SMTP is it is an end-to-end (from a server perspective) protocol, it doesn't need routers or relays, unless you want them.
A homebrew solution is not that difficult to set up, and if you really hate Exchange you can simply set up another MTA of your choice in front of it, configure spam filters there, and then forward to the Exchange system. You could just lose the Exchange-to-Exchange capabilities.
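As a sketch of that front-end arrangement (domain and hostnames are placeholders, and this is only the relay-relevant excerpt, not a complete Postfix config), the filtering MTA needs little more than:

```
# /etc/postfix/main.cf (excerpt) -- accept mail for the domain on the
# front-end box, run the spam filters here, then relay inward.
relay_domains  = example.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport -- hand filtered mail to the internal Exchange
# server (run "postmap /etc/postfix/transport" after editing):
#   example.com    smtp:[exchange.internal.example.com]
```

The square brackets in the transport entry suppress MX lookups so mail goes straight to the named internal host.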
Did you mean to post this on TheDailyWTF.com?
Two major WTFs and one minor:
1. Trying to reinvent the spam filtering wheel
2. Putting something live on a Friday - let alone last thing in the day (Live = in the morning, near the start of the week)
3. (minor) Trying to make MS do something complex. Next time try Exim ;-)
Gotta be honest here - I find Fridays are very suitable for some changes. It all depends on the client, the work to be done and the circumstances but in some instances, it is the best option. Failing to even consider an option because of some personal rule is surely the bigger crime?
Indeed, "never make changes on a Friday" sounds very much like those who admonish all and sundry that they should stick to white paper solutions (to the letter) or not bother. Without knowing the client, their usage patterns, 'risk appetite' or indeed a whole host of variables, such proclamations are on the same level as "migrate to the Cloud" - i.e. blanket assertions of suitability that, when blindly followed, can lead companies and their IT departments down some very steep roads.
IT is so varied and we are all at risk of falling prey to our own biases. That your rule has worked for you so far in the situations you are used to is great, but it is really not an indication of whether it is suitable for another situation - even one that looks, on the surface, to be similar.
The one thing I would say, however, is that changes to live systems generally require monitoring real-world use (traffic/transactions/disk access/etc...) for a period after the changeover. If that is not possible on a Friday night, or you are not able/willing to stay up late to do so then you need to find another way. If you are willing/able/paid enough to do so, then there is no inherent reason why a Friday change-over should be off the table.
Pairing with that, I would implore people to not underestimate the variability of real-world inputs and to make sure they have a solid understanding of the true scope, volume and variability of what the system is asked to deal with on a daily basis. But then, that's important whether you do your work on a Friday afternoon or a Tuesday morning.
"Indeed, 'never make changes on a Friday' sounds very much like those who admonish all and sundry that they should stick to white paper solutions (to the letter) or not bother."
I first picked up the "No Production changes on Fridays" mantra from fellow techies - it's us who have to spend our weekend fixing stuff if it goes tits up!
@AC
Again, it depends on the situation but, for some scenarios, I would much rather have a weekend to sit, relatively un-molested, and fix a SNAFU than deal with clients breathing down my neck.
Again, it's all very much dependent on you, your clients and exactly what you are trying to achieve and I can only speak for myself but having to spend your Saturday fixing a problem before anyone notices can be preferable to spending a Tuesday trying to make up new and interesting variants on "it'll be done when it's done!"
d.
I don't get paid to work weekends, so I do everything I can to get work done during the week on the boss's time.
Perhaps I shouldn't have said exactly "Never put something live last thing on a Friday", though you shouldn't have taken it quite so literally. It is a more concise way to say "Never put something live that could potentially go wrong at the last minute where there would be a large gap before the next working period when you can fix it", but I think people might have fallen asleep had I put it that way ;-)
You know, I find this whole "never go live on a Friday" thing idiotic. I went live for a brief period of testing on a Friday. Someone found the error I missed on a Sunday. It was fixed before Monday. Staff came to work after a weekend of low-volume traffic where they had to check through the junk-email folder for (on average) about 15 e-mails to see if they were false positives. Not the end of the fucking world.
If I had run that thing at 8am Monday morning, it would have taken about 4 hours for someone to notice that something was up. In that time an average of about 100 e-mails would have hit each person's box that they needed to check through.
And I'd rather work a weekend than have 50 people screeching at me demanding to know when the fix will be in, "How could I possibly have let this happen" and telling me how shit I am because I can't design a network that's more reliable than Google while being more accurate than Microsoft and more capable than Amazon, all for free.
Buncha great choices there.
1) My spam servers worked just fine for years.
2) Putting things live during the day risks outages during working hours which has been emphatically affirmed to be an absolute no-no. There isn't much choice.
3) Exim? Really? I'm a bit of a QMail fan myself, though I have to admit that Postfix has come a long way. Honestly though, I've been working more and more with Zimbra and liking it.
I loathe exchange with the burning passion of 10,000 suns.
I managed to nuke our department's mail in a previous place of work...
I'd set up a script to delete unused mailboxes as it was a uni department and we had a fair turnover of user IDs. Any mailbox associated with a user ID the script couldn't see in NIS (this ages it somewhat...) would get deleted. That worked well, until we did some network changes which multi-homed the hosts into new subnets. Without changing NIS securemaps...
Cue the mail server doing the NIS query on the new (untrusted) network and getting back no users. Cue removal of all mailboxes overnight. Cue me scrabbling to the backup tapes in the morning and checking the mail logs to let users know who might have sent them mails which had been deleted...
Luckily, this was over the Christmas period so it was pretty quiet. The script was changed soon after to mail out a list of rm commands for review rather than running them itself.
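That safer dry-run pattern is worth spelling out. The sketch below is hypothetical, not the original script: the NIS lookup (ypcat passwd) is stood in for by a plain argument, and the demo uses throwaway directories rather than a real mail spool.

```shell
#!/bin/sh
# Dry-run mailbox reaper: emit rm commands for a human to review
# instead of executing them.

propose_reaps() {
    maildir=$1    # directory holding one mailbox directory per user
    known=$2      # space-separated users the directory service still knows
    for box in "$maildir"/*; do
        user=$(basename "$box")
        case " $known " in
            *" $user "*) ;;               # user still exists: keep mailbox
            *) echo "rm -rf $box" ;;      # unknown user: propose, don't run
        esac
    done
}

# Demo with a throwaway spool: only bob should be proposed for deletion.
demo=$(mktemp -d)
mkdir "$demo/alice" "$demo/bob"
propose_reaps "$demo" "alice"
```

Had the original emitted its commands this way, an empty NIS reply would have produced a suspiciously long mail instead of an empty mail spool.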
> My personal preference would be to punt the entire kit and caboodle into Google Apps and be done with it
While I think that Google's provision of "everything" (provided you acknowledge that they own everything, including the content, it seems) is a good thing in some circumstances, it's always sad to read a comment like the above without the necessary "For this customer's use case and requirements it would be...". What's worse is that you then go on to explain why you don't use Google Apps... which means you have considered it and Google Apps is not your preference... It's confusing.
A throwaway comment like the above leads PFYs to say "hey, Google can be used irrespective of requirements" and push their clients down the GA route without considering the implications for the company or their work-flow.
Don't get me wrong, I like Google Apps, but I'm conscious that it's one solution of many and it's the requirements that dictate the solution, and my preference is always the solution that meets those requirements (with some compromise on cost, time of delivery, etc.), though I'm always willing to tell a company that their requirements are wrong and therefore so will the solution be ;o)
All in all you erred (right intention, wrong outcome) and some people were inconvenienced in a rather minor way - this solution, though rather complex, seems to tick the boxes in this case, and you learned a lesson about how ambiguity (or rather assumed meaning) can easily cause the client issues. All in all I'd say this wasn't a "fail", just a hiccup. It's still good work.
Oh, I just really, really hate Exchange. E-mail in general, but Exchange in particular. Loathe it with the burning passion of 10,000 suns. Most of my clients use Google Apps, Zimbra or a hosted Exchange solution (that I don't have to manage, hee hee!)
If there's e-mail to manage I just want it to be a nice IMAP server. Postfix + Dovecot on virtualmin works like a hot damn. Or Qmail. For the love of $deity, why can't I just use Qmail? But no; exchange! Exchange, destroyer of souls. Exchange, the eraser of sanity. Exchange the requirer of resources 80x that of any other MTA.
And the cloudy alternatives? Well, there are Linux-based hosted IMAP services...but I could run those in house, if allowed, with no real problems. There's Google Apps, which Just Works and works better than any hosted e-mail solution I've ever used. And then there's Office 365, which is the only solution I've used that makes me piss away more hours solving pointless problems (or waiting for Microsoft to do so) than Exchange itself.
Maybe I wouldn't hate Office 365 so much if it weren't for the 48-hour lag on support calls, followed by 32 hours to resolve issues, but it is what it is. And when it's a "client down" scenario, 3+ days to get them back online isn't okay.
So yeah, Google Apps, when possible. Because it just works. If you read these pages, you know I'm not a big public cloud fan...but I trust Google to keep the e-mail working. Because they have a hell of a track record of doing so.
The solution, to my mind, is "have a critical service be bulletproof." I cannot offer that running on 10-year-old hardware using overly complicated MTAs with no funding for proper spam and antivirus scanning software. I am not convinced that Office 365 can offer it either. The only things I trust are Qmail, Zimbra and Postfix (which the client is allergic to) and Google Apps (which at least has something sort of like public folders, though you have to use a web UI to access them.)
Hence the desire to convince them that's the way to go.
When someone says "do this" and you aren't sure you can, the bigger mistake, I think, is spending your life just saying "yes". I've started to say "no", and this is a source of a lot of tension and conflict. "No, I can't do that" or "I don't think that will work." A decade ago I would fucking make it work...but a decade ago I only needed 2 hours a night of sleep...and I was only responsible for about 12 applications.
Now I am responsible for hundreds of applications, and I'm getting old. I need 8 hours of sleep or I am worthless the next morning. That young punk who could solve any technological problem using spit and baling wire and sheer force of will is dead and buried. I used to know all there was to know within my sphere...but IT now encompasses a hell of a lot more than it did then. I could spend my entire day just trying to keep track of which companies exist in our industry, let alone what they do and how to implement their technologies.
So the scope of the project is beyond just software needs or desires for one vendor or feature. Who is going to look after this stuff? Especially once I'm no longer there to keep it ticking along? How will it all interact with everything else, and should it even interact with anything else?
The more I ask these questions, the more I want to pull core services off the local network. Some things need to be in house. But e-mail doesn't. There's already too much there for one person to handle; I'd prefer to pull everything that doesn't need to be on-prem off, just so that it's feasible for one person with next-to-no budget to keep that place going for another decade.
Even if that means feeding the advertising behemoth of Mountain View.
That sounds awfully long, are you on a free support package or a managed one? When I logged a sev-1 many years back, I got a call within 10 minutes from our TAM checking it really was sev-1 which would mean dedicated follow-the-sun resolution as long as we on the client side also stuck at it around the clock. 3 days doesn't sound even close to that.
"I get 48 hours for responses to queries calling the MS partner support network. Then up to 32 hours for them to fix it. I get similar responses for average customers with E1 and E3 licenses. Multiple events now, same timeframes for each."
Might be worth looking at paid incident support - if it's a genuine bug you're re-credited, if it's a configuration/training issue then it's cheap compared with 3.5 days lost productivity.
For issues where it is creating an outage, I do. Although even paid incident support offered - for the best instance - 18 hour resolution. It's ultimately what has ended up driving most of my clients to Google Apps.
Gmail is nowhere near as feature rich or awesome as Office 365...but it fucking works, and most SMBs simply don't use 99.9% of the features in Exchange anyways.
Exchange is not a server application you can administer casually, any more than Oracle is. You need to understand how it works and how to configure it properly. It's not a simple IMAP/POP/SMTP server; it can do a lot in a single application, but it can easily backfire if not handled properly.
It's huge and expensive, but I have yet to find an MTA that can match Exchange's capabilities, especially when used for "internal" mail exchanges where it can use its full features.
Sure, if you just need IMAP/SMTP capabilities, Exchange is overkill. If you need an all-round solution for an organization's communications needs, it's difficult to find a replacement for Exchange.
I'd agree with you, for organizations willing to invest in the full stack. Exchange needs more than just exchange to get the benefits you speak of...and that stack needs a dedicated full time admin. Not an admin who is also doing storage, networking, applications, desktop support, websites, Linux, etc.
It was one thing to be the generalist who lumped in "and Exchange" back in the Exchange 2000 or 2003 days. It's another thing entirely to try to keep up with e-mail today. Even for "basic" MTAs, there is so much to configure, and so many "conventions" on configuration you have to abide by to stay off greylists, that it's crazy.
I agree exchange is amazing. I rather like it for many things...but only in cases where you're willing to pay the tithe. That means proper hosted AS. It also means keeping up to date on clients and all ancillary applications that tie into it.
As a unified communications stack, Exchange/Lync/Sharepoint/etc can be very powerful. But they aren't wrapper-ware and they aren't particularly good past their "best before" dates.
Where exchange truly shines is in things like retention rules, archiving, and all related stuff. If you need to do things like legal holds, in-depth content scanning, Exchange is pretty goddamned hard to beat.
The problem is that most companies absolutely don't need that stuff. They never use it, but they're sold on the idea that they "need" either the top-end collaboration stuff or the in-depth retention/legal policy framework, despite never actually wanting to engage any of it.
Worse, you sometimes get a CIO who thinks it's all really, really cool and wants everyone to use it, but simply can't get buy-in from the staff. Usually they'll try everything, including outright threats and bullying, but the staff have non-technological ways of communicating and getting things done that are simply faster and far more efficient for them.
The biggest thing I see with my SMBs is people wanting to use the full Microsoft stack to be "more efficient" at communications because one or two people (who typically telework for some or all of their day) feel "out of the loop." They try to impose a technological solution on a human problem and it fails every single time. The problem isn't that people don't use the relevant technology; it's typically that they're an asshole, or that they simply choose not to give a fuck about $issue until there's a problem.
Exchange isn't - and can't be - a replacement for human beings taking responsibility for their actions, taking the time out to think about the various projects that need to be done, or actually taking the time to answer the various and sundry e-mails and communications that need answers. Making communications "more efficient" doesn't force people to actually acknowledge one another, keep each other in the loop or convince the powers that be to make a fucking decision about something.
It absolutely doesn't force overworked people to sort their crap and "properly file" digital data. If you have problems with people using a single public share as a catch-all wastebin where they store everything "because everyone has access and it's more convenient" then public folders and/or sharepoint are just going to look the exact same. The issue there is the people, their habits and their workload, not the technological tools available to them.
When and where Exchange can make a difference, I absolutely champion its use. Exchange is one piece in the best groupware and productivity stack on the planet. Period.
But I do not champion its use in most SMBs. I think that's ridiculous overkill. Hell, even Office 365, which is designed to be simple to administer (compared to Exchange) and offers only a subset of features, is something where 98% of all SMBs I've worked with that use it simply don't change anything past the defaults.
So, while I think Exchange is grand, I can't and don't recommend it for SMBs, unless the SMB has a definable need for it and they're willing to pay for it. Regular updates, proper amounts of sysadmin time, proper hosted AS and enough server licenses and hardware to make it all go.
I will never do another exchange install that doesn't have Exchange Enterprise Cal Suite for each user and hosted AS. There will also be a minimum of three server licenses involved: one dedicated hub transport server and at least two storage servers in a cluster. They will also be backed up using Data Protection Manager and monitored using System Center.
The floor cost for this is simply higher than most SMBs are willing to pay, to say nothing of the ongoing costs of keeping it ticking along.
Here's a great example: try running Update Rollup 3 if you'd disabled IPv6. Whole thing goes pear-shaped. Worked fine without IPv6 until then, then *bam*, implosion upon update. There are various reasons why IPv6 had to be disabled in one of the environments. Update happens along, murders exchange. Figuring out what went wrong, then applying the fix takes a proper sysadmin.
Ideally, you never encounter the error because everything exists in a test environment, all patches are vetted, etc. How often do you think that happens in an SMB where you don't have things like "dedicated Microsoft communications stack admins" or even "dedicated Microsoft admins?"
And so we get to the heart of it: Exchange is an example of a service that should never be run by an in-house SMB sysadmin. It needs to be outsourced. If you are going to run Exchange in-house then the sysadmins should have access to an MSP with a hell of a lot more experience, time and resources to do proper labbing of patches for that SMB's config and so forth. It is an application in a stack for which specialists should be used.
...or where it makes damned good sense to simply pack the whole thing up and go "cloud".
If Microsoft had "Office 365" for service providers and/or could make their own offering reliable enough that it isn't constantly experiencing outages, I'd say "use O365 service provider" and be done with it. MS refuses to release O365 to SPs and can't keep its own version working.
That leaves me with Gmail as the most stable offering for SMBs, followed by the more expensive hosted Exchange (assuming you can meet the floor cost), or simply hosted e-mail using open source MTAs without all the groupware faffery.
But the issue, 99.9% of the time, isn't that "groupware will magically make things better." It is that there are bigger business and communication issues that need to be dealt with that no software can make better.
Anywho. Long ramble...
"While OPE is better than its predecessor FOPE . . ."
Except in one glaring, crucial and, for me, insurmountable particular - OPE is not available under SPLA. So, if you run a managed service utilising FOPE (and if you're an MS shop, it made good sense), then you're out of luck.
Yes, yes, that move "better align[s] product availability with program goals" but it sure as hell doesn't align them with my goals.
Good news though, you can resell it through the Office 365 'Advisor Program'. Hint, hint . . .
I consider myself an educated fellow with a perfectly serviceable vocabulary - certainly sufficient to handle the vast majority of situations I find myself in - but so far I am at a loss to accurately and succinctly express my current feelings towards Microsoft without straying into the colourful and endearingly impolite vernacular my country has a reputation for.
On a side-note, this post brought to you by the ellipsis. Apparently . . .
The Office 365 advisor program and I are having a disagreement. Specifically, I've been fighting with MS for the past five days to even make my bloody partner page work. MPN and O365 both hate me. I hate them right back in turn.
Office 365 is something I'll revisit when they A) beef up the reporting to levels that aren't complete ass. [Insert 8-page reporting rant here]. and B) Make the fucking thing work. When Microsoft can achieve Google Apps levels of uptime, we'll talk again.
As for SPLA; fuck SPLA. I refuse to host Exchange in my cloud. The hosted e-mail I offer my clients is Qmail, Postfix or Zimbra, front-ended by Barracuda and/or Netgear UTM. OPE can be got by the customer for their own site...but it's more expensive than competing solutions and not as good.
Oh, I lied. The MPN support people only solved part of the problem within 32 hours of picking up the ticket. They fixed the part that was preventing me from generating quotes for new seats. They didn't fix the licensing issue with my MAPS. They *just* e-mailed me about that.
This makes it 48 hours to pick up the ticket, and we're 72 hours past that point without the ticket fully resolved. And the ticket in this case isn't some niggly, complex technical problem; it's a billing/administrative issue that stemmed from a years-back uncaught authentication system screwup on their side.
I.e., the damned thing autogenerated me an Office 365 account without informing me, then assigned my MPN account to it. I was then able to create another Office 365 account that was somehow also attached to the same MPN account, but which couldn't get at the partner section, yet which would accept my MAPS keys.
They fixed the bizarre double-attachment bit 32 hours after picking up the ticket, but solving the "regenerating me a new MAPS key" part of the ticket is two days past that and counting...
I think I'd have more confidence in Office 365 - which, from a technical standpoint, is actually quite a good solution - if only authentication ever fucking worked. MPN never works the first time. Even straight-up .onmicrosoft.com Office 365 IDs never seem to work, requiring me to log in two or even three times, sometimes requiring a log-out in between. There's something about session cookies they can't ever get right.
Beyond that, I have all sorts of issues with Azure Active Directory. Sometimes it says it's working, but it isn't. Other times, for reasons incomprehensible, it just stops working, despite nothing having changed (and no reported outages on the MS side). This makes hybrid setups very frustrating.
Microsoft is so close. Their hybrid solution will one day be the solution. But to be perfectly honest it's another 1-2 years away from being ready for primetime. Maybe when Server 9 comes out, they'll have added in the bits required to make it go reliably.
rm /My brother did that on a server we shared...
That's *usually* quite safe these days - if executed as a standard user, there's nothing in / that that user can delete. And if executed as root, "rm" is usually aliased to "/bin/rm -i", so there is a prompt for everything.
"rm -rf /" is the killer. A mate of mine used to use it as a demonstration of the robustness of his phone servers (a power cycle restored the unit to its state prior to the rm command).
Vic.
And if executed as root, "rm" is usually aliased to "/bin/rm -i", so there is a prompt for everything
Huh? What kind of namby-pamby, hand-holding, distro are you running?
Hint: always assume the safety's /off/ and think before you sudo, rm, dd or whatever. An alias for rm is suitable only for true nincompoops.
> Mine was the command "rm -rf / tmp/*"
At one place I worked[1], a user did the "sudo rm -rf *" thing. In the wrong directory. After management *insisted* he be added to the sudoers list. On a production server.
If he'd been 30 minutes later, there would have been no sysads on site, and the mail spool would have filled over the weekend with all the cron jobs that could no longer run because their binaries were missing...
Vic.
[1] This happened after I'd left. Which was nice...
the command "rm -rf / tmp/*" (note the significant space).
We had a case a while back where people started complaining that they couldn't find some of their files. Initial investigations showed the free space on many disks gradually creeping up.
It was eventually traced to a timed (cron) job that was run to clean up debug output on some test systems. It contained something like "rm -rf $(DEBUG_LOG)/" and there was one code path where DEBUG_LOG was an empty string. Didn't help that the lab systems had NFS mounts of many development trees, either.
One of those times when all the effort put into the backup processes really paid off.
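The empty-variable failure mode above is cheap to guard against; a minimal sketch (DEBUG_LOG and the layout are illustrative, not the original job):

```shell
#!/bin/sh
# Sketch of a cron cleanup hardened against the empty-variable bug above.
# DEBUG_LOG is illustrative, standing in for the variable in the story.
DEBUG_LOG=""   # simulate the broken code path: the variable expands empty

# Refuse to run rm at all unless the variable names a real directory, so
# an empty expansion can never turn "rm -rf $DIR/*" into "rm -rf /*".
if [ -z "$DEBUG_LOG" ] || [ ! -d "$DEBUG_LOG" ]; then
    echo "guard tripped: DEBUG_LOG is empty or not a directory"
    exit 0   # use a non-zero exit in real use; 0 keeps the demo quiet
fi

rm -rf "${DEBUG_LOG:?}"/*   # ${VAR:?} is a belt-and-braces second guard
echo "cleaned $DEBUG_LOG"
```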
It contained something like "rm -rf $(DEBUG_LOG)/"
I had a customer do a "make target= foo clean", and wonder why he was getting a "No rule to make target `foo'" message. And then he wondered where all his source had gone.
It took me several days to get his work back. I was definitely Flavour of the Week afterwards...
Vic.
Then there was the trainer (not me !!) at ... well, better not mention where....
He managed to delete all of /etc
Not just once
Not even twice
But three times (over a matter of weeks).
It was a long re-install (from floppies) on a 3B2. Oops, gave it away.
Some years later, I myself did manage to blow away /etc by mis-typing vi as ci. ci didn't check the inode type before doing its work. That hurt.
I think one of the most frustrating lapses of judgement I have had involved too liberal an application of "rm -rf"
I was on call, and one of the app servers disk filled up at 3am. So I fell out of bed, looked at the monitoring system, swore at the devs for not tidying up old deploy folders, changed into the deploy directory and ran "rm -rf *".
Then I got an alert that the app server I was working on was no longer serving content, as the live app was a symlink to latest deploy, that some muppet had deleted.
Fortunately I was intimately familiar with the backup system that was in place.
A classic case! If you had stuck to "strictly equals" and "contains", you wouldn't have had that misunderstanding with yourself.
"Matches" is a nice example of natural language that looks fairly technical and therefore invites us to treat it as being precisely defined. But it ain't.
Thanks for sharing - I'll remember this one.
I think I'm right in saying (not knowing the ins and outs of Exchange) that "strictly matches" wouldn't do it, because that added guff about the various tests is present whether it's a YES or a NO.
I have spamassassin-checked mail being filtered by pigeonhole (aka dovecot sieve) so the sieve rule condition I use for this is "begins with". Sieve is also very nifty for some of the more esoteric recipient-routing rules we need, on which I could write a rather long (but very niche) essay.
@Trevor, does Exchange provide such a condition, or did you reimplement a different way? I'd love to have been able to see all the conditions offered by Exchange, wonder if there's a full list online somewhere for my edification.
...doing it with the GUI. EMC can be handy for reading settings etc, as they can be much more readable in a GUI. But PowerShell is the way to go for actually making any changes.
To be fair, I'm not sure if implementing it in PowerShell would have made a difference to this rule, but I believe your problem was caused by the fact that EMC used "Get-TransportRulePredicate HeaderContains" to build the rule the way you did it. But to get the effect you wanted, and the GUI implied it should have, it should have used "Get-TransportRulePredicate HeaderMatches".
This is just a guess though; I haven't tested the difference between these two predicates.
I am frequently convinced that the person who built the back end functionality of Exchange and wrote the PowerShell cmdlets was not the same person who built EMC, and that in fact the two people never met or communicated in any way.
A great example of this is the message tracing GUI in EMC, which is fine and wonderful as long as you are in the USA. The PowerShell cmdlet for tracing messages only accepts dates in US format, but the GUI pushes the date to it in whatever your regional settings are; thus, if your settings are anything other than US, the GUI doesn't work. Why the cmdlet wasn't written to use international standards and format dates yyyymmdd I don't know - that standard is infinitely sensible but hardly used.
N.B. This is an experience I had in Exchange 2010 a little while ago, it may have been patched out of 2010 by now, and not exist in 2013 at all. But I wouldn't be surprised if it did.
Thanks for that. In addition to everything I have to remember about the hundreds of applications I manage, I'll just run along now and memorize every PowerShell command. It's not like money was paid so that there would be a reasonably easy to use and modestly intuitive GUI. Nope, rote memorization of more data than a human mind can actually hold for every application is absolutely the best possible path forward for systems administration.
Your only mistake was admitting liability, never take the blame for a major F*&* up!!
Lesson one: always have your scape-goat/fall-guy in place before making any kind of change to a live system. My preference is to have at least one hapless dev standing by on the sidelines ready and waiting...
I thought that when I saw "matches pattern", as my old MS favourite FINDSTR* uses those words in its help and it does regex, though I think it's slightly non-standard. I haven't dared try regex in Exchange.
*I long ago gave up using the Windows GUI find/search for finding content in files and use FINDSTR**
** Actually FOR is my favourite, with a bit of CALL, SET, FINDSTR and IF - who needs PowerShell, VBScript etc.?
"matches" means "matches". If Microsoft mean "contains", they should write "contains".
"X-Spam header matches "YES"" **MEANS** "X-Spam header matches "YES"". It does not mean "contains". If Microsoft meant it to mean "contains", it should say "X-Spam header **CONTAINS**", **NOT** "matches".
Now aren't you all really happy that your employer doesn't read this message string? Yes, Exchange, because it's the most useful groupware app out there. Because the entire user base (non geeks) know how to use it, and it handles message traffic reporting and receipting back to the user. As a corporate message machine and a functional, secure store of corporate message traffic and business history, it's unmatched. Take your Google garbage and stuff it where the sun doesn't shine. Any business owner stupid enough to allow a lazy bunch of geeks to push Gmail deserves what he gets.
Yes that means you Trevor. We pay you to do the job not whinge about it! and you got away with this until I bitched to you on Sunday.
On the off chance that you might be right, I ran a series of tests against my own Google Apps domain, egeek.ca. Here are the results.
Attempting to send to an address that doesn't exist from a Telus-based e-mail account provided me this bounce message:
Reporting-MTA: dns; cmta4.telus.net [209.171.16.77]
Received-From-MTA: dns; Impella [108.181.21.61]
Arrival-Date: Wed, 04 Jun 2014 18:10:33 -0600
Final-recipient: rfc822; bob@egeek.ca
Action: failed
Status: 5.1.1
Diagnostic-Code: smtp; 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 v7si6012708qad.84 - gsmtp
Last-attempt-Date: Wed, 04 Jun 2014 18:10:33 -0600
Similarly, attempting to send from a legitimate eGeek.ca account to an Astlor.ca (which runs on sendmail) account that doesn't exist let the NDR through to my eGeek account. It didn't get caught up in spam or trash; Gmail sent it straight through to my Inbox. Here is that e-mail:
Delivery to the following recipient failed permanently:
Bob@astlor.ca
Technical details of permanent failure:
Google tried to deliver your message, but it was rejected by the server for the recipient domain astlor.ca by astlor.ca. [64.141.126.154].
The error that the other server returned was:
550 5.1.1 <Bob@astlor.ca>... User unknown
----- Original message -----
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=egeek.ca; s=google;
h=from:to:subject:date:message-id:mime-version:content-type
:thread-index:content-language;
bh=stXngne3UrZepo/myHRVcSj4pEeKGAcgHsgoYbGKzkI=;
b=Y5T94txWG8KxY2DgzDuCHomK+vBIqnyKjTXdBpOMSzPCcF3Dcjh9LC3rAboEEMTlhc
0c0q/g5uzKBguhzfehD1IsFoRhZkAoSTW51I8xW3eUCinyhVENHBGxtwg+X3WWJf6Coc
ioDEGLMb0LUJz07bkAuqtpv6lN9ey698Hzvr0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20130820;
h=x-gm-message-state:from:to:subject:date:message-id:mime-version
:content-type:thread-index:content-language;
bh=stXngne3UrZepo/myHRVcSj4pEeKGAcgHsgoYbGKzkI=;
b=iu6A0TLCPfGtwcUnD2FBh7LJOI3nAhbRZaumLMOZwKxkin9XjutfZvj66Js7ALupUA
+A52iq2TbIqaUv7N7kyN+0um6pa0jn0GWpsygwKn5ACVYvOf74D8vUqKHmsFkfmNoKMa
wJEn4URuLWrB1gLIUg1Q1gbTPzrQqGMuWKC6jyAkVTI+mO+pfYIRiUvOdp69K1sVmoDD
AnxAov02u6sABPVS2Y+vLD6V3Z+SgABUT+oy6vi9Y8kXc30nTvKJyBOK9GNmbij7esdV
4BohEl5QoevwwXFxqj5Xfzv4fLpXJsCV1G2T7TEfkAtYZ054EG28nnRBDJIQ88p/W048
m6hQ==
X-Gm-Message-State: ALoCoQnrR4fNM2MLTt+cTlUi3sJ7W/wrA1rtU6u5WkhKAzxc5vL1uO8QtLfap95CLWh1q5g5hTOQ
X-Received: by 10.50.13.4 with SMTP id d4mr13139985igc.11.1401927652048;
Wed, 04 Jun 2014 17:20:52 -0700 (PDT)
Return-Path: <trevor.p@egeek.ca>
Received: from Impella ([108.181.21.61])
by mx.google.com with ESMTPSA id q2sm400463ign.2.2014.06.04.17.20.51
for <Bob@astlor.ca>
(version=TLSv1 cipher=ECDHE-RSA-AES128-SHA bits=128/128);
Wed, 04 Jun 2014 17:20:51 -0700 (PDT)
From: Trevor Pott <trevor.p@egeek.ca>
X-Google-Original-From: "Trevor Pott" <Trevor.P@egeek.ca>
To: <Bob@astlor.ca>
Subject: Test
Date: Wed, 4 Jun 2014 18:20:44 -0600
Message-ID: <021701cf8053$fe803650$fb80a2f0$@egeek.ca>
MIME-Version: 1.0
Content-Type: multipart/alternative;
boundary="----=_NextPart_000_0218_01CF8021.B3E689A0"
X-Mailer: Microsoft Outlook 14.0
Thread-Index: Ac+AU/3dyjJXVzIqTei0po4Bz7aTVQ==
Content-Language: en-ca
Test
I also tried a series of additional tests (mailbox full and so forth) and found that Gmail allows all standard SMTP NDRs that I can think of to reach the Inbox and returns most of them.
Now, IIRC, this wasn't always the case; quite some time ago they had disabled NDRs for a while in order to cope with backscatter - quite frankly, backscatter is a huge problem for a lot of MTAs - but they seem to have gotten around the backscatter issue through a combination of blacklisting known bad senders (thus not sending them NDRs) and greylisting.
Interestingly enough, this is exactly what I am trying to achieve with the chained X-SPAM-STATUS filters: reduce backscatter. I need something that will do proper LDAP lookups against Active Directory and thus not accept mail for users that don't exist. That said, I also need something that will both blacklist the known baddies (and not NDR them) and greylist new senders so that known badguys can't just probe the directory.
E-mail isn't simple, and it's getting harder. It's a heck of a lot more complicated today than it was even two years ago, and it's nightmarishly fiendish compared to a decade ago.
Google does it well. Better, quite frankly, than anyone else I've seen. It seems we will remain starkly divergent in our opinions on this topic.
Also: just FYI, Peter had raised the issue to me before you did. I simply didn't check my e-mail until late Sunday afternoon because I was enjoying a wonderful blissful day of sleeping in, followed by spending time with my wife.
Cheers.
Actually, I have to disagree with you here. The reason for moving towards an X-Spam-Status header is that it is an industry standard. If the system is set up to accept these then it can be used with AS devices or services from any number of providers. Not all providers allow you to change the headers you are working with, so X-Spam-Status makes the most sense to stay with.
Now, the ability to change the old server to pop its stupid BAYES info into a different header, that would be great...