Hey, but it's "the cloud" right? And a managed service provider?
That's gotta be better than hosting in-house, right? Right?
There's no end – or restored data – in sight for some Rackspace customers now on day 12 of the company's ransomware-induced hosted Exchange email outage. In the service provider's most recent update, posted at 0844 Eastern Time on Wednesday, Rackspace said it had hired CrowdStrike to investigate the fiasco, and noted it …
as it relieves me from having to administer it myself. I hope those who signed off on this have a signed piece of paper with someone else's name on it, pointing out that they warned the company of the potential consequences of the "all-our-eggs-in-the-cloud" approach. The more cynical among us might have been happy to leave it at that.
Those who cared about business continuity and the welfare of the company at all shouldn't have been using hosting they couldn't back up to somewhere outside the provider's cloud. A human enough mistake, but a teachable example for managers and IT drones alike. This stuff isn't academic, and if a third party loses your data, they can just leave you twisting in the wind, regardless of contracts or promises.
Even an ironclad SLA with sharp teeth won't make your data reappear if it has been destroyed. Neither will "I told you so"s or excuses. Only a working copy of your data can save you. So make sure the number and location of the copies match the importance of the data, don't take other people's word for it, and, unless you're a twisted masochist, test the restore workflows before you need to use them for real.
We all should know this, most of us say it, and we all need to actually be doing it.
Equally, of course, any admin who insists on keeping it in-house despite having no data centre (small business) or offsite secondary (small business), claiming that they'll cover it whenever (24x7) should anything happen.
Not every organisation has the resources to run systems with the kind of reliability the business needs.
If Rackspace has had this service for 10 years, and this is the first outage, that would be better than 99.5% availability. I suspect it goes back longer than that. Can you match that with a repurposed desktop machine under a desk in an office (or maybe in a cupboard, if you're really lucky)?
Yes, many commentators here have the luxury of being part of a significant IT organisation, but that is a minority of organisations.
"If Rackspace has had this service for 10 years, and this is the first outage that would be better that 99.5% availability."
This is true. As a user of Hosted Exchange for 5 years, most outages I've seen in that time have lasted in the vicinity of a few hours, and I can count the total number of outages on one hand.
Not once did I think a cloud-first company like Rackspace, with a history of doing this stuff reliably and well for decades, would suffer a catastrophic failure that basically ends Hosted Exchange in a single incident.
Well, the repurposed gaming desktop under my desk has had outages, but never a 12-day outage.
Even if there were 1 hour of outage per day, which there hasn't been, that wouldn't have anything like the same impact as a 12-day outage.
There is usually about 30 minutes of outage per month when I reboot to install Microsoft security updates. That is planned downtime, so has less impact. Then there's been some unplanned downtime of up to 4 hours, maybe once every two years.
It depends on personnel, doesn't it? Two low-end server machines in a Pacemaker/Corosync cluster with redundant UPSes, an automatic transfer switch and a couple of routers from independent ISPs can indeed have that availability and fit in a 22U rack. It's not really that expensive, in fact cheaper than AWS etc., but only if that small company's one or two IT people have the expertise. Contrary to popular belief, they do exist.
"Equally, of course, any admin who insists that it is kept inhouse, despite no data centre (small business) or offsite secondary (small business), that they'll cover whenever (24x7) should anything happen."
This is a joke, isn't it?
Small company email needs to work 8 hours a day, 5 days a week, and outside of that it's meaningless. Even one person, whose third job it is, can do that.
We had one email server (on the intranet, for internal mail) which hadn't even been rebooted for 6 years, happily humming along 24/7/365. That's 100%, literally. Obviously it wasn't Exchange, though.
How well is Rackspace servicing 24x7? They don't, as we can see. You get exactly what you pay for.
If you've ever been an e-mail admin or had to keep the wheels on even a medium sized exchange environment, you probably got all gushy inside when the various cloud vendors offered to allow you to wipe that booger on them.
E-mail, and particularly Exchange environments, have been a huge vector and target forever. Hosting your own does not make it any safer than putting it in someone else's data center. It does, however, let you point the finger at them instead of updating your resume.
I fell for this in ~2009 and again in 2013. The first time was to move student email out to Live@EDU. Free, basically unlimited, all hosted by MS, and I got back several hundred gigs of space on our GroupWise POA box. Just had to learn a little PowerShell, and off we went. The second time, in 2013, was to move faculty and staff to Office 365. GroupWise was falling out of favor internally due to the lack of easy integration with third-party apps, not to mention Novell's dissolution, and O365 was free, unlimited, and meant I never had to learn the struggles of an on-prem Exchange admin. Yeah baby, let's go.
I don't consider it a bad move overall, especially since MS removed Single Instance Storage in Exchange 2010, which, to a 12-year GroupWise admin, seemed like the dumbest idea ever unless the intention was to push everyone off-prem (which I guess was the entire idea). Exchange Online has proven to be usable; we just have to hope MS is better prepared to deal with a massive issue like this than Rackspace apparently was. Yeah, I'm not holding my breath...
I think we might find that watching the cloud provider lose all the emails is just as much of a resume updater as doing it yourself.
On the other hand, I can't imagine how ruinous doing it properly and keeping regular backups under your own control would be in data egress fees alone. It seems like a no-win situation where, as ever, the best-case scenario is that no one notices that the line item you keep fighting for just saved the company.
I have read people claim ransomware can sit waiting for upwards of 90 days before striking. I used that justification to finally get a decent tape drive approved a few years ago for my last company's IT dept; they then used Veeam to back up to tape.
I suppose in theory if you restored old data onto a server WITH A CLOCK SET TO THE RIGHT TIME (not current time), then perhaps it could be fine, but of course systems don't often behave well when their clocks are out of sync.
So, for example, if you restored data from 45 days ago onto a server with its clock set to 45 days ago, then you may be OK; if you restore it to a server with the current time, then perhaps the existing ransomware will see the strike time has passed and activate. I've never been involved in a ransomware incident myself, so I don't know how fast it acts.
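For what it's worth, here's roughly what that looks like on a Hyper-V host. This is a minimal sketch only, assuming you're restoring into a scratch guest, with the VM name and offset invented for illustration.

```powershell
# Sketch: bring a restored guest up with its clock wound back, so any date-triggered
# nasties (in theory) don't see that their strike time has passed.
# Assumes a Hyper-V host; "EXCH-RESTORE-TEST" is a made-up VM name.

# Stop the hypervisor from pushing the real time into the guest.
Disable-VMIntegrationService -VMName "EXCH-RESTORE-TEST" -Name "Time Synchronization"
Start-VM -Name "EXCH-RESTORE-TEST"

# Then, inside the restored guest (isolated from the network and from NTP):
Set-Date -Date (Get-Date).AddDays(-45)
```

Whether that actually fools a given piece of ransomware is another question, of course; it just avoids handing it the current date on a plate.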
I'd assume this was a highly targeted attack against Rackspace, not a drive-by thing.
Also, years ago I read, on multiple occasions, claims from security professionals that on average intruders had access to a network for roughly six months before being detected. I first saw this claim from the then CTO of Trend Micro, in a presentation at a conference. Normally I hate those kinds of things, but that guy seemed quite amazing. I was shocked to see him admit on stage that "if an intruder wants to get in, they will get in, you can't stop that", and not try to claim his company's products can protect you absolutely. I posted the presentation in PDF form (probably from 2014) here previously, though it loses a lot of its value without the dialogue that went along with it:
http://elreg.nateamsden.com/TH-03-1000-Genes-Keynote_Detect_and_Respond.pdf
"I have read people claim ransomware can sit waiting for upwards of 90+ days before striking."
Irrelevant: you *do not* restore potentially corrupted software from backups, you restore *data*.
Software is installed from installation media to empty machines. Exactly for this reason.
-> It allows you to make sure that it is well defended
If you have the resources and knowledge to do that. For a small company with, say, 25 mail users, who is going to do that? It would be seen as expensive to hire one person (actually more than one, as that person would take holidays) for the pleasure of in-house email hosting.
-> offline backups.
That seems to be where Rackspace screwed up - the backups seem to have been writable rather than locked.
"? It would be seen as an expensive cost to hire 1 person (actually more than 1 as that person would take holidays) for the pleasures of in-house email hosting."
Why the fff* would you hire *a full-time person* to manage *an email server*? Absolutely bonkers, even as an excuse for not having one.
Once it's installed and configured, it needs a few hours every month, and no one sane hires a person for that. Even if it's Exchange. If it's an actually good server, it runs for years with a 2-hour maintenance break every month.
"What if it breaks?" .... Use a server which doesn't. Simple as that. The company has had one for 12 years now: security and other upgrades every month, 0 to 2 hours of downtime, nothing else. Weekly backups kept offline in the office safe, and a disaster recovery plan that's not only written but tested to work too.
Use good tools and implement stuff properly, and "24x7" support is meaningless; you won't get it anyway: you get *a reply* in that time. No more, no less.
" Hosting your own does not make it any safer than putting it in someone else data center."
This is proper bullshit: It does. A lot.
When you host your own, *you* are the person responsible for it, and so the effort put into securing it is *much* higher than any "cloud provider" will ever bother to attempt.
As we can see in this case: not only is there no service, they *lost all the data*. That's a disaster recovery case, which *can* happen in locally hosted environments too, but there it's *much* easier to handle. Backups, you've heard of them?
Every cloud provider is in the business of making money, not offering security. Anyone believing otherwise is a fool and a disaster recovery waiting to happen.
"In ransomware attacks, data recovery efforts do necessarily take significant time, both due to the nature of the attack and need to follow additional security protocols."
Yes, it may take anywhere from a couple of years to infinity for the security engineers to reverse engineer the decryptor without paying the ransom.
29 November: A knock on your door, presumably by a burglar casing your condo. Action taken? Observably insufficient.
2 December: Burglar makes off with your customers' swag. Action taken? Observably insufficient.
9 December: Rackspace said CrowdStrike confirmed the intrusion was limited to the hosted Microsoft Exchange environment. Because if you can't see a trace elsewhere, then it's not there, right? Like on 29 November...
I've an inkling that Rackspace doesn't give a flier about security protocols.
Barracuda Networks, how aptly named: ferocious, opportunistic predators, sometimes consuming the remains of other predators.
Meanwhile: Rackspace stock unsurprisingly continues to fall, down from ~$5 (start of incident) to ~$3.20. Buy at $0.10 ahead of Amazon's acquisition?
I don't disagree. I didn't mention YTD as I'd mentioned it in a different thread, just a bit of an update :)
What with Rackspace mentioning selling parts of the business off ("anything goes") and shedding staff, it looks like someone's possibly making a mint short selling. (Shh! No one likes conspiracy theories, hehe!)
I'm not any sort of Exchange admin, so I haven't got the background to credibly evaluate where this went off the rails. Nor who dropped the ball. Seems like Rackspace PR was originally misleading, at least, which is hardly a good sign.
What does "hosted Exchange" mean in general? It sounds like you're paying [someone] for Exchange mail service, storage, hopefully backups, and support -- no Exchange admin on your own staff required. It is considered perilous vs. running your own? From what little I've heard, hosted Exchange is fairly common these days.
How is it different in the context of Rackspace? E.g. is/was Rackspace "simply" running a pile of MS Exchange servers in their colos, and selling logins and mailbox storage (presumably) to customers? From TFA is sounds like Barracuda mail archivers </speculation> were responsible for the backups part.
Now that Rackspace is apparently advising their customers to migrate their mail service elsewhere, does that mean Rackspace is just trying to exit that business altogether? I admittedly don't know the mechanics of that "migration", never having moved Exchange mailboxes.
What does "hosted Exchange" mean in general? It sounds like you're paying [someone] for Exchange mail service, storage, hopefully backups, and support -- no Exchange admin on your own staff required. It is considered perilous vs. running your own?
You're right: "Hosted Exchange" basically means paying some company to spin up a Windows Server VM with Exchange in it. It comes complete with the full set of vulnerabilities that both Windows and Exchange are famous for. *Hopefully* your hosting company will have scripts to apply patches in a timely fashion across their fleet of VMs - but they might be lax, and in any case this is software with a long and illustrious history of zero-days. *Hopefully* they are taking regular immutable backups (and doing test restores), but clearly that wasn't happening in this case.
Office 365 is a completely different beast. This was built from the ground up to be a multi-tenant cloud service. I think it's very probable that Windows Server does not form any part of it, nor does Exchange. There will be data sanitization and security tripwires in every Internet-facing component, and between every internal component, and 24x7 security ops monitoring every aspect of the platform.
Consider this: if there were a similar security breach with Office 365, it would be *Microsoft's* entire cloud business which would be trashed. Whereas when there's a security breach with Windows Server or Exchange, it's only *your* business which is trashed - and/or some third party hosting company like Rackspace - neither of which Microsoft cares about. In any case, they can always blame you for not doing your job right.
Therefore, the amount of care and attention applied by Microsoft to building, maintaining and securing its O365 platform is orders of magnitude higher than its traditional software products. And Rackspace recognise this.
"*Hopefully* your hosting company will have scripts to apply patches in a timely fashion across their fleet of VMs - but they might be lax, and in any case this is software with a long and illustrious history of zero-days. *Hopefully* they are taking regular immutable backups (and doing test restores), but clearly that wasn't happening in this case."
And let's hope they test those scripts in a development environment first.... Remember a few years ago the "dead VM" cleanup script with the wrong variable at 123-reg that deleted all of their customers' VPSes? I do, I had to rebuild quite a few at other providers!
And this is true, but as sure as 'God made little green apples', at some point in the future MS absolutely WILL suffer a similar incident, be it some external attack or, more likely, simple human error.
Although, as you rightly say, I 'expect' that MS365's reliability and resilience is much higher than Rackspace's, as MS is a 'big' company which effectively does hosting for a living. But from the punter's point of view, isn't Rackspace also a 'big' company which effectively does hosting for a living? So anyone might well be forgiven for assuming that the two are equivalent.
In my time I've moved quite a few on-prem Exchange servers to M365. It never works absolutely perfectly, but then again nothing does, so it is important to set expectations: yes, Mr CEO, I fully expect the vast, vast majority of your email to migrate and be available, but there is always a non-zero probability that a couple of messages will get lost in transit, and no, I can't tell you up front which ones they will be!
Usually they are understanding about this, still looking at how 'cheap' it is to run things from Das Cloud! And then you drop the bombshell on them: what about backups that are fully under your control? And then you get the usual cry of 'but it's the cloud, everything is perfectly safe, why do we need to spend extra money on backups?' Suddenly the up-front savings are not quite as much as first thought!
So you end up having a small fight with your client to get them to understand this: 'oh, doesn't Microsoft do it all?' Well, yes, but you never, ever use the same company to look after your data and to back it up - either some on-prem device (but 'we're trying to get rid of all the equipment in the office - we don't want to buy more') or an alternative cloud provider (something like SkyKick or similar), which will have a recurring monthly cost.
There is nothing inherently wrong with cloud storage or cloud-based infrastructure, just as long as you always remember that, despite all the claims by the salespeople, you could be working just fine on Friday afternoon and that doesn't necessarily mean your data will still be there on Monday morning - unless you are prepared to take matters into your own hands and mitigate that risk!
"Office 365 is a completely different beast. This was built from the ground up to be a multi-tenant cloud service. I think it's very probable that Windows Server does not form any part of it, nor does Exchange."
Don't think that's true... Parts of it may be, but Windows servers are in there, and there are either parts of the existing Exchange codebase in there, or fresh implementations that happen to share the same bugs.
e.g. https://srcincite.io/blog/2021/01/12/making-clouds-rain-rce-in-office-365.html
"Therefore, the amount of care and attention applied by Microsoft to building, maintaining and securing its O365 platform is orders of magnitude higher than its traditional software products."
You'd hope, but this is Microsoft.
None of which exonerates Rackspace, who from what I've read were running Exchange 2013 and didn't have it fully patched. Also, they clearly did not have an adequate backup regime in place despite selling their service on the basis that they did.
"Therefore, the amount of care and attention applied by Microsoft to building, maintaining and securing its O365 platform is orders of magnitude higher than its traditional software products."
Is this a joke? Because massive failings like OMIGOD and a raft of other issues with Azure clearly show that Microsoft doesn't care any more about the security and reliability of its cloud offerings than it does about its offline software such as Windows - which is very little.
Is Microsoft doing a better job than Rackspace? Probably, but it's still pretty much a choice between the Black Death and cholera.
Office 365 is a completely different beast. This was built from the ground up to be a multi-tenant cloud service. I think it's very probable that Windows Server does not form any part of it, nor does Exchange. There will be data sanitization and security tripwires in every Internet-facing component, and between every internal component, and 24x7 security ops monitoring every aspect of the platform.
You would think that, but there are a lot of tasks that involve a remote PowerShell session to the Office 365 servers to make changes to Exchange settings that haven't been exposed via the web GUI.
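For anyone who hasn't seen it, this is roughly what that remote session looks like; a minimal sketch assuming the ExchangeOnlineManagement module, with the admin account, mailbox and values invented.

```powershell
# Sketch: connect to Exchange Online and poke at settings the web GUI doesn't expose well.
# The tenant, user and mailbox names here are made up.
Install-Module ExchangeOnlineManagement -Scope CurrentUser
Connect-ExchangeOnline -UserPrincipalName admin@contoso.example

# Example of the kind of per-mailbox setting that's quicker from the shell:
Set-Mailbox -Identity "jane.doe" -RetainDeletedItemsFor 30
Get-OrganizationConfig | Select-Object Name, DefaultPublicFolderMailbox

Disconnect-ExchangeOnline -Confirm:$false
```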
Hosted Exchange *should* mean an Exchange tenant in the provider's Exchange infrastructure, not a single VM, contrary to a couple of the answers here. This means you're getting advantages of scale (clustering, database availability groups, hopefully better backups and restores) that you might not be able to afford running on-premise.
I wondered about that.
I mean, unless there's some scaling or redundancy advantage or similar, if "hosted exchange" is just "running a Windows server VM with Exchange on it in somebody's remote colo or cloud", and optionally also "paying some clever consultant to run the thing properly", it hardly seems like a big win. So I would think there's (supposed to be?) more to it than that.
Sounds like Rackspace may have been doing an expanded (somewhat?) version of that, but perhaps with not so many clever folks looking after it as they should have had.
I have worked for a major telecommunications company for 25 years, as a direct hire and as a contractor. I have a lot of experience with The Work Around. My small business is ITIL certified. We know process. I want to be part of the solution for anyone trying to restore their Rackspace account, not another part of this painful process.

A competent IT person was in my office yesterday and we spent 8 hours on the phone with various levels of Rackspace and Microsoft support. Anything that is posted in Rackspace's portal is a best guess and not the 100% solution, in our experience. There are multiple layers of authentication hiccups to overcome. When we were able to port my account to the Microsoft platform, it only loaded in a web version, not my desktop Outlook. Then this afternoon emails magically started appearing in my desktop Outlook, until just now, when I deleted all the cookies in my browsers. Now it is not working again in desktop Outlook. Microsoft wanted me to authenticate via a QR code that didn't work, but then it gave me the option to receive a text message on my cell phone. Now my emails load in the web version but not in my desktop Outlook.

If you have chosen this path, do not delete the cookies in your browsers. Use Bing; it seems to like that. Hope that helps.
The only reasonable explanation is that Rackspace don't have any backups for Exchange.
Otherwise they would have recovered to N days before the incident by now and be saying "We're working on the rest".
The involvement of Barracuda doesn't make any sense. If they were responsible for any part of Rackspace's backups, then Rackspace should be paying them to recover the data as part of that contract.
And if they weren't, how could they be technically capable of restoring anything?
>>The only reasonable explanation is that Rackspace don't have any backups for Exchange.
Nope - backing up and restoring Exchange is (or was, last time I was involved with Exchange, somewhere around 2013 IIRC) interesting. Having a backup (an achievement in itself) is one thing; restoring it is quite another. They may well have backups... that's the relatively easy part.
>>Otherwise they would have recovered to N days before the incident
See above... restoring Exchange is non-trivial even on a single (well, it has to be two these days) server with 10 users. It is often easier and quicker to say "sod it", start again, and hope to be able to import users' mailboxen from wherever they may be - hence the encouragement by RS for customers to migrate to Microsoft 365.
>>If you can't restore it (or you've never even tried to restore it) then it isn't a backup.
Totally agree.
The issue here is that the design/architecture of Exchange Server makes it very difficult indeed to do a restore... it is possible to do, just completely non-trivial.
Getting experience in restoring Exchange servers is kind of hard without breaking stuff in a big way - and, as we all know, a live system behaves very differently to a lab set up to practise backup/restore processes...
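To give a flavour of what "non-trivial" means here, this is roughly the shape of a single mailbox-database restore in the Exchange Management Shell. It's a sketch from memory: server, database, path and mailbox names are invented, and a real recovery involves a lot more checking and waiting than this.

```powershell
# Sketch: restore one mailbox database via a recovery database (Exchange 2010+).
# All names and paths are invented for illustration.

# 1. Create a recovery database pointing at the restored .edb and log files.
New-MailboxDatabase -Recovery -Name "RDB01" -Server "EXCH01" `
    -EdbFilePath "D:\Restore\MBX01.edb" -LogFolderPath "D:\Restore\Logs"

# 2. Replay logs to bring the database to a clean shutdown, then mount it.
& eseutil /R E00 /l "D:\Restore\Logs" /d "D:\Restore"
Mount-Database "RDB01"

# 3. See what actually survived, then pull a mailbox back into production.
Get-MailboxStatistics -Database "RDB01"
New-MailboxRestoreRequest -SourceDatabase "RDB01" `
    -SourceStoreMailbox "Jane Doe" -TargetMailbox "jane.doe@example.com"
Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics
```

Now multiply that by every customer database Rackspace was hosting, with an investigation running at the same time.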
Many years ago, I worked at a company that had set up shiny new VMware systems and a huge (for the day) SAN. Exchange 2008 (I think) had been spun up, and the Exchange 2003 migration was happening as expected: completely transparently and seamlessly. Until, of course, it didn't: the new drive for the mailbox database hadn't been provisioned big enough, and the transfer (and server) fell over. "Don't worry," said our new hire, brandishing his Microsoft certifications, "I can deal with this". And off into the server room he strode.
"FUCK!!!!!!" came the scream a few minutes later. Instead of expanding the drive, a very simple process in VMware, he had accidentally deleted the mailstore drive. Now we were in trouble: email was not flowing because the old 2003 server was in the process of being decommissioned, and maybe 50% of the mailboxes had been transferred to the new (now deleted) system and were gone from the 2003 mailstore. Backups were not yet running on the new system, as the tape library was still being set up.
It fell to me to spend the weekend restoring the old mailstore from 2003 backup tapes and getting that system running again, as our new hire had prior commitments over the weekend (and our IT manager didn't trust him alone with the systems now). It took the whole weekend, but the 2003 Exchange was up and running with the loss of several days of email, and then we transferred operations to the new server.
All in all, a process I never want to experience again.
So, assuming each hosted server is in a discrete VM/container, a full VM/container backup/restore should do the trick, no?
If data-only backups are not technically viable, and a 1:1 backup footprint is too much, I suggest shifting to a system that is not, what's the word... oh yeah: shit.
Getting experience in restoring Exchange servers is kind of hard...
The kind of experience that one would reasonably expect a third party service provider, offering mission critical services, to have available.
> Getting experience in restoring Exchange servers is kind of hard...
> The kind of experience that one would reasonably expect a third party service provider, offering mission critical services, to have available.
Having an experience and being prepared to repeat it are two different things. I have had teeth pulled. I am in no hurry to repeat the experience. I know people who have forcibly had to acquire experience in restoring Exchange instances; all have expressed a devout wish never to have to repeat it. Perhaps Rackspace did indeed still possess such people. Who, the moment they were faced with the prospect of weeks or months of doing nothing but Exchange restores, summarily submitted their resignations to pursue a number of more attractive prospects, such as a relaxing career training lions, or spending more time with their families, or undertaking a deep study of auto-dentistry, or living on a bench in Trafalgar Square in mid-winter fighting pigeons for sustenance.
If Rackspace were not "comfortable" with the prospect of restoring the data they should not have offered the service.
If Exchange is such a PITA, don't use it, use something fit for purpose including backup restoration.
There's no acceptable QQ for a service provider here.
Following your tooth-pulling analogy, extraction may not be a wonderful experience for a normal person, but the result is greatly more appreciated than sticking with an abscess, Shirley! Exchange appears to be analogous to a tooth with an abscess! Rip it out, replace it with an implant or plate :D
At my first sysadmin job, back in 2000, one of the managers there (not someone I reported to) would on occasion ask me to restore some random thing. I thought it was a legitimate request, so I did (or tried to; sometimes I couldn't, depending on the situation). Later he told me he didn't actually need that stuff restored; he was just testing me. Which I thought was interesting. I wasn't mad or anything. I've never had a manager do that again, or at least never admit to it.
At one company we finally got a decent tape drive and backup system in place. I went around asking everyone what they needed backed up, as we didn't have the ability to back up EVERYTHING (most of the data was transient anyway). Fast forward maybe 6-9 months and we had a near disaster on our only SAN. I was able to restore everything that was backed up; some requests did come in to restore stuff that was never backed up, and I happily told them: sorry, I can't get that, because you never asked for it to be part of what was backed up. In the end there was minimal data loss from the storage issue, but several days of downtime to recover.
My first near disaster with a storage failure (I wasn't on the backend team that was responsible) was in 2004, I believe: a double controller failure in the SAN took out the Oracle DB. They did have backups, but they knowingly invalidated them every night by opening them read/write for reporting purposes. Obviously the team knew this and made the business accept that fact. So when disaster struck, it was much harder to restore data, as you couldn't simply copy data files over or restore the whole DB, because the reporting process was rather destructive. Again, multiple days of downtime to get things going again, and I recall still encountering random bits of corruption in Oracle a year later (it would result in an ORA-600 or similar error, and the DBA would then go in and zero out the bad data or something).
My most recent near storage disaster was a few years ago at my previous company. Their accounting system hadn't been backed up in years, apparently; IT didn't raise this as a critical issue, and if they had raised it with me I could have helped them resolve it - it wasn't a difficult problem to fix, just one they didn't know how to do themselves. Anyway, the storage array failed - again a double controller failure, this time on an end-of-life array. They were super lucky that I was friends with the global head of HPE storage at the time; after ~9 hours of our third-party support vendor trying to help us, I reached out to him in a panic and he got HPE working around the clock. It took about 3 days to find and repair the metadata corruption, with minimal data loss (and no data loss for the accounting folks). I was quite surprised when I asked for a mere $30k to upgrade another storage system, so we could move the data and retire the end-of-life one, and the same accounting people who had almost lost 10 years of data with no backups told me no.
I wonder if Rackspace has worked out a deal with the hackers whereby the hackers receive payment for each customer that pays the restore-from-backup fee. You pay the fee, Barracuda (after taking a cut) passes that payment to the hackers along with a customer name, and the hackers then make that customer's data available.
...I do wonder what will happen if Big Cloudy Services like Exchange365 etc fall prey to ransomware.
They will have proper preparation for exactly this eventuality: that is, they will be able to restore everything to exactly as it was N hours or days ago. They may be out of service for hours, but they will get through it. Being able to restore to a known state is not magic, it is just difficult (read: expensive) to do right, especially with large data volumes.
They will also have deep security monitoring, tripwires and honeypots, which will alert them as soon as any files start changing unexpectedly, or a hundred other behavioural anomalies. The chances of any ransomware running rife but undetected is very low, and the response will be immediate and well-drilled.
It would be great if every large organisation that hosts its own IT were able to deploy the same level of protection and observation, but sadly it's usually too hard and too expensive. Besides: when your managed IT infrastructure contains tens of thousands of Windows *workstations*, running local applications with direct filesystem access via SMB mounts or whatever, it's a whole different level of insecurity. A self-contained cloud service at least doesn't have to contend with that.
If Microsoft does get attacked by ransomware (and who says they haven't already?), I'd expect it would be their staff office network that succumbs.
"They will have proper preparation for exactly this eventuality:"
If you really believe that, I've got news for you: it is Microsoft, a corporation run by marketing *and lawyers*.
Marketing makes promises they won't even try to keep, and the corporation won't pay any damages to anyone, so they have lawyers. More profit that way, you see.
"They will also have deep security monitoring, tripwires and honeypots" ... that's what the *marketing* says. Have you ever seen any of them? I thought so; I haven't either.
I find it quite disappointing that Rackspace's PR says they have "restored access to email for x number of affected users", as though that means they are back up and running and fixed. What they should be saying is "moved x number of customers to an alternative supplier (Microsoft 365) while we are still unable to restore hacked systems and affected data".
They're just not being straight enough.
Ex-Rackspace Hosted Exchange user here.
From the outset, it has been obvious that Rackspace has no intention of restoring the Hosted Exchange service. Why force-migrate people to Microsoft 365 if the Exchange service will be back up soon? Bringing them back would be just as hard, and honestly, who would trust Rackspace's Hosted Exchange now?
Then there is the question of email archives. What is missing from the Barracuda discussion is that you needed to have been subscribed to that system before the outage in order to recover emails. If you try to sign up now, there is literally nothing to archive, because the mailboxes are offline.
I don't envy the decisions Rackspace has to make now, but how they have treated customers is contemptible. It feels like nothing but lies from them. At this stage, we all know there will be data loss; the fact they can't spin up the last known good backup in a read-only state so people can salvage stuff says it doesn't exist. That is inexcusable.
Will Microsoft 365 end up being any better? Time will tell, but for now I'm researching cloud2cloud archiving.
To those saying Office 365 is the best thing to use: MS is up front about this stuff too:
https://learn.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility
The "information and data" section is shown as the sole responsibility of the customer, not of MS.
"Regardless of the type of deployment, the following responsibilities are always retained by you:
Data
Endpoints
Account
Access management"
There are backup solutions for Office 365 for a reason.
I'm assuming probably greater than 90% of their customers don't realize this.
(I don't vouch for any provider in particular; I'm a Linux/infrastructure person who has never touched Exchange in my life, and I've been hosting my own personal email on my own personal servers (co-lo these days) since 1997.)
Genuine Q:
I often hear of applications where backup and restore (usually the latter) is a royal pain, and Exchange, which I confess I have zero clue about, is apparently one such example.
But... if you are running such applications in VMs, can't you just back up the volumes (or snapshots of them) on which the virtual hard drives are hosted?
Indeed.
It comes down to the amount of real estate required to store the images/snapshots. Assume retention of 7 nightly backups, 4 weekly backups and 3 monthly backups [YMMV]. That's a 14:1 ratio of the entire VM/container footprint. And for the almighty cloud service provider, that's per hosted server. That's a lot more 45U racks than they want to pay for.
[YMMV] The numbers I've suggested are to demonstrate some resilience against malicious activity such as a ransomware attack. It's pragmatic to assume (in a ransomware attack) that the attackers have had access to, and been monitoring, the target systems for some time before they execute the coup de grâce, i.e. the backups may well be compromised (if not offline, for example). In this Rackspace incident, it's quite likely this was at least 4 days (29/11 - 2/12), but I'd wager a while longer. I'd further wager that, in the apparently unlikely event Rackspace had backups, those backups were never restored on a separate system for verification or stored in a way that prevented abuse.
A much simpler solution is to use an email server setup that does allow practical and timely backup and restore of the data (emails) only. But sheeple are sheeple, and they bleat better than they care to think.
Addendum:
The part that makes the "whole container" backup approach less viable is that it cannot easily be ignored that the container(s) may have been breached and laced with time-bomb nasties. So even a backup verified as good shortly after the backup is performed may be FUBAR after n hours/days.
It can't really be done by a third party in a practical way (that I can see; willing to be told otherwise). They can verify that the backup image matches the source image, but that's the end of it.
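For reference, the "whole VM/container" backup being talked about here is roughly this; a sketch assuming Hyper-V, with the VM name and paths invented (the same idea exists in VMware, libvirt, etc.).

```powershell
# Sketch: point-in-time copy of a whole hosted guest, then a full export
# that can be shipped to offline media. Names and paths are made up.
Checkpoint-VM -Name "HOSTED-EXCH-042" -SnapshotName "nightly-2022-12-14"
Export-VM     -Name "HOSTED-EXCH-042" -Path "E:\vm-backups\2022-12-14"

# The catch raised above: if the guest was already laced with a time bomb,
# every one of these images carries it too - hence the need for retention
# depth and offline copies, not just "a backup exists".
```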
"Assume retention of 7 nightly backups, 4 weekly backups and 3 monthly backups [YMMV]. That's 14:1 ratio of the entire VM/container footprint."
Nope. Not even close, as VMware makes delta snapshots, and those are nothing but the changed bits and a few kilobytes of header.
Full backups go to tape and into the office safe anyway; size is irrelevant, and the only important thing is that *they exist*.
Trying to save some space on tapes is absolutely bonkers, even as an idea.
Backups cost money, but nowhere near as much as claimed here. And option #2 is no backups. Good luck with that.
"ie. the backups may well be compromised (if not off line for example)."
Sure: the *software part*. No one restores software from backups in a ransomware case, *only* the data. Software is installed from scratch.
Which is a fail in Exchange's case; you can't do that, AFAIK.