Rackspace rocked by ‘security incident’ that has taken out hosted Exchange services

Some of Rackspace’s hosted Microsoft Exchange services have been taken down by what the company has described as a “security incident”. The company’s most recent incident report at the time of writing, time-stamped 01:57 Eastern Time on December 3rd, offers the following information. “On Friday, Dec 2, 2022, we became aware …

  1. Mike 137 Silver badge

    Love the language

    "became aware of an issue impacting our Hosted Exchange environment. We proactively powered down and disconnected the Hosted Exchange environment while we triaged to understand the extent and the severity of the impact."

    'impacting' = colliding with (but nothing apparently hit the racks)

    'proactively' = acting before the fact (but they didn't - they responded to the identified issue)

    "triaged" = classified as [a] a write off, [b] a fixable issue or [c] something that would sort itself out (did they really do this?)

    This is, I suppose, PR speak intended to demonstrate expertise. It fails to demonstrate that in respect of language.

    1. Commswonk

      Re: Love the language

      This is, I suppose, PR speak intended to demonstrate expertise. It fails to demonstrate that in respect of language.

      Thank you for sparing me the trouble of making this point.

      My only niggle with your post is that you missed out Hosted Exchange environment which is an abomination equally deserving of censure.

    2. iron Silver badge

      Re: Love the language

      You need a dictionary...

      Impact - have a strong effect on someone or something

      Proactively - taking action to control a situation rather than just responding to it after it has happened

      Triaged - decide the order of treatment

      So actually it is not PR speak, it demonstrates greater expertise with the English language than yourself and is the standard terminology for similar incidents.

      1. -v(o.o)v-

        Re: Love the language

        Naww, it makes no sense. How do they triage with everything powered down?

        1. Berny Stapleton

          Re: Love the language


          Pull a mirrored disk, take an image and test with that, leaving everything intact. Boot that on an identical system.
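          In outline (with made-up stand-in paths rather than real device names), the "image then verify" step looks something like this sketch:

```python
# Illustrative sketch of "pull a mirrored disk, image it, test the copy":
# copy the pulled mirror member to an image file, then verify the image
# by hash so the original stays untouched for evidence.
# Paths are hypothetical stand-ins, not real device names.
import hashlib
import shutil

def image_and_verify(member: str, image: str) -> str:
    """Copy the source to an image and return its SHA-256 if it matches."""
    shutil.copyfile(member, image)  # stand-in for imaging the raw device
    def digest(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    src, dst = digest(member), digest(image)
    assert src == dst, "image does not match source"
    return dst

# Demo with a fake "disk" file standing in for the pulled mirror member:
with open("/tmp/mirror_member", "wb") as f:
    f.write(b"pretend disk contents")
print(image_and_verify("/tmp/mirror_member", "/tmp/triage.img"))
```

          On real kit you'd image the raw device (ideally through a write-blocker) and record the hash, then boot or mount the copy on separate hardware, which is exactly how triage can proceed with production powered off.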

          1. Anonymous Coward
            Anonymous Coward

            Re: Love the language

            Mirrors?! I'm guessing you're about a year into your career, or have worked at some straight trash places, commenting about mirrors.

      2. Displacement Activity

        Re: Love the language

        So actually it is not PR speak, it demonstrates greater expertise with the English language than yourself

        Incorrect use of a reflexive pronoun. Unless, possibly, you're Irish... :)

        1. TimMaher Silver badge

          Re: Irish

          There you are @displacement. Is it yourself?

          Coat because it’s green.

      3. oiseau

        Re: Love the language

        ... is the standard terminology for similar incidents.

        Hmmm ....

        .. is the standard terminology hogwash for similar incidents.

        There you go.


      4. Jimmy2Cows Silver badge

        Re: Proactively

        Proactively - taking action to control a situation rather than just responding to it after it has happened

        Yeah, but no. They demonstrably were not proactive:

        ...we became aware of an issue impacting our Hosted Exchange environment. We proactively powered down and disconnected the Hosted Exchange environment...

        My emphasis: became. They did not proactively take action. They were 100% reactive. They responded to something that was already happening or had already happened. By definition that is not "proactive". I suspect they're only using "proactive" because they think it sounds like they're on the ball, and because many people are too ignorant to realise it doesn't mean what they think it means.

    3. bernmeister

      Re: Love the language

      Fancy language, but I reckon they just panicked, pulled the plugs and flipped the big switch.

  2. sitta_europea Silver badge

    Off-prem. Microsoft. Exchange. What could possibly go wrong?

    1. adrianww

      One might even say…

      Microsoft Exchange. What could possibly go right?

      1. Ian Mason

        Re: One might even say…

        "I know. Nothing." - Sgt Schultz

        1. oiseau

          Re: One might even say…

          "I know. Nothing."

          Now, that was a blast from my past.

          Never missed a Hogan's Heroes episode.

          Thanks for that.


          1. Adrian 4

            Re: One might even say…

            Hogan's Heroes ? What's that ?

            This is a quote from Fawlty Towers.

            1. Ian Mason

              Re: One might even say…

              No, he's right. Sgt Schultz from Hogan's Heroes, whose entire vocabulary seemed to be "I know nothing", "Raus, Raus!", and "Nein, nein, nein! I vill get into trouble Colonel Hogan".

              I too loved Hogan's Heroes. Think of it as "Bilko!" crossed with a comedy "Mission Impossible", set in a POW camp.

    2. Anonymous Coward
      Anonymous Coward

      > "Off-prem. Microsoft. Exchange. What could possibly go wrong?"

      Just a whole load of different wrong to on-prem Microsoft Exchange wrong?!?

      1. Wayland

        Yeah but Exchange can do "Away" emails, surely a little down time is acceptable for the ability to do those?

  3. flibble

    It's all down

    "Some of Rackspace’s hosted Microsoft Exchange services have been taken down by what the company has described as a “security incident”."

    As I understand, all hosted Microsoft Exchange services have been taken down for all customers.

    Only some customers are directly affected by the "security" incident; i.e. only some customers will suffer data loss or stolen data.

    Most disappointing was the poor communication from Rackspace - it was about 24 hours from the first outage to the time they admitted they'd taken the servers down themselves, and all that time all they said was that they were "investigating".

    This is my favourite part of the question/answers:

    "Will I receive mail in Hosted Exchange sent to me during the time the service has been shut down?"


    1. sitta_europea Silver badge

      Re: It's all down

      "Will I receive mail in Hosted Exchange sent to me during the time the service has been shut down?"


      Thank you for sharing that little gem. :)

    2. katrinab Silver badge

      Re: It's all down

      The correct answer is: it depends on the sending server, and how long it is down for.

      Servers will keep trying to transmit the message for a while, at decreasing frequencies, then send a delivery failure message to the sender.

      The default setting for Exchange is 24 hours before it gives up and returns the message to sender. Most people don't change that.

      I don't know what the defaults are for other mail servers, e.g. Gmail.
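      As a toy illustration of that retry pattern (the intervals below are made up for the sketch; real servers each have their own schedules, and only the 24-hour give-up default mentioned above comes from Exchange):

```python
# Toy model of MTA retry behaviour: retry at growing intervals until a
# give-up deadline, after which the message bounces back to the sender.
# The intervals are illustrative, not any real server's actual schedule.
def retry_times(give_up_after_min=24 * 60, first_wait=1, factor=2, cap=240):
    """Return the minutes after submission at which delivery is retried."""
    times, elapsed, wait = [], 0, first_wait
    while elapsed + wait <= give_up_after_min:
        elapsed += wait
        times.append(elapsed)
        wait = min(wait * factor, cap)  # back off, but cap the interval
    return times

attempts = retry_times()
# Every retry lands within the 24h window; after the last one the
# sending server generates a delivery failure message to the sender.
```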

      1. flibble

        Re: It's all down

        Surely the correct answer is “we got one of our other teams to spin up a bunch of SMTP relays to queue the mail so nothing will be lost”?

        1. ElRegioLPL

          Re: It's all down

          In a few days the headline would've been "Rackspace DDoS'd themselves with the mail queue"

      2. TimMaher Silver badge

        Re: Returns to sender

        “I gave a letter to the postman,

        He put it in his sack.

        Bright and early next morning,

        He brought my letter back.

        She wrote upon it:

        Return to sender, address unknown.

        No such number, no such zone.

        We had a quarrel, a lover's spat

        I write I'm sorry but my letter keeps coming back.

        So then I dropped it in the mailbox

        And sent it special D.

        Bright and early next morning

        It came right back to me.

        She wrote upon it:

        Return to sender, address unknown.

        No such person, no such zone.

        This time I'm gonna take it myself

        And put it right in her hand.

        And if it comes back the very next day

        Then I'll understand the writing on it

        Return to sender, address unknown.

        No such number, no such zone.”

        Thanks Elvis. Still alive you know.

    3. Drs. Andor Demarteau (ShamrockInfoSec)

      Re: It's all down

      Possibly not, I suspect.

      I’ve been trying to mail a business friend, who turns out to be a Rackspace customer, and all mails time out after half a day or so.

      Although just one example, it would not surprise me if this is indicative.

      1. John Brown (no body) Silver badge

        Re: It's all down

        But isn't that your server config that gives up after half a day, rather than indicative of how other servers may be set up? Or are you saying you know your sending server is using default settings, and therefore that is what most other people will have?

        1. tango_uniform

          Re: It's all down

          Depends on what's happening over at Rackspace. If they've got mailbox issues then inbound emails (if they're hitting the Exchange infrastructure at all) could be getting bounced with 550 status (no mailbox found for that address). SMTP is pretty resilient, but it only takes one cranky MTA to bung up mail flow.

          Anyone with some experience running on-prem Exchange could conclude that Rackspace was slow on the patching process. There were some pretty significant exploits in Exchange this year.
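          For what it's worth, that temporary-vs-permanent distinction maps onto SMTP reply code classes; a minimal sketch of how a sending MTA typically treats them (simplified, ignoring enhanced status codes):

```python
# Minimal sketch: how a sending MTA typically classifies SMTP reply codes.
# 2xx = delivered; 4xx = temporary failure (keep the message queued and
# retry later); 5xx = permanent failure, e.g. 550 "no mailbox found",
# which gets bounced back to the sender.
def smtp_disposition(code: int) -> str:
    if 200 <= code < 300:
        return "delivered"
    if 400 <= code < 500:
        return "retry later"
    if 500 <= code < 600:
        return "bounce to sender"
    return "unknown"
```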

      2. Wayland

        Re: It's all down

        So it takes half a day before you learn it's failed?

    4. flibble

      Re: It's all down

      As an update (for one of my contracts I have a mail account that was hosted on Rackspace, indirectly via another managed service provider): our service provider managed to complete the transition of the domain over to O365 yesterday, sent us all new login details, and we're now receiving new emails fine.

      We're expecting that everything sent between sometime Thursday and the DNS updating sometime on Sunday has been lost, in most cases unrecoverably - in some cases the sender will have got an 'undelivered message' report.

      I was using a local mail client (macOS Mail) so it had all my older messages locally, and I was able to copy them all to a local mailbox before deleting the old account (a necessary step, as macOS Mail only allows one Exchange connection per account) and then setting up a new account using the new login details.

      My colleagues that were using OWA (Outlook Webmail, basically) have no access at all to emails prior to the outage. It's not clear if we'll ever get access to those old emails, but if we do my best guess is it'll be weeks away.

      We've no idea if any data was accessed by the attackers.

  4. Lil Endian Silver badge

    Business Continuity

    A former client of mine is in the financial services sector. Very good at what they do, with their fees representing that. Some of their clients were (at the time) from the eastern parts of Europe - the kind of people with bent noses and tattoos that tell a story.

    I could not advise strongly enough regarding data sovereignty and business continuity. Stay on-site. "The Cloud! The Cloud!" they cried. Advice ignored. Client ditched.

    SLA failure leading to financial loss is one thing. Kipping with a horse's head is another.

    Rackspace host many mission-critical applications and vital data.

    Yes, they host mission-critical stuff, but they can't possibly hope to provide the service levels required if the phrase "mission-critical" is to be adhered to - no third party, remotely hosted [1] service can. Yet the buyers believe the hype.

    [1] "Cloud" is for sales people to sell to clients that think they care, but really don't.

    1. Nate Amsden

      Re: Business Continuity

      Curious what you mean by this. Rackspace has been managing exchange systems for well over a decade at this point so they obviously have a lot of experience there. I assume they still operate their own data centers in many/most cases? (I know their business has changed quite a bit in the past decade.) In this particular situation I would say they are expected to provide the service levels associated with mission-critical stuff for Exchange. That's what the customers are likely paying for anyway. Just not sure what you mean by "no third party hosted service can". Do you mean that only by doing it yourself can you provide true mission-critical services? Or do you mean only Microsoft can provide mission-critical Exchange? (Obviously far from the only mission-critical app stack out there.) Or both, or other?

      Sounds like Rackspace's communication was poor on this, but taking down everything was a good response assuming they did it right away after they determined it was a security issue.

      I'm very much pro on-prem for everything, at least everything I know (mostly Linux-based, and I do infrastructure too). I've been operating mission-critical internet-facing infrastructure for 20 years (as of March 2023), and non-mission-critical internet-facing infrastructure since 1997.

      I don't know Exchange (the bulk of my Windows expertise dates back to the NT4 era) and I find it interesting that so many self-proclaimed Exchange experts/admins advocate using Office 365 over hosting it themselves. I guess MS did a really poor job with that software stack, or the average Exchange expert/admin is an idiot (or both). I remember seeing some cool "time machine" style backup systems for Exchange ~12-13 years ago, which backed up stuff in real time and let you roll back to any moment with the click of a button (or at least that was the marketing; I never saw it in action). I've forgotten their names. Probably put out of business by Office 365 (because there are fewer customers running Exchange), which from what I understand doesn't have anything remotely like that ability.

      I've never been a Rackspace customer myself; every time I looked at the pricing (the last time was 12 years ago) it didn't make sense given what I can accomplish with regular co-location. But there are probably lots of customers that really need their hands held on everything, so it's probably a good solution for them.

      1. Lil Endian Silver badge

        Re: Business Continuity

        Firstly, I'm certainly not pointing at Rackspace as a weak player.

        Secondly, "mission critical" is subjective (to the org). Their mission, their level of criticality. Of course, balance is a huge factor.

        If a case is mission critical, one reduces the points of failure rather than introducing more. Every point of failure reduces viability. Failures result from (e.g.) kit failure and malicious or accidental human activity. The amount of gear and the number of third parties required to go over the public net (for SaaS etc.) introduce so many points of failure, and attack vectors, that it becomes farcical to claim resilience, and it reduces (IMO) the meaning of the word critical. Just call it important!

        Banks, military etc, can afford mirror sites linked by proprietary infrastructure.

        No system is immune to failure. If an org self hosts, on prem, and their ISP goes down, they're going to be quiet for the duration, but at least they're cocooned and know the data's good.

        It's quite feasible that Rackspace clients can kiss their historical data goodbye. Did they take their own backups, or rely on Rackspace for that service too? If their (the Rackspace customer's) data infrastructure was critical to their continuing mission - they just fooked themselves and parties downstream (ie. their clients).

        1. Nate Amsden

          Re: Business Continuity

          I'd be willing to bet most (90%+) of the customers did not take their own backups, just like most likely most Office 365 customers don't take their own backups. Quite surprising really (maybe I shouldn't be surprised).

          Mirrored sites can still be compromised, if anything it may be easier, compromise one site and the replication automatically compromises the other site(s) for you (depending on how it was compromised and what kind of replication). Failures can also replicate, data corruption can destroy multiple sites as fast as your replication can send it.

          ISP going down and security compromise are very different things. Myself, I have been involved with 3 primary storage array (SAN) failures in my career; all of them took multi-day recovery efforts, all lost some data with a risk of total data loss, and in all cases the company did not have good backups. ALSO, in all cases the company chose not to immediately invest in better protection going forward following the near disaster. All 3 situations were the most scary of my career as well, and in the two I was directly involved with I pulled an unbelievable amount of monkeys out of my ass to get the systems working again.

          The first one was early in my career and I was on the ops/app team, not the backend team, so I just waited while they worked to fix the issue. But I was the one to report the issue to everyone; I will always remember the Oracle DBA telling me he almost got into a car accident when he read my emergency alert sent to everyone on that Sunday at around lunch time (with output from the HPUX Oracle systems showing "I/O error" on several mount points from the df command). Spent about 32 hours on a conference call for that, probably my longest ever conference call.

          I've been fortunate never to have been involved in a serious security incident (I have had to deal with a few stupid hacks from unmaintained systems that I was asked to help with over the years).

          I run my stuff pretty well, though nothing is perfect, the best strategy (if possible) is try not to be a tempting target. Rackspace, hosting a lot of customer stuff is obviously not in a position to do that, so they have to deal with a lot more things than I.

          1. Lil Endian Silver badge
            Thumb Up

            Re: Business Continuity

            Seems like we're talking the same, the circle's contracting. Prevention, mitigation or resolution. All ofc.

            I thought we were talking about prevention initially, ie. lessen areas where "mission critical" means fek all.

            I'm not writing a white paper!

            [Your experiences sound like an option for On Call or BOFH! Gopher it!]

          2. malloctheballots

            Re: Business Continuity

            Nate - I appreciate your background, and love the Linux background.

            I will tell you, as an MSP and on-prem administrator for Exchange and as a partner of Rackspace: while your points should be considered by anyone arguing the point, RAC has also broken their SLA on many levels. They under-represent the issue, and they are no longer the company they once were. It should have never happened, because it never happened for over 15 years. These things don't just happen based on clever idiots renting botnets who are clueless; they happen due to negligence. No one five years ago would have agreed that RAC would have a breach. As subscribers we didn't pay attention to the writing on the wall. If you look at their past white papers and the work performed to create a secure environment, they had the elements in place. Unfortunately they have abandoned the purpose of their business, and that rotted the business entirely. It only worked with the priority of security of customer data, and that includes hiring the best, keeping the best and honoring their workforce.

            If you had been a part of their 'pay for' experience 15 or 10 years ago, you would find an immediate support call answer and relevant support help. If something needed to be escalated, it would reach a solution rapidly. It ran as a business, as you would expect. Over the past five years, even before COVID, things had started to slide backwards. I suspect it was because the company has reinvented itself in ownership and in C-level management over time.

            This is a GRID of machines. Backups and HA security were in place throughout the company-housed data. They had insane expertise for all things Exchange, which has been upgraded over the years from 2000 to 2003, 2007, 2010, 2013, 2016... Rackspace never upgraded based on release dates, but instead waited on dev testing and re-testing for stability. Their mission at one point was providing the most stable and most secure environment for business users. The reason they held on to older versions of Exchange was because they were stable. Also, they grew as a business to understand cloud dynamics and hired experts who understood security and security concerns within the cloud. They were great.

            But something happened, and I suspect it has to do with both management and engineering. They changed their management and this is when things collapsed. Internally I believe their mission changed based on ignoring the past success of the business culture in engineering and the entire company. They hired management with different goals and priorities. I suspect at the root of the matter is based around unqualified employees in upper management who should have never been promoted nor hired.

            Instead of actively going after phishing attempts daily, including the recent endless SPAM phishing attempts seeking access to their billing system, they got caught. Their security had been compromised from the edge all the way to employee training (with lots of neglected protection point opportunities in between).

            There is no starker example of what happens when you cater to the whims of the wrong VC group ("best advice: DROP YOUR COSTS") while lacking a true internal ethic of rewarding qualified employees (e.g. raises when warranted, training opportunities for employee knowledge growth, recognition for successes that are meaningful to the corporate mission of fanatical support, pushing for recognizing valuable women in STEM with continued learning and real engineering growth paths, and giving appreciation to the people that endlessly drive the product engineering and security). When you hire people who can't leave the social media mirror (narcissism) and who ignore the machines around them, you compound the problem.

            It's happening in all social media companies and in many tech companies. It is a FAIL. The importance of an employee's 'feelings' is not in line with the fiscal responsibility of a board at a public company. Hiring careerist HR specialists who only know how to divide the workforce based on how 'unfair' reality is will destroy companies.

          3. malloctheballots

            Re: Business Continuity

            "I'd be willing to bet most (90%+) of the customers did not take their own backups, just like most likely most Office 365 customers don't take their own backups."

            Point well taken but the marketing information and the past history of RAC was that they did it all and risk was much much lower.

            Also, Exchange went into something called cached Exchange mode by default, and you had to turn it off to get all your mail all the time. Otherwise it constantly compensates for data set sprawl by optimizing your folders - leaving things on the server until you click 'see mail on server' every time you start the program or leave the Inbox. Even your searches could hide real mail because of MS's crappy database system.

            There USED to be a PST file that held everything locally, which allowed for easy reconstruction of your data if the server died. But today you only get a fraction of what is available unless you turn that default off and wait patiently for 2-5 hours to pull all your mail down.

      2. Anonymous Coward
        Anonymous Coward

        Re: Business Continuity

        You sound like someone clinging on to the past. The future is cloud. Greenfield into cloud. It’s Microsoft 365 by the way. Keep up.

        1. Anonymous Coward
          Anonymous Coward

          Re: Business Continuity

          " The future is cloud. "

          Someone else's computer, you mean. Anyone running actually mission critical stuff on someone else's computer is an idiot.

          Also, everything managed by a US corporation is accessible to every US TLA agency. If you don't compete with *any* US corporations, it's not a big deal. If you do, it's a problem. A major problem.

          Microsoft 365 is one of the worst offenders: literally everything is relayed to the NSA directly; they don't even need to ask.

          Anyone who doesn't understand even that isn't competent enough to comment on anything.

          1. ElRegioLPL

            Re: Business Continuity

            Have you heard yourself?

            There are numerous multi billion £ companies using cloud services. Even banks, such as Monzo who use AWS.

            This isn't 1990 anymore mate

          2. Phil Kingston

            Re: Business Continuity

            >Anyone running actually mission critical stuff on someone else's computer is an idiot

            There's a lot of idiots out there then

          3. Anonymous Coward
            Anonymous Coward

            Re: Business Continuity

            Spoken like someone clinging onto a legacy on-prem skilled job

          4. Phil Kingston

            Re: Business Continuity

            I wonder how many idiots this took to organise

      3. Anonymous Coward
        Anonymous Coward

        Re: Business Continuity

        "Rackspace has been managing exchange systems for well over a decade at this point so they obviously have a lot of experience there"

        I see you missed the point altogether. If I slap a server for Exchange somewhere and it runs there until the hardware dies, "over a decade", literally, I haven't done *anything* since initial setup.

        That literally means everything I know about Exchange is 10 years old, and I have that one-time experience of setting it up. That barely counts as 'experience', and even less as 'lots of experience'.

        To me it's obvious Rackspace has literally this type of "experience" with Exchange: they could run the setup program. 10 years ago.

        1. John Brown (no body) Silver badge

          Re: Business Continuity

          "If I slap a server for Exchange somewhere and it runs there until the hardware dies, "over a decade", literally, I haven't done *anything* since initial setup."

          If you can get an Exchange server to run for 10 years without having to do a database rebuild or other maintenance, you can write your own ticket when it comes to finding a job as an Exchange admin :-)

      4. RichardBarrell

        Re: Business Continuity

        > Rackspace has been managing exchange systems for well over a decade at this point so they obviously have a lot of experience there

        More like two decades, I think, but it doesn't help if the experienced staff aren't retained. I'm led to believe that Rackspace's corporate culture went bad about 10 years ago and they lost or discarded most of the expert staff.

        Friends of mine who work at a company that was a Rackspace IaaS customer have told me that the reliability of the service provided by Rackspace to them fell off a cliff about three to five years ago. It went from "you just set up VMs here and rarely think about them again because they proactively maintain hardware" to "we got an email from them several hours after Pingdom detected sites going down, saying our VMs are just gone". They originally were charging premium pricing for a very good product, but now it's a bottom tier offering with top tier pricing.

        1. Nate Amsden

          Re: Business Continuity

          That very much could be true; I have read some similar stories. I certainly can't vouch for their quality of service, never having been a customer, only going by what I know of what their model was years ago. But support in general from many vendors has fallen off a cliff in that same time frame, which is also sad. I've experienced this myself over the years too. Can't remember the last time I read someone speak positively of VMware support, for example (even those in big accounts that spend tons of $$). On one of my last HPE 3PAR support tickets I literally had to help their support type in the right commands to get the task done (via HPE MyRoom). These were basic Linux commands (the task was to delete some ISO images related to past software updates on the storage controllers to free space). They actually wanted to replace the controller hardware because the internal disk was getting full. I forced them to escalate and not replace hardware when a few "rm" commands to nuke those ISOs was enough. What should have been a 30-minute process dragged out over days, and the final call where we did the tasks probably took over an hour while they struggled to get the commands right, until I'd had enough and intervened.

          Fortunately (and I'm sure there is some luck involved) my strategy is building simple yet reliable systems that end up rarely needing to interface with support staff. Also very conservative software versions and configurations, further limiting exposure to bugs. I'm almost always years behind the bleeding edge, allowing others to experience the bugs and get fixes first. My VMware stacks averaged less than 1 support ticket per year (with front-line VMware support provided by HPE) over a decade. Fortunately none of those tickets were too concerning.

          I read some comments last night regarding Rackspace's hiring practices; the staff seemed super protective of their domains, and were more concerned with "who you knew at Rackspace" than "what you know about the technology". Which is a bad situation for sure. All the more reason I like to operate my own stuff end to end (but I do use co-location). I haven't dealt with corporate email (as in managing it) since 2002, which I ran on Linux at the time; that was the last time I considered myself part of corporate IT, and in the years since I've been in operations.

        2. usbac Silver badge

          Re: Business Continuity

          I can confirm the three to five year time frame. I think about five years ago is most accurate.

          My former employer was a heavy Rackspace customer. Rackspace's support used to be outstanding. I've been in the IT field for over 35 years now, and Rackspace used to have the best support I had ever seen. That all ended about five years ago, however.

          It started to become obvious when you called support and found out that very good support people you had worked with for years were suddenly not there anymore. People change jobs, so the occasional person moving on is not that surprising. However, when you see most of the really good techs are gone, you know you have a problem.

          Then, after dealing with entirely US based support (and being told by the sales-weasels that all support is US based), they move most of their support offshore. At that point, we started moving all of our systems to other providers. When I left, there were only a few minor services running at Rackspace. Exchange had already been moved well before I left.

    2. An_Old_Dog Silver badge

      Re: Business Continuity

      Wasn't there a general business rule to never outsource "core functions"? Yet here we have, "messaging and calendaring are core business functions." -- being outsourced.

      1. Lil Endian Silver badge

        Re: Business Continuity

        Sense and Sensibility left the building with Elvis.

      2. trindflo Bronze badge

        Re: Business Continuity

        I recall hearing it as "never leave the mercenaries to guard the castle".

    3. Wayland

      Re: Business Continuity

      The great thing about The Cloud is they can host a huge number of customers on relatively little hardware. If you bought enough hardware yourself you'd really not be using it to its full potential. However, it's not a huge cost to the business if you run your own gear, and you can avoid these sorts of problems, or at least have that option within your own hands.

  5. NatNatNat

    Rackspace has to be finished after this

    Based on the following analysis,

    - No real comms to customers until 24 hours in.

    - The workaround is to use another service, at a cost of thousands of pounds in staff time for some orgs.

    - No timeframe to restore services.

    - Exchange had known vulns that Rackspace appears not to have patched.

    - No workaround provided for hybrid Exchange shops that host part of their off-prem Exchange infrastructure with RS.

    - Some reports suggest that some of their hosted Exchange infrastructure was running on old versions of Exchange (2010 and 2013).

    On top of that it looks like UK customers with customer personal information stored in user/process mailboxes on RS Exchange will need to submit a report to ICO as they would have lost access to the personal info.

    If the above is accurate who would trust Rackspace with anything?

    1. DJV Silver badge

      Re: who would trust Rackspace with anything

      I'd trust them to charge exorbitant fees and provide a service that is second to a nun.

      1. Lil Endian Silver badge

        Re: who would trust Rackspace with anything

        Services seconded from a nun... ordained by DJV!

        1. DJV Silver badge

          Re: who would trust Rackspace with anything

          Well, at least a nun can ask a higher power when things go wrong!

    2. Lil Endian Silver badge

      If the above is accurate who would trust Rackspace with anything?

      Again, read the title... If...

      1. The hoodwinked (sold to)

      2. The incompetent (want selling to)

      3. The wilfully ignorant, which will be either

      (a) Selling to 2

      (b) Neglectful of any result

    3. John Brown (no body) Silver badge

      Re: Rackspace has to be finished after this

      "If the above is accurate who would trust Rackspace with anything?"

      ...and, if they are selling themselves as being able to host "mission critical" services and that's what people are buying, then surely they ought to be able to fail over to a backup DC. Or are they concerned the backup DC is in the same boat? All of their DCs? Isn't the whole point of "cloud" that data is replicated and always "safe"? Obviously, we El Reg readers understand the reality of fail-over and "safe", so are Rackspace selling "mission critical" without proper redundancy, letting the punters think they have redundancy when in reality they are buying the cheapest, non-redundant options due to potentially misleading sales guff?

    4. Phil Kingston

      Re: Rackspace has to be finished after this

      - Exchange had known vulns that Rackspace appears not to have patched.

      That's a bit of an assumption

    5. Adrian 4

      Re: Rackspace has to be finished after this

      If only they'd chosen decent music-on-hold, all this angst could have been saved and their customers happy.

      Another case of learning the true cost of cheap services.

  6. anthonyhegedus Silver badge

    Clusterfuck extraordinaire

    So if they're suggesting customers use a free Microsoft Exchange service they're supplying, then

    A) this means that old emails won’t be accessible

    B) they’re advising customers to change dns and they’ll have to set up each user individually. That’ll take some doing

    C) what happens when it’s fixed? Will the users have to migrate back? Will they even want to? I sure as hell wouldn’t.

    This whole thing smacks of terrible management, terrible communication and a lot of incompetence.
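
    For anyone facing the DNS change described above, here is a hedged sketch of the zone-file records that typically need updating when repointing a domain's mail at Microsoft 365. `example.com` is a placeholder, and the Microsoft hostnames follow Microsoft's published patterns; always confirm the exact values shown in your own Microsoft 365 admin centre before changing anything.

    ```
    ; Sketch of DNS changes for moving example.com mail to Microsoft 365.
    ; The "example-com" MX target is derived from the domain name per
    ; Microsoft's documented convention; verify it in the M365 admin centre.
    example.com.              IN MX    0 example-com.mail.protection.outlook.com.
    autodiscover.example.com. IN CNAME   autodiscover.outlook.com.
    example.com.              IN TXT     "v=spf1 include:spf.protection.outlook.com -all"
    ```

    Note that changing these records redirects new mail only; as the comment above says, each user still has to be set up in the new tenant, and old mail stays wherever it was hosted.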

    1. Lil Endian Silver badge

      Re: Clusterfuck extraordinaire

      Top title!

      Be aware that if the Beeb broadcasts it, it'll be dubbed "Clusterflop Snufflebear". [1]


      [1] Heartbreak Ridge killer moment!

  7. 4palmers

    Time to Change Providers

    My company has been with Rackspace for many years. This time I'm done. They are happy to encourage us all to go to 365, but it's not that easy. It takes several steps and, of course, I need Rackspace support for a couple of them. They have to add my subscriptions, transfer my tenant ID, etc. This has been an all-day project. I have been on hold three times, totaling over 9 hours collectively. Right now I'm going on 3 hours and 30 minutes for this particular hold to get the rest of my subscriptions. And of course, the Tenant ID transfer tool was also not working, so add that to the Rackspace problems. After an hour on chat support, I am inquiring about my domain still not showing up, and he tells me I just need to be more patient. Seriously, I have spent all day on hold. I was number 1,152 in the phone hold queue this particular instance.

    1. malloctheballots

      Re: Time to Change Providers

      You would be better off by using a service like GoDaddy as a go-between or just biting the bullet and going to MS direct. We are an MSP on the East Coast of the US and offer to move people over and place them on 365 hosted.

      Either way, it is wise to get your tenant elsewhere.

  8. Anonymous Coward
    Anonymous Coward

    Rackspace Gutting

    Rackspace's customer support has plummeted since they moved/outsourced it to India. It is incredibly difficult to get even a wrong answer from them.

    I wonder if this incident is a result of the gutting of their tech teams rather than something nefarious.

  9. cb7

    "Exchange had known vulns that Rackspace appears not to have patched."

    Wait, how do you know that?

    How do we know it's not a new Exchange vulnerability?

    I'm trying to help a customer on Rackspace email and need RS support assistance. I received a callback 15 hours later. Except I missed it, as I was fast asleep.

    1. NatNatNat

      Wait how do we know this

      Kevin Beaumont has done some analysis that points to the lack of patching by RS

      1. cb7

        Re: Wait how do we know this

        "Kevin Beaumont has done some analysis that points to the lack of patching by RS "

        Thanks for the pointer. Shit. That means on-prem is still vulnerable. Bollocks

        Separately, I've requested 3x call backs from RS Support so far since Fri (5 days and counting) and have heard nothing.

        Last night I tried staying in the queue and the call cut off after 3 hours and 1 second. Likely EE, as I was calling the 0800 support no. from my mobile.

        The 1st callback did happen, but at 04:30 GMT, so I ended up missing it as I was asleep. Given RS operate internationally, you'd think they'd consider local time across time zones.

        My problem isn't even Exchange. It's a user who let their regular RS email mailbox get too full, and it now appears to be corrupted.

  10. neilo


    I have - had, rather - a custom domain on Rackspace's hosted Exchange. We only had email; no need for Office 365 or anything like that. Thus, the instructions given to create a Microsoft 365 tenant for Exchange didn't work. After two hours on hold, I gave up Friday night and spun up my own Microsoft 365 level 1 plan for our domain. Our monthly charges for email have dropped from $17.95 / mth on Rackspace to $8 / mth on Microsoft, which easily covers the price increase for Disney+ (nothing to do with Rackspace, but the price increase was in one of the last emails we received).

    Recovering emails, contacts etc. has been a pain; however, I've had all the pain and I'll be able to get my wife's stuff ported easily. I pity the people who have to deal with this for an organization. Rackspace has been no help, and even if they were of any help, recovering emails, contacts etc. is all on me.

    If this ends Rackspace, so be it. Their DR planning has been shown to be essentially non-existent for hosted exchange. What other aspects of their business has a similar lack of DR planning?

  11. anthonyhegedus Silver badge

    Amateur workaround

    Apparently their latest communiqué to embattled customers was to tell them that they can have their emails forwarded to alternative email addresses like gmail, hotmail etc.

    I expect that solution from a small provider in an emergency, but this smacks of ‘give them something to grasp because we’re in for the long haul with this one’

    1. Phil Kingston

      Re: Amateur workaround

      Do have to wonder if they're maybe finding out how slow tape can be when it comes to restoring data. Assuming they've confidently and competently rebuilt somewhere to restore it to. If indeed that is what is happening.

  12. ElRegioLPL

    "Rackspace has promised to update users every twelve hours."

    That is a fucking disgrace for such a critical service

    1. anthonyhegedus Silver badge

      It absolutely shows that they’re a) hiding the truth and b) having no hope of fixing it any time soon
