I was in a meeting with a customer, and we saw (yes, I've got witnesses) that all my files (from my ftp area) had been renamed to .lock, with a message stating the area had been attacked and encrypted.
Sounds remarkably Ransomware-ish
Oops.
Bewildered customers of A2 Hosting have endured a multi-day outage this week as the company battled to clear some pesky malware from its fleet of Windows Servers. Problems surfaced early on Tuesday, 23 April, shortly after the company deployed the duct tape to deal with a "service interruption" at its Singapore facility. Users …
Hosting companies have a lot to worry about: server stability, virtual server management, complicated administration rights, I get it.
But there is no reason on Earth that I can accept for a hosting company falling to malware. Your staff are supposed to be alert to that; phishing is a well-known concept that network administrators are supposed to know by heart. A hosting company falling to malware has failed its number one obligation: having trained personnel to deal with issues. If A2 has indeed had its servers taken over by malware, then it is through sheer incompetence, either of management or in implementing procedures properly.
If I were being hosted by them, I would be transferring my site elsewhere pronto.
Not just that.... your systems need to be 100% isolated from those you're managing, AND those need to be isolated from each other.
If you can't manage that, then for all I know, Joe Bloggs who rents a £2.99 VPS next door to my server is actually living in my server scot-free and wreaking whatever havoc he likes with my data. Or your call-center agent is one click away from knocking my server offline and losing my credit card details in the process.
Security is about isolation - VLANs, filtering, port-settings, firewalls, VPNs, administrative back-end networks, privilege separation, etc. etc.
If you fail at that in the design stage, it really doesn't matter what happens in the implementation stage, I wouldn't want to touch you.
And then you get to the backup scenario: why are you not able to rebuild every hypervisor machine from a clean image with deployment tools, then pull individual backups of the guest machines back in? Retention of those backups should be on a generous grandfather-father-son basis, or, worst case, you restore an image and tell your clients to restore from their own backups.
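For the avoidance of doubt, here's a minimal sketch of the sort of grandfather-father-son retention I mean - Python, with purely illustrative retention counts. Even a full year of daily backups boils down to a couple of dozen retained copies, some of which will always predate any plausible infection window.

```python
# Minimal sketch of grandfather-father-son retention, assuming one backup
# per day; the retention counts (7 daily, 4 weekly, 12 monthly) are
# illustrative, not a recommendation.
from datetime import date, timedelta

def gfs_keep(backup_dates, daily=7, weekly=4, monthly=12):
    """Return the subset of backup dates a GFS rotation would retain."""
    dates = sorted(backup_dates, reverse=True)
    keep = set(dates[:daily])                    # sons: the newest dailies

    weeks, months = [], []
    for d in dates:
        wk = (d.isocalendar()[0], d.isocalendar()[1])   # (year, ISO week)
        if wk not in weeks and len(weeks) < weekly:
            weeks.append(wk)
            keep.add(d)                          # fathers: newest backup per week
        mo = (d.year, d.month)
        if mo not in months and len(months) < monthly:
            months.append(mo)
            keep.add(d)                          # grandfathers: newest per month

    return sorted(keep)

# A year of daily backups ends up as roughly 20 retained copies, the oldest
# of which is many months old - old enough to predate an infection like this.
history = [date(2019, 4, 30) - timedelta(days=i) for i in range(365)]
kept = gfs_keep(history)
print(len(kept), "copies kept, oldest:", min(kept))
```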
Windows malware able to run riot through a datacenter unchecked enough to affect every customer is an indication of an amateur-hour setup.
I can't even fathom how it would be able to spread from one machine to another, or why the backend control and administration systems would be using Windows at all.
True. I don't want to add to the 'A2 bad' sentiment - the article and comments so far cover it nicely.
What has not been emphasised enough is that when your IT is hosted by a company that claims to have experts looking after your IT infrastructure 'so you don't have to', you are completely in their hands when the shit hits the fan. You have no option but to sit back and wait for their expert help (compared with, for example, TSB hiring IBM to sort out their recent migration cock-up). Admittedly you can leave them once they've fixed the immediate problem - but there's not usually that option during the crisis.
A business *could* try to protect itself by using multiple separate hosting providers but that requires proper planning and investment in IT infrastructure which seems to be anathema to many businesses.
Proper Planning Prevents Piss-Poor Performance as a number of ex-colleagues would say.
Really? I would expect any company exposing RDP to fail any basic pen test - even the smallest of companies have accepted that it's not a practical option and adopted a VPN alternative that increases the complexity of brute force attacks via either pre-shared keys or certificates.
I know scans show open RDP everywhere, but a hosting company doing this - let alone having any control plane functions directly accessible from the Internet - is a disaster waiting to happen.
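The arithmetic alone makes the point; a throwaway Python sketch comparing nothing more than keyspace sizes (illustrative figures, not a model of any real attack):

```python
# Rough illustration of why a pre-shared key or certificate changes the
# brute-force maths versus guessing an RDP password: the sheer size of the
# keyspace an attacker has to work through.
import math

keyspaces = {
    "8-char alphanumeric RDP password": 62 ** 8,
    "256-bit VPN pre-shared key":       2 ** 256,
}

for name, space in keyspaces.items():
    print(f"{name}: {space:.2e} possibilities (~2^{math.log2(space):.0f})")
```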
A host I used to use had all of its phone lines (VoIP) and email running through the same feeds as the servers. I guess it made things quieter while they did their troubleshooting, since they were completely cut off from the world and couldn't even put up a system status message (their home page would be offline too).
I won't bother to condemn the company, their own consciences are no doubt punishment enough; nor do I facilely insist that 'you get what you pay for' in the sense of monetary cost, since often enough reasonably priced goods are superior to their 'exclusive' [ = mass-marketed ] heavily branded rivals; but in a deeper sense this is the result of choosing inferior tools.
People cheaping out by using Microsoft have to pay in other ways over the longer term.
One of my sites is back up, while the other isn't working from their restore. I have e-mails again, BUT I'm missing over a month's worth of e-mails from all the accounts I have with them.
I had a chat with their support, for which you need to wait hours before you get to talk to someone, with those who have paid for premium support pushing in front of you. They told me that they have suffered a malware attack (as another commenter already indicated) and that their backups were also affected, so they don't currently have anything newer to restore. I asked them whether they were going to pay to get their files decrypted and the chat representative said that nothing had been announced yet - so basically he is as clueless as the rest of us.
Way before this incident I'd written reviews of A2 Hosting online which slated their technical support as incredibly poor, and also their indifference to security in offering webmail without SSL (yes, in 2018 and probably still today - their support claims that some system they use doesn't support SSL for each account, but they didn't seem to grasp that they shouldn't be offering webmail without SSL at all). But even I didn't expect this level of complete incompetence from them. If they don't manage to get my e-mails back, they can expect to be reported to the ICO by me.
If anyone would like to recommend a quality budget hosting alternative which provides ASP.NET hosting, I'd love to hear about it. I'm currently looking at eUKHost, but I'm not sure. I used to be with ICDSoft, who were amazing, but sadly they don't do Windows.
I have a site on A2 Hosting due to the great pricing. They advertised as allowing shared-hosted Python applications, but the support for that was barely adequate. Just recently they admitted that they are unable to support Python applications and that I should buy a VPS. I think I'll just self-host my hobby site instead. I think the cPanel sites are on Linux, so my site should be OK. Just checked. It's dead.
Having been aboard since punch cards had to be carried through the snow in punch card boxes, I find this sad, but unsurprising. All the appropriate words should be used (but not on FB), but there's nothing to do but take it apart piece by piece and put it back together. I was told, when email was just beginning, to always consider anything I wrote online to be as public as a postcard. I have also had the experience of my hard drive failing the same morning as my Colorado Backup. Yes, we recovered, for $2,000 in the Nineties... God knows how much that would be now.
I've run across a lot of folks running A2 Hosting because I've been in the computer world so long and because I live in Ann Arbor (A2), where so many of the geeks and gurus reside. Everything goes down sooner or later, and unless you're the Guru in Chief, there's nothing to be done but wait. I advise my staff to eat chocolate and play in their favorite ways. This is their only chance for a holiday. It will be all too soon before we're swamped again. Nobody panic. It won't help. Try to guess what the next big crisis will be.
96+ hours after the outage began, and well past their 'before the weekend' promise, there have been no updates for over 4 hours on their status page, their @A2Hosting_alert feed, or my open ticket politely asking when our specific servers might be in the mood again.
Still no information on whether, when they do manage to dig an uninfected punched-card backup out from behind a filing cabinet and get their vaunted 'highly trained team of monkeys' to type it all back in, it'll be from the night before this debacle began, or how long before that.
A dozen or so server names have been mentioned in the now epic 'system status' thread as having been recovered and now back online but no impression of how much of the oft-mentioned 'recovery process' is actually complete, how many servers are left to recover, which are being worked on now or why on earth it's all taking so long.
We have no email, no ftp, no database, no web server - it's fair to say that we have no business.
How to mortally wound your hosting company: have a 4-day outage.
How to make damn sure of the job with an ice pick: resist the urge to tell users anything and be as vague and generic as possible when you are forced to. Under no circumstances give any indication as to how complete the process is.
"We will provide more updates as they are made available." - made available by whom? - is the recovery being done by séance or something? - see those guys in lab coats over there running around with their hair on fire? - go ask one of them!
Even without considering the SEO / marketing consequences of such a prolonged downtime, my biggest problem is with my students, who are waiting for their kung fu courses...
I am a martial arts instructor (so take my words with the necessary caution), but from what I know about ransomware:
- If A2 Hosting does not pay for the restore, ransomware typically needs (at least) 4-6 months before it can be decrypted
- It is not certain that they even have the option to pay (it could be the work of a competitor)
- It is not certain that the ransomware will ever be decryptable for free (as happened, for example, with SOME versions of the GandCrab cryptolocker)
The question is: what should we do?
From numerous pointed customer messages on their @A2Hosting_alert feed, and in the absence of any word to the contrary from A2 on the matter in their replies or their status updates, it very much appears that they have no uninfected backups more recent than 2 months ago.
Several clients on the feed have said, on finally gaining access to their filestore and databases, that their data has reverted to mid-February, with the blithely nonchalant replies from A2 being that the most recent backup has been applied and that if the customer has a more recent backup of their own, they will be pleased to help restore it. Which is nice.
In the past few days I have read a lot of complaints on Twitter about A2 Hosting. Many customers' websites have been down for almost a week. Luckily, I had moved to asphostportal before the bad thing happened. I am grateful to have been able to save myself from the chaos.
As someone who works in the firing line, I feel for the A2 techs. Last week must have been a nightmare for them. The stress levels would have been through the roof. Unless I'm missing something, it seems their competence is being criticized without due awareness of what really went on behind the scenes. Perhaps A2 was the victim of a rogue employee, or the testbed of some fancy new malware, or they were targeted by a top tier hacker. Who knows, perhaps their machines were infected at the firmware level before they were even delivered onsite, with the malware only awakening from hibernation last week. Yes, it is a stretch but the reality is, given the information above, we don't know why this happened in this case.
What I do know is that the advantage is almost always with the attacker. If you polled sys admins globally, I'm sure a high percentage of them would register as not confident their systems would survive a concerted attack by a skilled hacking team.
Because of all this, whenever I read a disaster story like the above, I don't pick up stones and join the mob. I instead think: "but for the grace of God go I".
I must say, I too feel wholeheartedly for the people on their support chat, the phone lines, and the operators who are trying to sort the mess out. But the overall architecture of the setup at A2 has to be badly flawed if all of their data centres can fall victim to malware that hits one of them - Singapore in this case, it seems.
That architecture should have been inspected and reviewed internally, and to some degree externally, and the flaws that have allowed a single burning dumpster to set the entire fleet alight could have been highlighted and resolved. I have no sympathy at all for the folk that should have been making sure that happened.
Similarly, with whatever lack of process or diligence left them with no unaffected backups more recent than 2 months old, it's hard to sympathise with whoever is responsible for that.
I would, however, reserve my ire especially for whoever is making the decisions about how the progress updates are dribbled out.
Given that there are dozens of servers, and understandably every user wants to know when theirs will be restored, A2's approach of stone-walling all specific enquiries and just popping up every 6-16 hours saying 'server xwz-123 is restored and online' seems to be the optimum way to generate floods of vitriolic tickets, caustic twitter remarks, shouty phone calls and the like into every support channel they have; which are then fielded by the poor souls on the support desks who aren't allowed to tell the users anything more than 'please refer to the service status page'.
Somewhere in A2 Towers is a spreadsheet or the like with a list of servers, their current status and some idea of the order in which they will be attended to, perhaps with the date of the last known unencrypted backup, maybe even with a rough %age complete for the ones in progress. If this were published, even in a partly redacted form, users could make their own estimates at how long it will be before theirs would be complete and not have to constantly badger A2 for the information.
Admittedly, it would generate some argument from those whose machines are at the bottom, but if there were some rhyme and/or reason to the ordering, and that justification published with the table, then it's in some way fair enough and would at least only generate baleful squawks from the folk at the bottom of the list.
For example, if I knew my servers wouldn't have even been started on by now, which as far as I can tell they haven't; I would have gone out this past weekend, maybe watched the grand prix at a bar or got madly drunk, maybe both; and not sat at my desk watching my recurring ping batch file prodding at my ftp, db, web and email servers ready to grab whatever I could when one finally responds.
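(That 'ping batch file' is nothing clever, by the way; a minimal Python equivalent, with placeholder host names rather than our real ones, is roughly this.)

```python
# Minimal sketch of the "keep prodding the servers" loop mentioned above,
# in Python rather than batch. Host names and ports are placeholders.
import socket
import time

SERVICES = {
    "ftp":   ("ftp.example.com", 21),
    "db":    ("db.example.com", 3306),
    "web":   ("www.example.com", 80),
    "email": ("mail.example.com", 25),
}

def is_up(host, port, timeout=5):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    stamp = time.strftime("%H:%M:%S")
    for name, (host, port) in SERVICES.items():
        print(f"{stamp}  {name:<5} {'UP' if is_up(host, port) else 'down'}")
    time.sleep(60)  # poll every minute
```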
I could have told the business folk that it looked like it would be early week, not 'before the weekend' like their announcement on Thursday said, and they could have made decisions on that basis as to whether to wait, or to start re-keying orders and counting stock.
The whole approach of drip-feeding a server name at a time as they are completed and refusing to give any information on any others just causes huge inconvenience, a great deal of irritation and stops anybody making any vaguely informed business decisions about the way forward for their particular setup.
I suspect this approach, and the bad feeling and outright rage that it is causing - not the outage itself - is what will kill A2.
>left them with no unaffected backups more recent than 2 months old
To be fair, I don't know if this is the case across the board, but several people have complained in the Twitter alert feed, on getting their servers back, that the data restored was from February. I've obsessed over that feed for days searching for snippets of information and have seen no messages saying anyone has been set back 3 days, 10 days, one month or any other timespan but 2 months; so it's certainly the state of affairs for many users.
With my having a local backup of our live db from rather more recently than that, I also have no idea whether I'm essentially waiting for nothing, and whether we should just cut our losses, revert to the backup I happened to download and go from there. I have our system running on another host now with that backup so at least internal users can see a stale copy, and would very much like to know when the latest clean backup for our server was taken; we might have been moving along for a few days re-keying what was lost rather than still waiting, possibly for no reason.
Typically, for convenience, a company will have all their Windows boxes in a domain, and staff will be logging in as domain admin (or using service accounts with domain admin privileges, etc.); those creds can be extracted from a single compromised system and used to access all the others. Malware can also automate this process, harvesting creds and spraying them against other hosts.
It's possible to manage Windows differently, but it's also a lot more hassle to do so, so very few places bother.
Also, your domain controller needs SMB open from the domain members, and if you have the right creds you can log in over SMB and take control of the machine - so you can spread from one host to the DC, and then from the DC to other hosts. That's assuming the individual devices don't allow direct SMB connections between each other (which often they do anyway).
Unix is much easier to manage securely (and is more commonly configured so), using SSH keys: if a single box gets compromised, all the attacker gets is the public keys, which are useless on their own.
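To illustrate the asymmetry, here's a minimal sketch using Python's paramiko, with placeholder host, user and key paths: the admin authenticates with a private key that never leaves their own machine, while the managed server only ever holds the public half in ~/.ssh/authorized_keys, so there is nothing on a compromised box worth harvesting and spraying elsewhere.

```python
# Minimal sketch of SSH key-based admin access (paramiko; placeholder names).
# The private key stays on the admin workstation; the managed server stores
# only the public key in ~/.ssh/authorized_keys, so compromising the server
# yields no credential that works anywhere else - unlike cached domain-admin
# creds on a Windows domain member.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
client.connect(
    "server.example.com",                        # placeholder host
    username="admin",                            # placeholder user
    key_filename="/home/admin/.ssh/id_ed25519",  # private key, never deployed
)

_, stdout, _ = client.exec_command("uptime")
print(stdout.read().decode().strip())
client.close()
```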
It's been 14 hours and 6 long days,
Since you took my servers away,
I stay in every day and can't sleep at night,
Since you took my servers away,
Since they've been gone I can't do anything I want,
Can't do anything I choose,
I can't go out to eat in a fancy restaurant,
Cos nothing
I said nothing can take away these blues (well, apart from getting my bleedin' data back !)
Cause nothing compares
Nothing compares to A2
It's been such a nightmare without you here
Like a business without a database
Nothing can stop these angry phone calls from coming
....got bored....
Not one of my servers is back, it'll be a week in a few hours. Still no idea what data will be there when something does return.
Already migrated, but running with stale data and re-keying and re-stocktaking.
Last update 22 hours ago.
May 1, and the servers are still down. Backups are not available to customers, so we're trapped. All support tickets have been closed off, and A2's own website is down.
My web sites are losing all their traffic, my apps are being uninstalled and bad reviews are being given, but no doubt a $5 credit will be offered to customers as a slap in the face.
I am a bit concerned. They are not replying, and have not given any update since the 29th.
Also, has anyone noticed they have not posted anything on Twitter or Facebook since the 23rd? It looks like they have abandoned ship.
I just hope this is because they are all busy trying to solve the problem... but it doesn't look good.
Let's see...
No responses on any of the tickets I have open (one is entitled 'Your chat agent pasted a response and cut me off' and another 'You closed this ticket without replying so I reopened it'). The last update on their internal status feed was nearly 40 hours ago, saying they were restoring 'the Singapore database server' - must be the one I'm on if there's just the one there - but nothing since. I'm not an ops guy, but 40 hours seems like an awfully long time to be restoring a server, unless they're typing it back in from a hardcopy.
It does look rather like they're either sitting back on customer comms or the building has burned down or something (maybe the guy that designed the network architecture tried to rewire a mains plug).
It seems they use a lot of remote techs for support, so the poor folk on the end of the chat probably really have no idea what is going on either, rather than stone-walling, which I assumed at first.
We gave up waiting, migrated, restored from a recent-ish DB backup I happened to take locally to fix a bug and the business are literally re-keying stuff from the bin and from memory. We're a small outfit and empty our own bins, so it doesn't happen much.
At this point it really doesn't matter what they come up with as it'll be quicker to finish re-keying than to work out how to merge the backup into what our latest data is now even if it is more recent than the one I downloaded, which seems unlikely. As such, I no longer care if they can identify which client I am from my angry posts here and put our restore to the end of the list in spite.
Exactly my thought. Shit happens, though I do question the length of time it is taking to restore and get back up online. BUT... the whole stonewalling customers and leaving them hanging - there is just no excuse for that. THAT is not an accident or some unlucky external factor playing havoc; it is a bad and unprofessional decision by the people running A2 Hosting. And THAT decision will be the noose by which they hang themselves and the reason customers will flock away. If they had communicated and given me some information, even if that was 'sorry, but it will take us at least a month and we don't have backups from the last 2 months', I would have known where I and my business stand (making it possible to take some action and inform clients etc.). Now I'm left in limbo, and so are my clients. Unforgivable.
Quite - it's been over 4 days since their message saying they were starting to restore my db server, and not a peep since. We've moved on now, so it's just of academic interest, but jeez, over 4 days!
As you say, that's not an unfortunate event out of anyone's control, or the unavoidable consequence of the actions of a rogue worker or being victim to some cutting edge hacker that could befall any company; that's a conscious decision not to communicate, which is hard not to interpret as them having nothing to restore and clamming up rather than admitting to it.
Even if my server was completely unaffected by the outage, I'd be making plans to get the hell off their hosting asap.