* Posts by A2HostingUserEnraged

8 publicly visible posts • joined 29 Apr 2019

Turn on, tune in, cash out: Hipster chat plat Slack whacks beardie millennials with features

A2HostingUserEnraged
Unhappy

Somewhere along the way to all these bells and whistles, the handy link to open the local folder you just downloaded a file to disappeared. I mean, two or three clicks elsewhere and there it is, but that, coupled with a recent smattering of "Let's move some interface stuff around, that feature has been there way too long", and a massive search box now slapped where I used to grab the title bar to move the window... grrr... I mean, when the search box gets focus it opens an even wider box that covers half the messages below with suggestions I don't want anyway, so why does the search box on the header bar need to be so massive?

grrr... grumble mumble whycanttheyleavethebloodythingalone..

DBA locked in police-guarded COVID-19-quarantine hotel for the last week shares his story with The Register

A2HostingUserEnraged
WTF?

Re: What a shit hole

>Other tropical countries such as Indonesia are also hard hit.

You'd think so but... I'm a coder living on Bali (and have been working at home since forever anyway) and there have been a bare handful of deaths here. A couple of months back I was expecting that by now there would be piles of corpses in the streets, but practically nada.

Sure, one suspects the official figures, but I'm in an unusually well-informed position: my girlfriend is a Balinese journalist, and her Mum has just had a kidney operation, so she's been in and out of the main hospital in Denpasar (the Bali capital) most days, initially to visit her pre- and post-op, then two or three times a week to collect medicine for her. She's also recently been paid by an Aussie newspaper to actively snoop around and investigate 'where the bodies are', but there just aren't any big piles of them. In fact, it's a notable event in the local journo community when somebody dies and it's associated with CV.

Some villages are being locked down, either by the 'Banjar' (the local village council/elders) or, in at least one case, by the military, after a cluster of people there tested positive or showed symptoms, but the number of deaths remains super-low, even with a slew of Balinese locals returning to their villages in March and April from work on cruise liners (that's been stopped now; many are still stuck on ships or quarantined on Java).

All very odd and counter-intuitive, but there you go. With the density of population here, multi-generational families crammed into small spaces, the general low level of organisation and medical facilities, especially for the poor (a.k.a. the vast majority), and the lack of a serious lockdown (the beaches are closed, and shops, bars and big-name stores like Circle K close at 9pm now), you'd think it'd be screaming through the population like wildfire, but it isn't.

Behold: The ghastly, preening, lesser-spotted Incredible Bullsh*tting Customer

A2HostingUserEnraged
Holmes

Beware enterprising users...

Several geological ages ago, it seems, I was contracting for a major financial company, writing a VB4/Access2 (yep, MANY years back... 1996, my CV says) reconciliation system. It involved users typing in data from an ever-increasing stack of paper Traveller's Cheque slips, and my code trying to make sense of the input, reconciling each record with theoretically matching data from an imported text file and producing a bunch of reports... mmmm... Crystal Reports... lovely.

It went live and worked pretty smoothly for a couple of months, then suddenly one day the numbers stopped adding up, dates were going haywire, and all sorts. Journals no longer balanced, and reports on data that shouldn't have changed for weeks had altered dramatically.

By then I was working on another project for a department upstairs. I'd not changed the code for some time, and there were no other developers who might have (all the other techies in my old team were writing code for the ominous AS/400 - IIRC, something black and TARDIS-like anyway - standing in the corner), so this was most puzzling. I tested and retested, inputting data and nursing it through the process, running the reconciliation and producing reports, and all was fine every time.

The machine it ran on was OK, running repairs on the db file produced nothing scary, no other programs used the database at all, and my program didn't even have an 'edit this old record from weeks ago' screen anyway. So how on earth was this happening?

After much head-scratching, and somewhat in desperation, I tried to get a handle on the point in the process where everything was going awry: I added a function to store record counts and totals of various fields from the important tables into another table, and had this function run at startup, at shutdown and at various key points in the workflow.
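
For the terminally curious, it was nothing cleverer than something along these lines (reconstructed from distant memory; every table, field and path name below is invented, so treat it as a sketch rather than the real thing):

    ' Snapshot the record count and value total of the slips table,
    ' tagged with where in the workflow we are - all names made up.
    Sub LogCheckpoint(sPoint As String)
        Dim db As Database, rs As Recordset
        Set db = DBEngine.Workspaces(0).OpenDatabase("C:\RECON\TRAVCHQ.MDB")
        Set rs = db.OpenRecordset("SELECT Count(*) AS Recs, Sum(Amount) AS Tot FROM tblSlips")
        db.Execute "INSERT INTO tblCheckpoints (LoggedAt, CheckPoint, RecCount, SlipTotal) " & _
                   "VALUES (Now(), '" & sPoint & "', " & rs!Recs & ", " & rs!Tot & ")"
        rs.Close
        db.Close
    End Sub

Called with LogCheckpoint "Startup", LogCheckpoint "Shutdown" and so on at the key points, so that consecutive rows could be compared afterwards.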

All worked fine from that point - a classic 'add diagnostics, problem goes away' scenario. I was wary of touching anything, so I left the row-counter code in, executed the time-honoured 'walk away slowly backwards' procedure, and things were peachy... for a while. Then the problems came back as before.

I was called back downstairs to sort things out again, checked my meta-numbers table and was surprised to find that some pretty dramatic changes in record counts were happening between two key points, namely the program closing down and the program starting up.

In that order.

The data was changing when my program wasn't actually running.

Some swift investigation produced a confession from one of the more enterprising users: in order not to have to wait until that PC was free, and to save time on pesky stuff like data validation, he had been typing his pink slips into a spreadsheet on his own PC, copying it to a floppy disk and using the Access import wizard to insert the data on the application machine.

Of course, this was just MS Access on a local drive, so credentials? Pah - anyone could get at it if they could get their bum on the seat. Local machine, no internet connection, LAN just for printers, and a super-secure room anyway with all the foreign currency lying around, so what's the problem?

He had reverted to boring old manual input when the sirens went off, and kept his head down while I was running around with my hair on fire, but then, when things had settled down, started being enterprising again - one would hope with some fixes applied to his process.

He was told in no uncertain terms by his manager not to do that again, but I tidied up the new 'DetectEnterprisingUser' functionality and left it in just in case, added some choice comments to explain why it was there, and walked away slowly again.

Several months after that I was called downstairs again, not by a phone call from the users as before, but by my current manager telling me my old manager had most sternly requested my presence at a specific time in his office that afternoon. Curious. I turned up as requested and found him, my old team leader, a suit from Personnel and a stranger waiting for me, all having clearly been there a while already.

It turned out that the stranger was a new contract developer they had hired to make some changes to my system, and he had read my 'choice comments', which included the expletive 'c*nt' in close proximity to the Christian name of the enterprising user in the explanation about the record counting. He reported me for my potty fingers, and it was only my old team leader pleading on my behalf that stopped it being fatal to my contract.

Sheesh... reporting your predecessor for rude comments... that's simply not cricket!!

That same team leader chap once fell foul of the super-secure nature of the Traveller's Cheque room: the door opened by card access and required a card to exit as well as to enter. One morning, after a particularly fraught evening out, he was at his desk and felt the need to, ah, dispel his breakfast. He got up, ran for the door, realised his card was on his desk, and made it halfway back before decorating the carpet right in the middle of the room, in full view of the entire technical team and a dozen admin workers - a great boss, and still a good friend :)

Holmes icon, because it was an investigation, and that's what I'm reading right now.

Customers furious over days-long outage as A2 Hosting scores a D- in Windows uptime

A2HostingUserEnraged

Quite. It's been over four days since their message saying they were starting to restore my db server, and not a peep since. We've moved on now, so it's just of academic interest, but jeez, over four days!

As you say, that's not an unfortunate event out of anyone's control, or the unavoidable consequence of a rogue worker's actions or of falling victim to some cutting-edge hacker - the kind of thing that could befall any company. It's a conscious decision not to communicate, which is hard not to interpret as them having nothing to restore and clamming up rather than admitting it.

Even if my server was completely unaffected by the outage, I'd be making plans to get the hell off their hosting asap.

A2HostingUserEnraged

No responses on any of the tickets I have open (one is entitled 'Your chat agent pasted a response and cut me off' and another 'You closed this ticket without replying so I reopened it'). The last update on their internal status feed was nearly 40 hours ago, saying they were restoring 'the Singapore database server' - presumably the one I'm on, if there's just the one there - but nothing since. I'm not an ops guy, but 40 hours seems like an awfully long time to be restoring a server unless they're typing it back in from a hardcopy.

It does look rather like they're either sitting on their hands re customer comms, or the building has burned down or something (maybe the guy who designed the network architecture tried to rewire a mains plug).

It seems they use a lot of remote techs for support, so the poor folk on the end of the chat probably really have no idea what's going on either, rather than stonewalling, which is what I assumed at first.

We gave up waiting, migrated, and restored from a recent-ish DB backup I happened to have taken locally to fix a bug, and the business folk are literally re-keying stuff from the bin and from memory. We're a small outfit and empty our own bins, so that doesn't happen very often.

At this point it really doesn't matter what they come up with, as it'll be quicker to finish re-keying than to work out how to merge their backup into what is now our latest data, even if theirs turns out to be more recent than the one I downloaded, which seems unlikely. As such, I no longer care if they can identify which client I am from my angry posts here and put our restore at the end of the list out of spite.

A2HostingUserEnraged
Paris Hilton

Nothing compares to A2

It's been 14 hours and 6 long days,

Since you took my servers away,

I stay in every day and can't sleep at night,

Since you took my servers away,

Since they've been gone I can't do anything I want,

Can't do anything I choose,

I can't go out to eat in a fancy restaurant,

Cos nothing

I said nothing can take away these blues (well, apart from getting my bleedin' data back !)

Cause nothing compares

Nothing compares to A2

It's been such a nightmare without you here

Like a business without a database

Nothing can stop these angry phone calls from coming

....got bored....

Not one of my servers is back; it'll be a week in a few hours. Still no idea what data will be there when something does return.

Already migrated, but running with stale data and re-keying and re-stocktaking.

Last update 22 hours ago.

A2HostingUserEnraged

>left them with no unaffected backups more recent than 2 months old

To be fair, I don't know if this is the case across the board, but several people who have got their servers back have complained on the Twitter alert feed that the data restored was from February. I've obsessively watched that feed for days in search of snippets of information and have seen no messages saying anyone has been set back three days, ten days, one month or any timespan other than two months, so it's certainly the state of affairs for many users.

Since I have a local backup of our live db from rather more recently than that, I also have no idea whether I'm essentially waiting for nothing, and whether we should just cut our losses, revert to the backup I happened to download and go from there. I have our system running on another host now with that backup, so at least internal users can see a stale copy, but I would very much like to know when the latest clean backup of our server was taken; we could have spent the past few days re-keying what was lost rather than still waiting, possibly for nothing.

A2HostingUserEnraged

I must say, I too feel wholeheartedly for the people on their support chat and phone lines, and for the operators trying to sort the mess out, but the overall architecture of the setup at A2 has to be badly flawed if all of their data centres can fall victim to malware that hits just one of them - Singapore in this case, it seems.

That architecture should have been inspected and reviewed internally, and to some degree externally, and the flaws that allowed a single burning dumpster to set the entire fleet alight could have been highlighted and resolved. I have no sympathy at all for the folk who should have been making sure that happened.

Similarly, whatever lack of process or diligence left them with no unaffected backups more recent than two months old, it's hard to sympathise with whoever is responsible for that.

I would, however, reserve my ire especially for whoever is making the decisions about how the progress updates are dribbled out.

Given that there are dozens of servers, and understandably every user wants to know when theirs will be restored, A2's approach of stonewalling all specific enquiries and just popping up every 6-16 hours saying 'server xwz-123 is restored and online' seems to be the optimum way to generate floods of vitriolic tickets, caustic Twitter remarks, shouty phone calls and the like into every support channel they have, all of which are then fielded by the poor souls on the support desks, who aren't allowed to tell users anything more than 'please refer to the service status page'.

Somewhere in A2 Towers is a spreadsheet or the like with a list of servers, their current status and some idea of the order in which they will be attended to, perhaps with the date of the last known unencrypted backup, maybe even with a rough percentage complete for the ones in progress. If this were published, even in a partly redacted form, users could make their own estimates of how long it will be before theirs is complete, and not have to constantly badger A2 for the information.

Admittedly, it would generate some argument from those whose machines are at the bottom, but if there were some rhyme and/or reason to the ordering, and that justification were published with the table, then it would be fair enough in some way and would at least only generate baleful squawks from the folk at the bottom of the list.

For example, if I had known that my servers wouldn't even have been started on by now (which, as far as I can tell, they haven't), I would have gone out this past weekend, maybe watched the Grand Prix at a bar or got madly drunk, maybe both, and not sat at my desk watching my recurring ping batch file prodding at my FTP, db, web and email servers, ready to grab whatever I could when one finally responded.

I could have told the business folk that it looked like it would be early in the week, not 'before the weekend' as their announcement on Thursday said, and they could have made decisions on that basis as to whether to wait or to start re-keying orders and counting stock.

The whole approach of drip-feeding one server name at a time as each is completed, and refusing to give any information about the others, just causes huge inconvenience and a great deal of irritation, and stops anybody from making even vaguely informed business decisions about the way forward for their particular setup.

I suspect it's this approach, and the bad feeling and outright rage it is causing - not the outage itself - that will kill A2.