
D'oh!
Where’s the Homer Simpson icon??
InfluxData has lost the data of customers using its services in Australia while users in Belgium are struggling to figure out if they can restore the last 100 days. The vendor behind the InfluxDB time series DBMS has now apologized to customers caught out when it discontinued InfluxDB Cloud service in the two regions: AWS …
They were paid to look after other people's data. They didn't. If you're paid to do that, you make sure the data is preserved even if you withdraw the service.
A DBA's priorities:
1. Ensure the safety of the data.
2. Ensure the continuity of the service.
In that order.
It seems like the DBAs at the companies affected have some questions to answer:
1/ Why didn't they enter into contracts that protected against this case?
2/ Why didn't they have a separate backup of their oh-so-important data?
3/ Why didn't they pay attention to the emails etc. that Influx sent out?
That claim would have to rest on some sort of *implied* guarantee of service - which might work in a consumer court, but not in B2B, where parties are expected to read the contracts they sign.
I haven't seen the contract itself, but I'd expect that it has clauses that explicitly disclaim all consequential damages; that compensation is limited to service credits; that they have the right to cease the service by providing X days' notice; and that serving the notice via email is treated as sufficient.
That doesn't mean they weren't jerks. At a minimum they should have run brownouts prior to closure, identified the customers still using the service and helped them migrate, and even at the end they should have *switched off* the service but not *deleted* anything for at least a month.
However, you can be sure that there will be no redress in court for those affected. Maybe some private deals will be reached, but only to sweeten those customers into staying with InfluxDB rather than walking to a competitor.
"In hindsight, our assumption that the emails, sales outreach, and web notifications would be sufficient to ensure all users were aware of and acted on the notifications was overly optimistic,"
They were evidently acting as DBAs for their customers. I've said a number of times that the first requirement of a DBA is paranoia. That comes before any particular skills or product knowledge. Optimism of any degree is not a substitute; it's the exact opposite.
The other thing I've said here a number of times is something their customers are now realising: cloud is somebody else's computer and you don't control it.
"We have a running use case and are not in the habit of checking the documentation every week just in case our service gets cancelled without prior warning."
When you store your data on someone else's computer, it's probably necessary to keep checking just in case.
When decommissioning *your own* production data where you haven't had absolute confirmation that it isn't in use, you at least snapshot or back it up before removing it from your active set. That way you can quickly revert a potential mistake, which is ALWAYS a possibility when performing any non-standard operation on a data source.
It’s frankly imbecilic not to do this with customers’ data.
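For what it's worth, the safety net is a few lines of automation. A minimal sketch, assuming the data sits on an AWS EBS volume managed with boto3 (the volume ID and retention tag are hypothetical, and the volume is assumed to be already detached):

```python
# Minimal sketch, assuming AWS EBS + boto3; the volume ID and tag values are hypothetical.
import boto3

ec2 = boto3.client("ec2")
VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical volume being decommissioned

# 1. Snapshot first, and record an explicit retention hint on the snapshot.
snap = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="pre-decommission safety copy",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "retain-days", "Value": "90"}],
    }],
)

# 2. Wait until the snapshot has actually completed, not merely been requested.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 3. Only then remove the volume from the active set (assumes it is already detached).
ec2.delete_volume(VolumeId=VOLUME_ID)
```

The specific API doesn't matter; what matters is that the copy demonstrably exists before anything is deleted.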
>What guarantees do the Cloud providers give for data recoverability and integrity.
None whatsoever. I've been through this with clients who have lost data and servers from cloud providers (Azure & AWS) and VPS providers (Vultr, IONOS, HeartInternet and others). Even when you replicate data to multiple zones, a stray deletion command in one zone replicates to the others before you can finish your brew and investigate. The contracts are always watertight on the providers' side; there is no comeback or compensation when it happens.
As an earlier poster remarked: there is no cloud, it's a mental abstraction, only other people's servers. If you are not doing your own, locally stored backups and replicas then you are asking for a disaster. The cloud isn't some magic place where data is secure. It's just a marketing abstraction to make you believe that's the case. It's also fucking expensive compared to running your own, even on a small scale, unless you are in a location where appropriate bandwidth isn't available. In which case you probably chose the wrong location to base your data-intensive business.
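If it helps anyone, here's roughly what a locally stored copy looks like for the product in question - a sketch assuming the influxdb-client Python library against a v2/Flux-capable instance; the URL, token, org, bucket and time window are all placeholders:

```python
# Rough sketch of a periodic local export, assuming influxdb-client (InfluxDB v2 API).
# The URL, token, org, bucket and time window are placeholders.
import csv
from influxdb_client import InfluxDBClient

with InfluxDBClient(url="https://example-cloud-host:8086",
                    token="REDACTED", org="my-org") as client:
    # Pull the last 24 hours as annotated CSV.
    rows = client.query_api().query_csv(
        'from(bucket: "metrics") |> range(start: -24h)'
    )

    # Write it to local disk - a copy the provider cannot delete out from under you.
    with open("metrics-last-24h.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for row in rows:
            writer.writerow(row)
```

Cron it, keep a few generations, and test a restore now and again. Replicas inside the same provider are not backups.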
I've worked with this company and their products in the past and have found them to be useful. I've also worked with "cloud" providers and universally found them to not give a rat's ass about my data, making data integrity and disaster recovery wholly my responsibility. Yes, a scream test is always warranted, just did this myself on around a thousand servers. No one responds after multiple attempts at contact? OK, power off, but wait to destroy for a month or more. Saved my bacon more than once, and I'm pretty surprised this particular company let themselves forget that lesson.
Unfortunate bad press, certainly, but corporate suicide it ain't.
Thousand servers at once, good God, I imagine there were a lot of screams unless it was 99% unused legacy VMs.
But yeah, even if I'm removing a server that I'm 99% sure is part of an already decommed system and just got left behind, I'll still shut it down to do a scream test for peace of mind before I nuke it.
Oh, and following the change process is pretty essential for anything that is or has been production. It's just not worth getting chewed out or fired for being a cowboy.
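For anyone who hasn't automated it, a scream test is just a two-phase decommission. A rough sketch with boto3 - the tag name, instance handling and 30-day grace period are my own choices, not anyone's official process:

```python
# Sketch of a two-phase "scream test" decommission with boto3.
# Phase 1 stops the instance and records a destroy-after date;
# phase 2 (run later, e.g. from cron) terminates only instances past that date.
# The tag name and 30-day grace period are arbitrary; pagination is omitted.
from datetime import date, timedelta
import boto3

ec2 = boto3.client("ec2")
GRACE_DAYS = 30

def phase1_power_off(instance_id: str) -> None:
    """Stop the instance and tag it; nothing is destroyed yet."""
    ec2.stop_instances(InstanceIds=[instance_id])
    destroy_after = (date.today() + timedelta(days=GRACE_DAYS)).isoformat()
    ec2.create_tags(Resources=[instance_id],
                    Tags=[{"Key": "destroy-after", "Value": destroy_after}])

def phase2_reap() -> None:
    """Terminate only stopped instances whose grace period has expired."""
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag-key", "Values": ["destroy-after"]},
        {"Name": "instance-state-name", "Values": ["stopped"]},
    ])
    today = date.today().isoformat()
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get("destroy-after", "9999-12-31") <= today:
                ec2.terminate_instances(InstanceIds=[inst["InstanceId"]])
```

If someone screams during the grace period, you start the instance back up and drop the tag; nothing has been lost, and the change record shows exactly what happened and when.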