Disgruntled ex-employee, maybe?
BTW, I wouldn't want to run my accountancy software "in the cloud" either.
Accounting software outfit Sage Group has been hit by a data breach affecting between 200 and 300 of its customers. The company told Reuters the breach was a misuse of an employee login. This post by Richard De Vere at “The AntiSocial Engineer” claims an employee was behind the breach. Sage says it doesn't know how much data …
Not even desktop software on a "subscription" basis, IMO - but that's another thing Sage are doing (as noted in the article).
When it started, I asked their Twitter bod about what happens when the sub ends (bearing in mind you have a legal obligation to maintain your records for a minimum of six years after the fact); would you still be able to access it (i.e. would it at least become read-only)? The answer was no.
You can get around that by making sure you export reports for just about everything - but loading the software and drilling into it to find stuff, if that's ever necessary, is a darned sight easier than searching through loads of PDFs.
See http://www.computerevidence.co.uk/Cases/CMA.htm
"R v Richard Goulden
Southwark Crown Court The Times 10 June 1992 Computer Weekly 18 June 1992
Computer Misuse Act 1990, ss 1, 3 Unauthorised modification - Denial of access
Software contractor in dispute with company over unpaid fees installed access control security package. Denial of access by withholding password. Alleged damage of £36,000. Defendant convicted. Conditional discharge and £1650 fine."
That appears to be a somewhat different situation, since the defendant installed software to deny access, hence falling foul of the computer misuse act - but it could prove to be a useful case to refer to if the problem ever led to a legal case.
As I recall, the lock was already there at the time. If he was paid, he would log in and remove it.
He got fined because the company could not access their records; if he had provided a way for them to still access the data - but not amend or update it - then he would have been OK.
Come on, this was rather predictable.
When you push such data out beyond your own network boundaries, instead of following best practice and sticking it on an internal, controlled, separate subnet, your attack footprint enlarges massively: now the whole planet can have a go at accessing your rather critical business data, instead of just insiders (or hacked inside systems, which still leaves multiple layers of defence).
Whoever thought that was a sane idea - at both Sage and its customers - ought to be quickly kept away from anything sensitive, because they clearly have no talent for protecting it.
The difficulty I find with cloud is that it's impossible, when standing on the outside and peering into the darkness, to see whether it's been designed, secured, built and maintained properly.
After all, this is just "someone else's data centre". Snake oil and promises that it's all OK do not cut it.
What would help is a recognised minimum standard that people had to demonstrate they met during design and build, with regular re-evaluation - say annually, or on significant change. Loads of recognised infrastructure good practices, ISO 2700x and all the security good practices seem to be conveniently forgotten when the word "cloud" is inserted into a marketing slide.
Even in cases like AWS, where it's designed and built right, that doesn't mean the person deploying into the cloud platform understands the technology stack and has configured it correctly. How many Amazon training examples use 0.0.0.0/0 in their firewall rules? How many people leave that in place, and just forget to tidy it up before they start the fanfare and marketing that they are in the cloud?
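That "tidy up before the fanfare" check is easy enough to automate. A minimal sketch, in Python: scan a list of firewall rules for any whose source CIDR covers the whole internet. The rule format here is hypothetical (loosely modelled on what cloud providers return), not any specific provider's API.

```python
# Illustrative sketch: flag firewall/security-group rules left open to the
# entire internet (0.0.0.0/0). The dict shape of each rule is an assumption
# for this example, not a real cloud provider's schema.
import ipaddress

def world_open_rules(rules):
    """Return the rules whose source CIDR matches every IPv4 address."""
    flagged = []
    for rule in rules:
        net = ipaddress.ip_network(rule["source_cidr"])
        if net.prefixlen == 0:  # /0 means "anyone, anywhere"
            flagged.append(rule)
    return flagged

rules = [
    {"port": 443, "source_cidr": "0.0.0.0/0"},    # plausible for a public web port
    {"port": 22, "source_cidr": "0.0.0.0/0"},     # SSH open to the world - bad
    {"port": 3306, "source_cidr": "10.0.0.0/8"},  # database restricted to internal range
]

for rule in world_open_rules(rules):
    print(f"port {rule['port']} is open to the entire internet")
```

Whether port 443 open to the world is acceptable depends on what's behind it; the point is that nothing should be /0 by accident.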
Now take the other end of the spectrum: you phone up some supplier for their hosted "cloud server" offering, which is just a physical server in a rack with no security at all.
A standards scheme, like hotel star ratings would go a long way here.
GDPR is intended to address this issue, with both the cloud supplier and the customer being required to perform risk assessments of the data being stored (Article 30 section 1 and 2). Trouble is, this means that the cloud provider will need to know what data is being stored - after all, how can you perform a risk assessment without knowing what it is you are performing the risk assessment on?
If the cloud provider now knows what your data is (whether or not it is encrypted), surely the customer's data is more at risk from internal attack than before? This also puts a very large target on a cloud provider's internal [customer] records from the outside too. The more people and companies put data into the cloud, the more people will try to hack the data centres.
As people have said, keeping the most important data in-house will always be the safest option. Trouble is, that solution won't work for the determined boss who actually believes the snake-oil salesmen!
This makes a lot of sense. A standard star rating system like hotels though?
You'd need oversight and auditors for that, and as we know, organisations like that are why the brown envelope industry does so well.
The trust effectively shifts. Do you trust the organisation that gave the rating?
What is needed is a decentralised standards model based on previous issues.
Personally, I find it harder to trust companies that claim to never have been hacked vs the ones that have.
The ones that have been hacked have generally learnt important lessons (one would hope), whereas the ones with no history just haven't been attacked yet - or are telling porkies.
"Sage is looking to change 500 years of accounting," exclaimed chief executive Stephen Kelly [...] "There is no company that integrates payroll, time and billing, general ledger, CRM the way we do". (from www.accountingweb.com)
Well, that certainly seems to be true. Everybody else generally hangs on to their customers' accounting data.
"There is no company that integrates payroll, time and billing, general ledger, CRM the way we do"
Yep, I'd believe that. Everyone else's solution probably works.
Sage's software is the buggiest set of rubbish I've ever worked with. It frequently corrupts its own database, and when you ask "support" for diagnostic help they shrug their shoulders and/or claim it has no logs or diagnostic features whatsoever... They'll happily charge you to repair the data, though.
There are even third parties that have built businesses on repairing Sage databases for less than the cost of having Sage do it.
The AV *is* the trojan. It runs as "root" and snuffles through all your data and files, email, web traffic, news, ..., looking for interesting stuff. What counts as "interesting stuff", and how it's handled, is determined by someone else - and you don't get to review it either.
You get to "trust" whatever TLA-sponsored outfit who sold you the AV software.
Possible, yes, but far less likely if good security design, strong authentication and centralised logging were in place.
Should support staff have access to the underlying customer data, or be able to, say, report on or extract that data? (SELECT * FROM customers INTO OUTFILE 'get-rich.txt';)
I expect the average support person only interacts with, say, 10 customers a day, so if they are accessing more than that, or via unexpected routes, then the warning lights should start to flash in the Sage security office.
Even an insider should only have access appropriate to their role; that is, after all, why there are security mechanisms built into all the core technologies.
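The "warning lights" idea above can be sketched in a few lines: count the distinct customers each support user touches in a day and flag anyone well over their plausible workload. The log format and the threshold of 10 are assumptions taken from the comment, not Sage's actual setup.

```python
# Sketch of per-user access anomaly flagging. Both the event shape
# (user, customer_id) and DAILY_LIMIT are illustrative assumptions.
from collections import defaultdict

DAILY_LIMIT = 10  # the ~10 customers/day figure suggested above

def flag_unusual_access(access_log):
    """access_log: iterable of (support_user, customer_id) events for one day.
    Returns {user: distinct_customer_count} for users over the limit."""
    seen = defaultdict(set)
    for user, customer in access_log:
        seen[user].add(customer)
    return {u: len(c) for u, c in seen.items() if len(c) > DAILY_LIMIT}

log = [("alice", f"cust{i}") for i in range(8)]        # a normal day's workload
log += [("mallory", f"cust{i}") for i in range(300)]   # 300 customers in one day
print(flag_unusual_access(log))  # -> {'mallory': 300}
```

A real deployment would feed this from centralised, tamper-evident logs the support staff themselves can't edit - otherwise the insider just deletes the evidence.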
@Dwarf,
You're absolutely right, of course, but as you say, it might not have been support or a typical user. For an outfit the size of Sage, with hundreds of thousands of clients, a couple of hundred could easily be accessible to a support team leader or somesuch.
Either way the comments linking it to cloud based issues are off the mark really.
Now I'm a cloudy guy (I own a business that sells cloud apps) ....but there are 3 apps that I would never stick in the cloud.
Accounts, Payroll and HR.
The data these apps hold is just too sensitive to ever allow outside the business walls.
You probably will - just be aware that someone now has your NI number, knows where you work, what your job title is, whether you have a company car (and if so, what it is), whether you get provided accommodation from work (and if so, where), and how much you get paid (etc, etc, etc).
While it won't help this leak as it used an employee login, they might want to rethink this approach
For the DevOps August Meetup there is a slight hint of Sage in the air.....
Mike Goodwin will explain “why we have not patched a server in two years but are not worried about it”
David Darwent will then discuss a "framework for monitoring in a DevOps world"
http://www.meetup.com/DevOpsNorthEast/events/232248313/
Cloud, own servers in DC, own servers in Office are somewhat irrelevant when the leak came from an active employee account.
They might want to address their internal security policies and account privileges across all the networks pretty quickly... although it'll be cold comfort for existing clients.