That is a bloody good idea.
Spanking the pirates of corporate security? Try a Plimsoll line
On New Year's Eve 2019, the good ship Travelex struck the iceberg of ransomware. That's not a good metaphor, to be honest: when RMS Titanic hit its frozen nemesis, it had the good taste to unambiguously sink in two hours and 40 minutes. Not so Travelex. At the time of writing, more than two weeks after the lights went out, …
COMMENTS
-
Thursday 16th January 2020 15:31 GMT alain williams
How to make changes happen
With lifestyle-changing fines for the IT director if s/he knew about it and did nothing in a reasonable time.
People don't care if their company pays a fine; change will happen when penalties hurt individuals and their daughters can no longer afford to go to pony club.
-
Thursday 16th January 2020 17:38 GMT BillG
Regulatory Capture
Regulatory capture – where an industry gains control of its own police force – is one of the primary structural problems of liberal democracy.
Absolutely true. I used to argue until I was blue in the face with idealists who believed a Democracy (and its extreme form, Socialism) is the perfect representational government. The issue is that representation must be balanced with resistance to corruption.
A Republic is structured to protect rights, and so seeks to represent the people as best it can while being as immune as possible to corruption. However, a pure Democracy grants and rejects rights, making it so vulnerable to corruption that regulators always end up being owned and controlled by those they regulate.
Reminds me of a phrase: "You must be a modern Socialist if you believe the reason Socialism has always failed is that people like *you* have not been in charge of it!"
-
Friday 17th January 2020 07:58 GMT Richard 12
Re: Regulatory Capture
Socialism works very well. Every EEA state is socialist, as is Canada. Actually, most of the world is, now I think of it.
I think you have it confused with Communism, which is a great idea in theory but doesn't work for societies larger than ~100 individuals, if said individuals are much more complex than bees.
-
Sunday 19th January 2020 13:08 GMT Version 1.0
Re: Mandatory rewards for bug disclosure and fines for failing to fix
Compulsory bug bounties would prolong the problem - a programmer could add bugs, or simply make some notes, and then retire on the compulsory bug bounties.
This isn't a one-off problem, it's the way we build the world these days. Perhaps a better solution would be to require that every software/hardware team leader has an engineering PhD. Unlikely, isn't it? Nobody cares about talent these days; executives only care about making a couple of million each year and a golden parachute for when their Simpsons programming team screws up.
And it's not just the IT world, it's everywhere.
-
Thursday 16th January 2020 10:34 GMT DJV
Absolutely.
And make it something that needs to be regularly reported on, to make sure it's up to date - a bit like filing regular company accounts with Companies House in the UK. If the report is missing/late then alarm bells should start ringing.
Maybe someone (maybe yourself, Rupert?) with access to an intelligent politician (there must be at least ONE around, Shirley!) should attempt to get something moving on having this implemented.
-
Thursday 16th January 2020 16:19 GMT Bronek Kozicki
Re: Absolutely.
Such a regular (annual or quarterly) report would include just this useful information: number of disclosures received, number fixed, number closed without being fixed, number fined, and the remainder (i.e. "not a bug" acknowledged by the regulator, or bugs in the process of being fixed). Also the name of the director(s) responsible.
-
Thursday 16th January 2020 23:40 GMT veti
Re: Absolutely.
There's not much point in naming the director responsible; that would just lead to someone being designated the official fall guy for this purpose (and remunerated accordingly).
"Numbers received, closed, fixed" - I think a case could be made that these would give away too much commercially sensitive information. I suggest a single headline number, which is the total amount of fees/fines levied by the regulator against the company - which, for fairness and to forestall accounting shenanigans, could be published by the regulator, not the company.
-
Thursday 16th January 2020 10:35 GMT macjules
And AWS?
Part of the problem is that Travelex's developers or their DevOps team seem to have been scope-locked onto using AWS for hosting, while apparently not enforcing security groups and not having a clear backup policy; something unthinkable in this day and age. A relatively simple audit of their digital estate would have revealed this and (heaven forbid!) someone might have recommended that, since Travelex were very much Microsoft-orientated, they move to Azure hosting, which would at least have ensured automated backups of sites, databases and the API.
Not only that, but also someone would have been able to quickly work out that Travelex was using a framework that was effectively discontinued in 2012 (.NET 4.0.30319) and that it might be time to upgrade to something a bit more current.
Oh and we have a name to blame for this fiasco.
an AWS security architect worked with us closely, shared industry knowledge, and ultimately helped us
Remind me not to employ AWS security architects in the future. :)
-
Thursday 16th January 2020 12:59 GMT DontFeedTheTrolls
Correlation ≠ Causation
While there may be lots of smoke, is there a smoking gun?
Do we know the outage is related to any of the AWS-hosted services, or is it impacting internal legacy infrastructure and applications? And is that legacy a major factor in the spread of the problem?
I'm not defending Travelex, AWS, DevOps or legacy, just pointing out that at this point I don't believe there is clear enough detail to lay blame at any particular business decision or technology choice. There are things we can tut at, but tutting is not a good stance.
-
Thursday 16th January 2020 12:47 GMT DontFeedTheTrolls
Re: A decent backup strategy is very expensive.
And a seat belt will probably make fuck all difference if a Boeing 737Max falls out of the sky on top of your car.
You must consider multiple failure scenarios and have multiple options of defence and recovery. Backups and DR are not the answer to everything; they are single tools in the box of tricks.
-
Thursday 16th January 2020 15:27 GMT alain williams
Re: A decent backup strategy is very expensive.
And a seat belt will probably make fuck all difference if a Boeing 737Max falls out of the sky on top of your car.
Crashes on roads happen a lot, and a seat belt is likely to help save a life - so worth installing.
A 737 Max hitting a car is unlikely - so not worth protecting against.
-
-
Thursday 23rd January 2020 21:54 GMT macjules
Re: A decent backup strategy is very expensive.
Ok, a returning space shuttle hitting a 737 Max which then changes its MCAS profile and crashes into your car might well be very unfortunate indeed. But not impossible. And please do not forget that that is more likely than anyone winning the Euromillions jackpot, so be warned.
-
Thursday 16th January 2020 23:41 GMT veti
Re: A decent backup strategy is very expensive.
The operative pronoun being our backup. Not theirs.
Then we showed, by our collective choice of banks/building societies we dealt with and the choices we made with them, that we weren't in fact willing to pay the costs of that backup. And so they stopped providing it.
-
Thursday 16th January 2020 11:13 GMT big_D
Re: A decent backup strategy is very expensive.
Every place I worked at had a backup strategy, even though it cost time and money.
It has been useful a couple of times. A lightning strike, for instance. Failover to Veeam hot stand-by, order new kit, install ESXi, shovel the data back from Veeam, back up and running.
Another place, however, the management had a "secret" NAS that not even the sysadmins were allowed to back up. Management said they'd do it themselves, never got around to it, then the CIO got phished by ransomware... Which was the only system affected? Yep, the NAS that he was supposed to have backed up. The systems that we plebs used, which were backed up every night, weren't affected.
-
Thursday 16th January 2020 16:06 GMT Brian Miller
Re: A decent backup strategy is very expensive.
No, the rigor of the exercise is "expensive." The cost is not in tape drives and scheduling, for that is minimal. The cost is in getting the fscking users to close their apps - that is what is "expensive." "Oh, I can't do that!" Even though they are going home for 16 hours.
It's the users, not the equipment, that stand in the way of good backups. And yes, managers are users.
What Travelex should face: the managers should be fired and barred from management positions for life. That is how regulation should work. You may not have that job, because you have proven yourself to be a danger.
-
Thursday 16th January 2020 23:42 GMT veti
Re: A decent backup strategy is very expensive.
If I have to pay 10% of my annual profits, for a form of insurance that mitigates a risk that has a 5% per year chance of occurring, then the rational choice is not to buy that particular insurance.
Of course you can quibble about how the percentages are calculated, but ultimately it's a judgment call. There is no single "right" answer in every situation.
-
Friday 17th January 2020 08:17 GMT Richard 12
Re: A decent backup strategy is very expensive.
Depends on the consequences.
If it'll cost you 20% and you can afford to pay said 20%, then insuring is probably not worth it - but you will have to be sure that you can afford it.
If the unmitigated consequence of that hazard is that you cease to exist, then it is extremely irrational to ignore it.
An event with a 5% chance per year has a 9.75% probability of happening at least once in the next two years - and that figure is the same whether or not the first occurrence immediately kills you (5% + 95% × 5% either way).
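That compounding can be sketched in a couple of lines (a minimal illustration; the 5% annual figure is just the example used above):

```python
# Probability of at least one incident over n years, assuming an
# independent probability p of an incident in each year.
def risk_over_years(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(round(risk_over_years(0.05, 2), 4))   # 0.0975 -> the 9.75% above
print(round(risk_over_years(0.05, 10), 4))  # 0.4013 -> ~40% over a decade
```

Over a ten-year horizon the same 5% annual risk compounds to roughly 40%, which is why "it probably won't happen this year" is a poor planning basis.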
-
Friday 17th January 2020 08:39 GMT Phil O'Sophical
Re: A decent backup strategy is very expensive.
If I have to pay 10% of my annual profits, for a form of insurance that mitigates a risk that has a 5% per year chance of occurring,
A risk assessment needs to consider more than just the probability of it happening, it also has to take into account what the cost will be if it does happen. If the result is total loss of your business then 10% of annual profits may be worthwhile even for an unlikely event. If the result is just a 20% hit on income for 2 years, then perhaps it's less worthwhile.
Agreed, though; that's why a risk assessment always has to be the first step. Work out the likely losses and likely costs, and choose the right solution (which may indeed be "do nothing"). Unfortunately, too many people get it backwards; I've seen far too many customers say "I've bought your XXX solution, how do I configure it to protect my business?" Too late...
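The expected-loss arithmetic the posts above describe can be sketched like this (every figure here is hypothetical, purely to show the shape of the comparison):

```python
# Compare the annual cost of mitigation against the expected annual loss
# without it. All numbers are made up for illustration.
def expected_loss(annual_probability: float, loss_if_hit: float) -> float:
    return annual_probability * loss_if_hit

annual_profit = 1_000_000
mitigation_cost = 0.10 * annual_profit                 # 10% of profits on DR

hit_probability = 0.05
dent = expected_loss(hit_probability, 0.20 * annual_profit * 2)  # 20% hit, 2 yrs
ruin = expected_loss(hit_probability, 20 * annual_profit)        # business dies

print(dent < mitigation_cost)  # True: not worth insuring a survivable dent
print(ruin > mitigation_cost)  # True: well worth insuring against ruin
```

The same 10%-of-profits premium is irrational against one consequence and obviously rational against the other, which is the point: the probability alone decides nothing.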
-
-
Friday 24th January 2020 15:09 GMT simple soul
Re: A decent backup strategy is very expensive.
Having a decent backup is only one half of the puzzle: when did you last verify it? You can back up as much as you like, but unless you've regularly tested a restore it's like a pencil without lead: "pointless".
The other thing to keep in mind with ransomware is that some of these very nice people will start encrypting your files but not tell you for a week or two, in the hope that you've by then overwritten your decent backup with encrypted gibberish - back to the point about testing and checking your restores. Take the Mark 1 eyeball to the restored files and make sure they are what you expect to see, not a bunch of 1s and 0s.
A long time ago I had the misfortune to have written what I felt was a good backup and verification routine, based around doing a checksum comparison of the last file written to the tape after it had been restored. The last act of the routine was to eject the tape. Came the fateful morning when a remote site called and requested a restore from the previous week. No problem, please insert the tape from that day. "We have already", came the reply. Imagine the horror when I discovered that, as they walked past the server every morning, they shut the drive door without swapping the tapes.
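A minimal sketch of that checksum-verification idea (file paths and function names here are my own, not from the post):

```python
# Restore-verification sketch: after a backup, restore a file to a scratch
# location and confirm it is byte-identical to the original.
# Names and paths are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """True only if the restored copy matches the original exactly."""
    return sha256_of(original) == sha256_of(restored)
```

Note the failure mode in the anecdote: a checksum of "the last file written" passes even when the same tape sits in the drive every day, so the verified file (or its checksum) needs to change with every backup run.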
-
-
Thursday 16th January 2020 12:10 GMT John Jennings
Wouldnt work - without some modification.
It wouldn't work.
Why? Two main reasons spring to mind, and there would be others if you think about it...
1/
Precisely how do you define the Plimsoll line? Remember the story this week about the ICO running out of legal budget - how big would a similar cybersec organisation need to be to empower an authority to properly audit a company? And it would have to be every company, their suppliers and partners etc. Every industry has a different level of regulatory compliance required. It would be a nightmare of compliance and enforcement.
2/
Because my BOFH would know where the out-of-date flares or dodgy liferafts are. So, they have an option to effectively blackmail the company. They pass the details to a friendly researcher who squeals to the beak, and so splits the profits as they go out the door. No company would willingly put itself in that situation. Imagine the compliance a BOFH would have to go through to prove they were not going to do this at some stage in the future. Be careful what you wish for!
Our problem is that 90 days is all the market really values. IT and associated service teams need to justify expensive change over years. It is so, and will remain so. But it's the same with most things. In manufacturing, we don't have everything done by robots because they are expensive and can't recover their cost quickly enough in some areas... An ROI of a year might be the best a production engineer can get away with - it's the same in IT, as it is everywhere: HR, transport etc. Whatever the service...
-
Thursday 16th January 2020 13:33 GMT stiine
Re: Wouldnt work - without some modification.
re 1): If you read the excellent, and very snarky, article again, you should pick up on the fact that the money paid to you as a bounty == the fine issued to the company - therefore the cost to the ICO == 0, plus administrative overhead (which is their job description).
re 2): Yes, those fuckers already exist. I believe we could blame the ransomware infection of Travelex on them.
-
Thursday 16th January 2020 18:04 GMT jmch
Re: Wouldnt work - without some modification.
(The BOFH)... "passes the details to a friendly researcher who squeals to the beak, and so splits the profits as they go out the door."
End result: holes in the company IT get fixed, the company increases awareness of and focus on security, and the security researcher and company IT get financial compensation. In other words, the exact desired result.
-
Thursday 16th January 2020 23:42 GMT veti
Re: Wouldnt work - without some modification.
1. Yes, precise rules remain to be specced. Who decides what industry your company falls into? How do you decide what level of fines should be applied to it? If someone finds a hole in the website of (e.g.) a hotel, that allows an intruder to double-book, it seems unreasonable to charge thousands of dollars - or even many hundreds - for that. On the other hand, a similar exploit for an airline would be more serious (because it would expose the airline to security threats that have no real relevance to hotels). Likewise, there needs to be flexibility in the timeframe allowed for the victims to fix their problems. Not every system has to be taken offline immediately, or fixed within 48 hours of notification. Who makes all those rules, and how?
This is a non-trivial problem, and one I can imagine sinking the whole idea once you get into the nitty-gritty of it. But it's not self-evidently insurmountable.
2. This is not a problem. If the BOFH blows the whistle on a particular issue, that's good, because it means the company is now motivated to do something about it. If they threaten to blow the whistle, that's even better, because it means the company is motivated without having to pay off the bounty.
-
Friday 17th January 2020 10:52 GMT richardcox13
Re: Wouldnt work - without some modification.
One approach would be to require reporting the cases and their outcomes within the audited accounts.
Resolution of reported issues would also need to be audited.
I.e. the organisation has to report that they have had issues, and whether they have resolved them fully.
Much like other existential threats to a business (or its profitability) have to be reported.
-
Thursday 16th January 2020 16:05 GMT vtcodger
A decent backup strategy
The absolute defence against ransomware is a decent backup strategy.
And what about all the transactions entered more than X hours after your last good backup? Where X is a number largely controlled by the same fine folks who trashed your disks. It's not hard to see where that path leads. You make backups and save transactions. The ransomists respond by booby trapping your backups. Load them and you're reinfected (and the ransom amount goes up). You respond by ... I dunno. You'll think of something ... and they respond by ... They'll think of something ... and you ...
I'm not saying that frequent and complete backups aren't a good thing. I'm just questioning that they are going to be a universal effective answer to ransomware.
-
Thursday 16th January 2020 18:00 GMT Robert Carnegie
Easy targets preferred.
While it's important to remember that cracking vulnerable systems is a full time everyday job for naughty people and they are serious and professional about it, it's also the case that if your system is better secured than the company next door then your neighbor will be hit first. For that matter, profitable extortion can be as simple as the good old "protection" racket. They threaten violence, credibly, against your premises and staff, unless you protect yourself by paying them to leave you alone. They also bribe or threaten your staff to facilitate all this. No need to fiddle around with computers, which can get pretty complicated.
-
Thursday 16th January 2020 18:44 GMT MCPicoli
Re: A decent backup strategy
What is preferable:
a) Losing X hours of transactions and Y pounds and then coming back to business, vulnerabilities fixed, reputation (and ego) bruised, but alive.
or
b) Losing ALL hours of transactions (past and current), 100Y (or more) and possibly never coming back to business, everyone fired and your (company) name forever written in the hall of infamy. Also, not being allowed within 100 meters of a production datacentre.
Backups done right aren't a panacea for curing all security headaches, but for sure they're a damn good starting point.
-
Thursday 16th January 2020 22:54 GMT SImon Hobson
Re: A decent backup strategy
And what about all the transactions entered more than X hours after your last good backup?
That SHOULD be covered in the Business Continuity Plan of which the DR plan is only a part, and the backups are only a part of that.
At a previous job I was given the task to "write disaster recovery plans for the IT" because an auditor from either the insurance company or parent company (can't remember which of them created this particular problem) had asked to see them. When it came to getting manglement to define the parameters needed, the response was "stop being difficult".
The business continuity plan will have defined the technical recovery point objective - which is basically "how long ago can X be". It will also have defined what processes will be used in order to "roll forward" from X and recover those transactions. Lastly, it will have defined the technical recovery time objective (how soon after an incident the technical stuff needs to be back working) and the overall recovery time objective (how long after that it takes to deal with all the other stuff, like re-entering data, entering data that was recorded manually, etc). These times may be different for different systems and parts of the business.
The technical recovery point objective is primarily what drives your backup strategy. If the BC plan says the business is dead if it's longer than an hour ago, then an overnight backup isn't going to cut it. But if the BC Plan says you can manage with a couple of days, then aiming for a hot spare with backup no older than 1 hour is wasting money.
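That relationship can be sketched crudely (a hypothetical helper, not anyone's real policy): the RPO from the BC plan bounds how stale the newest backup may ever be, so the backup interval must not exceed it, with some margin for failed runs.

```python
# Hypothetical sketch: derive a backup cadence from the RPO in the BC plan.
# Backing up at half the RPO leaves headroom for one failed or late run.
def backup_interval_minutes(rpo_minutes: int, safety_factor: float = 0.5) -> int:
    if rpo_minutes <= 0:
        raise ValueError("RPO must be positive")
    return max(1, int(rpo_minutes * safety_factor))

print(backup_interval_minutes(60))        # 30 -> half-hourly for a 1-hour RPO
print(backup_interval_minutes(2 * 1440))  # 1440 -> daily covers a 2-day RPO
```

Which is the poster's point in code: a 1-hour RPO forces spend on frequent backups or hot spares, while a 2-day RPO makes an overnight job perfectly adequate and anything fancier a waste of money.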
-
-
Thursday 16th January 2020 18:12 GMT jake
The two biggest problems in IT security today ...
... are the same as they have been for the nearly half a century that I've been making money in the IT world, and probably go back to the dawn of time.
The first is convincing management to throw enough money (resources) at the problem to have the correct hardware for the situation ... AND the staff to run it properly.
The second is the big problem ... 90% of the userbase is incapable of wrapping their tiny collective hive mind around the concept of security. This is doubly so for management.
-
Friday 17th January 2020 19:08 GMT Michael Wojcik
Re: The two biggest problems in IT security today ...
90% of the userbase is incapable of wrapping their tiny collective hive mind around the concept of security
Of course. For any topic X of at least moderate complexity, it's likely true that for a sufficiently large population, at least 90% don't understand X - regardless of its relevance to their lives or jobs.
Security researchers have made the point, over and over, that blaming users is unproductive; and that training users, while it can have some benefit, is limited and rarely or never a satisfactory solution in itself.
None of that contradicts what you wrote, of course. It's just to point out that while we have mechanisms for encouraging greater investment (regulation in various forms, possibly with some contribution from market forces, e.g. by coupling security measures to insurance premiums), the human element remains an intractable problem. It can likely only be successfully addressed with a complex of human and technical measures that's customized for different use cases.
-
-
Thursday 16th January 2020 18:18 GMT macjules
At sea ..
At sea as in commerce, the best defence against both icebergs and pirates is a sharp look-out.
And a minigun. With a fsck load of ammo.
-
Thursday 16th January 2020 21:43 GMT Marty McFly
CIO = Career Is Over
CIOs have such a short life span that they can roll the dice. Let's say there is a 1-in-10 chance of a given company being bitten by ransomware in the next 10 years. If the CIO is on the normal corporate life span, they are only going to be around for 2-3 of those years - so there is only a 20-30% chance that, if it happens at all, it happens on their watch.
Now, that new CIO came in with the full support of the board and CEO. They were given money to spend. So they did: good security & backups. The CIO avoids ransomware in their first year or two on that financial support. Soon the board & CEO are having good times and forget why IT needs that money. So they have the CIO start cutting. Pretty soon security is reduced, backups aren't happening, etc. No problem for the CIO; they are on track to jettison that job in a few more months anyway.
Budgets get cut some more. The CIO exits with whatever executive-level package they negotiated. Now is when the ransomware hits. But not a problem for the departed CIO; the new guy has to clean up that mess. At which point the CEO and board decide they really should spend money on security... and the cycle repeats.
-
Friday 17th January 2020 10:29 GMT RobinCM
The IT Security Plimsoll Line exists
It's called Cyber Essentials Plus (the basic Cyber Essentials is largely meaningless, as it's self-assessment).
There's an additional one called ISO 27001.
The problem is that nobody is bothering to look at them.
No CE+? Triple the price for business insurance. Triple the taxes. Blocked from operating in certain industries.
Have CE+ and ISO27001? Maybe you get a discount.
This is indeed all about money, government and regulators should therefore use that to their advantage. Short term they get more income, long term they get more secure businesses in their country/jurisdiction.
The public could also be told to use CE+ as a differentiator, but so few companies currently have it there's often nobody to choose from at all!
-
Friday 17th January 2020 10:40 GMT Mike 137
"The absolute defence against ransomware is a decent backup strategy"
In my experience there are no "absolute defences" other than terminating the vulnerable operation, which is of course rarely practicable. However, a major contribution to defence that's almost universally ignored is robust management. ISO/IEC 27001 requires "top management" to take ultimate responsibility for security, but they don't - they merely delegate and stop thinking about it. The 27001 ISMS rarely exists except on paper, as the purpose of certification is not to manage security well but to get the certificate that opens the door to high-value contracts. Usually, as long as all the required documents exist, you get your certificate. They don't have to actually contribute to being secure - indeed, in many cases I've examined, they don't even have to make sense.
This is where this otherwise excellent proposal falls down. An organisation that behaves perfunctorily or negligently over security does so because that's its ingrained culture. Consequently, finding individual vulnerabilities and forcing their remediation only skims the surface of a shark-infested ocean. Equifax were told about a critical vulnerability (and a patch) by the US CERT, but they couldn't find the vulnerable server because of a completely dysfunctional technology management regime that didn't include any oversight at a tactical, let alone strategic, level. This (very common) kind of problem is not soluble by point fixes - it needs an entire change of corporate culture. Unfortunately, that's not likely to happen except in very few cases.
-
Saturday 18th January 2020 09:19 GMT John Savard
Silly Advice
Given that ransomware is a thing, makers of computers and operating systems should design computers so that backup is easy and inexpensive.
Ideally, they should also design computers so that they are secure, and thus virus infections, including ransomware, just can't happen. Oh, but how could you do that? Well, put everything that involves connecting to the Internet in a sandbox. One that really works.
Of course some data coming from outside on the Internet may need to be taken to the part of the computer where the work really gets done, so this connection needs to be made transparent, visible, and controllable.
-
Monday 20th January 2020 07:04 GMT Charles 9
Re: Silly Advice
Since when has a sandbox ever really worked? ANY portal to the outside is a potential exploit, and it's hard to make a bulletproof user-facing OS. While an OS in ROM can't be modified to be dangerous, that doesn't prevent an exploit being IN the ROM, as a recent iPhone story shows.
-
-
Saturday 18th January 2020 17:50 GMT Paul Stimpson
Why?
Why did nobody do the sums comparing the insurance cost of proper disaster recovery against the massive cost of cocking it up this badly?
I would go with:
a) because management didn't understand "all this technical mumbo jumbo" and wouldn't spend that much money because everything "was working".
and/or
b) because nobody thought it would happen to them.
I've found phrases like "reputational damage" and "existential threat to the business" to be quite useful in meetings.
-
Monday 20th January 2020 09:56 GMT Kevin McMurtrie
Put it on the WAN. Put it ALL on the WAN!
My experience with companies not caring about security and disaster recovery is that they already have a secret imminent disaster. Maybe there were some investment mistakes, historical data corruption that inflated perceived performance, gross exaggerations, or a flawed business model. Having everything wiped clean by an outside influence is, by comparison, a graceful exit.