Phew. Glad I'm no longer in IT
So glad I no longer work in IT. I'd be heading for a nervous breakdown.
Colonial Pipeline's operators reportedly paid $5m to regain control of their digital systems and get the pipeline pumping oil following last week's ransomware infection. News of the payoff was broken by Bloomberg – which not only cited anonymous sources but also mocked other news outlets' anonymous sources for saying earlier …
I recently spoke with our corporate insurers regarding cyber insurance. I was quite shocked, tbh, just how ordinary and business-as-usual it seems to them to talk about negotiating and paying ransomware demands.
I had no idea it was so normal just to pay. No wonder it's a crazy epidemic right now, it's clearly good business.
Pretty bizarre to call it a "private business matter" when it resulted in 17 states declaring a state of emergency, the President of the United States personally getting involved, and the federal government issuing an emergency ruling to lift restrictions on oil/fuel transport over land and water.
Where do they find the people who write these public statements? Have some intern use an online media quote generator?
If their business was effectively shut down and the ransom was $1 to recover it they would have been certifiably insane not to pay it. Given that paying the ransom makes sense at that price point we are just arguing about the price. $5 million was just loose change to the oil company.
Well guys, looks like you can now budget $10 million for your backup procedures, because the attackers will be back in a quarter or two and you obviously need to lock things down to something a bit more robust.
And there should be a fine of 10 times your blackmail money to prevent this kind of thing from happening.
Anyone who talks about backups fails to understand how this ransomware works. They also fail to understand the scale of the problem (I've been there).
These are mostly file system attacks and generally work by exploited systems running VirtualBox VMs or similar that execute encryption jobs a file or folder at a time. Impacts on the affected systems range from an immediate crash if a critical file is encrypted to a creeping death as more and more of the file system becomes unavailable.
Unless you are running high frequency snapshotting (and who does that on everything - especially file systems?) restoring from backup is a guaranteed loss of data, whereas if you can get the decryptor to work you can restore with no data loss - some machines even coming back clean without a reboot. Very few companies can handle massive amounts of data loss in parallel across a good percentage of their core systems.
The decryptors supplied are bare bones but generally work, albeit some files and folders need multiple passes. The nature of the attack also means the decryption tasks are as scalable as the encryption tasks were.
The best way to remediate is to wrap the decryptors in a parallelisation wrapper, and deploy the f*ck out of it using Jenkins agents or similar. Restores and rebuilds are reserved for machines that are critical or don't come back clean from the decrypt. A 10/90 rule applies to restore vs decrypt. Though if this article is to be believed, Colonial weren't able to achieve this.
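For the curious, that "parallelisation wrapper" is nothing exotic. Here's a rough Python sketch of the idea - the binary name, file extension and pass count are made up for illustration, real decryptors differ per gang:

    #!/usr/bin/env python3
    # Rough sketch of a parallel decryptor wrapper. "./decryptor" and the
    # ".locked" extension are hypothetical stand-ins; files that need
    # "multiple passes" simply get retried a few times.
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    DECRYPTOR = "./decryptor"   # hypothetical supplied decryptor binary
    SUFFIX = ".locked"          # hypothetical extension on encrypted files
    MAX_PASSES = 3

    def decrypt(path):
        # Retry a few passes; the supplied tools sometimes need more than one.
        for _ in range(MAX_PASSES):
            if subprocess.run([DECRYPTOR, str(path)]).returncode == 0:
                return path, True
        return path, False

    def main(root):
        targets = [p for p in Path(root).rglob("*" + SUFFIX) if p.is_file()]
        # Decryption is I/O-bound, so a thread pool scales it the same way
        # the original per-file encryption jobs scaled.
        with ThreadPoolExecutor(max_workers=16) as pool:
            for path, ok in pool.map(decrypt, targets):
                print(("OK   " if ok else "FAIL ") + str(path))

    if __name__ == "__main__":
        main(sys.argv[1])

Push something like that out via Jenkins agents and the 10/90 restore-vs-decrypt split falls out naturally: anything that still fails after a few passes goes on the rebuild pile.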
Our DevOps gurus ran themselves ragged over about 14 days but had critical core systems up in 2-3 days.
Needless to say you also install plenty of antivirus, anti-malware and other security agents - sometimes bringing the boxes to a grinding halt again!
Companies pay because it's the most pragmatic, least-data-loss way to get their business running again.
Anyone who talks about backups fails to understand how this ransomware works. They also fail to understand the scale of the problem (I've been there).
I've been there. I removed the affected endpoint and swapped it for a maintenance spare, whacked a sheet of paper on it reading "UNSERVICEABLE: MALWARE" in caps, and shoved it in the re-image pile just to be on the paranoidly safe side.
The damage was limited to the shares that user could write to; as I expect my users to be a hazard, I don't allow them to write to system files, VMs etc., so the damage was limited to things that could be written to with their user account, which was their department's share, plus some of the "all staff" stuff.
The file shares were recovered from tape backups.
And yes, tape. Call me old fashioned, but downloading online backups is just too slow even if you can rely on dedicating 100% of your internet line to recovering your backups (which in practice you never can, as the users need it to work). If you have a 100Mbps fibre line that you could dedicate 100% to downloading backups, that's still only ~600MB a minute or 35GB per hour, and you can't rely on even that if the disaster is your office burning down. A single LTO6 drive will kick out something like 140GB per hour, and if you've got a couple of drives for different data on different servers then of course that becomes 280GB per hour.
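The back-of-envelope maths, for anyone who wants to sanity-check those figures (a rough Python sketch using the numbers quoted above; real rates vary with overhead, compression and contention):

    # Back-of-envelope restore throughput, using the figures quoted above.
    line_mbps = 100                                   # 100Mbps fibre line
    line_gb_per_hour = line_mbps / 8 * 3600 / 1000    # ~45GB/h ceiling, ~35GB/h in practice
    lto6_gb_per_hour = 140                            # rough per-drive restore rate
    drives = 2

    print("Internet restore: ~%.0f GB/hour theoretical maximum" % line_gb_per_hour)
    print("Tape restore:     ~%d GB/hour across %d drives" % (lto6_gb_per_hour * drives, drives))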
I simply asked which files the users immediately required while waiting for the tapes to come back, and restored those first in a queued series of batch jobs that worked through everything affected. Recovering the tape from offsite took longer than recovering enough files to get back into action.
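In case it's useful, that "urgent files first, then everything else" queue is about as simple as it sounds - a rough sketch with invented paths, assuming the tape software has already dumped the recovered tree into a staging area:

    # Rough sketch: restore the files users asked for first, then sweep the rest.
    # STAGING, TARGET and the priority list are invented for illustration.
    import shutil
    from pathlib import Path

    STAGING = Path("/restore/staging")   # hypothetical: where the tape restore landed
    TARGET = Path("/srv/shares")         # hypothetical: the live file shares
    PRIORITY = ("finance/invoices", "hr/payroll")   # whatever users asked for first

    def restore_file(rel):
        src, dst = STAGING / rel, TARGET / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)

    # One queue: priority paths sort to the front, everything else follows once.
    all_files = [p.relative_to(STAGING) for p in STAGING.rglob("*") if p.is_file()]
    queued = sorted(all_files, key=lambda p: (not str(p).startswith(PRIORITY), str(p)))
    for rel in queued:
        restore_file(rel)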
I had total service restored by the end of the day, and I operate on an absolute shoestring, as I have to individually justify all spending rather than getting a budget.
If you hadn't figured out recovery times beforehand then you weren't paying sufficient attention to the Disaster Recovery side of doing backups; if you have backups but they take a week to recover, they are going to be useless in most cases. "You may ask me for anything you like, but time..." probably carries more meaning in today's world than it did when a frustrated Napoleon snapped it at somebody 200 years ago.
I'd say it's far, FAR cheaper to boot out Windows and start structuring your IT the way it should be, i.e. mission-critical stuff on subnets with well-managed authorisation models, on platforms that actually ARE industrial.
The best way to handle these things is still preventing them from happening in the first place, and if there's one thing that has kept the doors firmly open for any third party to have fun with your IT and operations, it's Microsoft Windows.
Yes, I have heard all the usual excuses, like sysadmins should halt all operations in the company and deploy the next TB of patches from Redmond the very microsecond they become available, or that it's not Microsoft's fault, but they are just that: excuses. It's not like the Internet is a terribly new idea by now (even though Microsoft was so late to the party they had to starve out incumbents like Netscape to get any traction): we know it's dangerous, we know criminals roam, but the sheer volume of patches flooding out of Redmond suggests something pretty fundamental is STILL screwed up.
Maybe it's worth looking elsewhere until they get their act together.
Certainly with SCADA based critical infrastructure.
Let the downvotes begin while I grab some popcorn and time how long it will take for the Redmond denialists to spot this one.
I agree with a lot of what you said, but trying to blame a vendor - mainly (based on your post) about the size of their patches - is the one thing I would disagree with.
A well designed architecture builds into the design that layers are not infallible and can / will be compromised. This could be the fleshy bit behind the keyboard, an exploit in the OS or an application, the network itself, something server-side, or even operations / processes (e.g. social engineering).
Security works best in tiers. Would I use Windows on a SCADA system? Sure, why not? Embedded Windows works fine, is easy to manage and can be just as secure as anything else given the right configuration. A poorly configured *BSD or Linux appliance will be more insecure than a well configured Windows device.
Personally I think the OS should be chosen based on compatibility, configuration management, support and total cost. If my team and suppliers only know Windows, then shoving in another platform may be "more secure" (in your opinion) out of the box, but as we wouldn't have the appropriate configuration management and monitoring tools, nor the expertise and skills, over time it would almost certainly end up being less secure than a Windows based platform.
Air-gapping (doesn't even need to be literal, but severely limited network access such as totally separate networking with network-based security services, blocking all ports in/outbound, proxying), SIEM that's actually used, end user education, security reviews including pen-testing, well configured firewalls, IDPS, endpoint protection, extremely robust backup and DR processes (as with other posters, I'm still a big fan of using tape for critical workloads), MFA and good credential management policies / processes, attack surface reduction, disk encryption, honeypots, web and email filtering, many small subnets as a security perimeter, physical security and.... patching.
I'm sure that if we could have our way, we would make it physically impossible for a single packet to get from anything into the SCADA network - but security is always a usability trade-off. We can protest as much as we want, but pragmatically, in this modern world it's unlikely we can have all the security we would want, such as real air gapping - so, as with so much in the security sphere, we need to implement a comprehensive, tiered security and recovery strategy.
If you can only provide good security because of your OS choice I'm not sure I'd personally want to be hiring you for your security skills.
If you can only provide good security because of your OS choice I'm not sure I'd personally want to be hiring you for your security skills.
If you're so quick to reach the wrong conclusion, there's a question whether I'd ever want to work for you; but cheap sniping aside, if you analyse the contributing factors, the OS is the one constant over the years. If Microsoft spent even half as much on shoring up their OS defences as they spend on marketing to make sure it's always something or someone else that gets the blame for yet another breach, I'm sure it would no longer be an issue.
Internet security started with hard shell, soft centre, i.e. a firewall before the gateway, and that is still the predominant model. Despite fun episodes like the I Love You virus, which should have been a heads-up, I still come across plenty of large companies that do not have even the most basic network and functional segregation, so anyone can in principle reach the servers on which the finance and HR services live. And these days everyone is so happy going cloudy that they rarely ask for firewalling, because that happens to cost extra. Oh, and security surveillance on clouds is 100% dependent on what the vendor provides.
Further, plenty of corporate websites dangle off the raw Net instead of sitting behind at least a DMZ, even if they interface with internal systems. Yay! The only surprise here is that people are still surprised when they get breached.
Now, when you start looking at your vulnerability footprint - that's where it becomes clear that Windows simply needs a godawful amount of extra work to remain safe. This is where my preference comes through: I simply prefer to have as little work as possible keeping things safe, because it (a) means I will not immediately have all my data smeared over the Net the moment an operator goes for a break and (b) means I can spend all that lovely moolah I save in labour costs on even better platform surveillance tools and pick stuff up before it has established hooks on the inside. They're smart now - a breach can sit dormant for months.
As for SCADA, air gap all you want, it only takes one service engineer with a laptop - I've seen it happen. That is not really a Microsoft problem (although it was a Windows laptop); it's a procedural one which should have required screening. In addition, the PLC market needs to adapt too. Earlier PLC device research found one brand that could be made to fail in an indeterminate state with ONE (1) malicious TCP/IP packet, in a manner that was unrecoverable - the device needed to be factory reflashed before it could be used again. Oops.
However, if you do NOT firmly take your OS into consideration when you model your risk exposure and develop your defences, then indeed I would not want to work for you because I don't like to be employed solely to take the blame when it all goes wrong.
Unless you are running high frequency snapshotting (and who does that on everything - especially file systems?) restoring from backup is a guaranteed loss of data
Some time ago I had a hard drive that was developing bad sectors over a short period of time. It was my server box. Here is how I handled it:
* take a separate backup of as much critical, uncorrupted data as I can
* install OS onto new hard drive, plus the basic software needed, as quickly as possible
* swap hard drive
* restore important data from most recent backup
Now it is up and running. OK I spent a day doing that. Better than a WEEK.
Then I went about analyzing the old drive to see what stuff was recoverable, and what wasn't. In the meantime, the server was RUNNING.
FIRST, get it BACK RUNNING AGAIN. *THEN* you worry about data recovery. Human safety gets shoehorned into the front of the line, as needed.
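For what it's worth, the "back up as much critical, uncorrupted data as you can" step boiled down to something like this rough sketch (paths invented), skipping anything the dying drive refused to read:

    # Rough sketch: salvage whatever the failing drive will still give up,
    # skipping anything that throws a read error. Paths are invented.
    import shutil
    from pathlib import Path

    SOURCE = Path("/mnt/failing_disk/data")   # hypothetical mount of the sick drive
    DEST = Path("/mnt/rescue/data")           # hypothetical known-good disk

    skipped = []
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        dst = DEST / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        try:
            shutil.copy2(src, dst)
        except OSError as err:                # bad sector: note it and move on
            skipped.append((src, err))

    print("Skipped %d unreadable files; worry about those later." % len(skipped))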
But I don't know how easily their systems could be restored, which might suggest their backup and restore process was a part of the problem. So maybe my perspective is off a bit. Still, I think they OWE us an explanation, regardless.
In any case, you can get SOMETHING running fairly fast if you set things up properly with your backups. If you're missing a week's worth of billing, at least you did not STOP THE OIL FLOW.
I'm also thinking that if I had set things up better, i.e. had a backup hard drive waiting in the wings with identical software [minus data] on it, I could just swap in the drive, restore the most recent data from backup, and be up and running in a couple of hours instead of most of a day. BETTER planning, yeah.
Are you familiar with the phrase "attractive nuisance?"
If some twunt rich beyond the dreams of avarice moves into your neighborhood and just leaves a fortune worth of easily-portable wealth in plain view behind the kind of wafer lock that's used to secure toilet paper in a public bathroom and can be jiggled open by using literally any object thin enough to fit in the keyway, and gets burgled repeatedly, only to replace the wealth the next day, you've nobody to blame but the rich twunt when suddenly your neighborhood is totally overrun by crims who are sizing up everyone else's wealth for accessibility versus difficulty-of-access because the rich twunt's place is already picked clean that night.
And there should be a fine of 10 times your blackmail money to prevent this kind of thing from happening.
that would be a good start, yeah. step 1.
Hey Vlad Putin, you can earn some worldwide kudos by ACTUALLY SENDING those perpetrators to the modern day equivalent of a gulag... and THAT would be an EXCELLENT "Step 2"!!!
I had the opportunity to pursue the role of Director of Cyber Security to form (!) a security team at a large regional utility in the west of the USA a few years ago.
The utility fell under city government.
Took one look at their public-facing web site and other city-run web sites (mixed HTTP/HTTPS content, Qualys SSL score of "F") and told the person who contacted me that I was not interested.
Figured the level of support I'd get from management would be close to zero, and that whoever got the job would be sacrificed at the first sign of a breach.
Over a decade ago there was a bill in the Massachusetts legislature that would have made executives personally liable for breaches if they had been informed of vulnerabilities but failed to adequately address them.
The bill failed.
This is why ransomware is taking over the internet.
It pays so well, and if you can advertise how successful your 'Ransomware as a Service' is, it will pull in even more business by selling on the 'skills' on a 'franchise' basis.
Companies that pay should be unable to get company insurance in future, as they are increasing the costs and risks for everyone else.
Maybe the warnings and requests for funding to protect the company's infrastructure/data from these threats will be taken more seriously ....... no, silly me .... of course not, as the threat only impacts others !!!!! :)
Just a thought, the next big target should be the Bank accounts of the people that are successfully getting paid for this stuff.
More money for each hit and they are not likely to complain to the Bank/Police if someone hacks them.
You only need to worry about the FSB or SVR [Not sure where one ends and the other starts :) ] and Putin wanting the money back !!! :)
> and Putin wanting the money back !!! :)
Pretty sure that he doesn't give a damn as long as you don't:
1: Hit Russian interests (or specifically, the Russian interests of his Russian friends);
2: Start an international incident of the type the United States of SPARTOFREEDOMERICA is likely to respond to with JDAMs;
3: He gets his cut!
In this case, the ransom was probably so low specifically because they realized they'd hit a piece of U.S. critical infrastructure, and if the incident didn't blow over FAST, then politics was going to get involved, and Putin would ABSOLUTELY have them all rounded up and handed over rather than risk allegations that the Russian federal government was supporting (even, perhaps, inciting) a direct attack on U.S. infrastructure.
I also wouldn't be surprised if all five million went directly to Putin.
Call me old-fashioned, but I thought the presence of armies indicated that 30% of Ukraine is currently under Russian control.
Do you by any chance think that because Rudy Giuliani didn't dig up the dirt his boss wanted on Hunter Biden, the remaining 70% of Ukraine is firmly under US control? Wasn't his boss in charge of the US at the time?
Die Hard 4.0 was optimistic.
It seems that nuclear and hydro power plants have exposed SCADA systems, reachable by some ridiculously simple techniques like phoning an unlisted number with the right guessed extension.
Sure, it's not "exposed to the Internet" as such, but the right credentials and a suitably modified laptop with freely downloaded demo software can get you access to things like the ECCS and let you cause all kinds of mayhem.
Fortunately I settled for a proof of concept, and AFAICT no harm was done, but the point is that if someone had malicious intent they could potentially bring down the entire grid for weeks just by feeding the software plausible-but-bad values.
My last experiment got me access to people in the know at an undisclosed location, who actually were aware of, uh, "things" and were astonished that one individual could access their secure phone network without even a verification step, so I settled for a brief technical discussion and left it at that.
It will likely be considered an act of war on the insurance end, so the insurance won't cover the losses; it will be considered an act of cybercrime by the government, so nobody in the DoD has to get involved; and the evidence will have had to be wiped because there was no spare storage for imaging the infected servers, so the NSA and FBI will never be able to analyze it. The IT department will also likely get destroyed for allowing this to happen. In the end, we'll never really know who did it, and somebody is $5M richer. Not bad.
"the ransomware KO'd back-office systems used for monitoring oil flows and generating billing records based on those flows"
That shouldn't be a reason to shut down completely. There should be some back-up means of measuring quantities, may be less accurate and you might lose a profit margin, but better to do that and keep things flowing and save on the ransom. For example, if the oil comes from or goes into storage tanks (which it probably does), then send someone out to read the level gauges on the tanks before and after, take a photo perhaps, write out the bill by hand if necessary. Point out to customers they have a choice of delivery charged on that basis or no delivery at all.
On parallel lines, my local supermarket once had all the barcode scanning equipment down. To keep things moving and probably prevent a riot, the checkout staff were authorised to assess the value of each shopping trolley by sight and propose a price; if the customer disagreed, they got out the pen and paper. Sure, they must have lost something, but they didn't throw away perishables as unsold, and all the customers were happy getting their dinners as normal.
@Krassi
Quote,"the ransomware KO'd back-office systems used for monitoring oil flows and generating billing records based on those flows"
That shouldn't be a reason to shut down completely. There should be some back-up means of measuring quantities, may be less accurate and you might lose a profit margin, but better to do that and keep things flowing and save on the ransom. For example, if the oil comes from or goes into storage tanks (which it probably does), then send someone out to read the level gauges on the tanks before and after, take a photo perhaps, write out the bill by hand if necessary. Point out to customers they have a choice of delivery charged on that basis or no delivery at all. Unquote
Interesting comment: on one side you are saying that it is wrong for hackers to demand ransoms, while on the other you would happily hold your customers to ransom?
Hardly holding the customers for ransom. Let them know what's going on, do your best to properly measure how much product they'd be getting, and make sure they're ok with it before doing the delivery. Sounds like good business practice. I'd imagine most customers would be VERY happy with this approach, as opposed to a "product not available" situation.
(Edit: Ninja'd!)
Medal? Nope, no medals. Not a chance in hell.
They didn't just bleed a company here, they attacked U.S. critical civil-and-military infrastructure.
That's the kind of thing WARS can start over. Putin does not want a war with the U.S., because Russia WILL LOSE. It may be the end of the world, but that means it's the end of Russia, too.
Putin would happily have handed them all over to the Agency, or just had them all rounded up and shot summarily, to prevent that. The very last thing he needs is some kind of obvious unifying incident that will unite the entire U.S. population behind a rhetoric of "make Russia pay," and he knows he's already on very thin ice indeed with all the elections tampering.
One of his band of hacker halfwits nearly starts a war?
Putin got all USD$5m, and they got told that they were dead men if they did that shit again.
Since the pipeline is up and running now, paying the ransomware techs was much faster and easier than sending their own techs out to try and find all the backups (assuming they even exist), then restore every single system and verify that it's all working. $5M was probably only a few days' profits and is probably covered by their insurance company and tax accountants.
DarkSide ransomware crims quit as Colonial Pipeline attack backfires
Russian-language cybercriminal forum ‘XSS’ bans DarkSide and other ransomware groups
"On top of that, many countries are absolutely cybercrime safe havens. Many countries have no problem with cyber criminals originating from their country as long as the criminals don't attack their own countries and tacitly agree to do favors for the government, if asked," Grimes explained, adding that some nations use stolen money to help fund government services.
"It funds it directly because the perpetrators are paying expensive local and political bribes to stay in business, and indirectly because they spend the money on goods and services in the country. In many countries cybercriminals are almost celebrated by the officials."