"Many of these victims reported [MFA] was not enforced"
Well, now you know why you should enforce it.
Some people only learn the hard way.
Karakurt, a particularly nasty extortion gang that uses "extensive harassment" to pressure victims into handing over millions of dollars in ransom payments after compromising their IT infrastructure, poses a "significant challenge" for network defenders, we're told. This is largely because the criminals use such a wide variety …
That's fine to say, but it's not always as easy as that to implement.
What external device should people use for the MFA? Not everyone has a company phone, and mandating that people use their personal phones raises a raft of legal and security problems: what if an update to the authenticator app either a) bricks the phone or b) opens a security hole? Who pays for phone repairs, and who holds responsibility if the security hole leads to a personal or company breach? We all know personal phones are never as locked down as a company phone, so how do you guarantee you're not just adding an extra security hole to the network?
If an additional device is mandated, like a TAN code generator, well, whose budget does that come out of? IT, because they're the ones mandating the MFA? The department, because it's their user who will need it to do the work? The fight there will be one for the ages...
My firm is in the process of getting ready to mandate MFA on all user accounts, and the looks of surprise and astonishment when I bring up the above points, and the uncomfortable squirming when you start pointing out the legal pitfalls, are fun to watch. But there is a hell of a lot of hand-waving: "It will be fine, don't worry about it...", "What do you mean you won't use your personal phone for work?" style of answers. It's very depressing...
IT shouldn't be able to mandate MFA without approval by the C level.
If C level approves a project to use MFA, that means approving the budget for it. If they can't afford the ~50 bucks for a hardware token for every user not already having a company smartphone, C level shouldn't allow the project to go forward. If the project proposal didn't include projected costs, C level should have asked for that information. Which department is in charge of administering that budget should not matter - and with an approved budget, most department heads would fight to get control of it.
Problems like fighting over who has to provide the budget for hardware tokens for mandatory MFA don't show how difficult IT security is, but how (un)fit for purpose the processes in a company are.
Yes, quite right all around.
Which goes to their point about MFA not always being easy to implement, and is further evidence of how Big Problems[tm] are usually political, personal, organizational, etc. rather than purely technical.
Which points out how it fails:
"IT shouldn't be able to mandate MFA without approval by the C level."
IT: We should have MFA.
Management: We're busy. Go away.
IT: There are major security risks. We can prevent them or at least significantly reduce them.
Management: Fine, go do it.
IT: We'll need some money and you will all need security tokens on you at all times.
Management: How about we skip this idea?
IT: That's dangerous, and it's not that much money.
Management: Bring it back up at the next planning meeting.
Admittedly, everywhere I've worked full time, IT has kept bringing it up and it's gotten implemented, but you can see how it isn't quite as simple as we'd like it to be. I'm not comfortable letting IT off the hook when management has pushed back, but neither can IT bear all the blame in that situation.
When our company introduced 2FA, not many had a company phone and no one was willing to use their personal mobile. So in order to get going, they used my company phone to receive their PINs. I had a collection of about 20 personal IDs. Apart from the blatant disregard for security, when I left the company their authentication channel went too. I didn't take the phone; it was considered old fashioned, so it was never switched on again. Even if it had been switched on, they would still have needed the phone's PIN. Not wishing to appear ignorant or stupid, the boss would never have dared to contact me.
The thing is, for some, it is never "that easy to implement ..."
There is an old saying about not letting perfection be the enemy of progress.
And "authenticator app bricks phone" is one of those things that never happened. Much like that time a driver had to choose between running over a pram or bus queue.
(Quite impressed our Googly friend now backs up the authenticator codes to your Workspace account.)
No security will ever be 100% effective. And it's beyond irritating when IT systems seem to be held to a higher standard than real life.
The damage from most ransomware attacks would have been the same if the company affected had a fire at their offices revealing a general lack of resilience.
I'm with you on "authenticator app bricks phone" never having happened, but a phone (hell, anything) that fails shortly (ie. any time up until ten years) after IT has visited it was clearly the fault of IT. And if it's the user's own property, there'll be even more hell to pay.
Having said that... we're in the same boat and, as our deskphones have no external DDIs, we can't use that method for MFA either. We've been creative. We recently implemented an app-based T&A system that uses a geofence to allow staff to log in to work when they're on-site - it's for H&S as much as anything else, providing a fire register etc. too. Of course, you don't have to use a phone to log in if you prefer not to - look, here's a couple of tablets at the front door where you can enter your email and password at 8am. And again at 5 when you're heading home. What's that? No, no one does. They all have phones (attached to the free company WiFi, natch), so they use those - and they use those for their MFA too.

It's a bit of a carrot and stick approach, but it works for us and for them too - something for everyone. If you've got a generally content workforce (and if you haven't, why not?) it's relatively easy to get a consensus. I've implemented similar in a couple of organisations and there's never more than one or two tinfoil hatters who refuse. In the case of preventing malware that could steal your own personal info or bring down your entire employer, there's a definite quid pro quo.
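For anyone curious, the geofence check is the trivial bit. A rough Python sketch of the kind of test such an app might run - the site coordinates, radius, and function names are all invented for illustration, not taken from any real T&A product:

from math import asin, cos, radians, sin, sqrt

SITE = (51.5014, -0.1419)  # invented site latitude/longitude
RADIUS_M = 200             # invented geofence radius, in metres

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in metres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def may_clock_in(lat, lon):
    # Allow the clock-in only when the phone reports a position on-site.
    return haversine_m(lat, lon, *SITE) <= RADIUS_M

The hard part, as ever, is the people and the policy, not the distance calculation.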
Every company I’ve worked for since 2FA became a thing has just assumed that I would be willing to hand over access to my phone.
One even insisted on enrolling my (personal!) phone in their device management / remote wipe capability as part of the deal.
That was the employer that ended up providing me with a HW keyfob…
Yes. It's appalling (though no longer surprising) to me how many companies behave like this. Most would absolutely forbid you bringing your own laptop with your OS of choice, software, authentication, etc. onto "their network", but think nothing of requiring you to essentially do the same thing in reverse.
"One even insisted on enrolling my (personal!) phone in their device management / remote wipe capability as part of the deal."
I'm with you on not allowing that. I had somebody I was working with want me to have text. I had fought like mad to have the telco take it off my service, since I found it a massive time waster and buggy. I told this other person that if he wanted me to have text, he could bloody well provide me with a phone and service. Didn't happen, since he didn't want to pay for it.
My personal business is none of an employer's business. I am not going to allow them access to my personal phone. It could happen that I'm looking into employment elsewhere and would not like them to know about that until I'm ready to tell them. I might have data on my phone that I don't want remotely wiped if I'm dismissed and it's done before I have a chance to make sure it's backed up. When I was working for a company with a very abrasive person nominally over me, I was looking for a job elsewhere. He managed to get himself invited to leave, and the company then hired a very good engineering manager, so I decided to stay. Had the company known I was shopping my resume, they might have hired a replacement for me to train and then let me go.
When connecting to an MS Exchange server using IOS Mail, you are explicitly giving the Exchange administrator the ability to remotely manage (including wipe) your device, as stated in the setup description which reads:
"Adding an Exchange account will allow the Exchange administrator to remotely manage your device. The administrator can add/remove restrictions and remotely erase your device"
Does iOS security permit this to be a full device wipe, or just an app being able to remove itself along with associated user data within the app's designated storage?
Might be a useful experiment for someone to determine just how far a remote Exchange server can reset a user device…
According to the disclaimer/warning (above) when setting up an Exchange account, yes.
When I noticed this functionality last year in the AWS WorkMail console it rang alarm bells, as I did not recall seeing the disclaimer at the time. After a bit of surfing I found (and lost) a YouTube video by someone who unexpectedly confirmed, using their own phone, that "wipe" really did mean the full device.
Apple also confirmed it was expected behaviour and referenced - https://support.apple.com/en-au/guide/deployment/dep158966b23/1/web/1.0#dep10d49a1cc
Thanks, did a quick Google and discovered this useful resource
https://learn.microsoft.com/en-us/exchange/clients-and-mobile-in-exchange-online/exchange-activesync/remote-wipe-on-mobile-phone
The Note and Caution are worth reading.
Basically, only use Outlook to connect to Exchange, and not the native Android/iOS mail client.
Whilst I agree with your sentiment, if you are a user due to employment etc. you don’t have a choice, but you do have a choice (I assume) in the way you access a business Exchange-based email account from your personal device.
The ability to perform a full factory reset via the email app is concerning: compromise a company's Exchange server, and thus gain administrative access over users' devices, and the first the company knows about it is all their phones getting reset and thus locked out…
I wonder when this threat will gain a bitcoin value…
We give the users a choice - if they want to use their personal phone for the authenticator app then they can, subject to signing our agreement. If they don't want to use a personal phone we will provide a work one.
We've had occasional situations where they say they don't want to put it on their own phone, but neither do they want to have to carry around another one - and the answer in this case is that it's an either/or choice and they have to choose one of them!
Did have one user asking recently whether I could turn off MFA on his account because it's annoying...
Not a bricked phone, but I have seen a scenario where the required authenticator software would not install on a colleague's phone.

The colleague had an old phone (like many people do - she just used it for calls and texts, and didn't do any online banking / purchasing on it, so it was of no relevance to her that it was an old version of Android).

She felt pressured into getting a new phone to install the required software, at a cost to herself. Personally, I would have asked the employer to pay (as her existing phone met her personal-use needs) or provide a company phone, but I'm rather more outspoken than her.
Anon for obvious reasons as employment related tale.
"You'd really like to trust your personal phone to Microsoft (for example), not to screw up an update? Really?"
You mean of the Microsoft Authenticator app? Yes, I'm completely fine with that. I've had that authenticator, along with several others, for years. The app has never broken, let alone caused damage to anything else on my phone. Can you name any app update that went through the normal procedures* and bricked a phone? At worst, the authenticator would break, which is more my employer's problem than mine, but that's never happened with that one or any of the other ones I've used, and I've used at least four for various purposes.
* I'm referring to a normal app with user-level permissions, not for example one installed as part of a rooted device's firmware. I have broken, although not bricked, a device by messing with those, but a) that wasn't an update, it was me deliberately deleting and replacing files in the /system/priv-app directory, which you can't expect to go perfectly, b) authenticator apps are not in that directory and don't do anything like that, and c) they wouldn't have the ability to do that even if I were able to replace them with code intended to brick your phone, because of the app sandboxes for user-installed apps.
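Worth remembering just how little an authenticator app actually does: it's essentially an HMAC over the current time, nothing that could touch the rest of the phone. A minimal RFC 6238 TOTP sketch in Python, stdlib only - the base32 secret below is a made-up example, not from any real account:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    # RFC 6238: HMAC-SHA1 over the number of 30-second steps since the epoch
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret only; prints a 6-digit code

Scan the same secret into any authenticator and it produces the same six digits; the worst a buggy update could plausibly do is stop producing them.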
Our medical provider uses the employee ID as both an NFC keycard and login authenticator. This approach seems to have merit because authentication isn't persistent -- you don't come in first thing in the morning, sign in, and that's that for the day; you sign in on different machines as you move around the building, and as soon as you leave the area of a machine you're logged out. Since the authenticator is also your keycard and employee ID, you're going to carry -- and usually wear -- it at all times.
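A toy sketch of what "logged in only while the badge is nearby" might look like server-side - every name and the 15-second timeout here are invented for illustration, not taken from any real badging product:

import time

SESSION_TTL = 15  # invented: seconds without a badge sighting before auto-logout

sessions = {}  # (badge_id, workstation) -> time the badge was last seen there

def badge_seen(badge_id, workstation):
    # Called whenever the NFC reader on a workstation detects the badge.
    sessions[(badge_id, workstation)] = time.monotonic()

def is_logged_in(badge_id, workstation):
    # A session stays valid only while the badge keeps being seen at this machine.
    last = sessions.get((badge_id, workstation))
    return last is not None and time.monotonic() - last < SESSION_TTL

Each tap (or periodic proximity poll) refreshes the session; walk away and it simply expires.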
What about remote working? Because this is a medical provider there are satellite offices but not work-from-home as we understand it. From my perspective, if I'm working from home I'd expect not to be able to rove around the corporate network at will but instead be confined to a relatively small sandbox. I'm a developer so I wouldn't expect to interact with production or business systems unless I was directly concerned with them (and even then I'd expect to be on-premises or at least working closely with someone who is).
I think we have to assume that any one of us could be compromised at any time and design our workflow to contain and so minimize the damage.
Only, like everything else to do with crypto, it's *badly* designed. You need to do a lot of extra work if you really want your transactions to remain "anonymous". And that extra work involves using hacked accounts.
GCHQ certainly has enough width, depth and grunt to use basic statistical analysis if it were really needed.
It wouldn't be very accurate, since it would at best track criminals cashing out and victims initiating payments, both of which come after the original crime. It also wouldn't tell you very much, because people gambling on it cause a lot more noise, so fluctuations related to criminal activity are absorbed and somewhere between very hard and impossible to notice.
>"Although Karakurt's primary extortion leverage is a promise to delete stolen data and keep the incident confidential, some victims reported Karakurt actors did not maintain the confidentiality of victim information after a ransom was paid," the US government warned..."
Every time there is a ransom threat for releasing information, the targeted entity should claim they paid the ransom and secretly release some snippets of private information. Then in a very public way complain about the ransomware thieves being dishonest and releasing the data anyway.
Ransomware payments will go away the moment victims believe the threat actors will break their promises.
Go one step further. Get an AI to generate a lot of fake but believable data, but also containing low-importance real stuff. Release that. A large quantity with some bits provably right will be more believable, and it will also confuse the thieves. If they release the real stuff out of spite, nobody will know which is which.
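As a toy illustration of the mixing step (every record and count below is invented):

import random

real_low_value = ["canteen menu v3", "old seating plan", "2019 fleet list"]  # invented
fakes = ["synthetic record %d" % i for i in range(1, 98)]  # stand-in for AI output

release = fakes + real_low_value
random.shuffle(release)  # the provably-real bits end up buried among the fakes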
This is a good write-up on this threat, concise and to the point. But, personally, I would like to see a little more from articles like this. In general, I think information that would help others prevent an attack would be helpful. For example, with this crew, what IP addresses are they working from? Is there a specific list of their most common tricks and tactics that we should know about? What are the best practices to prevent it and resolve it (if possible)? What is the most important thing users and targets must know and do?