This sure beats reading newspapers
Keep it up El Reg.
It has been a week since the Wannacry ransomware burst onto the world's computers – and security researchers think they have figured out how it all started. Many assumed the nasty code made its way into organizations via email – either spammed out, or tailored for specific individuals – using infected attachments. Once …
First rule: the early reports are mostly wrong on major details.
Second rule: the regular media is clueless and will latch on to any meme that they can hype: the NORKS, the Russians, the Chinese, etc. did it.
Third rule: good security and update practices across the board will block most exploits harming you.
Fourth rule: avoid sites that are known to be sources of malware infections.
If you follow these rules, most malware will not be a problem. You will still be vulnerable to 0-day exploits for your OS.
@ a_yank_lurker
Good set of rules. First time I was infected my son was in primary school. (He brought home the first Word macro virus). Second time he had a LAN party and disinfestation was moderately painful. It was his last LAN party at home. The Gitling's now 32 years of age.
My AV warns of malware infested websites and it's often surprising.
Best insurance against zero-days is backups. One of the reasons I like Linux/gparted. Makes it such a breeze.
@Andy Prough - My basic take is all OSes are vulnerable to attack. Some are harder to break than others. And all require some TLC including patching and updates. From my view the argument that Bloat 7 or Bloat 10 is more secure is somewhat pointless as Slurp is not known for producing the most secure OSes available. It is sort of like arguing over how leaky one colander is compared to another.
@a_yank_lurker
It is sort of like arguing over how leaky one colander is compared to another.
Not quite - the whole point about a colander is that it is designed to let fluid through: the size and number of holes defining the rate and what doesn't get through. It's not a 'leak' - it's what it's intended to do.
We could argue for ages about whether MS operating system holes are there for a reason...
And who the zarking fardwarks exposes SMB ports to the Greater Internet?
4 classes of people:
1 - Malware researchers (that's how the original off-switch domain was discovered)
2 - Home users who just plug their Windows PC directly (wired or wirelessly) into their ISP-supplied routers and imagine that the ISP has configured the router to be secure[1]. They then turn on SMB sharing because they want to get stuff transferred from an older computer and forget to turn it off.
3 - Small businesses that just use IT without having Someone Of Clue[2] to look after it - either on a contract or regular basis.
4 - IT admins in public bodies, underpaid and overworked (or clueless[3] - I've met both varieties) who are being screamed at by someone higher in the organisation to do stuff that is fundamentally unsafe (in data protection & security terms). They do something that destroys security[4] without understanding why it's a bad idea and have no clue about how to fix it.
[1] Ha ha. Like the ISPs care. Any more than they care about SMTP traffic that ignores SPF domain validation.
[2] "My nephew can do it - he plays a lot of online games".. (yes - I've had that one).
[3] Sometimes all 3. In the days I had to deal with Trust IT departments it was a real, mixed bag. Some were really brilliant, professional teams, others were staffed by students, people who would never be able to get a job in a professional IT department and time-serving wasters.
[4] Like bridging between N3 and the Internet. Back when I had anything to do with it, N3 seemed to have an implicit trust model - traffic on the N3 side was assumed to be trusted and not requiring firewalling... That may well have changed now.
So the question is - why have you got several thousand W7 desktops unpatched?
While I understand that the servers will need to be a variety of VMs I would just use a standard image of NHSbuntu [ www.nhsbuntu.org ] for the sheep as I can lock it down tighter than a duck's chuff, and it has secure email, an office suite, web-browsing and that is all I want them to have.
So the question is - why have you got several thousand W7 desktops unpatched?
I can think of a couple of reasons. The first isn't that unusual; as many (even Microsoft) have noticed, there are always those who plead incapability when it comes to computers. The question here is whether such users should be allowed access; consider that these people can cause all sorts of problems for other users by not being up to the task of handling their system responsibly.
The other is a little more sinister. Since just before the release of Windows 10 there have been increasing amounts of concern about Microsoft's patching habits. The biggest concern has been that Microsoft have spent a lot of time and effort trying to integrate spyware into their products (see the Register article "Mud sticks: Microsoft, Windows 10 and reputational damage") to the extent that some people have stopped patching. While originally it was easy to spot the spyware patches and avoid them, the current regime of rollups makes this all but impossible to do.
So if finger pointing is really necessary, and before we charge headlong into a fit of blaming unpatched users or the people that perpetrated this problem, let us also consider Microsoft's role in this.
The vast majority of affected systems were corporate, not personal systems. These end up being maintained by corporate IT departments, which usually don't automatically patch the desktops.
This is usually because they need to ensure that any patches released will not prevent software used by the company from working. They'd want to regression test it before rolling out the updates.
This all sounds reasonable to a degree, but you get cost-saving measures whereby corporate IT departments use a static patch deployment cycle of their own (maybe every 6 months) rather than patching every time an update is released. As such, security updates can wait many months to be deployed in corporate networks, increasing the level of vulnerability to pretty much every type of malware.
The solution to this is for corporations to change their procedures. Interim security patches like the March patch don't require the same batch regression testing as a large feature release does.
> So the question is - why have you got several thousand W7 desktops unpatched?
Well I for one stopped allowing my Win 7 box to auto patch when Microsoft started fucking about with what was included in updates and to avoid being automatically "upgraded" to Win 10. MS have destroyed the trust that was placed in Windows Update to only update and fix problems, not push malware at us that installs OSes we don't want!!!
So the question is - why have you got several thousand W7 desktops unpatched?
Because you are running custom software that's incredibly picky about OS versions[1] and patches? Because you don't have anyone that knows about WSUS or SCCM? Because your CxO doesn't give you any budget for anything other than getting their team the latest and greatest and certainly not for wasting time fiddling about with servers?
Been there, done that.
[1] Yes - in one job we had a hardware card supporting some custom machinery that caused us a lot of grief. If you put it in anything faster than a 386 the card would run for about 5 minutes before locking up. That's why we stockpiled old Compaq 386 parts and spares to keep the machines it was running on going. Oh - and for extra delight, the driver we had for the card only worked under Win 3.11.
And this was in the mid-2000s.
Because you are running custom software that's incredibly picky about OS versions[1] and patches? Because you don't have anyone that knows about WSUS or SCCM? Because your CxO doesn't give you any budget for anything other than getting their team the latest and greatest and certainly not for wasting time fiddling about with servers?
All of those are valid explanations why an individual techie working at an afflicted organization might not have applied the fix that would have prevented this.
None of them are valid explanations as to why an organization allows their technology to be so poorly maintained. None of them explain why CTOs across the country are not getting canned for failing to ensure business continuity.
I've no problem with people getting paid big money for CxO roles, but together with the money comes the responsibility; if you are the CTO of a hospital trust, and your policies on patching desktops led to surgeries getting cancelled, you should be cancelled.
Indeed, all the "experts" using this as an excuse to bash the NHS are looking pretty silly right now. Even if the NHS had spent many millions eradicating every single XP machine from every dusty corner of the organisation, it would have made bugger all difference. The real issue here was how slowly they applied patches.
They clearly quarantined critical security patches for far too long (2+ months), so if anyone needs to be blamed, it's not the NHS, it's not government funding, it's not some other hidden political agenda, it's solely down to whoever is responsible for timely patch deployment.
"Indeed, all the "experts" using this as an excuse to bash the NHS are looking pretty silly right now"
I don't think I am. I asked:
"As we discovered last time the NHS had a ransomware attack - which must have been all of a few months ago - everyone has full permission on everything at an SMB level.
If this turns out to be spread via SMB or anything at a lower layer then someone needs to explain how the network was configured so badly."
It still seems a perfectly reasonable question.
Ditto to that. This Saturday's edition of the Wall Street Journal had a column on how to attach a cradle to the back of your cellphone so you could cradle it between your ear and your shoulder. They consider that a technical column and the "solution" a "life hack". Sigh. Their coverage of WannaCry wasn't much better.
Yes yes, no one should have an SMB port open to the internet, but poorly configured DMZs or small branch offices that are supposed to get their internet from the main office but improperly add their own 'business internet' connection from the local ISP because it is faster are probably more common than anyone cares to admit.
Microsoft firewalls off most ports by default, but leaves port 445 wide open. Why? Surely it would make more sense to have it open to ONLY the PC's local subnet, since that will suffice for 99% of home/small business installs! Require a configuration change by the admin to open it up wider - i.e. if your company uses 10.x.x.x internally open it up to 10.0.0.0/8, and pop a warning before allowing someone to disable it entirely.
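To make that suggestion concrete, here is a minimal Python sketch (not anything Windows actually ships) of the decision a subnet-scoped firewall rule would be making: allow a peer to reach port 445 only if it sits on the machine's own subnet or an explicitly configured corporate range. The interface address and extra range are made-up example values.

```python
# Toy illustration of the subnet-scoped rule described above: only treat
# TCP 445 traffic as acceptable when the peer is on the machine's own
# subnet or an explicitly configured corporate range.
# The interface address and allowed range below are example values.
import ipaddress

LOCAL_INTERFACE = ipaddress.ip_interface("192.168.1.23/24")   # example home LAN
EXTRA_ALLOWED   = [ipaddress.ip_network("10.0.0.0/8")]        # example corporate range

def smb_peer_allowed(peer_ip: str) -> bool:
    """Return True if a peer should be allowed to reach port 445."""
    peer = ipaddress.ip_address(peer_ip)
    if peer in LOCAL_INTERFACE.network:
        return True
    return any(peer in net for net in EXTRA_ALLOWED)

if __name__ == "__main__":
    for ip in ("192.168.1.50", "10.42.7.9", "203.0.113.77"):
        print(ip, "->", "allow" if smb_peer_allowed(ip) else "block")
```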
@DougS
I don't. I blame the network administrator.
SMB is not normally open at all unless you enable file sharing. I would have to check what the default is on recent versions of Windows, but most ports opened by default are not open to the public network. You can't even ping a Windows machine now as ICMP is blocked by default.
However, in a corporate world you shouldn't be accepting any default ruleset anyway. Just look at what your org requires and push out the rules you want with group policy.
Yes, in the past default configurations of Windows were wide open to enable ease of use. This is less the case now. If you put in the effort though you can lock down Windows very easily. You can block off any port you want and only allow permitted applications to run. You can do all this centrally with group policy so there really is no excuse. Start from a model that no user can do anything or access any resource unless specifically allowed by a group membership.
Of course you are still open to zero days and some things just can't be anticipated. This is why you also make sure you have tested backups and a recovery plan. Preferably multiple independent backups to different media using different backup products.
Prevent what you can, limit the damage of anything you cant prevent, then make sure you can recover from any damage. Learn from any incident to improve your future prevention, damage control and recovery.
"I don't. I blame the network administrator."
This. Or whichever idiot overruled him.
Blaming Microsoft for someone else failing to secure the product and failing to install patches that were released months before the outbreak Is like blaming a door manufacturer for a break-in because you failed to engage the lock. Saying 'But the door was unlocked when it was delivered!' is not going to win you any court cases.
"I don't. I blame the network administrator."
This. Or whichever idiot overruled him.
Having dealt with NHS "network administrators" - who told me "You're very arrogant and you're talking gobbledygook about viruses and IP addresses which I don't understand, I refuse to deal with you." and "I'm the administrator here, I know what I'm doing" - about a machine which was spewing crap all over the Internet (and the boss, who sympathised but had no power to overrule the administrator), I'll say that a good chunk of the issue lies with the matter of adequate training coupled with Dunning-Kruger writ large.
Another similar discussion was had with a "NHS administrator" about a webserver used for GP patient bookings which was firewalling out around 2/3 of TalkTalk's entire UK ADSL IP allocations. "It works fine for me, you're making it up"
Perhaps those large organisations allow VPN access. Then you could have non-internet-facing SMB shares exposed to a box that might (for some other reason) have been internet-facing at some point in the recent past. For example, a GP's surgery might have an old Win2k8R2 server that has been mis-configured and no-one is really paying attention, but it probably does have access to the interior of the NHS network.
The organisation I spent 2 long days helping last weekend was infected from a BYOD over VPN, owned and used by a very senior person in the org.
The same senior person that had ensured that well over 100 machines were still on XP, at least 75 of which were infected when I turned them off and disconnected them from the network as stage 1 of recovery.
FYI
Stage 2 was boot from the operating system CD, and reformat the HD. Shut down.
Stage 3 was use the system recovery disk for the machine to install backup/restore software. Disable SMBv1. Shut down.
Stage 4 reconnect network cable and re-boot, map backup drive, restore full disk image and incrementals from server, reboot.
Network scripts were in place to disable SMBv1 and apply the patch on connecting to the network.
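For what it's worth, here is a rough Python sketch of the kind of follow-up audit you might run after a recovery like this: check which hosts on the recovery subnet are still answering on TCP 445 before they are let back onto the wider network. It is not the commenter's actual network script (which would have been Windows-side), and the subnet is an example value.

```python
# Post-recovery audit sketch: report which hosts on a subnet still answer
# on TCP 445 so they can be double-checked before going back on the network.
# The subnet below is an example value, not anything from the story above.
import socket
import ipaddress

SUBNET = ipaddress.ip_network("192.168.10.0/24")   # example recovery VLAN
TIMEOUT = 0.3                                      # seconds per host

def port_open(host: str, port: int = 445) -> bool:
    """Attempt a TCP connect; True means something is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(TIMEOUT)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    listening = [str(ip) for ip in SUBNET.hosts() if port_open(str(ip))]
    print("Hosts still answering on 445:", listening or "none")
```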
"Organisation that spent 2 long days helping, last weekend was infected from a BYOD over VPN, owned and used by a very senior person in the org."
You don't have to help if they're sabotaging themselves from within. In fact I'd hold up my hands at that point and tell them any further work is chargeable.
Why? God alone knows.
Most 'amateur' connections are via NAT routers, which need explicit configuration to actually accept incoming connections.
One can only posit a very very poorly set up leased line system, in which intersite working was done simply by opening ports onto full publicly addressable IP spaces as 'the quickest way to get the job done'
You have to work quite hard to be this insecure.
NetBEUI can't be routed so cannot be exposed to the internet. It was great for really small networks on a single broadcast domain as it required no configuration. You just turned it on and machines could communicate.
However, anyone using TCP/IP should be configuring their firewall with a default block any/any rule, then justifying any exceptions. In any org I have worked in, you would normally only expose ports to machines in a DMZ to the internet. You would need an extremely strong justification to open any ports to the internal network and have to demonstrate that there was no alternative. Anyone suggesting opening SMB to the internal network would probably be told to go sit in the corner with a dunce cap on their head.
They don't have to have been exposed to the internet globally, just exposed to one external machine that itself is compromised. Management says give X access or it's your ass, you lock down access so it is literally just to machine X, but if X gets hit then you're hit. It really is a case where just one weak link is enough. But management will never understand why security wants to be so doctrinaire and inflexible when it's 'obvious' that one little exception, properly managed, will be OK... Really.
"Management says give X access or its your ass,"
Have had that, configured X, and then discovered a few weeks later that same manager brought in personal (actually, iirc it was his son's) laptop as the corporate one was too difficult to use, with loads of things that needed him to click (security updates that needed to be accepted before he could access anything). He then plugged it in to the network, which got him nowhere as unused switch ports were disabled, and screamed at his PA to get the "IT guys to fix it or get fired" (words to that effect).
Same guy, few years later ... director at NHS IT.
Rumour has it this was spread over the NHS N3 network, hence the wide spread infection for the NHS. Hence why SMB might have been more open for sharing data, as N3 is supposed to be a secure network (Still no excuse I agree!). However it must have got in somewhere initially. Again, rumour has it the telco was at fault. AC as I'm speculating.....
So why is it enabled on internet facing PC's as well?
Is that an actual method of working for any organization?
Don't feel too bad. Port scanning to find a port that shouldn't be open (but was) is exactly how Gary McKinnon got into the Pentagon.
In 2002.
I think sysadmins don't like to do port scans from outside their network as they can't see the point looking for something they know isn't there.
Except of course when they are wrong and someone has left ports open.
HMRC. If you are a 'contractor' and not a "worker" or "employee", your contract may state you must provide your own IT equipment EXCEPT when working on site of the Client. This is an established part of proving 'independence' under HMRC probe for IR35. Thus, yes, there is a case where a contractor would connect to the NHS via their own computers from home, for remote working.
"This is an established part of proving 'independence' under HMRC probe for IR35"
Except when you work in the security field, there is no way you would be allowed to connect to the client with anything but a laptop built to their spec. I once even offered to buy my own laptop and have them build it for me to try and meet this rule (and also to bypass their shitty old tech that won't run my 3440*1440 widescreen at anything above 42Hz) but no cigar - it's their laptop or nothing.
It's a problem (re: IR35) which is why I take a lot more pains to ensure the working relationship is that of business-business etc. rather than employer-employee (much more important to IR35 imho)
We are, and the lunacy is widespread and getting worse. For some reason companies who provide their employees office space, desks, telephones, pens, paper, and everything else they need to do their job, somehow think the single device they use and depend on more than any other is somehow exempt from their responsibility.
"Surely a familar scenario for many remote workers?"
No - our VPN has security checks in place that won't let you connect fully until you've:
a - got the recent antivirus definitions
b - fully patched
c - had a recent scan
In the past, if you'd not logged in for more than a week it'd require you to go on site to get the updates... these days you get to update without being fully connected over vpn, so no trip required.
As a dev, I've mostly got control over the machine, but there are several group policies I don't have control over. Certain services are blocked, ports as well, and I can't disable security features like virus checker, or the software deployment software.
Being a remote worker isn't an excuse, or necessarily any riskier than on-site staff. Unsurprisingly we've not had any WannaCry infections in the multinational organisation of 10k people, with many remote workers.
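As a toy illustration of the posture checks described above (recent AV definitions, recent patching, recent scan), here is a hedged Python sketch that reads timestamps from marker files and refuses "full" access when any are stale. Real VPN/NAC agents query the AV engine and OS directly; the file paths and thresholds here are invented for illustration only.

```python
# Toy posture check (illustrative only): gate "full" VPN access on how
# recently AV definitions were updated, the box was patched, and a scan
# was run. Marker-file paths and thresholds below are invented examples;
# real VPN/NAC agents query the AV engine and OS directly.
import os
import time

MAX_AGE_DAYS = {"av_definitions": 3, "last_patch": 30, "last_scan": 7}
MARKER_FILES = {
    "av_definitions": "/var/posture/av_definitions.stamp",  # hypothetical path
    "last_patch":     "/var/posture/last_patch.stamp",      # hypothetical path
    "last_scan":      "/var/posture/last_scan.stamp",       # hypothetical path
}

def check_posture():
    """Return a list of failed checks; an empty list means allow full access."""
    failures = []
    now = time.time()
    for name, path in MARKER_FILES.items():
        try:
            age_days = (now - os.path.getmtime(path)) / 86400
        except OSError:
            failures.append(f"{name}: no record found")
            continue
        if age_days > MAX_AGE_DAYS[name]:
            failures.append(f"{name}: {age_days:.1f} days old")
    return failures

if __name__ == "__main__":
    problems = check_posture()
    if problems:
        print("Quarantined (updates only):", "; ".join(problems))
    else:
        print("Posture OK - full VPN access granted")
```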
"our VPN has security checks in place that won't let you connect fully until you've:
a - got the recent antivirus definitions"
Which still won't protect against something new enough not to have got into the definitions.
"I think sysadmins don't like to do port scans from outside their network as the can't see the point looking for something they know isn't there."
Most plain ordinary sysadmins probably have a clause in their contract that they won't do something like that unless they have specific permission in writing from their security bods to do so.
I know I've had contracts that say that, and I believe I’m far from alone.
There are plenty of known Android attacks, some remotely via SMS/MMS. Obviously many devices will remain unpatched against these vulnerabilities for their lifetime, so I wonder when we will see the first hybrid malware:
stage 1: infect Android device using an Android vulnerability, and lie in wait
stage 2: when connected to a new wifi network, look for PCs to attack using Windows vulnerability
The Android malware could even 'update' itself by checking in at a master host (made to look like yet another advertising site, with traffic that could be triggered only when browsing so no one is the wiser), which would allow it to upgrade the Windows vulnerabilities it is using over time as old ones get closed off and new ones are discovered.
I think one of the main reasons we haven't seen widespread Android infestations is because hackers are so mercenary these days. The days when they considered it good enough to print some message about being 'p0wned' are long gone; now they're in it to make money, and ransomware on PCs is where it's at.
Being able to infect devices that far too many workplaces allow people to bring in and connect to their internal network (so they can get access to email, internal web sites, etc.) is an easy way to bypass the expensive firewalls and IDS systems companies put on their network perimeter.
Obviously the same could be done with iOS, but Apple gets fixes out too quickly and people apply them too quickly, making Android a far better carrier for such a hybrid malware strategy.
" I wonder when we will see the first hybrid malware:"
IIRC, we already saw that exact scenario a couple of years ago; an SMS virus targeting 'droids which then dumped a payload onto Windows machines once it was connected to them. It's not actually that effective a vector, though.
If you work for the sort of outfit that has such a clause written into your contract and an actual "security bod" to deal with this then perhaps they should do the port scan?
I'd suggest an outside scan of your network, and a review of the results, should be a standard part of network maintenance procedures.
I'm not saying any open port is a bad port (although I think the fewer the better) but why it's open should be well understood and documented, if only to stop the next PFY hired from noticing it and (on the principle "all open ports are bad") closing it, naturally without telling anyone.
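A minimal sketch of that "outside scan" idea in Python: from a machine outside your network, probe a handful of ports that should never be reachable and flag anything that answers. The target address is a placeholder for your own public IP, and as the following comments stress, get written authorisation before running anything like this.

```python
# Minimal outside-in scan sketch: check a few ports that should never be
# reachable from the internet and warn about anything that answers.
# TARGET is a placeholder - substitute your own public IP, with permission.
import socket

TARGET = "203.0.113.10"                 # example placeholder address
PORTS = {135: "RPC", 137: "NetBIOS name", 139: "NetBIOS session",
         445: "SMB", 3389: "RDP"}

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for port, name in PORTS.items():
        if reachable(TARGET, port):
            print(f"WARNING: {name} (tcp/{port}) is open to the internet")
        else:
            print(f"ok: {name} (tcp/{port}) closed or filtered")
```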
Unless your contract clearly states you are responsible for pen testing, you *need* to get written sign off before you do it. Or be the person who owns the kit. You usually have to write the letter yourself, and get the boss to sign it, since they won't care *unless* it goes horribly wrong.
And yes, it's a sensible and reasonable thing to do, but like anything where you're crossing a legal boundary for work, get it in writing. Then you have a clear defense if you get accused of computer crimes. Same as if you're repairing a machine, get the client to sign off on what is happening, so if you find dodgy stuff you won't get in trouble for illegally accessing it.
It's the difference between being a general worker who checks that a secure door is locked by trying the handle (which is OK), versus someone hired to do a security audit attempting to force the door open, attempting to pick the lock etc.
Actually I would have described a port scan as exactly like trying the doors on a building you work in.
Not attempting to enter (by your analogy), just test to see if it's open to begin with.
However if you're writing your own authorization letter you should probably include a clause to allow repeat scans whenever there is a significant change in the system, with "significant" being loosely or tightly defined depending on how awkward your boss is likely to be.
I think sysadmins don't like to do port scans from outside their network as they can't see the point looking for something they know isn't there.
I'll add one word to the start of your sentence: "incompetent sysadmins"..
(We pay an external organisation to regularly run pentests against us - both internally and externally. And the results are treated seriously.)
"Windows 7 machines most affected after so-called "experts" advised switching off updates to avoid Windows 10 upgrade notifications"
and the spyware as well.
At least in 7 you can pick/choose which updates to install. Knowing the proper KB means you can download it and apply the patch manually, last I checked...
"... you can pick/choose which updates to install."
Those who turn off automatic security patch application do need to actually choose and apply the important patches. A patch for a remotely exploitable vulnerability that allows execution of arbitrary code (e. g., MS17-010), NVD severity 8.7 if I recall correctly, is an Important Patch by any standard. Anyone clued in and attentive enough to have taken over patch management should have applied it within a couple of weeks from issue.
Updates to W7 also got switched off because it was taking until the heat death of the Universe or the arrival of WannaCry before the updates ran. There are still posts here from people complaining about that and even I, a non-Windows bod, know that there's a specific update to be downloaded and applied individually that fixes it.
@JimC - "Which is exactly why perverting the patch process with the pushed Windows 10 upgrades was a mind bogglingly stupid and irresponsible thing for Microsoft to do."
So? Switch the network interfaces to "metered connection", thereby shutting off auto-update. No one can help people who only want to complain and refuse to learn how to run their system. Just stamping their feet and yelling, "but it doesn't run like Windows 7 did!!!" is kind of pointless at this late date.
I'll tell you what's more ironic - 1) MS sending out non-security updates as security updates so people's only recourse is to turn the whole thing off and 2) Windows Update being so badly designed that turning it off and going for a few months without installing everything means when you turn it back on to automatic it can't find updates, it just gets stuck in a loop.
I notice you didn't mention updates taking forever with the windows standalone installer "searching for updates on this computer". What the hell is it actually doing for those endless hours running that lovely green band L-R ?
Paris, has better ideas for how to spend endless hours.
(So have I, btw.)
From Ars Technica: "The Kaspersky figures are illuminating because they show Windows 7 x64 Edition, which is widely used by large organizations, being infected close to twice as much as Windows 7 versions mostly used in homes and small offices. It's not clear if that means enterprises are less likely to patch or if there are other explanations."
I'd say homes and small offices have auto-updates turned on, big businesses don't.
Glad I'm not on that team at work.
"It's not clear if that means enterprises are less likely to patch or if there are other explanations."
No - it means that corporate patch cycles are longer than most home users' because history teaches us that Microsoft updates will break things and so they have to be tested[1] first.
[1] And not just on the testers PC.
Windows 7 machines most affected after so-called "experts" advised switching off updates to avoid Windows 10 upgrade notifications?
See my earlier comment. If Microsoft hadn't tried so hard to insert spyware into some of their patches for W7 then people would have not even suggested switching off updates. Well, some might have done but nobody would have listened to them.
Having said that, pointing fingers at users for not patching doesn't really escape the fact that the hole was there, it was exploited by the NSA, it was then stolen and somebody else used that same exploit to try to extract money. How many more holes are there in Windows, current version included, that the NSA knows about and keeps under wraps, even from Microsoft?
Or MacOS? Or Linux? I know where I'm pointing and it isn't at any specific end users.
this is what I thought, from reading the El Reg articles and other (supporting) articles that I found online, ones linked to from El Reg and other independent articles.
Although I had also heard about possible e-mail vectors, the primary vector appeared to be port 445 facing the intarwebs, which everyone with any kind of IT experience recognizes as being *VERY* *VERY* *BAD*.
thanks for the final confirmation on that. [it was in the 'teccy' El Reg article, too, but you had to look for it]
Seeing as those first articles were posted on a friday evening way after "beer o'clock", I'm glad they were more or less right on with nearly complete information.
"the primary vector appeared to be port 445 facing the intarwebs, which everyone with any kind of IT experience recognizes as being *VERY* *VERY* *BAD*."But Shirley you have all ports closed except those you explicitly need to be open. Or am I missing something? Why would you want port 445 open?
and even if you wanted it open, why open it to the world?
This smacks of 'I want to be able to file share my hospital data anywhere in the world on my totally insecure laptop'.
I remember one of my staff doing a security audit for a major company. He related the conversation...
'so how secure is our firewall?'
'Well, your firewall is fine, but the IT director's PC with the modem in auto answer mode on his DDI line is a bit of a problem'
"why?"
Well there are no technical reasons, but when there is pressure to save money it can happen by accident, or because a senior manager over-rules the technical staff and says that it MUST be done otherwise you will be dismissed. Of course once the sensible ones have been dismissed.....
... the other factor may be the abolition of the Primary Care Trusts which again resulted in huge disruption to local IT services....
... lastly every local government office has to undergo penetration testing every 2 years. Why doesn't this apply to the NHS? Shouldn't GCHQ be doing this and warning people of unsafe practices...
"You might not have reason to open it, but maybe something you're running (probably malicious, but not necessarily so) opens it via universal plug 'n' pwn."Which is why I periodically run Steve Gibson's Shields Up.
People seem to be saying "So I can share files and printers with the network as I'm a contractor"
Seems excessive to me and asking for trouble in these day of large capacity flash drives. How big a specialist toolset/database do you need to take into work?
Did the malware initially launch through its own efforts or did it just use a handy list of open SMB ports published by one of those scanning companies, whose primary function seems to be the provision of information that is of great use to malware spreaders?
+/- 'allegedly' etc...
I think this makes much better sense than infection over email. The time frame just didn't fit infection via email. We've had this type of malware in one form or another for several years now, and time-wise the infections have been spread at varying levels over months and years. Getting a (seemingly) synchronised attack going in a single day, relying on people in hundreds of organisations all over the world all opening infected attachments within such a short window of time, seemed from the beginning an unlikely explanation.
As I continue strongly advocating better file management, storage and backup as the key* defence against ransomware, I'm very interested to hear from people who think this is the wrong approach.
Also, what do people consider the current state of the art with respect to internet facing file stores? So far I've got certificate-based sftp on a non-standard port with fail2ban or similar, all other ports firewalled.
* emphatically not underplaying the importance of up-to-date FW, AV and OS.
Start off with a default deny mindset. I configure all resources with their own resource groups. I then create role groups which are members of the appropriate resource groups. Until users are added to any of these roles they have access to nothing. They can't even log on. I can also see exactly what any user can access by just looking at the roles they are a member of. Use the AGDLP principle (https://en.wikipedia.org/wiki/AGDLP).
Use minimum privilege for access to anything. Only grant the minimum required access for each role. This will limit the damage any user can cause if they get malware.
If you can, implement applocker or some other application whitelisting solution. Use FSRM to watch for known crypto malware (see here: https://fsrm.experiant.ca/).
If you have a firewall or webfilter that categorises websites, block access to uncategorised sites. This can stop phish mails that try to pull malware down from the web. Block executables in email using your mail filter.
It is all about putting as many layers as you can in the way of the malware to minimise risk. At the end, though, assume you can't block everything, so have tested backups and a recovery plan.
If possible have independent backup solutions backing up to different media (Veeam to NAS, Arcserve to tape for example). That way if one fails or is compromised, you still have a backup. Better to have lots of backups you don't need than no backups that you do need.
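To illustrate the spirit of the FSRM file-screening idea (without pretending to be FSRM itself), here is a small Python sweep that walks a share and flags filenames matching patterns commonly associated with crypto malware. The share path and pattern list are examples only, not a maintained blocklist.

```python
# Not FSRM, just a toy sweep in the same spirit: walk a share and flag
# filenames that match patterns associated with crypto malware (encrypted
# file extensions, ransom notes). The share path and the pattern list are
# illustrative examples, not a maintained blocklist.
import fnmatch
import os

SHARE_ROOT = "/srv/shares/department"                 # example share path
SUSPECT_PATTERNS = ["*.wncry", "*.locky", "*.crypt",  # example patterns
                    "*decrypt*instruction*", "*_recover_*.txt"]

def find_suspect_files(root):
    """Return paths under root whose names match any suspect pattern."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name.lower(), p) for p in SUSPECT_PATTERNS):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    suspects = find_suspect_files(SHARE_ROOT)
    if suspects:
        print(f"ALERT: {len(suspects)} suspicious file(s) found, e.g. {suspects[0]}")
    else:
        print("No suspicious filenames found")
```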
If we are talking about dimwits that have exposed SMB (which was a bad idea in the 1990s FFS...), there are a few issues with the theory as it stands:
By default, Windows will only allow file share access when in private firewall mode.
Consumer grade modems don't have any port forwarding enabled (again, by default) - even that flawed facility, UPnP, didn't tend to dynamically allow it.
A large number of ISPs will automatically block such ports for their subscribers (unless you request them to be open).
Very, very, very badly configured VPNs or shockingly bad gateways seem to be the most likely vector. If you had, or still have, a private network with internal DNS, SMB, NMB etc. exposed, then I suggest you change careers voluntarily, before you are lynched...
Way back one of the client's network guys had discovered that someone was persistently trying to probe the firewall. He then looked at the IP address and found the eejit was sharing his C: drive. If it had been me I'd have tried to mount the drive and see how much could be deleted from it before it all fell apart.
If it had been me I'd have tried to mount the drive and see how much could be deleted from it before it all fell apart.
If you're in the UK, that's a really bad idea. It counts as unauthorised access under the Computer Misuse Act, and gets you 12 months in prison and/or an unlimited fine.
"If it had been me I'd have tried to mount the drive and see how much could be deleted from it before it all fell apart.
If you're in the UK, that's a really bad idea. It counts as unauthorised access under the Computer Misuse Act, and gets you 12 months in prison and/or an unlimited fine."
Bah. Pikers. Here in the Gunshine (no, that is not a typo) State it's a five-year felony. Which means of course that were I to do such a dastardly thing (which, of course, I would not) I would not have done it from a network or a device which could be traced back to me.
of course, if I ate their boot volume they'd have one hell of a time proving whodunit, now wouldn't they?
I'm sure you meant the eejit? That guy is using the network aiming for unauthorised access (probing the firewall). The OP is talking about if he/she was the client's network guy. How is the client's network guy at fault for touching things on his own network?
It's like if an IT guy sets up a router that disables all unauthorised devices connected to the router. How is it the IT guy doing stuff with "unauthorised access" when it is his network?
It would be a very bad idea for any government not to cooperate, though there are some good reasons not to hand people over to the USA for a trial. And we might not be much better. But the scary prospect is of some government, absolutely sure of somebody's guilt, bypassing the legal process and arranging to push some hacker off the platform in front of a train.
Part of it is the thinking that we know, but cannot give away how we know.
This feeling that some governments are willing to mess around the system lies behind some aspects of the Julian Assange case. The people who worked on this malware may have killed somebody in the UK, and it might legally amount to manslaughter. They aren't innocents. But I would rather trust a court than a politician.
Given that this attack is very handy for Microsoft (you can have our slurp or become vulnerable to attack, ah look out, here comes one just to prove our point), along with the previous ransomware mostly hitting the West, one wonders if perhaps this time around the attack has come from a different source.
I would imagine that there are a lot of Western agents that had previously relied upon the bespoke back doors put into Windows lately being worried that they might have to return to the bad old days, where they had to get out of their chairs. If everyone has a secure OS then I can see their jobs becoming much harder, well, when it comes to spying upon their own populations anyway.
That Microsoft made obtaining updates without also taking their slurp increasingly difficult leading up to this attack might also be suggestive.
Given that we know that both MS and the Western agencies were more than aware of this "vulnerability" and given their historic action it could not be seen to be out of character.
One thing is clear though: even limiting updates will not protect you, since everyone really is out to get you. Better to finally drop Windows and go with something that doesn't come prepwned.
I wonder how many infected Windows 7 users had turned off Windows Update (as I had) because last year Microsoft was using it to try to forcibly install Windows 10 on us?
After reading advice on this very site, I went with the option of turning off Windows Update, and trying to do important updates myself. (Very easy for casual users to get wrong, or forget.)
Iain Thomson surpassed himself with this one.
For all the rubbish and hyperbole we've read/watched over the past week from the mainstream news, thank fcuk for this El Reg article.
Most notable being Amber Rudd talking to BBC News with a complete lack of understanding of exactly what she was up against. What I mean by that is her general demeanor/tone would have been more akin to her describing King Kong holding a poor girl to ransom (a Fay Wray equivalent) at the top of Big Ben (Empire State Building for US readers).
I sometimes wonder if these actual Cobra meetings have the same tone of urgency (everyone talks like Rudd) or everyone just sits with a cup of tea, bleary eyed and goes, "Well, let's face it - we saw this coming weeks ago and did nothing".
Generally it nearly always comes down to incompetence, but I see so much sophistication in the layout/positioning of surveillance 'trip wire' devices in the UK (someone has spent lots of time and effort on their locations, even in very remote locations) that I'm starting to believe there is more at work here.
It even makes the idea that this code was "put out / left out in the wilds" of github - done (as a method) to scapegoat encryption, because the Government failed to win the argument for full access to encrypted messages after the Westminster Bridge attack - a plausible counter plan.
Well, if the far more important goal of full access to encrypted messages is your ultimate aim (which it clearly is) what's a few encrypted Windows machines between friends. A week of disruption for Plebs for a much bigger goal, seems a small price to pay to achieve what you really want.
From my perspective, Wannacry is just a PITA. My network is fully patched and firewalled, and we pentest regularly. Nothing happened to my network and after a while, nothing continued to happen.
What I spent much of last week doing, however, was reporting to the Board what we did to mitigate the attack, and meeting with vendors to get assurances that the planet continues to rotate. We are now bringing forward $20k of antimalware work to do a belts-and-braces improvement in protection against something we already protected against and I have to defer and reschedule other work that is more beneficial to the business. But heaven forbid that I be seen to take this nonchalantly. Because that would be BAD.
It's certainly true that we were mandated to have every computer checked for Y2K compliance - even the stuff that was only used to type up a few notes; not networked, no database, no vital records. Nothing that the millennium would have made any difference to. And there were a good few of these.
"even the stuff that was only used to type up a few notes; not networked, no database, no vital records."
I discovered more than a few of these would shit all over their bios after 1/1/2001 and effectively lose their date at every reboot. Not healthy to have computers which suddenly think it's 1/1/70 if the users don't notice.
re: Millenium Bug
3 memorable incidents which happened in the runup to that (memorable as in they happened to me):
1: A lot of NTP systems crashing in February 1999 as the seconds since january 1 1970 exceeded 2^31 +1, including every Allied Telesyn router using NTP and virtually the entire Internet in China going titsup for 24 hours as a result.
2: A lot of security systems discovered to lock up if the clock advanced to 9/9/99 (which was regarded as a test date) and stayed locked up forevermore.
3: The complete brainscrambling of the Palmerston North NEAX61E telephone switches (80,000+ lines plus national call switching) thanks to memory corruption which was only exposed when y2k software was loaded in and the switches rebooted. To compound it, that scrambled crap had been written to the backup tapes for 18 months or more with the only clean backup discovered being 24 months old - The entire area had no dialtone for 18 hours and restoration of the backup plus replay of all the changes made since that point took in excess of 6 weeks.
Here's my theory:
An employee takes their unpatched Win 7 laptop home and connects via VPN.
Meanwhile their kid has been messing with the family router in order to get their MineCraft/multiplayer server working and has enabled an Allow All port forwarding rule.
Perhaps the Windows firewall is also disabled or it switched to a more relaxed domain profile when the VPN connected.
So the laptop is now infected by a port scan. It then proceeds to infect the employee's mapped drives over VPN, which are unpatched Win 2008 shares, and from those to clients on the LAN.
Is this plausible? No need for the corporate firewall to have had SMB open. Perhaps the spread was even exacerbated by more people working at home on Fridays.
My God, even my home hub doesn't do anything as daft as that. No wonder everyone blamed some idiot clicking on a link in a phishing email - leaving vulnerable ports open like that is akin to having a door to the outside in an operating theatre and leaving it ajar so that the nurses can pop in and out for a quick fag part way through surgery - all manner of nasties can just blow in off the street.
Thing is, it could have just been one PC at one pharmacy in the arse end of nowhere, connected to the 'net via the dial-up modem they received in 1998. No firewall. No decent security setup. No ports blocked and no updates because they take all week to download. Then you VPN into N3 and suddenly it spreads to the whole system, because you're inside.
I have literally seen exactly this setup in the NHS when I worked for them about 5 years back, so it's not far-fetched. Many of the trusts themselves have very good external security, but are helpless if someone can gain access from inside; it's still run on fortress-style security principles rather than compartmentalized.
As a large org with BYOD policies and SMB enabled (with passwords on anything writeable), the risk is of someone getting infected externally and then scrambling the Samba file shares, despite them residing on *nixen.
Yes they're backed up every night and yes I have triggers picking up if too many SHA256 signatures change in any given file share, but the restoration time is still a hassle.
Vista has only just gone "end of life" - which means it's a sacking offence to connect one to the network here without written permission, but Win7 is still alive (barely), so there's still a risk.
Perhaps monitored canary traps/honeypots are an appropriate defence against this kind of thing.
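A sketch of that canary idea, assuming a couple of planted decoy files whose SHA-256 hashes are recorded between runs: if any decoy's hash changes (or a decoy disappears), something is rewriting files on the share and an alert should fire. The paths are invented examples, and in practice you would hook this into email or monitoring rather than a print statement.

```python
# Canary-file monitor (a sketch): plant decoy files on the shares, record
# their SHA-256 hashes, and alert if any decoy changes or goes missing -
# ransomware encrypting the share would rewrite them. Paths are made-up
# examples; wire the alert into email/monitoring in real use.
import hashlib
import json
import os

CANARIES = [
    "/srv/shares/finance/~do_not_touch_0001.docx",   # example decoy
    "/srv/shares/hr/~do_not_touch_0002.xlsx",        # example decoy
]
STATE_FILE = "/var/lib/canary/hashes.json"           # example state path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_canaries():
    """Return a list of tripped canaries (changed or missing)."""
    try:
        with open(STATE_FILE) as f:
            known = json.load(f)
    except OSError:
        known = {}
    tripped, current = [], {}
    for path in CANARIES:
        try:
            digest = sha256_of(path)
        except OSError:
            tripped.append(path + " (missing)")
            continue
        current[path] = digest
        if path in known and known[path] != digest:
            tripped.append(path + " (contents changed)")
    os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
    with open(STATE_FILE, "w") as f:
        json.dump(current, f)
    return tripped

if __name__ == "__main__":
    hits = check_canaries()
    print("CANARY TRIPPED:" if hits else "All canaries intact.", hits or "")
```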