It can hardly be called hacking ...
... if the system has a huge sign on it that says "C'mon in!", now can it ... And especially not when there's a key under the welcome mat, the back doors are unlocked, and all the Windows are wide open.
The continued inability of organizations to patch security vulnerabilities in a timely manner, combined with guessable passwords and the spread of automated hacking tools, is making it pretty easy for miscreants, professionals, and thrill-seekers to break into corporate networks. This is according to the penetration-testing …
This has been the norm for decades now. As a consultant I have rarely found any serious attempt at continuous management of security. "Policies" are written but neither verified for efficacy nor followed, and ISO 27001 certification is often obtained on the basis of an ISMS that exists only on paper or as electrons somewhere. Most corporate cyber security consists of a mission statement and pure luck so far.
Don't turn your hand to government then, for there are only committees and publicity. Metrics, measurements, tests, verifications, reviews and improvements are either scams or never happened. Authorisations are concealed, and only claims and counter-claims are allowed to fight it out in the commercial media.
No wonder Gov+IT<=0
> About 60 per cent of the web application holes used were deemed critical
It seems to me that the basic problem is that systems are not designed with upgrades and patches in mind. You can play the IT support conversation in your head.
Hi CIO, I.T. here. We need to take the whole corporate internet presence offline to perform a vital security patch.
CIO But you did that last week!
No, that was the office system, and that was because of a bug in the email server.
CIO And the week before that?
That one was the database.
CIO Dang! So how long will we be offline?
It's hard to say; in theory only 30 minutes, but more likely an hour. If things go pear-shaped, maybe a week.
CIO I'm not signing off on that. Can't you check the patches on a non-critical computer first?
We tried, but it failed. Our web suite is running version 4.10.122b and the test systems are at 4.10.121a, so (obviously) it didn't work.
CIO No, you'll have to wait until there are more bug fixes; then we will take the system down and install all of them.
Don't you remember last November, when we tried that and it took 2 weeks to restore the service?
CIO My mind is made up. We were sold this package on the basis of 99.999% uptime. That's about a 26-second outage per month. You lot in IT blow through an entire year's worth of downtime every week. Find another solution <click>
" We were sold this package on the basis of 99.999% uptime."
And there's the flaw in the thinking. Start thinking in terms of useful availability and downtime: you're trying to manage for minimum loss of useful availability due to downtime. If downtime isn't planned - you had a hardware failure, you got hacked, whatever - then there's no guarantee of it falling into a time of minimum usage. Planned downtime can be arranged for when it will have minimum impact, and its purpose is to minimise the risk of unplanned downtime. But risk is harder to measure than uptime.
Furthermore, if five nines (or even four nines) uptime is so important, you should have your systems configured for High Availability so you can take part of it down for the upgrade without affecting the remainder, then flip to do the other half once you've verified the upgrade works.
Frankly, having a complete outage on such a critical system (for performing updates anyway) is inexcusable.
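The nines arithmetic in the exchange above is easy to sanity-check. A quick sketch (assuming a 30-day month; `downtime_budget_seconds` is just an illustrative helper name):

```python
# Allowed downtime at a given availability level.
def downtime_budget_seconds(availability: float, period_seconds: int) -> float:
    return period_seconds * (1.0 - availability)

MONTH = 30 * 24 * 3600   # 2,592,000 seconds in a 30-day month
YEAR = 365 * 24 * 3600

for label, avail in [("three nines", 0.999),
                     ("four nines", 0.9999),
                     ("five nines", 0.99999)]:
    print(f"{label}: {downtime_budget_seconds(avail, MONTH):.1f} s/month, "
          f"{downtime_budget_seconds(avail, YEAR) / 60:.1f} min/year")
```

Five nines works out to roughly 26 seconds a month, or a little over five minutes a year, which is why a single hour-long patch window blows through years' worth of budget.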
It was _sold_ to the board as uptime of 99.999%. It is not really business critical, although it would be best if failures were not in normal working hours; but the board got this warm fuzzy from the salesman that it was a really high-availability system, so you will get the same response: "What, again?" With the cheapness of hardware and comms these days there is no reason why all businesses do not have an identical shadow system, ideally in a different location, on different power, etc.
This is what I was about to post in this conversation: "Wait, if you need 99.999% uptime, wouldn't you have a DR/business-continuity protocol in place, with full data replication? Can't you just plan a switchover from the primary datacenter(s) to the backup centers, update the primary centers, then switch back and update the backup centers?"
you should have your systems configured for High Availability
Remember when you forcefully recommended that right at the beginning of the design phase, but it got nixed by the beancounters, and the project manager said there was no way the extra time could be allocated because the business-critical deadline had already been decided, and your boss said there was no chance he could clear someone from another project to help with QA, and the consultant said the guy at his office who was the HA expert had just that morning resigned after calling their boss a lying shitbiscuit? So of course it's your fault for not being able to provide the expected uptime.
Reminds me of a project about 10 years ago, in the early days of BPOS. One of the first actions was to clean up on-prem AD before implementing the schema updates. Reported back to the CTO: "Do you realize your PDC failed over 11 months ago?" His reply: "Speak to IT to find out why!"
The 5 nines uptime figure is all about availability, not downtime.
So if you have 2 sets of machines where either can handle the full load, you take set 1 down and patch/test.
Then bring that back into production and repeat for set 2.
I have seen services run at 99.999% when components had been down for a couple of hours that week for patching, because this was planned maintenance work.
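The two-set procedure described above can be sketched as a loop. Here `drain`, `patch`, `health_check` and `restore` are hypothetical stand-ins for whatever your load balancer and config management actually provide:

```python
def rolling_patch(sets, drain, patch, health_check, restore):
    """Patch one set at a time; the other set carries the full load meanwhile."""
    for s in sets:
        drain(s)                 # take the set out of the load balancer
        patch(s)                 # apply updates while it carries no traffic
        if not health_check(s):  # verify before it serves users again
            restore(s)           # roll back rather than promote a bad patch
            raise RuntimeError(f"patch failed health check on {s}")

# Illustrative dry run with stub actions:
events = []
rolling_patch(
    ["set 1", "set 2"],
    drain=lambda s: events.append(("drain", s)),
    patch=lambda s: events.append(("patch", s)),
    health_check=lambda s: True,
    restore=lambda s: events.append(("restore", s)),
)
```

The point of the health check before re-admission is exactly the scenario from the dialogue: a failed patch stays contained to the drained set instead of becoming unplanned downtime.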
2 (or more) sets of machines is a good idea. The trouble comes when you need to upgrade or replace one set and the beancounters notice that the system only switched to that set once or twice in the year.
You can bet if that happens, they’ll be asking why you need the machines.
You can usually persuade them by telling a horror story about what would happen if the only machine left running the system failed.
Then they outsource the system "to the cloud", which promises 99.999% but delivers only 99.999% "on average", not specifically to you, and has no SLA indemnity.
Oh, and your job goes too.
Of course they are then getting a worse service for too much money, but that still won't bring your job back.
In a corporate environment this is a hard to solve problem.
At home I patch now and ask questions later, as is best practice.
But my last employer was dependent on software from vendors who did not get computer security at all, and some of them are big names in the field. As a result we were forced to run versions of macOS and other software that we knew were insecure.
Then there’s Windows and Active Directory. Do they support dictionary checking passwords out of the box now? If not, why not?
As I understand it, you can enable this kind of checking: https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-password-protection-is-now-generally-available/ba-p/377487
Yes, it says "Azure AD", but it offers the same for an on-premises (or at least hybrid) environment.
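For what it's worth, the core of a banned-word check like the linked feature performs can be approximated in a few lines. The word list and substitution map below are purely illustrative, not Microsoft's actual data or algorithm:

```python
# Illustrative banned-word password check: normalise common character
# substitutions, then look for known-bad words as substrings.
BANNED = {"password", "company", "winter", "summer"}
SUBS = str.maketrans("@03$1!", "aoesii")  # undo common leetspeak swaps

def contains_banned_word(candidate: str) -> bool:
    normalized = candidate.lower().translate(SUBS)
    return any(word in normalized for word in BANNED)
```

So "P@ssw0rd2020" normalises to something containing "password" and gets rejected, which is far more useful than the usual "one uppercase, one digit" complexity rules.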
If you're talking about password strength rules, their only effect on me is that after I carefully and securely make up an utterly random all-letter password, the rule rejects it.
The other day I hit a system which rejects any password that uses the same letter twice in a row. So mrmxyzpplx isn't random enough for that. (And because it's nearly the name of an arch-enemy of Superman in the comics, but never mind that.)
Also, my employer recently subscribed me to an online security tutorial. So I made up a password for that, but no! My password is too long!
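That "no letter twice in a row" rule is a one-line regex, and a little arithmetic shows why it punishes genuinely random passwords (the probability figure is my own back-of-envelope estimate):

```python
import re

def has_double(pw: str) -> bool:
    """True if any character appears twice in a row."""
    return re.search(r"(.)\1", pw) is not None

# Probability that a truly random 10-character lowercase password
# contains at least one doubled pair: 1 - (25/26)^9, roughly 30%.
p_reject = 1 - (25 / 26) ** 9
```

In other words, a rule like that throws away nearly a third of perfectly random 10-letter passwords, while saying nothing about whether the password is guessable.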
The problem is that even if you follow security best practices yourself someone else will most likely leak your details.
I was just looking through the list of major data breaches on Wikipedia, and I counted almost a dozen breaches there where either the companies themselves informed me that they had lost my details, or my details were listed on Have I Been Pwned.
(And these are just the ones I am aware of.)
https://en.wikipedia.org/wiki/List_of_data_breaches
https://haveibeenpwned.com/
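Incidentally, the Have I Been Pwned password check is built so you never send the password, or even its full hash: only the first five hex characters of the SHA-1 go over the wire, and the matching suffixes come back for local comparison (k-anonymity). A sketch of the client-side split; the HTTP request itself is left out so this runs offline:

```python
import hashlib

def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 into the 5-char prefix sent to the
    api.pwnedpasswords.com/range/ endpoint and the suffix compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
# A GET to https://api.pwnedpasswords.com/range/5BAA6 returns candidate
# suffixes; "password" is, unsurprisingly, in the breach corpus.
```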
Ok
Google / Firefox / Edge & Safari
You guys really need to sort this **** out
Enforce encryption, scan for CDN JavaScript files, and block them.
If you need jQuery or any other framework, YOU should be hosting it, and it should be served over an encrypted connection.
It seems everyone knows the patches exist, but they don't apply them.
Well, if the big four browsers put a block on sites using older, unpatched versions of libraries, then the CLIENT would take it more seriously.
Either take it seriously, or remove the obligation on financial institutions to refund lost money and make the person responsible.
So many years of the same issues
No it is not, when you are running thousands of applications, many of which are older for all sorts of genuine and spurious reasons. It is all well and good to try to enforce client access blocking in the way you suggest, until something breaks. If the system that cannot be accessed does not have a patch, then you have no alternative but to roll back all the clients. That actually increases the risk, as you now have a far higher number of devices, also in the hands of users, that are now at risk.
Patching is important and, as we have seen, all too often is not maintained in a rigorous way. Where patches are available they should be applied in a timely manner; there is no excuse for shirking that responsibility, but it is simply not possible to have everything at the very latest patch level within weeks of release. Maybe in years to come regulation may start to enforce that, but it will come at a huge financial cost. And this brings us full circle: often the reason there are unpatched applications in use is that the organisations using them cannot afford to upgrade to the latest version, or they are linked to other systems/machinery that itself does not support the latest and greatest.