Shhh Sonia, stop making sense. The management lynch mobs will be along for you shortly :-)
If IT has a reputation as the gatekeeper, the security department is the one providing the locks and barbed wire. End users think IT security is a hassle: complex passwords, password expiry and multi-factor authentication are tolerated when they are made mandatory, but nobody is thrilled about it. But look at it from the …
Monday 9th April 2018 09:27 GMT A Non e-mouse
Fit for purpose
Your security people shouldn't just be dictating what users should or should not be doing. They should be engaging with the users and finding out what the users need to do. Then the security people should come up with appropriate security controls and test them with the users.
If you fail to engage with the users, they'll see you as a blockage and try to work around you, usually resulting in a horrible mess of personal cloud accounts (Gmail, Dropbox, etc.) that corporate IT has no knowledge of.
Monday 9th April 2018 09:46 GMT Korev
Monday 9th April 2018 09:53 GMT MJB7
Monday 9th April 2018 10:15 GMT A Non e-mouse
Monday 9th April 2018 11:37 GMT handleoclast
They seem to be stuck in the last century and believe that forcing users to change their passwords every 30 days is best practice.
Show your auditors this advice from CESG (a part of NCSC, which is a part of GCHQ).
If they refuse to accept what it says, I suggest you expire your auditors and set new ones. No need to do it every 30 days, though, unless the new ones are as fuckwitted as the old ones.
Monday 9th April 2018 15:59 GMT Robert Helpmann??
We use 2FA for everything where I work. From getting on the network to accessing shares to logging into internal applications, it's all 2FA. We still have to change our passwords on a regular basis, which means that most users will have forgotten what they set them to weeks ago and need to call the help desk to get them reset. I cannot think of a single person who enjoys this drill, and it is completely unnecessary. I'm trying to get the process changed so that I don't have to endure it any more and someone up the ladder can take credit for my idea (a win-win situation in my environment).
Monday 9th April 2018 13:56 GMT GnuTzu
Monday 9th April 2018 10:40 GMT Anonymous Coward
Our security department are completely rabid and totally disengaged from us. They assume we're all desktop office users (our department aren't: we have specialised hardware and software that doesn't fit the regular clerical user model), so we have to break the rules every day to do our jobs. The Head of Department asked for some policy changes so we could work within the rules and was threatened with dismissal for questioning company policy.
Last year, IT Security decided to buy an off-the-shelf VPN key security product to stop VPN certificates from being lifted from people's machines, without offering any evidence that any such incident had ever occurred. They didn't bother to ask anyone what they wanted, and the product chosen doesn't support Linux (shutting most of our developers out of the VPN).

A discussion ensued on how to work around this new mandated product to get the keys into Linux. The Security department went on a rampage, likening moving keys from your Windows partition to your Linux partition to ram-raiding the front of the building, and vowing to seek the dismissal of anyone they found doing it. The developers now don't work from home any more.

Password rules are now so complex, and include "mustn't be similar to any password you've ever used before", that users write passwords down. Security seem to be living in a different world from the people using the technology, and are actively pushing users towards circumvention.
Monday 9th April 2018 13:07 GMT Anonymous Coward
Sounds a little bit like a place I once worked.
New security policy came in, with monitoring tools pushed on to peoples machines, all without any consultation of the users that this would impact.
All the developer PCs (mix of desktops and laptops at the time (about 15 years ago)), failed the audit, and we were all instructed to remove the offending software. (Dev tools, mock systems, VMs, 3rd party text editors, IDEs, databases etc.).
We did state that we couldn't do our jobs without the software, but this went unheard, so we started to comply while also contacting the various projects we were working on to let them know we could no longer finish existing work or start new work, promptly bringing several projects to a halt (these were multi-million-dollar projects; I was working for a large US corp at the time).
The PMs escalated, a few management meetings up to board level ensued, and within a week all personnel classed as developers were given a waiver for the new policy, at which point we could put all the tools back (never actually removed, of course!) and we started work again.
This was much to the annoyance of the security team, who were promptly blamed for the delayed projects and the financial impact, with the root cause later being put down to their lack of engagement with the rest of the business.
Eventually we created an approved list of tools, and the policy was reintroduced, but we made sure we controlled the 'dev tool' list, and not the security team itself.
Monday 9th April 2018 18:19 GMT Anonymous Coward
Oh, so common when IT for R&D is under the control of bean-counters and other grunt-and-poke types. We had a new edict that said "Only IT can install software." Small company, but IT was under the purview of the bean-counters, who hired the IT managers without consultation. Similar results: the edict became "Only install authorised software", and all s/w in R&D is authorised.
Monday 9th April 2018 14:06 GMT GnuTzu
From the other perspective, there are those who go out and purchase IT products without consulting security professionals or even doing a feasibility study, let alone a risk analysis. And when the security people say that the product will actually break existing security controls, they get told that it has already been paid for and that they'll have to go ahead and spend more to make it work.

And this is happening with professional acquisitions people and departments. Oh, the dysfunctional organisations are everywhere. The responsibility for getting departments to work smoothly with each other sits at a pay grade well above that of those who must suffer such nonsense. You would think a higher-paid executive would have better sense, but so many of them seem totally oblivious to the inefficiencies created by the crap they keep rolling downhill.
Monday 9th April 2018 11:01 GMT tip pc
Let staff understand the reasons behind the security & have a process in place to challenge it when needed.
I've worked in places with very stupid reasons for doing things in a so-called secure way. Often it's because it's always been that way, or there's some folklore story about some person who was really smart, or some response to some audit in the dim and distant past.

There should be a reason for the security and a reason for enacting it in that way. Often working practices or modern technology can mitigate those reasons, and everyone can make their day jobs that bit easier. Often security is just theatre, and no one is willing to make changes to ensure it's actually relevant.
A classic example is organisations that use proxies yet whose internal DNS will resolve internet addresses too. The only devices that should be allowed out and able to resolve internet addresses are the proxies and, in some cases, some stuff (not all) in the DMZ. If you're resolving google.com from the CLI at your desk, something MAY BE* wrong.
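A quick way to sanity-check this on a desktop is to try resolving an internal name and an external one and compare the results. This is only an illustrative sketch: the hostnames and the verdict strings are made up, and the "policy" here is just the one described above, not any standard.

```python
import socket

def resolves(name, resolver=socket.gethostbyname):
    """True if `name` resolves via `resolver`, False on DNS failure."""
    try:
        resolver(name)
        return True
    except OSError:  # socket.gaierror is a subclass of OSError
        return False

def classify(internal_ok, external_ok):
    """Rough verdict on the DNS posture from two resolution attempts."""
    if external_ok:
        return "desktop resolves external names: something MAY BE wrong"
    if internal_ok:
        return "internal-only resolution: consistent with proxy-only egress"
    return "nothing resolves: check the resolver configuration"

if __name__ == "__main__":
    # On a real desktop you would pass resolves("intranet.example.corp")
    # and resolves("google.com"); stubbed here to avoid live lookups.
    print(classify(internal_ok=True, external_ok=False))
```

Split-horizon DNS (internal resolvers that simply refuse external names) is the usual way to enforce the proxy-only arrangement.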
Monday 9th April 2018 15:51 GMT Daniel 18
Re: Let staff understand the reasons behind the security & a process ....
"A classic are organisations that use proxies yet their internal DNS will resolve internet addresses too. The only devices that should be allowed out & able to resolve internet addresses should be the proxies and in some cases some stuff (not all) in the DMZ. If your resolving google.com from the cli at your desk, something MAYBE* wrong."
This description is incomplete and ambiguous. The asterisk hints that you may already know this and have decided that a complete explanation to an arbitrary IT audience may be harder, longer, and more trouble than it is worth.
That said, apparent desktop behaviour says very little about the actual infrastructure architecture hiding behind it, particularly to someone who is not an infrastructure nerd.
Monday 9th April 2018 11:19 GMT LegalAlien
Tuesday 10th April 2018 05:22 GMT Christian Berger
Many IT-departments choose to have neither of those:
For example ours forces us to use insecure systems (we have to use Acrobat Reader for PDF, as well as Office Products) it filters outgoing E-Mail for document types like .wav. It's probably spending a lot of money for "security solutions" which do nothing, and their e-mail solution can't handle mailboxes larger than 2 Gigabytes.
The optimal solution changes depending on what department you are talking about. For an office department you might be able to just lock down Windows installations, but for technical departments the easier and much more secure way is to use Linux or some BSD. Nobody in a technical department will care about compatibility bugs in evince or even consider sending HTML E-Mail.
Monday 9th April 2018 12:11 GMT hitchslap
So the advice from the NCSC above does have a point...however it also misses a lot.
I get it: strong passwords changed frequently are a pain, cause disruption, etc. But there is a school of thought that writing complex and hard-to-remember passwords down can be a good thing in certain circumstances: https://www.schneier.com/blog/archives/2005/06/write_down_your.html
I know of one very large company now advocating the writing down of passwords.
Also moving away from a preventative to a detective control is a risky business and always makes me feel uncomfortable. Is your organisation capable of reacting to such detected anomalies? I'll bet it's not nearly as good as you'd like to think it is.
This advice does add something to the conversation but will likely be abused as there are so many other considerations that people won't take into account when deciding to bin their password reset policy.
Like all security controls, it's a pick-and-mix of overlapping controls that are workable for the user and get the organisation into a compliant state and within its risk appetite.
Poor show NCSC that shows a lack of real world experience.
Monday 9th April 2018 16:08 GMT Paul Crawford
Re: Conflicting Advice
OK, so let's say you force your users to change passwords every 30 days, and further assume this does not lead to piss-poor practice in terms of post-it notes, easy-to-guess choices, or IT support getting lax about vetting those requesting a reminder/renewal. Now you have an average time from breach to password change of 15 days.
Do you really think that any competent bad guys won't have totally screwed your systems in under 15 days? Not put in shadow accounts and/or key-logging software? Not used network access to compromise all those unpatched* devices you don't/can't have AV on like printers, IoT crap, etc?
So how much more useful is this compared to password changes once per year, or only on employee changes or suspected breaches?
[*] when did you last get an update for any of your printers with built-in web servers?
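The "15 days" figure above is just the midpoint of the rotation cycle: if a breach lands at a uniformly random moment within an N-day cycle, the credential survives N/2 days on average before the next forced change. A throwaway simulation makes the point for a few rotation periods:

```python
# Back-of-envelope for the argument above: expected time from breach to the
# next scheduled password change, assuming the breach occurs at a uniformly
# random point in the rotation cycle. Simulation only; not from real data.
import random

def mean_exposure(rotation_days, trials=100_000, seed=42):
    rng = random.Random(seed)
    # Time remaining until the next scheduled change, averaged over breaches.
    return sum(rotation_days - rng.uniform(0, rotation_days)
               for _ in range(trials)) / trials

for n in (30, 90, 365):
    print(f"rotate every {n:3d} days -> ~{mean_exposure(n):.1f} days of exposure")
```

Which is the commenter's point in miniature: 30-day rotation buys an average window of ~15 days versus ~180 for annual rotation, and a competent attacker needs nowhere near either.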
Monday 9th April 2018 18:58 GMT Anonymous Coward
on password changes and security policies
I'm usually rather more in favor of harsher security policies, for different reasons. I always try to ensure that people have what they need to work within my security system. However, sometimes you need to make a system secure because your users won't care.

Take, for example, the password reset system. I'm not in favor of it for the purpose of blocking access to outside infiltrators; they need only ten minutes inside to do damage, so I want things like 2FA to block them. I want password resets to make things secure from the inside. If passwords are changed once a year, it will prevent situations like the one at an employer of mine from a few years ago. If you were to track me down and identify that employer, I can virtually guarantee that I could get you access to 90% of their systems in four password tries. I'm not evil, but anyone working at my employer might be; I therefore have to plan for that occasion.
The same is true for password strength. Even writing a password down is better than using "123456789": an external actor can't read your sticky note, but they can crack a ridiculous password. I'm also certain that users can remember sufficiently long series of characters if the system is important enough. That was another problem with my former employer: they have weak passwords, they never change their passwords, and worse still, they reuse a lot of them. I haven't worked for them in years, but their security is terrible. They had access points whose credentials were unknown, so someone tried the default from a Google search. Of course, those credentials worked. They still did weeks later. Somehow realizing this was not an impetus for the company to get it fixed, and neither were my repeated suggestions to that effect, which is partly why I left. A standard password policy would prevent this.
In some cases, the users will just have to put up with security related inconveniences. The security staff should have a way for users who need to do something not permitted in the existing framework to ask them for help, and they need to be supportive, but users aren't often allowed to just decide they aren't going to adhere to our standards. If they have a good reason, the security staff should want to know about it and help out. If they don't have a good reason, then they are being obstructive and putting customers' data at risk. Data that I've been tasked to protect, especially when some of it is sensitive, is more important than the time or frustration of a user who has to retrieve and enter 2FA tokens.
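The 2FA tokens this commenter leans on are typically TOTP codes (RFC 6238): an HMAC over the current 30-second window, truncated to six digits. A minimal stdlib-only sketch of the standard algorithm, not of any particular employer's system:

```python
# Minimal RFC 4226/6238 one-time-password sketch, verified against the
# published RFC test-vector secret. Illustrative, not production code.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP over the current time window."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step))

# RFC test-vector secret; at t=59s the 6-digit code is "287082".
print(totp(b"12345678901234567890", at=59))
```

Because the code changes every 30 seconds, a stolen password alone is useless to the outside infiltrator the commenter describes, which is exactly why they prefer 2FA over resets for that threat.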
Anonymous because I don't want it to be easy to track these statements about my former employer down.
Monday 9th April 2018 20:38 GMT Charles 9
Re: on password changes and security policies
But as the article notes, what happens when the person complaining is over IT's head, meaning they have the power to have IT forcefully rearranged? One of the biggest issues in security these days is snafus coming from up top, fostering a loophole culture.
Tuesday 10th April 2018 16:53 GMT Missing Semicolon
Re: on password changes and security policies
" I'm also certain that users can remember sufficiently long series of characters if the system is important enough"
No. No we can't.
How many systems are "important" enough? Who volunteers their system as not "important"? As a result, you might have 2, 3, 10?
Even if we use CorrectHorseBatteryStaple passwords, more than a couple is beyond most people.
But stupid IT "secure" passwords, which must contain symbols and numbers? Not a chance.
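The CorrectHorseBatteryStaple point is really entropy arithmetic: four words drawn uniformly from a 2048-word diceware-style list carry 4 × log₂(2048) = 44 bits, in the same ballpark as seven fully random printable characters but vastly easier to recall. A quick sketch of the sums (the list size and alphabet size are conventional assumptions, not universal constants):

```python
import math

def passphrase_bits(words, wordlist_size=2048):
    # Entropy assuming each word is drawn uniformly from the list.
    return words * math.log2(wordlist_size)

def random_char_bits(length, alphabet=94):
    # 94 printable ASCII characters, each chosen uniformly at random.
    return length * math.log2(alphabet)

print(f"4-word passphrase : {passphrase_bits(4):.0f} bits")    # 44 bits
print(f"7 random chars    : {random_char_bits(7):.1f} bits")   # ~45.9 bits
```

Which is the trade-off in a nutshell: the passphrase gives up a few bits per remembered "unit" in exchange for being memorable at all, and that only works if you don't need a dozen of them.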
Monday 9th April 2018 21:15 GMT Bob Dole (tm)
I truly don't understand why people bring up Cambridge Analytica as an example of how Facebook failed to protect their user info. Or even saying that CA was doing something naughty.
Facebook essentially told all of those companies "Look at this data you can get from our users!" Come on in and pay us for it!
This wasn't a failure; it was absolutely intentional, and the only reason it became a "big" deal is that some people didn't like which political party was dipping its toes in the water this time.
So, please stop equating that data mining with a security failure. It's not a security failure if it's by design.
Tuesday 10th April 2018 03:08 GMT eldakka
> Facebook essentially told all of those companies "Look at this data you can get from our users!" Come on in and pay us for it!
Facebook does not sell user data to 3rd parties.
It sells access to users.
For example, if you want to advertise a product to users, you don't go to Facebook and buy a bunch of raw data, analyse it, then target those users.
No, you go to Facebook and say "I want you to present this ad to users who fit this profile".
However, Facebook would previously allow academics access to the raw data, for no charge, specifically for the purposes of research.
However, and this is where Facebook fucked it up:

1) That access was too broad (i.e. a researcher could get data on the people who agreed to the research, as well as on those people's friends, who might not have agreed).

2) Facebook never audited those researchers to ensure they complied with the academic-access-for-research T&Cs.

3) Facebook was allegedly willfully blind to what was happening with this "for academic purposes" data and the black market that arose around it.
One of these researchers, not Facebook, sold the data to Cambridge Analytica.
Facebook did not sell the data, either to CA or to the researchers.
However, they are still responsible for that data: the leak happened because of their allegedly willful blindness, their lax attitude to security, and their "all your data is belong to us" attitude. They are therefore still complicit in this situation.
Monday 9th April 2018 22:11 GMT Anonymous Coward
Your company's security is only as good as the worst of your online business partners.
I had an acquaintance with a customer who brought in an unsecured XP machine (no AV, even) that was handling all their email correspondence, including data that would be very dangerous if exposed. You might try to survey all the companies and people you send "security needed" information to, but that might not be popular with your sales people or your management, and you'll never get the truth.
Nice article, but the biggest breakdowns in security have lately almost always been from 3rd parties.
Tuesday 10th April 2018 08:34 GMT Anonymous Coward
Tuesday 10th April 2018 09:36 GMT Anonymous Coward
Surely an overreaction?
I can't help thinking that most people are decent coves just trying to do the right thing, so isn't this whole security thing just a storm in a teacup? I mean, seriously, who can remember the last real security breach we had that wasn't just a simple misunderstanding that could've been solved by a cup of tea, a rueful smile and a handshake: "Sorry about clearing out your funds, old chap, let me sport you a G&T"; "I say, most decent of you, particularly as I can't now afford one myself!"; "Yes, let's just shake hands on that, and I'll direct your wife and daughters to the workhouse whilst you imbibe your (final) beverage."
We all remember the days when most banking was done at the golf club or at the Gentlemen's club, by chaps who knew and trusted each other - and we all remember that those days were better days too. You got credit if you knew the right people, pure and simple. No talking dogs, no dead-eyed nagging women.
Every time you place a password on something, you're pretty much damning the whole human race with your arrogance. Every time you select a PIN number where the digits aren't all the same, you're just being boastful. And a gentleman never boasts. It just isn't done. If you must use a password, at least make it something easy to guess, so a friend can step in to help and do the right thing, if you're indisposed.
So come on, you security folks! I know you find it all interesting and so forth, but you're getting a touch tiresome; so just sit down, take a deep breath, maybe stretch out on a comfortable sofa, and you'll find most problems simply just go away.
Anyway fellows, I've probably banged on quite enough; I'm off to air my humble thoughts on immigration, foreigners and the poor on the Daily Mail site.