In an ideal world
they are orthogonal. However, with management in the picture, they are inversely proportional.
Our recent article about the fine line between security and usability started some very interesting discussions and active criticism, most of which was targeted at us, suggesting that security and usability do not form a one-or-the-other type of relationship (or are at least far more independent of each other than dependent). We …
"decades of applications being able to run at administrator equivalent levels has resulted in a glut of software that claims to require administrator access to successfully install and operate on Windows."
That is no excuse for developers releasing applications in 2007 (!!!) that require administrator access. Nor, for that matter, does it excuse developers who have released applications since 2002, or even since 2000.
User Account Control only exists because of lazy non-Microsoft developers. And to a lesser extent, lazy Microsoft non-OS developers.
Further, good security can enhance usability. Good apps are less likely to scare the user away from doing something when the user knows the computer won't break.
Want to disable User Account Control? Log on to Vista with a Standard user account. No more UAC prompts, and a safer computing experience to boot.
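For developers who want to meet that standard, here is a minimal sketch (assuming the Windows SDK; none of this comes from the posts above) of how a program can detect whether it is running elevated and adapt, instead of demanding administrator access up front:

    /* A minimal sketch, assuming the Windows SDK: ask whether the current
       process token belongs to the local Administrators group, so the
       application can adapt rather than insist on elevation. */
    #include <windows.h>
    #include <stdio.h>

    static BOOL running_as_admin(void)
    {
        BOOL is_admin = FALSE;
        PSID admin_group = NULL;
        SID_IDENTIFIER_AUTHORITY nt_authority = SECURITY_NT_AUTHORITY;

        /* Build the well-known SID for BUILTIN\Administrators. */
        if (AllocateAndInitializeSid(&nt_authority, 2,
                SECURITY_BUILTIN_DOMAIN_RID, DOMAIN_ALIAS_RID_ADMINS,
                0, 0, 0, 0, 0, 0, &admin_group)) {
            if (!CheckTokenMembership(NULL, admin_group, &is_admin))
                is_admin = FALSE;
            FreeSid(admin_group);
        }
        return is_admin;
    }

    int main(void)
    {
        if (running_as_admin())
            printf("Elevated - do you really need to be?\n");
        else
            printf("Standard user - write to per-user locations only.\n");
        return 0;
    }

An application that takes the second branch never has to trigger a UAC prompt at all.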
"they are orthogonal. However with management in the picture they are inversely proportional." .... I would agree, Tom, for an ideal world too .
"The net result is that the user encounters a usability issue with their shiny new operating system, as a security component seems to be running in overdrive and actually reducing the efficiency of the user with seemingly constant interruption."
Send IT on a Sabbatical. Follow the Journey Magical Mystery Turing. Buy IT Off for a While.
"This is a critical problem as it allows a lower privileged user, who has access to operate access, the ability to take full control of the system - a problem when it comes to shared environments like web hosts." MeThinks lower privileged user is Elevated to Challenge Self Doubt. AIQuantum BetaTest of Oneself 42 FailSafe All Ways Always. A Binary Pre-Cogniscence for HyperVision ? .... on the Greenwich Veridian.
Stop being silly. Making security unusable is how you get security breaches. I've worked for US Defense contractors and use a VERY secure and VERY usable system for our VERY secure software development interactions. I've been doing this stuff for a couple of decades, I have good judgment, which comes from experience, and a lot of my experience comes from bad judgment.
Of course you introduce a bit of annoyance when you increase security, but a well-designed system (like mine - polishes fingernails on lapel) works by seamlessly presenting things that you CAN do, and not even letting on that there are other things that you can't do. I use modern systems like WANs, RADIUS, VPNs and security fobs.
Make doing things the right way easy through applied and inventive affordances. That avoids cock-ups like disks being sent through the mail unencrypted (which is EXACTLY the kind of thing that happens when you make security unusable).
This is not new stuff, folks. Just because geeks can't do decent UI is no excuse for making a silly pronouncement like "Usability or Security: Pick One."
Gordon, dare I say you've *woefully* understated your case?
MS documentation since NT 3.1 has made it clear that Windows programs should respect the distinction between administrators and ordinary users. Since Win95 and Win32, there has been no excuse not to include calls to the relevant APIs. Debugging under DOS-based Windows was always such a painfully shit experience that surely no-one did it after NT came out, no? So the writing has been on the wall for over a decade now.
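To make that concrete, here is a hedged sketch of the per-user discipline those APIs enable (assuming the Windows SDK; the "ExampleApp" and "settings.ini" names are invented for illustration) - settings go under the user's profile, not under Program Files:

    /* A minimal sketch, assuming the Windows SDK: store per-user settings
       under %APPDATA% via the shell API, so the program installs and runs
       without administrator rights. */
    #include <windows.h>
    #include <shlobj.h>
    #include <stdio.h>

    int main(void)
    {
        char base[MAX_PATH], full[MAX_PATH + 64];

        /* Resolve the current user's application-data folder. */
        if (FAILED(SHGetFolderPathA(NULL, CSIDL_APPDATA, NULL,
                                    SHGFP_TYPE_CURRENT, base))) {
            fprintf(stderr, "could not resolve the profile folder\n");
            return 1;
        }

        snprintf(full, sizeof(full), "%s\\ExampleApp", base);
        CreateDirectoryA(full, NULL);      /* per-user, no elevation needed */

        snprintf(full, sizeof(full), "%s\\ExampleApp\\settings.ini", base);
        FILE *f = fopen(full, "w");        /* writable as a standard user */
        if (f) {
            fputs("[ui]\ntheme=default\n", f);
            fclose(f);
        }
        return 0;
    }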
So why do we still have so much crap software?
Firstly, most programmers are idiots. Readers of El Reg are obviously a cut above the average, but face it: most software is written by people who don't understand the connection between the 99% spam in their inbox and their own product's insistence on running as an administrator.
Secondly, if MS try to enforce the rules, the people who get hurt are the end-users (who are MS upgrade customers) and not the idiotic programmers. That means MS have very little leverage over the idiots.
*Fortunately*, we now have botnets that threaten to destroy the entire IT ecosystem unless major vendors like MS change their attitude, break thousands of crap programs, and make it painfully hard for end-users to do what they want to do. We might start to see some progress.
"Does your new code work?
By Tom
Posted Thursday 22nd November 2007 22:05 GMT
I don't know, I don't have the security access to test it...
You can book £6 billion to that without writing a line of code!"
Of Course IT works, dread naught, Tom. And that's a lot of 00s for not writing code lines. Must be for Wizardry instead... Internal Investment
the reason so much ISV code circumvents the API is optimization, driven by competition.
if third-party software only addressed APIs, all the calls would occur at the same speed on identical hardware/OS, and the quality of the ISV's software would become the sole source of competitive advantage.
that would require retaining, training and investing in your programming staff, instead of farming out the work, on an ad-hoc contract basis, to inexpensive code monkeys in a distant cube farm somewhere in the developing world (and they're lucky if they have cubes). it's cheaper to just hack the software into the guts of the operating system, ignoring the API security and abstraction layers altogether. that way, the ISV saves money and time; the code is closed-source, so if it's crap, nobody sees it; and it can always be patched later.
please note that most ISVs are businesses first, software designers a distant second. in terms of project-management methodology, time and cost are highly significant to them, and scope is quite flexible. their primary goal is to get shiny product to market (and then sell it to uninformed, credulous middle management in as many organizations as possible), not to create a good or great product (that would be nice, but it's not nearly as important).
having dealt with the dog that is Vista (want your computer to run 4 times faster? try removing Vista and installing XP, go on, i dare you...), i can easily understand why ISVs would try to optimize around the clunky house that Allchin built. i hope it's the DRM in the kernel that makes it so slow, and they tear that out next year when the rest of the media content oligopoly caves on that front. barring that, i fail to see the value of moving from XP.
full disclosure:
i turned to unix/Linux after almost 20 years of supporting MS products. i made the transition to preserve what is left of my sanity. so far, so good.
In physical security we have a saying:
"Economy and Efficiency Are Not The
Sole Objectives of Security Administration".
Within reason, who cares if security and efficiency conflict? If we REALLY wanted to make things efficient, we'd eliminate passwords altogether, but we'd reach a point of diminishing returns where our efficiency would lead to chaos. The computer security sector needs to learn this principle better. We can only converge so far before we put all of our eggs in one vulnerable basket, and we can only go so far in saving employees from the harsh and cruel burden of using a secure password, or following other good practices, before we've gone too far.
As for the effect on developers, poor dears! Take the time to get it right before releasing it. Then there's the arrogance of too many IT managers who insist to me that their networks are secure when I know better. If you can't catch that zero-day virus I send you, you'll fall to the bot DoS attack I unleash. You can kid management but you can't kid your colleagues. Every concession to economy or efficiency makes us vulnerable.
Our whole society seems to be going the wrong way. We've taken so many jobs offshore for the sake of profit that soon we won't have anybody well enough paid in the US to buy the products made abroad. It's all about money. If software isn't cheap and efficient and idiot proof for employees to use no one in management will buy it so we make it "user friendly" but turn it over to thoughtless idiots to use.
Make software developers civilly liable, through major court judgments, for the results of their insecure software, and business owners responsible for the carelessness of employees who lose my credit card data; establish national standards for how much security is enough, and we'll begin to solve this problem.

Stop complaining and go change your password to something other than the last four digits of your social security number! Quit trying to sell everyone on the desirability of converging everything to one big network in the sky and take a lesson from experience - redundancy is the key, even if it is less efficient or costs more. Quit thinking of your career development and think of quality workmanship at every level.

If we don't tighten security, whatever the cost, some group of thugs will do to us what million-man armies failed to do. It starts with critical infrastructure, of course, but it is equally important at every level of American business. Someday we could be brought to our knees by an attack on the banking system, for example. In 1975 the worst-case scenario for the Federal Reserve was a fire in the check clearing center that would stop American commerce in its tracks: closing the center for even a few hours would cause major market fluctuations and cost businesses billions in bounced-check fees. Today the worst-case scenario is some jerk not doing his job to maintain computer security. And if you thought a fire in the check clearing center was scary, asking employees to be responsible for security because the software developer wanted to make his product easy to use is really scary.
Begin to treat computer security as the national security problem that it is and quit worrying about costing businesses money, or soon you'll be enrolling in Chinese language classes to be able to communicate with your new government leaders.
Well, security is diametrically opposed to usability; one need only look at the extremes of each to realise this.
Though of course, that particular phrase has led to convoluted security being accepted where security exists that is quite simple to work with after deployment. And it has led to security not being adopted for fear of drops in productivity.
The phrase 'security by obscurity is no security' is another damaging one: on the face of it the phrase is true, but on inspection one finds that all security is via obfuscation (encryption is obfuscation).
I am not sure whether this 'security and usability are orthogonal' line will catch on, and I do not think it is a truism. But I know what they are trying to do. They are trying to say security does not add usability problems - well, it does. But it doesn't have to infringe upon usability as much as the inversely proportional camp thinks it does.
Let's take the email retrieval problem:
A simple usable solution is to connect to the system giving a password in the plain for a user account on the system.
Now this is not secure, but you will be able to pick up your email in any country in the world, using nearly any terminal. And it can be configured server side in minutes, only requiring the user to remember a password.
A secure solution is to connect to the system via TLS with a trusted third-party certificate, and exchange a CRAM-MD5 or DIGEST-MD5 password for a SASL user not connected to the system user accounts.
Now this is fairly secure, but you will have to have certificate information on the client side, your mail software will have to support TLS and DIGEST-MD5 or CRAM-MD5 password exchange, and the setup on the server side is also more complex. You probably won't be able to retrieve your mail from any terminal in the world, so your usability has dropped.
But using this system day in, day out hardly affects usability if you are connecting from the same system each time, configured to make the process automatic.
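For a sense of what the secure variant looks like from the client side, here is a minimal sketch (assuming OpenSSL 1.1 or later; the host mail.example.com is hypothetical) that establishes only the verified TLS transport to a POP3-over-TLS port - the CRAM-MD5 or DIGEST-MD5 SASL exchange the comment describes would then ride on top of it:

    /* A minimal sketch, assuming OpenSSL 1.1+: connect to a POP3S port
       with certificate verification. The SASL password exchange would
       follow on this connection; it is not shown here. */
    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/err.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (!ctx) { ERR_print_errors_fp(stderr); return 1; }

        /* Verify the server against the system's trusted CA store. */
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
        SSL_CTX_set_default_verify_paths(ctx);

        BIO *bio = BIO_new_ssl_connect(ctx);
        BIO_set_conn_hostname(bio, "mail.example.com:995");

        SSL *ssl = NULL;
        BIO_get_ssl(bio, &ssl);
        SSL_set1_host(ssl, "mail.example.com");  /* check the cert's name */

        if (BIO_do_connect(bio) <= 0) {          /* TCP connect + handshake */
            ERR_print_errors_fp(stderr);
            BIO_free_all(bio);
            SSL_CTX_free(ctx);
            return 1;
        }

        char buf[512];
        int n = BIO_read(bio, buf, sizeof(buf) - 1);  /* "+OK" greeting */
        if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }

Once the certificate and hostname details are configured, all of this runs without any input from the user, which is the point being made about day-to-day usability.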
Under normal circumstances, and in regard to the above example, the extra effort for security diminishes day by day, with no extra effort required from the user, so yes, that could be seen as orthogonal.
But the server now contains a few extra systems, and whilst they can be configured to be accessible only from the local host, they represent a way to escalate privilege in the system. So, has the security of the system actually been increased as much as one would initially give it credit for?
And this is the Pandora's box of security: we cannot really measure security; it is not a science. Security is an art, and it seems that we are at the whim of fate in most of what we do.
Security really requires vigilance, intuition, deep technical knowledge, early identification of vulnerability and a lot of luck. There are some really bad configurations which are more likely to lead to compromise, and there are clever (situation and time dependent) configurations that tend to reduce the chance of compromise. But there is never anything that is 100% secure. And there are no trite phrases that sum up security accurately.
Though I will give it a go:
'Pay your security people well, you don't want them to be the weak link in the chain.'
>> "Why we chose to represent it that way is because many developers, users, and administrators don't see it any other way."
This is not very accurate. You still do not seem to get the root of the problem, which makes you, sadly, a part of it.
First, only idiot developers see the two concepts as a dichotomy: those without the necessary experience or inclination to design their applications correctly. And that is the _very_ reason why propagating such fallacies is so wrong and even counter-productive: anybody in a position to offer recommendations or suggestions to the developer community should aim at knocking them over the head with a clue stick, rather than telling them what they want to hear, or are used to hearing.
Second, users -- and to a lesser degree, administrators -- being significantly removed from the underlying implementation of their solutions, really _do_not_ care about, nor understand, why usability and security should be mutually exclusive. In my experience, they want to do their jobs, and the software should let them. Moreover, they are of the mind that they should feel confident that whatever they are doing with the software will not result in their computers being infected with malicious software, or any other negative effects. And they are vocal about this: why should their activities with the software be insecure? Why do they have to think about security when the software should be in control of it? And more importantly, why should the software impede their work in doing what software should do in the first place?
If more developers paid attention to their users, or at least took some time to understand the implications of the use of their applications, we wouldn't be having this discussion.
-dZ.
The key issue with computer security is the C language, with its lack of control over variable bounds. The majority of exploits involve buffer overflows of one sort or another, and while these may RESULT from programmers' coding mistakes, the underlying CAUSE is the security failings of the language itself.
That, and frivolous 'multimedia' gimmicks in email and Web-browsers, which provide numerous opportunities for malicious code to get itself run.
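A minimal illustration of the bounds problem (the input string is invented for the example): the unchecked call is the classic overflow, and the bounded one is the fix the language leaves entirely to the programmer:

    /* A minimal sketch of the point about variable bounds: C will happily
       write past the end of a buffer unless the programmer prevents it. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[8];
        const char *input = "far-too-long-attacker-controlled-string";

        /* strcpy(buf, input);  -- no bounds check: the classic overflow
           (undefined behaviour, left commented out for a reason).      */

        /* snprintf bounds the write and always NUL-terminates. */
        snprintf(buf, sizeof(buf), "%s", input);
        printf("truncated safely: %s\n", buf);
        return 0;
    }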
Meanwhile we have 'Cloak-and-Dagger Security' in the form of multiple user accounts on what is actually a personal computer, hypercomplex filesystem permissions which no-one outside of Redmond fully understands, forced password complexity and frequent password changes, and now UAC. All that this does is infuriate users. In some cases it may actually make for lower security: for example, forcing users to keep changing passwords simply results in the password being put onto a Post-it. You thus have a system with zero security, whereas previously it had some.
In many ways the Win95/98 platform offered better security. Whilst it lacked the labyrinthine user permissions of NT-based systems, it also offered far fewer exploits to the potential intruder.
A good article. Security - beyond good coding practices and so on - does NOT belong in applications. That is asking for disaster.
You have end-point security (usually the user), device security, connection (network) security, security for the other device (the server), application-use security and information-access security (what access is allowed or disallowed, how often and when). Layer that with managed security, NOT with separate products! Everything is there: the user has the normal hassle - ID, password, keycard, challenge, whatever - BUT nothing else. Any piece missing in the chain: close, disconnect, alert, kill someone, etc., but it is no more difficult than a normal login or scheduling a job. The details really don't matter; the protocols are there, as are the encryption, the key exchange, the ACLs, even the DB views allowed for this user by this application.
Now, this needs pre-planning at the infrastructure and system level, NOT in applications - it is too late there.
Wasn't it already the rule a long time ago: manage externally, not in code; don't hard-code filenames, directories, IP addresses, keys, access rights and so on into the program? If you do the same (manage externally) with security, it is not very complicated, except in the business (and political) sense.
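As a hedged sketch of that rule (the file name app.conf and its keys are invented for illustration), the program below carries no hard-coded server address or directory of its own; everything comes from an external file an administrator can manage:

    /* A minimal sketch of "manage externally, not in code": read the
       server address and data directory from a hypothetical app.conf
       instead of hard-coding them into the program. */
    #include <stdio.h>

    int main(void)
    {
        char line[256], host[128] = "", datadir[128] = "";

        FILE *f = fopen("app.conf", "r");  /* settings live outside the code */
        if (!f) { fprintf(stderr, "missing app.conf\n"); return 1; }

        while (fgets(line, sizeof(line), f)) {
            /* key=value lines; each copy is bounded to its buffer. */
            sscanf(line, "server=%127s", host);
            sscanf(line, "datadir=%127s", datadir);
        }
        fclose(f);

        printf("connecting to %s, storing data under %s\n", host, datadir);
        return 0;
    }

Changing the server or the data directory then means editing a file, not shipping a new binary - the same separation the comment argues for with security policy.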
BUT, as the article almost says, how do you change what was learned when the PC era started, with no security at all in the design?
So why is it that an MDB file is an "unsafe file type", blocked by MS applications like Outlook, but an XLS file is not? It would seem that security still runs second to marketing. But that is no excuse for not fixing the JET vulnerability. The problem is that MS decided long ago that JET should be discarded, not fixed.
BTW, I found while testing that VBA (used in Access and Excel, but not the source of the Jet vulnerability) would correct stack corruption errors in the user's code, whereas OpenOffice would crash on the same code.