Security is hard because it has to be right all the time? Yeah, like everything else

One refrain you often hear is that security must be built in from the ground floor; that retrofitting security to an existing system is the source of design complications, or worse, outright flawed designs. While it is the case that the early internet was largely silent on the question of security, I suspect “retrofitting” is …

  1. Ozzard

    I disagree - a reliability incident is temporary, a security incident is permanent

    One for a gentle discussion over a beer, but I class security vulnerabilities as fundamentally different because you can't recover from data theft. The exfiltrated data has been copied, and from that point on you've lost control of it. It's a permanent loss. In my neck of the woods, where I'm dealing with medical data, clinical trials data, and biometric data, such a loss can cause someone significant damage up to and including threat to life. Even less obviously deadly data, like that stolen from University of Manchester in summer 2023, can be life-changing - student accommodation data included stated gender and, in some cases, sexuality data for students from countries where non-cis, non-hetero people can be jailed or killed.

    Clearly one looks at any system through the lens of risk, but I do think that some of the security risks are qualitatively different in most systems.

    1. John Miles

      Re: I disagree - a reliability incident is temporary, a security incident is permanent

      Reliability issues will generally have users complaining (though not always); you can even automatically monitor the system and have it email you when things go wrong, run late, etc.

      Security issues will generally only have people complaining when the attacker makes a move, at which point it is too late. For security you can't let your guard down: you need to monitor the logs (and make sure the system actually writes them), look for the unexpected or out of place, and verify it.

      But above all, for security you need people who understand in depth how things work, not people just following a script.
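
      Something along these lines is a rough sketch of that kind of automated watching (in Python; the log path, the "expected" patterns, and the mail addresses are made-up placeholders, and it assumes a local mail relay):

        import re
        import smtplib
        from email.message import EmailMessage

        # Anything a known-good pattern doesn't match gets flagged for a human.
        # (Hypothetical patterns and path - tune these for your own logs.)
        EXPECTED = [re.compile(p) for p in (r"session opened for user \w+",
                                            r"CRON\[\d+\]")]
        LOGFILE = "/var/log/auth.log"

        def unexpected_lines(path):
            with open(path, encoding="utf-8", errors="replace") as fh:
                for line in fh:
                    if not any(p.search(line) for p in EXPECTED):
                        yield line.rstrip()

        def alert(lines):
            msg = EmailMessage()
            msg["Subject"] = "%d unexpected log lines" % len(lines)
            msg["From"] = "monitor@example.com"      # placeholder addresses
            msg["To"] = "secops@example.com"
            msg.set_content("\n".join(lines))
            with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
                smtp.send_message(msg)

        suspicious = list(unexpected_lines(LOGFILE))
        if suspicious:
            alert(suspicious)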

      1. cyberdemon Silver badge

        Re: I disagree - a reliability incident is temporary, a security incident is permanent

        I guess the difference between security and reliability is that reliability assumes your enemies are wear and tear, the environment, the laws of physics, unpredictable users, and statistics. So if it works under extended test conditions or 'fuzzing' then it's fine. Whereas security assumes that any mistake, however small, will eventually be discovered and exploited by Nicolas Cage, Stephen Seagull, or Dwayne 'The Rock' Johnson.

      2. DS999 Silver badge

        Re: I disagree - a reliability incident is temporary, a security incident is permanent

        "Security issues will generally only have people complaining when the attacker makes a move at which point it is too late"

        Not even then most of the time. So long as they don't see any negative effects from the compromise they don't care. If you tell them "a hacker got into our systems and we need to reinstall from scratch to be sure it is clean" most won't want to wait that long, they would rather be up sooner even if there is less certainty the hacker has been removed.

        Unless there is some fallout, like the company's dirty laundry being exposed on Twitter causing the stock price to plummet, or employees starting to see their identities stolen because of the personal information that was purloined, they don't care about security issues any more than they would care if someone was caught having broken into the company HQ and rifled through the CFO's filing cabinets.

  2. Boris the Cockroach Silver badge

    You can only

    do so much with security when you start to look for the weakest link.

    And it's always the users.

    Take one case I know of. Company laptops were provided; however, during the night shift, when the internet isn't needed, it's turned off. An employee thinks, "If I use the tethering function on my phone, I can stream the latest TV shows on the company laptop while I work", which he does by searching for streaming services and coming across "Download and install this to watch the latest Netflix". He installs it and watches Netflix..... and it has a rather nasty payload attached.

    The next day, the server starts up, connects to the internet and the laptops, and downloads gigs of data; then the anti-virus alarms go off and the system grinds to a halt.

    Cue much gnashing of teef and wailing.... Once the culprit is found, he no longer has a job. And good riddance.

    Or a new employee gives her work email address to a friend, that friend's son downloads stuff from a dodgy site, and the resulting spam bot fires off an email to our new employee... who unquestioningly opens it because it's from a 'trusted' source...... Yeah, we're back to wailing and gnashing of teef.

    So the question should be: "How do you secure your applications tightly enough that the system cannot be compromised, without tying the system down so tightly that people can't do their jobs?"

    1. ecarlseen

      Re: You can only

      This is all stuff that's straightforward to address.

      End-users cannot install software, period, ever. End-users do not get admin access to their devices, period, ever.

      Nothing on the internal network connects to the Internet except through a filtered proxy. Don't allow end-users to download executable, library, or script files unless they're devs (a sketch of such a filter follows below). Make exceptions painful and whitelist-only.

      Nothing on the admin network connects to the Internet, except through a very strict, whitelist-only (site and content-type) proxy. No exceptions, period, ever.

      Proxies are DMZd and isolated with the expectation that they will be breached. This can seldom be made perfect, but make it as tight as possible.

      Company-owned remote clients run in VPN always-on mode. No split tunnels - VPN clients are filtered just like internal clients. They can watch Netflix on their own !#$% device.

      Etc.
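
      To illustrate the download rule above, here's a minimal sketch of the filtering decision (the extensions and the exception list are hypothetical examples, not a complete policy):

        from urllib.parse import urlparse

        # File types end-users may not pull through the proxy (illustrative list only).
        BLOCKED_EXTENSIONS = {".exe", ".msi", ".dll", ".so", ".ps1", ".sh", ".js", ".jar"}
        DEV_EXCEPTIONS = {"alice", "bob"}   # hypothetical whitelisted developers

        def download_allowed(user, url):
            """Deny executable, library, and script downloads for non-developers."""
            path = urlparse(url).path.lower()
            blocked_type = any(path.endswith(ext) for ext in BLOCKED_EXTENSIONS)
            return user in DEV_EXCEPTIONS or not blocked_type

        assert download_allowed("carol", "https://example.com/report.pdf")
        assert not download_allowed("carol", "https://example.com/setup.exe")
        assert download_allowed("alice", "https://example.com/tool.exe")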

      I've implemented every one of these rules (and more - I've been doing zero-trust since way before it was cool) and made them stick. In 25 years of IT management, my networks have NEVER been hacked. We had one outbreak of ransomware that was automatically isolated to two hosts, with no data exfiltrated, and only 40 end-user hours and six IT support hours of productivity lost in total, at one site. The end-user hours were lost because two network shares got encrypted; we just restored the hourly backups and went about our business. That's it.

      Security is possible, but it takes extreme thoroughness and discipline. Beyond that, I was able to do it because I had support from the C-Suite and the Board of Directors - learning to communicate with and "sell" policies to these people is just as critical as the technology part.

      1. Mike 137 Silver badge

        Re: You can only

        "End-users cannot install software, period, ever."

        That's probably included in the vast majority of "user policies". However, what about the unavoidable mass of totally unverified scripts that get silently downloaded while browsing even ostensibly legitimate web sites? You can't turn off scripting these days without "breaking the web", but malicious scripts are a primary vector for browser-based compromise, from credential exfiltration and session stealing to workstation take-over. And, increasingly, web sites draw scripts dynamically from large numbers of sources, any one or more of which can be modified or compromised at any time.

        Just for example, I recently investigated the script assemblage of my doctor's (NHS-templated) surgery web site, and found, buried among the dependencies, a script drawn from GitHub by a pseudonymous author with no contact details. Can I trust that? In two letters -- NO. But do I have a choice?
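
        That kind of audit can at least be partly automated. A rough sketch of listing which third-party hosts a page pulls scripts from (standard library only; the URL is a placeholder; it only sees statically declared <script> tags, not whatever those scripts then pull in themselves):

          from html.parser import HTMLParser
          from urllib.parse import urlparse
          from urllib.request import urlopen

          class ScriptSources(HTMLParser):
              """Collect the src attribute of every <script> tag on a page."""
              def __init__(self):
                  super().__init__()
                  self.sources = []

              def handle_starttag(self, tag, attrs):
                  if tag == "script":
                      src = dict(attrs).get("src")
                      if src:
                          self.sources.append(src)

          def third_party_script_hosts(url):
              page_host = urlparse(url).netloc
              parser = ScriptSources()
              with urlopen(url) as resp:
                  parser.feed(resp.read().decode("utf-8", errors="replace"))
              hosts = {urlparse(s).netloc for s in parser.sources}
              return sorted(h for h in hosts if h and h != page_host)

          for host in third_party_script_hosts("https://example.com/"):   # placeholder URL
              print(host)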

        To a great extent, securing day to day user activity from compromise has become really hard because web devs seem to be entirely oblivious of the hazards they thrust upon us.

      2. ecofeco Silver badge

        Re: You can only

        I'll have to stop you at end users.

        You are correct. Now try to get everyone on board. Now try to get the top executives to follow the rules.

        Yeah. Not happening.

        I don't think I've ever worked anywhere that wasn't held together with baling wire, duct tape, and a wish and a prayer, with even the admins being scarily slack.

        And then there are the vendors. God save us from the vendors, over whom we have no control, yet we get the blame for their mistakes.

  3. Pete 2 Silver badge

    The biggest flaw

    Any discourse on security that doesn't discuss the users is missing the single biggest vulnerability.

    Without considering how the system will / could be used, everything else is academic.

    1. chivo243 Silver badge

      Re: The biggest flaw

      I've long pondered the perfect system(s), and it pretty much demands users, from top to bottom, who know what they are doing and why they are doing it. I once had to school the HR director on the perils of using their personal accounts on company-assigned gear. I also took a rap on the knuckles for doing the right thing...

      1. ecofeco Silver badge

        Re: The biggest flaw

        There's a LOT of this out there.

        We'd love to enforce policy, but we'd get fired.

  4. katrinab Silver badge

    "This suggests the next possibility, which is that security is harder because we’ve set it up as an absolute requirement under all conditions, whereas we sometimes cut ourselves some slack on scalability and availability."

    I don't agree. Security is always a compromise. We could secure our bank accounts in the same way that we secure our nuclear launch codes. But the difference is that if bank account access gets into the wrong hands, it isn't the literal end of the world, whereas if the nuclear launch codes get into the wrong hands (well, there is probably another security layer after that), it could be the end of the world as we know it.

    1. elsergiovolador Silver badge

      be the end of the world as we know it.

      End of this world, sort of - and we are unlikely to know it. The planet will keep spinning. New species will take over.

      It's going to take them a fair bit of time to get to the point where they will be debating whether climate change is cockroach-made or not, and how much tax they should be paying to make it go away.

  5. Anonymous Coward

    Early internet was largely silent on the question of security?

    The Internet does exactly what it says on the tin: connect computers through a generic protocol. The defect resides in the computers connected at either end, the main defect being the click-and-get-compromised innovation of the modern Graphical User INTERFAC~1...

    > Security has introduced the idea of defense-in-depth (DiD)

    Run the main OS on a read-only system, with userspace run in a virtual machine that disappears into the Æther on the next reboot.
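
    As a rough sketch of that throwaway-userspace idea (it assumes QEMU is installed; the image path and memory size are placeholders; -snapshot sends all disk writes to a temporary file so the base image stays pristine):

      import subprocess

      IMAGE = "/srv/vm/userspace.qcow2"   # hypothetical read-only base image

      subprocess.run([
          "qemu-system-x86_64",
          "-m", "4096",                   # RAM for the disposable userspace
          "-snapshot",                    # discard all disk writes when the VM exits
          "-drive", "file=%s,format=qcow2" % IMAGE,
      ], check=True)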

    1. An_Old_Dog Silver badge

      VMs for Security

      Security-compromising bugs have been found in hypervisors, and also in hardware-based security mechanisms.

      Memory Protection Extensions ("MPX") and Software Guard Extensions ("SGX") are security features that have been eliminated from Intel's 12th-gen CPUs.

  6. Bebu

    Abstractly...

    I would guess that for any sufficiently large system (a set of states, the permitted inputs, and the consequent transitions between states) it's probably impractical to prove that a particular property holds in the face of all possible inputs.

    I suspect this gets precariously close to the halting problem.

    Real computer systems have an absolutely humungous set of states of which the vast majority will never occur during the life of the system.

    I guess security comes down to attempting to partition a system into a very much smaller number of equivalence classes on which a security property can be demonstrated to hold.

    The hazard even here is failing to capture the entire set of states in the first instance, so that the equivalence classes don't cover the whole system.
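
    A toy sketch of the idea (everything here is made up for illustration): a couple of thousand concrete states collapse into four classes under an abstraction function, the property is checked once per class, and the assert catches exactly that hazard, a class that mixes secure and insecure states:

      from itertools import product

      # Toy state space: (clearance_level, door_open) pairs - hypothetical system.
      STATES = list(product(range(1000), [False, True]))

      def secure(state):
          level, door_open = state
          return (not door_open) or level >= 900   # property: door opens only for high clearance

      def abstraction(state):
          level, door_open = state
          return (level >= 900, door_open)         # collapses 2000 states into 4 classes

      classes = {}
      for s in STATES:
          classes.setdefault(abstraction(s), []).append(s)

      for key, members in classes.items():
          verdicts = {secure(s) for s in members}
          # A class mixing secure and insecure states means the partition was badly chosen.
          assert len(verdicts) == 1, "class %s mixes secure and insecure states" % (key,)
          print(key, "secure" if verdicts.pop() else "INSECURE")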

    The old saw that complexity is intrinsically the enemy of security (or correctness) now applies in spades.

    When you think of an LLM's gazillion states and lack of observability, things are about to get a lot worse.* ;)

    * From my school days, one of my favourite phrases from French was de mal en pis, which I liked to think meant 'bad enough to drive a man to drink.'

    1. Anonymous Coward

      Re: Abstractly...

      @Bebu: ".. When you think of a LLM's gazillion states and lack of observability things are about to get a lot worse.* ;) .."

      A well thought-out post. I'm wondering why the down-vote?

      1. Mike 137 Silver badge

        Re: Abstractly...

        "A well thought-out post. I'm wondering why the down-vote?"

        Probably because voting is largely based on whether the voter "likes" or "dislikes" the post (as on 'social' media), not on whether it contributes to further understanding.

  7. Mike 137 Silver badge

    Thank you Larry!

    At last someone has stressed in public the point that security is not a special case. Unless the mindset that underpins security permeates the enterprise, 'security' will remain an afterthought and consequently fail to deliver. Just for example, Equifax implemented a traffic analysis system that could decrypt and analyse TLS traffic (security), but they let its certificate expire, so it stopped analysing TLS traffic without anyone noticing (operational management failure), and as a result the now infamous breach went undetected.
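
    The sort of check that would have caught that is trivial to automate. A rough sketch (the hostname is a placeholder):

      import datetime
      import socket
      import ssl

      def days_until_expiry(host, port=443):
          """Days until the TLS certificate presented by host expires."""
          ctx = ssl.create_default_context()
          with socket.create_connection((host, port), timeout=5) as sock:
              with ctx.wrap_socket(sock, server_hostname=host) as tls:
                  cert = tls.getpeercert()
          expiry = datetime.datetime.utcfromtimestamp(
              ssl.cert_time_to_seconds(cert["notAfter"]))
          return (expiry - datetime.datetime.utcnow()).days

      remaining = days_until_expiry("example.com")   # placeholder host
      if remaining < 30:
          print("WARNING: certificate expires in %d days" % remaining)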

    If finance was managed like security generally is, most organisations would be bankrupt in a month.

    1. ecofeco Silver badge

      Re: Thank you Larry!

      I don't get your 3 downvotes for telling the truth.

      So have my upvote.

  8. Anonymous Coward

    Puzzled!!

    Security in old-fashioned speak used to distinguish exactly WHAT needed to be secure!

    Fort Knox: gold bullion up to the yin-yang... yup... highest security.

    Edinburgh Castle: who cares about the Scottish crown jewels?... yup... maybe less security than Fort Knox.

    ...and so on...

    This article makes absolutely no mention of WHAT is being secured...

    ...and instead talks about "architecture" and "you need to build multiple, possibly overlapping defenses"... without a single thought about exactly WHAT is being secured.

    So... yes... Equifax probably needs better "security"...

    ...but the booking system for admission to the Culloden site might need a somewhat lower level of "security"!

    Poor analysis! Why am I not surprised?

    1. doublelayer Silver badge

      Re: Puzzled!!

      In principle, this is true, because there are different security requirements based on what the likely consequences of compromise are. However, I still have to disagree with you because it's the argument that people always bring out. What they say is "the potential damage if anyone breaks into this is so small that it's not worth building it more securely". What they mean is "building it more securely would take more time or money and I don't want to".

      The world contains a lot of people who assume that security isn't their problem and rationalize it using the same arguments you make. I work in security, so this bothers me a lot. It reminds me of a colleague I had when I was just starting my career, who approached all questions of security using the following criterion: if a nontechnical person could break into it, then we should secure it (e.g. if the system logged in automatically or if the password was "password", I could do something). Otherwise, since it wouldn't stand up to a concerted attack by China (it was always China), why bother to do more? Something as basic as encrypting the drives on laptops was dismissed as unnecessary because his notional nontechnical attacker wouldn't know how to get around the login prompt, so as long as the laptop asked for a password before logging in, it was good enough. I don't think you have the same mindset as that guy. I do think the argument you make makes it too easy for that kind of guy to rationalize his conclusion.

  9. Eclectic Man Silver badge

    Security and robust functioning

    There are several issues here.

    One is writing robust code, when other people are going to either use it or actually rely on it for important or life-critical functionality. This includes things like type-checking and range-checking of parameters and inputs (C is wonderful at allowing you to write 128 characters to a declared 32-character string; I know, I've done it), and checking that a parameter is within not just the 'expected' range but the range the code can actually cope with.

    Does the logic flow correctly, and is it complete? In one piece of code from ICL there was a test:

    IF parameter < x then A

    IF parameter > x then B

    Sadly the coder had not considered the case of "parameter == x", at which point the next, wholly inappropriate, line of code would be executed.

    This bug was sporadic and took quite a lot of time and effort to find.
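
    A minimal sketch of the fix, with the equality case made explicit (the handler names are made up):

      def dispatch(parameter, x):
          """Cover all three orderings explicitly instead of letting == fall through."""
          if parameter < x:
              return handle_below()    # A
          elif parameter > x:
              return handle_above()    # B
          else:                        # parameter == x: the case the original code missed
              return handle_equal()

      # Hypothetical stand-in handlers so the sketch runs as-is.
      def handle_below(): return "A"
      def handle_above(): return "B"
      def handle_equal(): return "equal"

      assert dispatch(1, 2) == "A"
      assert dispatch(3, 2) == "B"
      assert dispatch(2, 2) == "equal"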

    Then there is 'security' - preventing successful attacks on the functionality of the code. Of course, security code must also be robust and complete, but 'security' includes physical access management, as well as social and technical issues.

    Edit - typos

  10. Anonymous Coward

    Singling out points of failure

    It does worry me that more and more systems are coming to rely on a tiny handful of security certificate and cloud everything-and-the-kitchen-sink providers. The fact that services like Office365 have a validation logic designed by a nesting rat armed with a plateful of spaghetti does not help mitigate the risk. When the MS certificate service wobbles, as it does most days, we all start queueing for coffee. A well-targeted DoS exploit will soon be all you need to bring half the planet to its knees.

  11. sabroni Silver badge

    Security is hard because it has to be right all the time?

    Security is hard because it's the one bit where you're making your system NOT work. All the other code is trying to make things happen, and your security code tries to stop stuff happening, but only for some users. Of course it's easier to fuck that up: we spend nearly all our time trying to make systems work, and breaking them for special cases while they keep working perfectly for everyone else is a much trickier ask.

    1. Aladdin Sane

      Re: Security is hard because it has to be right all the time?

      It's also hard because it's the only example where you have people actively working against you. Depending on your view of bean counters, of course.

  12. Mr Dogshit

    It's not hosts.txt

    It's hosts

  13. Anonymous Coward

    Security is somewhat unique in that the most effective ways of improving it worsen usability.

    This is not true of most other problems. If you improve scalability, reliability, availability, etc., the system just works better. The only intrinsic downsides are that it costs more, takes longer, and is harder.

    To improve security you reduce access. That is the fundamental thing you are doing, even when it is disguised by encryption or any other trick to make it look like you aren't.

    This conflicts directly with usability.

    This is what is uniquely difficult about security, and why we will only ever get just enough of it to squeak by.
