What would sustainable security even look like?

"There seems to be something wrong with our bloody ships today," fumed Admiral David Beatty during 1916's Battle of Jutland. Fair enough: three of the Royal Navy's finest vessels had just blown up and sunk. It was all the worse because Beatty had promoted the policy of spending enormous sums of cash on the Navy, to keep it …

  1. b0llchit Silver badge
    Boffin

    Nobody is legally responsible, oops

    First problem is one of liability and responsibility. Nobody feels responsible and all exclude any kind of liability.

    If you buy a product, then any damages are limited to the "cost" of that product. But when McD offers a too-hot nugget over the counter, then, oh boy... that hot nugget just cost McD a fortune.

    Why this discrepancy? If you sell software/licenses, you should not be able to hide behind not-my-problem legalese. Please note that it says sell software/licenses, i.e. you profit from the product you sell. And in the end, the C-suite must be personally accountable and liable for cut corners and deviations from best practice. They are the responsible persons, regardless.

    Second problem is that throwing money at security doesn't solve anything by itself. You need to address expertise and process. That means it takes time, lots of time, to develop. It also means a focus on reliability rather than on features and featurism.

    1. Andy E
      Flame

      Re: Nobody is legally responsible, oops

      You have hit the nail on the head in that the problem is nobody is legally accountable for security lapses. Until that is fixed there is very little incentive to invest in effective information security.

      Introducing accountability for security may have unintended consequences but it is badly needed.

      1. Doctor Syntax Silver badge

        Re: Nobody is legally responsible, oops

        The accountability needs to be at the level where the decisions are made, not at sysadmin level.

        1. Anonymous Coward
          Anonymous Coward

          Re: Nobody is legally responsible, oops

          >The accountability needs to be at the level where the decisions are made, not at sysadmin level.

          Unless, of course, the sysadmin goes off the reservation and does something outside of the approved working policies and practices.

          If (for example) the CTO/CSO decrees a policy blocking the use of USB sticks everywhere, but the sysadmin allows himself to use a USB stick and as a result the company gets pwned, should the CTO/CSO or the sysadmin get the sharp end of the shitty stick?

          1. Graham Cobb Silver badge

            Re: Nobody is legally responsible, oops

            That's easy: the CTO. They need to have the legal responsibility so that they are incentivised to make sure their employee cannot violate the rules.

            That is how company regulation works.

    2. Blazde Silver badge

      Re: Nobody is legally responsible, oops

      There'd be no non-bankrupt software companies if they had to pay for too-hot-nugget damage. (At least in whichever jurisdiction was crazy enough to implement that law - I'm sure other countries' software industries would happily fill the gap).

      Perhaps the best you can hope for is a niche market to develop where companies provide liability cover, with the understanding or hope that that business model incentivises them to provide really secure software. In reality the pressure to make software and services secure would depend on competent scrutiny by their insurance companies, who would need to understand the best practices, the ever-changing threat landscape, and black-swan probabilities, and be able to audit the software companies appropriately. Thus out-sourcing the problem and maybe even making it worse. At best corners would be cut, pay-outs would be common, lawyers would make bank, premiums would rise. It might even be a Haifa day-care fine situation. I can imagine the bubbly TV spot... Upset IT Director: "Oh noes, my systems are all crashed and I think the CCP has my customers' data". CGI Angel appearing from nowhere: "Don't worry, we at Covered Secure Software Magic have your back... here's a colossal cheque". IT Director *suddenly smiles and dances happily*: "Maybe I can buy that new upgrade with the spare cash".

      So you can buy the $1,000 niche software that will cover your clean-up costs and might be a bit more secure, the $100 software we have now where you'll have more choice of vendor and can punish whichever one leaked their master keys last week, or you can buy $100 software and your insurance separately. Not a clear win.

      1. Max Pyat

        Re: Nobody is legally responsible, oops

        This is tripe.

        You're just handwaving your way through a series of "arguments" until you seem to have validated your assumptions to your satisfaction.

        Please inform yourself more widely on how liability works in other more developed industries that have dealt with more complex issues before returning to the topic.

        1. Anonymous Coward
          Anonymous Coward

          Re: Nobody is legally responsible, oops

          > Please inform yourself more widely

          Please provide us with some guidance on this: decent books, standards, blogs - anything?

          Can you do any better than saying "do your own research"?

        2. Blazde Silver badge

          Re: Nobody is legally responsible, oops

          > how liability works in other more developed industries that have dealt with more complex issues

          An industry with more complex issues than software security doesn't exist because software security is a Turing-complete complex issue. You can't just make a ten point fire safety checklist and have an engineer going around doing flame tests, or make sure your accountants are ACA qualified, or whatever. Look how ransomware insurance doesn't work.

          Genuinely the closest 'developed industry' I can think of to software security is the armed forces, because it's an adversarial environment with constantly evolving risks. Maybe it's no coincidence Rupert already started the comparison with naval warfare. So let's look at that: most developed nations have had some concept of war pensions, disabled veterans' benefits, etc. for hundreds of years. Soldier safety liability. I'm being a bit flippant of course, but has it helped stop soldiers being killed or maimed in conflict, or is it just an accepted cost of war?

        3. Cav Bronze badge

          Re: Nobody is legally responsible, oops

          You can have quick or you can have secure. Australia's Information and Communications Technology Research Centre of Excellence (NICTA) took 4 years to mathematically prove that just 7,500 lines of code would work correctly. The team that did this had 12 researchers, NICTA/UNSW PhD students and other UNSW contributing staff. (UNSW - University of New South Wales). If we did that with every piece of software then we'd still be running Windows 1.0 and the cyber world we have today would not exist.

          Code can be far more complex than other "more developed" industries... And no, those industries are not "more complex".

          1. Norman Nescio

            Re: Nobody is legally responsible, oops

            > You can have quick or you can have secure. Australia's Information and Communications Technology Research Centre of Excellence (NICTA) took 4 years to mathematically prove that just 7,500 lines of code would work correctly. The team that did this had 12 researchers, NICTA/UNSW PhD students and other UNSW contributing staff. (UNSW - University of New South Wales). If we did that with every piece of software then we'd still be running Windows 1.0 and the cyber world we have today would not exist.

            And even when formally proven to meet the requirements of the (formally validated) specification, there's no guarantee that the specification is correct for the use-case. This is a known, and seemingly intractable, problem with formal proofs for anything other than toy systems. The computer does precisely and correctly what it was asked to do, but we might not have asked the right questions. The answer is 42.

            A more relevant example is MCAS. The code likely did what was asked of it, but a poor assumption at the start was to rely on a single input that could go bad in a way undetectable by the MCAS software - the implicit assumption was that the input would always be correct, or that failures would be easily detectable*. Formal methods are a great tool for reducing uncertainty in what a computer does, but they do not guarantee that you are getting the computer to do the appropriate things, or that the hardware implementation isn't capable of throwing a spanner in the works.

            *This oversimplifies a little, but gets the point across.
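            The single-input failure mode is easy to sketch. Here's a toy illustration (not the actual MCAS logic; the function names and figures are invented) of why a simple 2-of-3 median vote across redundant sensors tolerates the fault that a single-input design cannot:

            ```python
            # Toy illustration only - not the real MCAS logic. Names are invented.

            def single_sensor_aoa(readings):
                """Trust the first angle-of-attack vane alone: one fault poisons the output."""
                return readings[0]

            def voted_aoa(readings):
                """Median of three redundant vanes: any single faulty reading is outvoted."""
                return sorted(readings)[1]

            # Two healthy vanes read about 5 degrees; one is stuck hard-over at 70.
            readings = [70.0, 5.1, 4.9]

            print(single_sensor_aoa(readings))  # 70.0 - garbage in, nose-down trim out
            print(voted_aoa(readings))          # 5.1  - the faulty vane is outvoted
            ```

            Note that the formal-methods point still stands: both functions above do exactly what they were asked to do; only the specification differs.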

        4. parlei

          Re: Nobody is legally responsible, oops

          I work in the hospital laboratory field. Systems -- of all kinds -- have to be validated according to an appropriate standard (ISO 15189 in our case, IVDR in the case of those selling us stuff), and when a manufacturer sells us something they have to have documentation that what they sell actually does what it says on the label (e.g. how accurately does it measure the concentration of a certain molecule, in what body fluid, and what known interferences exist?). And we then have to do some due diligence to verify that what they claim probably is true (and document that we have done so).

          No one can guarantee that nothing can go wrong with using a product, but they can show that they have used an appropriate process to attempt to find the limitations of their product, and have honestly disclosed what they found. And yes, they have to disclose to customers when they become aware of critical issues.

    3. mpi Silver badge

      Re: Nobody is legally responsible, oops

      > Why this discrepancy? If you sell software/licenses, you should not be able to hide behind not-my-problem legalese.

      Then who will sell software?

      The fact of the matter is: in terms of security, software is way more complicated than most things that exist.

      Yes, a giant warship is a super complex mechanism...and if one of the welded seams is slightly outta whack, that is a weakness that could prove fatal in combat or an emergency.

      But I cannot attack that weakness from the other side of the world while sitting in an air-conditioned room. I cannot analyse the warship's seams from half a world away. And even if I could, I cannot use that information to attack the entire fleet, all at once, without anyone even noticing until the damage is done.

      A warship also doesn't become susceptible to an attack years after it's launched, because someone discovers that, oops, the steel you used back then crumbles to dust when you blow at it while tilting your head slightly to the right just so and closing one eye while your friend whistles "Eye of the Tiger".

      1. Max Pyat

        Re: Nobody is legally responsible, oops

        This is nonsense that just betrays a myopic ignorance of the wider world.

        Your last paragraph makes no sense. Of course a warship can become vulnerable years after it was launched. And you know what?: if it is discovered that the steel used is vulnerable to a new shaped charge penetrator, the navy doesn't get the chance to do remote updates over the Internet.

        Seriously: get good or go home.

        1. Anonymous Coward
          Anonymous Coward

          Re: Nobody is legally responsible, oops

          "get good or go home"

          And how shall we "get good", oh you mite brane?

        2. David Hicklin Bronze badge

          Re: Nobody is legally responsible, oops

          Jutland's problems were more process- than design-related: the Navy valued rate of fire over everything else, so crews bypassed the anti-flash prevention measures to speed up ammo delivery, which meant a hit to a turret could flash down to the magazine.

          Sort of a parallel with security: if it makes the job too hard to do, then people will find a way around the restrictions to speed things up.

          1. Diogenes8080

            Re: Nobody is legally responsible, oops

            Lots of both, sadly.

            On the strategy and design side, excessive focus on constructing line of battle versus scouting elements, inadequate flash testing, neglect of emerging aviation, disgracefully poor shell testing and a battle plan that expected the enemy to stand and fight at a numerical disadvantage.

            On the operational side, a lack of practice by the battlecruiser squadron, failure to learn from earlier engagements, failure to fully utilise Room 40, and an emphasis on Grand Fleet Orders over a more practical doctrine that would have improved Beatty's reports and allowed Jellicoe to give simple immediate instructions to his subordinates. Well, the GFO were his, but he was starting with some very poor material.

            Naturally, both Jellicoe and Beatty were promoted.

        3. mpi Silver badge

          Re: Nobody is legally responsible, oops

          I guess that there are many many many programmers in this world who are better than me. And yet, we constantly discover vulnerabilities in the software they write.

          We constantly devise new methods of quality assurance in software. And yet, we constantly discover vulnerabilities in the software whose creators implement these methods.

          So either all of programming needs to "get good", ooooor, the far more likely explanation: there is a serious flaw in the "get good" methodology. And that flaw is the empirically demonstrable fact that software intrinsically CANNOT be made 100% secure, and cannot be resilient to flaws in the way other engineering products are, due to its MO.

  2. that one in the corner Silver badge

    To start, I admit I've done access control in the past that I now know wasn't great (and am glad that those installs don't get connected to the Internet).

    Nowadays, I'd quite like (it isn't necessary, but would be nice) to access some of my LAN boxes from the Internet - and I've got loads of material saying how to set it all up, thanks, but don't trust it enough to take the risk (for the perceived gains).

    > You need to address expertise

    Gaining expertise, particularly as an autodidact[1], in online security seems strangely hard, compared to pretty much any other area of ops (for a small LAN) and/or programming.

    First is a lack of confidence in proving that the methods being proposed actually work, and against what: basically, you have to be able to demonstrate that you can break into the "unprotected but otherwise correctly set up and working" system first, then show that the added protection fixes the issue. I.e. turn a claim (or even just a vague worry) about a security issue into a testable issue in the bug tracker. To do that, I first need to be able to break into the online system like a real Bad Guy - and for some reason the books[2] on securing your Apache server have recipes for setting permissions but none for smashing down the door in the first place!

    Second, to be blunt, is a dismissive tone about the subject in forums where you'd hope to see better. Even in Register forums, there tends to be many replies that basically boil down to "well, I do better than that"[3] and no pointers to practical sources of learning. Compare that to other subjects (h/w and s/w) where you can often get useable tips and tricks.

    To be frank, the end result is that I have very little confidence in any of the "how to do online securely" claims :-(

    [1] I'm not in a position to just be put onto an expensive course at company expense

    [2] tutorials etc; unless you have some references to better materials.

    [3] comments on a Reg story last week (URL) even had someone else pointing out this attitude

    1. pfalcon

      On the expertise and testing side of things, I'd add that the current security industry is making things worse by encouraging complacency.

      I've just gone through the process of getting quotes to perform penetration testing on a product I'm responsible for and, as part of that, to also get the client's office network/infrastructure tested (a third party handles that side of things). The quotes have revealed some rather disturbing points:

      1- Estimates varied by 100% between vendors. That means each has a vastly different idea of the kinds of tests they will perform and the manpower required. It's also plain from the responses that most of the vendors have a semi-automated suite of testing tools that they apply to given situations. Now, while I happily admit that performing a serious round of testing requires the tests be scripted - surely the KINDS of tests and the areas to explore need to be highly customised for the system concerned? At the very least there needs to be someone knowledgeable guiding those tests - but I don't get that feeling...

      2- Testing of office networks, especially those using Microsoft 365, SharePoint, etc., seems to boil down to performing an audit on the systems and security settings in place and calculating a "score". This sounds well and good as a theory, until you look at the scoring and find out that it's impossible to get even a mediocre score (let alone a "high" one) without basically quadrupling the licence fees you pay to Microsoft and others for the upper-tier services linked to their online suites. It doesn't focus on the true security elements of an office - like staff training/awareness, or perhaps using best-of-breed systems - as opposed to just giving more money to MS (or the fact that MS reserves "best security" only for those who can afford it!?).

      How are you supposed to measure the security of a system when the processes used to measure them are so flawed?

      And this is before you get into the actual developer/coding issues and gaps therein, which the article explores.

  3. John H Woods

    What would sustainable security look like?

    Two somewhat controversial opinions:

    It would involve actuaries, or accountants with an actuarial bent, to actually price risk. Even vague impact and likelihood figures can tell you that your back-up system isn't quite as expensive as it looks, or that your cloud-based IT isn't quite as cheap as it seems.
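    Even the roughest actuarial arithmetic makes the point. A minimal sketch, with entirely invented figures, of the standard annualised-loss-expectancy calculation:

    ```python
    # Annualised loss expectancy: ALE = ARO x SLE
    # (annual rate of occurrence times single loss expectancy).
    # All figures below are invented for illustration.

    def ale(aro_per_year, single_loss):
        """Expected loss per year from one threat scenario."""
        return aro_per_year * single_loss

    backup_system_cost = 40_000  # hypothetical annual cost of the backup system

    # A 1-in-20-years ransomware incident costing 2m to recover from:
    expected_loss = ale(aro_per_year=0.05, single_loss=2_000_000)

    print(f"expected annual loss: {expected_loss:,.0f}")    # 100,000
    print(f"backup system cost:   {backup_system_cost:,}")  # 40,000

    # The backup system "isn't quite as expensive as it looks": it costs
    # less than the expected annual loss it mitigates.
    assert backup_system_cost < expected_loss
    ```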

    More specifically, and partly as a result of the foregoing, it would have a lot less outsourcing. Sure, for an SME, it often makes sense. But as you go up the scale to medium size enterprises, large ones and finally, government organisations, the desperation to offload your IT to organizations that care about it less than you do seems increasingly insane.

    1. sitta_europea Silver badge

      Re: What would sustainable security look like?

      "...the desperation to offload your IT to organizations that care about it less than you do seems increasingly insane."

      Have an upvote from me for that.

      To "offload your IT" I'd add "and put everything in the cloud".

      The trouble is, when everything is in the cloud and the cloud fails, everything fails.

      I'm not even going to mention the Carrington Event.

    2. Pascal Monett Silver badge

      Re: What would sustainable security look like?

      Interesting point.

      One would think, what with all the billions that viruses have been claimed to cost up to this point, that pricing security would be a done deal, those actuaries would have a full-time job and the cost of failure would be a regular part of marketing blurbs to close the sale.

      It would appear, then, that all the hype around how much the latest breach has cost the industry is not reliable enough for even marketing to pick it up and act on it.

      Telling, isn't it ?

  4. Anonymous Coward
    Anonymous Coward

    What would sustainable security look like?

    Like normal security, but made of bamboo?

    1. Anonymous Coward
      Anonymous Coward

      Re: What would sustainable security look like?

      Our server rooms are patrolled by Pandas (we bought our kit cheap when the local plod sold it off).

    2. that one in the corner Silver badge

      Re: What would sustainable security look like?

      If you try to break in it'll grass you up.

      1. Anonymous Coward
        Anonymous Coward

        Re: What would sustainable security look like?

        Oh, please: cheap bamboo jokes just panda to the masses.

        1. Korev Silver badge
          Coat

          Re: What would sustainable security look like?

          We'll have to try to bear it...

  5. Richard 12 Silver badge
    FAIL

    Security is undocumented

    For example, today I ran into an issue with macOS code signing. Something Gatekeeper does when moving builds between buildbots made signing fail.

    I searched for solutions, and every single one of them said "Disable SIP".

    So there are two ridiculous issues: codesign is somehow incompatible with Gatekeeper, and Apple haven't documented what Gatekeeper does - which is probably why codesign fails.

    It happens all the time. Microsoft, Apple, AWS etc ship a new security feature, but refuse to document what it does or how to use it properly, even internally.

    So everyone turns security features off, and we end up with systems that are even less secure than before.

    Insecurity by obscurity, one might say.

  6. Doctor Syntax Silver badge

    "greenhouse gases are a byproduct of an economy wedded to cheap energy in one case, and in the other single points of failure in an industry wedded to huge datasets in under-engineered systems"

    In the first case you have to remember that as a result of allegedly environmentally based Luddism use of nuclear was minimised for decades. In the other we have the opposite of Luddism - a rush to either transfer data out of the data centre to somebody else's computer somewhere on the internet or to keep it in-house but insufficiently separated from the internet.

    1. Anonymous Coward
      Anonymous Coward

      The reason the last round of nuclear power development was withdrawn has more to do with cheap shale gas than "Luddism". It's also not Luddism to save a stitch in time and pay extra for, say, a reactor built with a base of lead, which will save a lot of trouble in case of meltdown.

      1. Anonymous Coward
        Anonymous Coward

        Also dealing with nuclear waste on geological timescales seems a far from "solved" problem and is an environmental concern and externalised cost.

        These giant holes in mountains (read: former fault lines!) to bury it in are the best solution we've come up with. Add the massive kicked-can costs of decommissioning Sellafield. This all says something to a layperson like myself; however untrue it may be these days, no government messaging has ever eased my concerns on those fronts.

        Does it make me a Luddite? I don't do anything to educate myself on these matters, so maybe. But at best government information is useless in helping, so I default to "no thanks".

        Not to mention that disasters have happened fairly recently, and the Ukraine war has demonstrated that these energy "security" measures are also a massive liability at the same time.

  7. Anonymous Coward
    Anonymous Coward

    Honestly, in my own line of industry, the onerousness of trying to stay on top of IT security is so severe that it genuinely may be more cost-effective to crew our sites and remove IT from the loop.

    No IT, no cyber security problem.

    This does assume that the crew on the site are themselves trusted of course.

    1. Version 1.0 Silver badge
      Pint

      "I have a total irreverence for anything connected with society except that which makes the roads safer, the beer stronger, the food cheaper and the old men and old women warmer in the winter and happier in the summer" - Brendan Behan

      That pretty much summarizes how people everywhere used to look at the world. It's different these days and not that much better - OK, so Behan always said that he was a drinker with a writing problem but I have always respected his views compared to what I see these days.

  8. Claptrap314 Silver badge

    One has to wonder

    if the editorial staff at El' Reg bothers to read the comments at all sometimes.

    I keep pounding on this--the end consumer is getting exactly the security that they are willing to pay for. But with extremely limited ability to value and absolutely zero ability to evaluate security, how much is that?

    That's a critical part of the problem, and yet this childish piece doesn't even hint at it. Here's the next, only hinted at in the comments: writing secure code is not merely hard. It is entirely beyond the capabilities of almost all dev organizations, because almost no dev organizations have someone with at least a master's in mathematics from a tier-I or better institution, and proving that a given piece of code does what you want and nothing else is at least as hard as getting one of those. (And you need it for every code change.) Emergent complexity and the one-bit difference between secure and not-secure make it thus.
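    Even short of a full proof, checking a tiny function against its spec hints at the effort involved. A minimal sketch (toy function, toy spec, brute force over a small integer domain - nothing like a real proof, such as the NICTA effort mentioned in earlier comments, which must cover every input):

    ```python
    # Brute-force spec check for a toy function. This only samples a finite
    # domain; a proof must hold for all inputs, and must be redone per change.

    def clamp(x, lo, hi):
        """Spec: result lies in [lo, hi]; result equals x whenever x already does."""
        return max(lo, min(x, hi))

    for x in range(-50, 51):
        for lo in range(-10, 11):
            for hi in range(lo, 11):
                r = clamp(x, lo, hi)
                assert lo <= r <= hi        # result always in range
                if lo <= x <= hi:
                    assert r == x           # identity inside the range

    print("spec holds over the sampled domain")
    ```

    Note that even "clamp" needs two spec clauses to rule out trivially wrong implementations (e.g. always returning lo satisfies the first clause alone); real code has vastly more clauses and vastly more states.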

    In the meantime, the top-tier attackers are throwing around amounts of money that would get the notice of the FAANGs.

    Maybe this guy is paid by the word or something.

    1. Anonymous Coward
      Anonymous Coward

      Re: One has to wonder

      Small applications, written with minimal, stable libraries (preferably, no library at all) are feasible to audit and diagnose. The complete antithesis of how most stuff is developed.

      Most of the time you'll be lucky to even get a look at the black boxes involved.

      Obviously there are exceptions but that is very much the point.

    2. veti Silver badge

      Re: One has to wonder

      All journalists are, sooner or later, paid by the word.

      I have to say, though, that I don't share your faith in the power of mathematics in this context. For what you say to be true, you would need to evaluate not just the code being compiled, but also the compiler, the operating system on which it runs, the circuit board on which the system was mounted, any network to which it was connected, and ultimately the silicon itself, transistor by transistor. And of course the same for each platform on which the code was to run.

      And even then you're not really secure unless these platforms are locked down to an extent that would, frankly, make them of very little use to anyone. People like to, for instance, be able to plug in and use a mouse (or equivalent device) of their choice. Update their OS. Attach a new monitor. You can't keep the system in stasis, but for your mythical perfect security, every change would need to be reviewed and validated by someone who was infallible.

      Frankly, I don't think we've got that many mathematicians to go round. And if we had, I wouldn't trust them.

      1. Claptrap314 Silver badge

        Re: One has to wonder

        I know that we are in violent agreement here, but I'm going to prove our (mutual) point by thumping on you. Read again what I said: "proving that a given piece of code does what you want and nothing else". I did not say "code as compiled", or "code as run in a particular (version of an) OS", or "on a particular machine". Just that one phrase, and you misread what I said enough to solicit a full explanation of a fact which I myself have written in these comments on multiple occasions.

        I stand by what I said: a person brain-damaged in just the way needed to be a true mathematician, with proper training, is capable of demonstrating that a small-ish piece of code does exactly what it is supposed to do. If you want to handle many small-ish pieces, you need many mathematicians. And if you ask them to prove that these pieces fit together appropriately, you will receive a unanimous, if multi-valued, rude response of some kind.

  9. StewartWhite
    Mushroom

    We get the Cybersecurity we deserve

    People and organisations pretend that they want secure systems, but in the rush to do ever more pointless things more quickly and flashily they'll buy any old load of (preferably cheap) tat kit/software and ignore security, as it's considered "somebody else's problem" and hence doesn't really exist.

    There are also way too many cybersecurity snake-oil firms selling the next silver "AI" (or whatever buzzword is the mot du jour) bullet for there to be any industry-led effort to help deal with the problem. Why fix the issue when you can make way more money endlessly selling yet another bit of software?

  10. Anonymous Coward
    Anonymous Coward

    About Admiral David Beatty.....

    ......perhaps he would wonder about more recent events:

    (1) Five billion (with a "b") pounds spent on an aircraft carrier.....and one of the propellers falls off! Yes...really!

    (2) Six Type 45 destroyers back in dock because the gas turbine cooling system does not work. "Destroyers often dead in the water"

    About item #2 - that would be about one billion (with a "b") pounds each -- before repairs!

    Now....about software reliability.....with these sort of stories about defective hardware.....why is anyone surprised that "computer security" might be either complex, or hard, or both?

    Quote: "....strong regulation based on engineering best practices, audit trails, rigorous testing...."

    Interesting. But some recent research and a little programming has made me wonder about training -- BEFORE we get to the quotation from Rupert's article:

    (1) "The C Programming Language", Second Edition, 1988 is 272 pages long.

    (2) "Programming Rust", Second Release, 2021 is 713 pages long.

    Book #2 describes a programming environment much "safer" than book #1........but much more complex, and more in need of careful thought before actual programming starts.

    (Quote from book #2, Chapter 5: "Rust's ownership model will give you some trouble. The cure is to do some up-front design and build a better program." No s**t, Sherlock!)

    Now you can see why C is going to be difficult to replace. If the size of the two books is any guide, C is probably easier (and cheaper) to learn. And training in C is probably quicker (and cheaper) to provide.

    But then, I don't suppose C-level executives are comparing their cumulative security costs with the cost of better training - because they can manage the second one down, and they are not responsible for the first one!
