"it is imperative that organizations strengthen their application security posture"
It sure is. I'm sure another couple hundred catastrophic breaches will help solve the problem.
Cybersecurity workers review major updates to software applications only 54 percent of the time, according to a poll of tech managers. That figure comes from CrowdStrike, which recently published [PDF] its 2024 State of Application Security Report. It's based on interviews with admittedly just 400 US security managers, so take …
A couple hundred? So we'll be in better shape next week. :)
After three decades or so working in software security, I'm pretty sure I won't see major improvement in my lifetime. Some things have gotten better, broadly speaking (the situation may be dire today, but it genuinely was worse in the past), yet the attack surface and the quality and quantity of attacks have grown so fast that there's been little overall net gain.
"Catastrophic Breaches" is a good album name.
“There doesn't seem to be a single root cause as to why security reviews are so time and money-consuming – it comes down to a variety of factors.”
Because the underlying platform is such a spaghettified mess that it is impossible to do a security review using formal methods.
No, that's not it, or at least it's not the major reason. It's because security and vulnerabilities are such large sets that there's no simple formal method of defining something as secure. Take the operation of opening a file and writing something to it. The OS doesn't make that insecure. You might find a filesystem bug that makes the operation vulnerable, or a kernel or process bug that can be invoked by doing it, but those aren't that common.

Yet there are still lots of possible vulnerabilities any time it is done, most of them intra-program. The file could be subject to a deserialization attack when it's read back in later. It could be used to eat resources and provide a DoS method. It could be used to degrade performance. If the program mishandles paths, it could be used in a directory traversal attack. There are some inter-program (or at least inter-process) methods as well.

None of these are due to the platform, and they tend to be just as available on any operating system; they come down to practices during the development of that application. Many of them won't apply simply because of the way the program is designed. If you don't let the user name the created file, that excludes some classes of possible vulnerabilities right there. That's not a universal rule that the user must never supply file names, but it is one consideration among others when making implementation decisions.
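To put the "don't let the user name the created file" point in concrete terms, here is a minimal Python sketch. The upload directory, function names and the report-saving scenario are all hypothetical, invented purely to illustrate the class of bug, not taken from the report or any real product.

```python
import os
import uuid

# Hypothetical location where an application stores files it creates.
UPLOAD_DIR = "/var/app/reports"
os.makedirs(UPLOAD_DIR, exist_ok=True)

def save_report_unsafely(user_supplied_name: str, data: bytes) -> str:
    # Directory traversal: a name like "../../etc/cron.d/evil" escapes
    # UPLOAD_DIR entirely, because os.path.join happily follows it.
    path = os.path.join(UPLOAD_DIR, user_supplied_name)
    with open(path, "wb") as f:
        f.write(data)
    return path

def save_report_safely(data: bytes) -> str:
    # The application picks the name itself, so the whole directory-traversal
    # class of vulnerability simply doesn't apply to this code path.
    path = os.path.join(UPLOAD_DIR, f"{uuid.uuid4().hex}.bin")
    with open(path, "wb") as f:
        f.write(data)
    return path
```

Neither version is "the" right answer; as noted above, letting users supply names is sometimes a legitimate design choice, it just brings extra vulnerability classes along that the review then has to cover.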
A security review is supposed to identify risks like this, but only some of them are easily detected by an automated tool. Tools are improving, but there are still many issues that will be difficult or impossible to detect that way. Often, the vulnerabilities in a piece of software are not carried over from its platform but come from that software itself. Blaming the platform when the bugs are found elsewhere just lets the writers of insecure code off the hook.
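As a rough illustration of which of those risks a tool can and cannot catch, here is a hedged Python sketch. The settings file and the "skip_payment_checks" flag are made up for the example; the point is only that unsafe deserialization is a pattern scanners routinely flag, while application-specific logic flaws are not.

```python
import json
import pickle

def load_settings_risky(blob: bytes):
    # Most static analysis tools will flag this line: unpickling untrusted
    # bytes can execute arbitrary code, which is exactly the deserialization
    # risk described above when a file is read back in later.
    return pickle.loads(blob)

def load_settings_safer(blob: bytes):
    # JSON parsing doesn't execute code, so that bug class goes away...
    settings = json.loads(blob)
    # ...but no scanner can tell you whether honouring this flag is a
    # business-logic hole in your particular application. That judgement
    # still needs a human doing the security review.
    if settings.get("skip_payment_checks"):
        pass  # hypothetical logic flaw a tool cannot judge
    return settings
```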
Never skip a security review of a major app update again, with Agile.
With Agile you will deliver working software frequently.
Deliver every couple of weeks and you will never deliver a major code update that needs a security review again.
Security reviews sorted ;)
Obviously that's unworkable.
My team reviews every change before anything gets merged back to a release branch. Reviews happen constantly; there's rarely a day when there aren't some reviews in flight. Most reviews are open for a couple of days. The workload is reasonable; I average perhaps half an hour a day on reviews even when we're in a busy phase of development (and not in holiday season, or doing a lot of planning or other non-coding activities).
I know of other teams who follow the same approach. It's doable, if management supports it. Of course not all bugs are caught; that's not what reviews do. But a lot of things do get caught, and team members share knowledge and discuss potential issues and improvements. Code reviews also help calibrate developers, maintain standards, and improve code maintainability.
"There doesn't seem to be a single root cause as to why security reviews are so time and money-consuming"
It's work, and work is both of those things. It shows up to the bean-counters purely as a cost, and because it isn't some visible new feature that marketing can point to, it doesn't appear to have any value. If security failures were a cost, say by vendors becoming liable at law for any damage caused by a security failure, then maybe it would have a value.
"... Skipping the review process isn't simply down to neglect and laziness. Reviews take time, and time is often money. Only 19 percent said a security review took less than a day, while 46 percent estimated one to three days were needed. A further 29 percent claim reviews could take three to five days to complete. ..."
What does a security review consist of anyway? A review of what, exactly? How many lines of code? What resources are used? How long is a piece of string?
The only sensible way to proceed is to assume that all applications are inherently insecure, and to treat them all as the existential threat which they most probably are.
This is an interesting question, as the article and report are unclear. On the one hand there is the development side, where it is possible to do a targeted security review and test.
On the other there is the user organisation, which gets completed update sets with the helpful detail:
“ A security issue has been identified in a Microsoft software product that could affect your system. You can help protect your system by installing this update from Microsoft.”
If there is any further detail it is of the form: “fixed memory leak in xyz”.
So the report isn’t all that helpful since it confuses these two differing scenarios.
quote: The only sensible way to proceed is to assume that all applications are inherently insecure.
Yes. Agreed. The tech we have is so complicated, and infosec skills so thinly spread, that we need to change the way we do things. Internal systems and infrastructure should never touch the public internet. Airgap them, two systems per desk if necessary. What you have left is a mix of disposable systems and stuff you need to focus on. Disposable systems, if taken down, can be ripped out and replaced within hours, with minimal data loss. Consider distributed solutions, so you don't have honeypots of personal data.

If you don't need to computerise something, don't. Don't go digital for the sake of it. If printed forms work, use them. Aside from anything else, archiving paper is easier and cheaper: you don't have to curate it so much, regularly checking and changing media and converting file formats. 1980s paperwork just gets dusty. Do those 5¼" floppies still work, or have you resaved all your media into newer formats on newer media types half a dozen times? Hybrid paper/tech systems allow you to make the best choice in individual cases. And use simpler software.
An increasingly complicated and ever more bloated OS will become inherently less resilient and less secure. We need to go back and rethink how we use software, to save cash, improve resilience and increase security, not just be led by MS, Apple and Google on to the next version on the road to digital hell.
Certainly an innovative and very bold step by the Flystrike CrowdStrike media team to try to draw attention to the publishing of their report by quickly following it up with an informative example case study to highlight the importance of it…
I wonder if the ink on the press release was even fully dry?
(Yes, I know I am not the first to make this point, but it is just so funny…)
When companies stop doing security on the cheap, finding it cheaper to pay for lawsuits, fines and outages than to spend the money on security up front. That gets followed by the worn-out PR rhetoric about how important their customers' privacy and information is to them, and false promises of a secure future. If customers' privacy and data were so important, they would show it by implementing security from the outset instead of doing damage control afterwards. The blatant disregard for the security of customer information shows in the routine storage of that data in clear text on network-accessible systems. It should be a no-brainer to encrypt financial data, credit card information and personally identifiable information, but in the majority of cases it is left in clear text on unsecured systems/networks for easy theft.
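For what "encrypt it rather than leave it in clear text" can look like in practice, here is a minimal sketch using the Fernet recipe from the widely used cryptography package. The card number is fake, and the key handling is deliberately oversimplified; a real deployment would keep the key in a secrets manager or HSM, never next to the data it protects.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate once and store securely (secrets manager / HSM), not hard-coded.
key = Fernet.generate_key()
fernet = Fernet(key)

card_number = "4111 1111 1111 1111"  # fake cardholder data for illustration
ciphertext = fernet.encrypt(card_number.encode())

# Only the ciphertext ever goes into the database or onto a network share.
print(ciphertext)

# Decrypt only at the point of use, by code that actually holds the key.
assert fernet.decrypt(ciphertext).decode() == card_number
```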
30 years into the Internet being a marketplace, 23 years since 9/11 and the data hoarding created by its perceived need to collect more data about users than all the companies and Governments can even begin to process in multiple lifetimes. Yet in all that time we have failed to learn, let alone implement, many of the basic tenets of computer, network and cyber security. Securing the U.S. infrastructure has been an ongoing Government farce since the Reagan Administration, yet we continue to implement remote automation with insecure IoT, hardware and software, and without even basic security practices.
"The definition of insanity is doing the same thing over and over again and expecting a different result." We have repeatedly proven our insanity!!!