Obvious flaw in the argument...
Who, exactly, produced the code for the AI?
Ex-CISA head Jen Easterly claims AI could spell the end of the cybersecurity industry, as the sloppy software and vulnerabilities that criminals rely on will be tracked down faster than ever. Speaking at AuditBoard's user conference in San Diego, Easterly said the threat landscape has never stopped evolving. The proliferation …
I could see this approach working if everyone ran everything in the cloud and build pipelines could update continuously with fixes as the AI DAST/SAST tooling found vulnerabilities and fixed them (something like the pipeline gate sketched at the end of this comment),
BUT...
This does not fix the problem of operating systems themselves being vulnerable (they are not 'cloud'), nor will it help with locally deployed apps (unless there is near-constant updating of those apps), nor with testing compatibility for clients that consume the updates, or with the constantly changing user experience.
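For what it's worth, this is the sort of pipeline gate I'm imagining, as a minimal Python sketch a CI job could run. The "ai-sast" command, its flags, and the JSON shape are all made up for illustration; they don't refer to any real tool.

# Hypothetical CI gate: run an (imaginary) AI-assisted SAST scanner on every
# build and fail the pipeline if it reports anything above a severity threshold.
import json
import subprocess
import sys

SEVERITY_THRESHOLD = 7.0  # fail the build on anything scored 7.0 or higher

def run_scan(target_dir):
    # "ai-sast" is a placeholder for whatever AI DAST/SAST tool you actually use.
    result = subprocess.run(
        ["ai-sast", "--format", "json", target_dir],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("findings", [])

def main():
    findings = run_scan("src/")
    blockers = [f for f in findings if f.get("severity", 0) >= SEVERITY_THRESHOLD]
    for f in blockers:
        print("BLOCKER: %s in %s:%s" % (f.get("rule"), f.get("file"), f.get("line")))
    return 1 if blockers else 0  # non-zero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(main())

Even then, that only helps code that is rebuilt and redeployed continuously, which is exactly the limitation above.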
I'm torn here between marvelling at the vision of people who think AI can save the world (even when it seems like the use cases are scraping the bottom of the barrel with a plan to throw it against the wall and see what sticks) and the short-sightedness of those same people's understanding of how normal enterprise IT works.
Ultimately, she said, "if we're able to build and deploy and govern these incredibly powerful technologies in a secure way, I believe it will lead to the end of cybersecurity."
In other words, if we are able to build software securely, we will have software security.
FFS, does she have no common sense?
For starters, AI isn't going to fix crap software reliably anytime soon (if ever), and then there's the minor problem that human greed and lawlessness are a constant. The software industry has been insecure since forever, and there's NOTHING going on that persuades me its products are becoming any more resistant to the malcontents.
We've already seen AI used for cyber attacks, impersonation fraud, and simple malicious spam, and the crims have barely got started on the opportunities of AI.
It's not even true, though. It's a profoundly over-simplified view of computer security, naively ignoring the adversarial nature of the endeavour. Even if, somehow, miraculously, software bugs ceased to exist, humans (and other systems) would still need to use that software, and that use itself represents one of the broadest categories of vulnerability.
'The end of cybersecurity' will come whenever computers cease to exist and not before.
It is folly to think that a little AI magic pixie dust will solve all security problems.
The answer is good, old-fashioned software engineering practice that starts with a secure design and ends with quality assurance testing.
Yes: AI might help with this, but it must not be used as an excuse to cut software development costs, which only results in enshittification.
I read it as Easterly suggesting that vendors use AI vulnerability scanning before releasing systems for hackers to try, not that we should trust AI to write secure code.
Jen Easterly did a lot of good work at CISA; she was pushed out because of politics over the role of the agency.
Expressing the opinion that all security incidents are caused by poor quality software and that LLMs can solve the problem indicates a fundamental lack of understanding.
Whether Easterly did good work at CISA before she was pushed out for not sucking up to the mad orange king does not change my opinion of her lack of understanding.
Exactly. I mean, poor network configuration is a significant vector. Not to mention that most companies are unable to remediate vulnerabilities in a timely manner once they are found. More often than not, businesses are incentivized to ship software rather than perpetually fix bugs to perfection before release.
"We don't have a cybersecurity problem. We have a software quality problem," she said. The main reason for this was software vendors' prioritization of speed to market and reducing cost over safety.
... the real focus should be on the fact that the common factors uncovered by MITRE nearly 20 years ago – cross-site scripting, memory unsafe coding, SQL injection, directory traversal – remain part and parcel of shipped software. "It's not jaw dropping innovation… They were the golden oldies."
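(Just to underline how old those golden oldies are: classic SQL injection is still nothing more than unsanitised string building. A minimal Python/sqlite3 sketch, with made-up table and field names, purely for illustration:)

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 0), ("admin", 1)])

user_input = "nobody' OR '1'='1"  # attacker-controlled

# The golden oldie: building SQL by string concatenation.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # [('alice', 0), ('admin', 1)] -- the whole table leaks

# The fix has been known for decades: parameterised queries.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the malicious input is treated as plain data

Twenty-year-old bug class, one-line fix, still shipping.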
So far so good. However ...
This is because software companies insisted customers bear all risk and convinced government and regulators that this was acceptable.
That's far too broad a statement. Suppose instead a NIST public standard such as "this software contains no listed CVEs as of MM/DD/YY". If a realistic standard existed, public companies would effectively have to require it to meet their own standards. But any such realistic standard(s) would not be a cure-all.
Finally, the kicker:
AI offers a way to address this, she claimed, as it is far better at tracking and identifying flaws in code. And it would be possible to tackle the mountain of technical debt left by a "rickety mess of overly patched, flawed infrastructure."
Here Easterly is paraphrased as saying "it [AI] is far better at tracking and identifying flaws in code", which is, at the very least, vague: better than what? AI is a tool that can assist humans, not an intelligence that can replace humans, unless you want to introduce even more convoluted vulns. Perhaps the reporter's paraphrasing of what Easterly said was unfair? It's this last paraphrased statement that has really ticked off Reg readers. She probably deserves a chance to respond to this paraphrasing before being condemned in the court of El Reg.
"We don't have a cybersecurity problem. We have a software quality problem,"
The biggest problem with this statement is the fact that she blindly assumes that all break-ins are down to poor software quality.
Many break-ins are actually accomplished through social engineering or stolen credentials. No amount of appealing to LLM wow-wow is going to solve those two classes of problem.
The LLM wow-wow won't solve software quality problems either - it's more likely to create new ones.
She's simply making money at speaking engagements by implying her previous employment gives her pronouncements credibility. That might work with middle-management types but it's obvious nonsense when anyone with technical competence looks at what she's saying.
Based on previous things she's said, which I thought were on point, I'm inclined to try to parse her statements as logical. The problem is that they aren't. If you use AI scanning before release, you need people to read and respond to those scans. You can't eliminate the industry if you need a lot of people to read scans, no matter how good they are.
No, I think she's really under the impression that LLMs can do things they can't. It's not entirely on her. Lots of companies claim they can automatically detect and then fix security issues without human oversight. They're usually wrong, but they exist; some are companies that predate LLMs and make exactly the kind of scanners that can speed up remediation. She probably hasn't used these herself, so she doesn't know that, while there are indeed times when they detect something real, write a fix for it, and don't break anything, there are far more situations where one of those three things doesn't happen: the tool tries to patch something that wasn't a problem, writes a patch that leaves the problem in, or breaks the software in the process. If you assume these security vendors wouldn't lie, then things are looking up. Unfortunately, they often are lying, whether or not they know it, and most of them do know it, which is why, instead of applying fixes automatically by default, they queue them up for human review. They still advertise the automatic part, though.
Reading stuff like this always reminds me of Richard Feynman's Appendix F to the report of the presidential commission on the Challenger disaster: https://www.nasa.gov/history/rogersrep/v2appf.htm. In that appendix, Feynman describes a three-order-of-magnitude gap between the reliability estimates of the working engineers (about 1 failure per 100 launches) and those of project management (1 per 100,000 launches).
Let's just say that I suspect Ms Easterly probably wasn't the best possible choice for CISA head and that Trump's nominee for the job Sean Plankey doesn't look to be that much of an improvement. Could be wrong about that. Hope I am.
And, Oh Yes, Trump wants to cut the CISA budget and reduce staffing by a third. Will that make CISA 33% less ineffectual?
>"We don't have a cybersecurity problem. We have a software quality problem," she said. The main reason for this was software vendors' prioritization of speed to market and reducing cost over safety.
That's actually not wrong.
Where she goes wrong is with the solution. Software vendors put security at a very low priority not because they're dumb or evil (though some are), but because all the economic incentives are heavily in favor of speed to market and cost reduction, and security costs a lot of time and money. As long as the incentives stay the same, shifting the problem to AI won't solve it.
And now I know, I'd confidently say that I wouldn't trust this person to competently operate a microwave oven, never mind any sort of "computer".
No, AI isn't the magical unicorn pissing rainbows and sparkles. And one need only look at the quality of GenAI pictures, stories, discussions, and code to know that it may well fix the problem it identifies but create a dozen different problems in the process. There's no "intelligence", no "understanding", and very little "memory" (as in remembering context). That's not something I'd let anywhere near actual executable code without plenty of human oversight, and full unit testing.
Spoken like a true corporate shill.
Hilarious at best, terrifying otherwise.
He's clearly not spotted that the introduction of AI coding has coincided with a massive dip in the quality of software being produced.
It seems to be daily that I'm exposed to absolute mounds of crap proudly released by many a major company. The only elements of these junkware apps that seem to work well are their unlawful levels of data collection and their continued persistence in pushing some form of AI-labelled chatbot dungheap.
It'll all boil down to costs. The price of secure engineering is still going to be high with "AI" solutions because the billions invested have to be repaid and a poor sod will still have to be paid to verify and, crucially, be capable of understanding the output and consequences. My bet is nothing much will change once the true cost becomes apparent.
Sounds great, except who is going to validate that the AI code solves the problem and doesn't introduce any new ones?
Especially given that compromising AI systems is trivially easy and there appears to be no way to make LLMs secure.
Either you have a security team to second-guess your security-team-replacing AI or you don't have security.
I always argued that flowcharts were the way to design software. They show logic in two dimensions, making many errors and omissions much easier to see. But the industry has chosen the path of 'foolproof' programming languages, so the flow of disasters has continued apace.
I would like to see AI creating and analysing flowcharts to find and fix flaws.
@Primus Secundus Tertius:
Are you insane? Flowcharts are horrible!
I learned computer programming making flowcharts of my programs before I wrote them, and I did not mind doing so at the time, because my programs were beginner-level simple.
When I got to college, the complexity of my programs went way up, and my willingness to use/produce flowcharts plummeted accordingly. Fortunately, we were not required to design or document our programs with flowcharts.
Flowcharts are "stuck" at the lowest-possible level of detail, and can give no overview or understanding of why all the thunder and motion are happening as they are within a program.
Flowcharts are unwieldy, spread across many pages, with many "connectors", because you can't show much of a flowchart on a single page. So there are all the physical/concentration breaks where you flip forward and backward to find the relevant connector. Further, these breaks are arbitrary, and have absolutely nothing to do with the program's organization or control flow.
I used/use pseudocode and data flow diagrams for high-level design and documentation.
I don't know what the modern, cool kids use these days.
Probably nothing, 'cause docs aren't "Agile".
Even if I accept that logic, how do you propose compiling a flowchart to a deterministic program? Programming languages have the advantage that you can run and model with the same thing, which helps a lot because, as experience has shown me, any two attempts to write the same thing will have weird differences if they're big enough.
The other problem with flowcharts is that they only kind of work for a simple type of program which takes one input, runs for a while, and produces some output. If it runs multiple things in parallel, collecting some information while running other things, handling failures and potential problems without direct user interaction, a diagram of what it's doing gets a lot more complicated. You have two choices, neither of which is good. You can split it into lots of little flowcharts including arrows that come in from nowhere, or you can build a massive flowchart which covers an area the size of Wales (although I propose you use a flatter place) and still has arrows coming out of the Irish Sea to describe new events or data that weren't present when the program started.
Yes, that's the lots-of-little-flowcharts option I already mentioned. The problem being that you now have to deal with all the various ways control flow can be modified. I write software that has scheduled jobs, event-activated jobs where something notifies us to start one, pull-event jobs where we need to search for triggers to start one, user-called jobs, and job pipelines. These all run in parallel, with mechanisms to prevent them from stepping on each other and to keep them in the proper order of data flow. Flowcharts for that have lots of problems representing the external sources of information that can trigger the jobs, and the flowchart describing how each function operates would be very complicated if you need to get all the concurrency data in there. If you don't get all the concurrency data in there, the system is guaranteed to fail within minutes: hopefully by crashing, but unfortunately quite likely by delivering false results instead, which will deliver an angry person to your desk. This system is not functionally described in a form you can easily break into little self-contained units.
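To make that concrete, here's a toy Python sketch (nothing like the real system, just two of those job sources sharing state). Even this trivial version carries locking and ordering concerns that a box-and-arrow flowchart has no good way to show:

import queue
import threading
import time

state_lock = threading.Lock()
shared_results = []          # state both jobs write to
events = queue.Queue()       # external notifications arrive here

def scheduled_job():
    # Runs on a timer; must not interleave with the event job's writes.
    for _ in range(3):
        time.sleep(0.1)
        with state_lock:
            shared_results.append("scheduled tick")

def event_job():
    # Woken by external notifications; its ordering relative to the scheduler
    # is whatever the OS decides, which is exactly what a flowchart can't show.
    while True:
        msg = events.get()
        if msg == "stop":
            break
        with state_lock:
            shared_results.append("event: " + msg)

threads = [threading.Thread(target=scheduled_job),
           threading.Thread(target=event_job)]
for t in threads:
    t.start()
events.put("external trigger")
events.put("stop")
for t in threads:
    t.join()
print(shared_results)  # order varies from run to run

Remove the lock and it still appears to work most of the time, which is exactly the class of problem described above.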
AI won't help until software engineering is treated exactly like real-world mechanical, electrical, or civil engineering.
Well, that's a bold statement to start with, coming from a lowly industrial robot programmer, but consider the following: your M$-powered computer suddenly decides to blue screen for some reason and reboot, and the software industry has somehow managed to convince everyone that "that happens sometimes" and "it's not our fault".
Now think about an aircraft flying along at 5,000 feet suddenly going rudder hard over and plunging into the ground at 500 mph, and Boeing putting out a statement to the effect of "well, 737s do that sometimes". How long before Boeing would go out of business with that attitude? Or saying "well, we had no proper engineering of the rudder as it looked OK and passed our depleted QA department, so you can't sue us as we supply every 737 with a disclaimer that any crashes are not our fault"?
Or a building company saying "not our fault the building fell over... must have been one of the techs altering the rivet temperature when they were hammered in". Even in my line of work, we have to be so careful to get the machinery to coordinate and to check we're not going to do something stupid such as drill a hole 5 meters into a chuck (that makes a very loud noise that wakes the boss up).
Until you force the software industry to have professional standards enforced by law, the addition of AI to any software creation process will not improve the security or stability of any software products as the very design of such products can be flawed from the start.
Somebody else here said "Hilarious at best, terrifying otherwise." It is clear from other comments that people do not think correcting obvious bugs in software constitutes a fix for security. Systems are insecure for many reasons, and for many of those reasons corrections are already known. As the power and sophistication of attacking systems grows, AI will be aiding attackers, and the cure will have to include AI for defense.
When we see the first completely AI-developed exploit chain being found by researchers (or, more likely, used by NSO Group)...
AI can maybe find some stuff that's very similar to stuff already out there, but it isn't going to find anything novel, which is usually required for at least one step in the chain. Just look at this, in particular the step where they turn an obsolete compression format designed for fax, still present in open source PDF libraries, into a circuit emulator implementing a 64-bit CPU that executes one stage of the exploit! Does anyone think that AI would EVER come up with that, suggest a patch to fix it before it was found, or be able to determine what happened when examining evidence of an exploit, like Google Project Zero's team did? So no, AI is not going to render CISA obsolete, not even close!
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html
The Intelligence Agencies love these exploits and jealously hoard their zero day exploits. If this was in danger from LLMs (not really AI) then it would pose an interesting dilemma for them. But I suppose they could always poison the LLM well and use it to add more scurrilous backdoors that they have developed.
“the need for AI systems that are created, designed, developed, tested, and delivered with security as the top priority.” And what would those be? A lot of these “AI” code generators seem to be regurgitating stuff they scraped off Stack Overflow or just making things up.
I would punt that, in the general case, determining whether code contains a security vulnerability is as hard as the Halting Problem.
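In case that sounds hand-wavy, the textbook construction is easy to sketch in Python. A perfect "is this code vulnerable?" oracle applied to the handler below (where program and data stand in for an arbitrary program and its input) would also be deciding whether program(data) halts:

import subprocess

def make_suspect(program, data):
    # Build a handler whose only flaw, a blatant command injection, is
    # reachable if and only if program(data) ever returns.
    def handler(user_input):
        program(data)                           # may loop forever
        subprocess.run(user_input, shell=True)  # the "vulnerability"
    return handler

So a scanner that never misses and never cries wolf isn't just hard, it's impossible in general; real tools, AI-assisted or not, are heuristics.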
In any case, none of this would directly address the exploitation of hardware vulnerabilities like Spectre etc.
All in all a spectacular example of American daft bintery.
...it's the remediation time. Discovery of a SW vulnerability is actually pretty quick, and there are lots of methods of detection: white hats, bounty teams, AV/malware subscriptions, RCAs, to name but a few. When there's a discovery, it takes time and money to remediate, and not everyone has an infinite budget or man-hours to throw at the problem.
"AI offers a way to address this, she claimed, as it is far better at tracking and identifying flaws in code. And it would be possible to tackle the mountain of technical debt left by a "rickety mess of overly patched, flawed infrastructure."
AI is *not* going to re-code or magically fix decades of old code. System Owners (i.e., humans) still need to accept and approve the changes. As I just mentioned, discovery is quick, remediation is slow. AI (or whatever you want to call it these days) may develop or improve *new* code; that's yet to be seen as well, given many other nice articles published in El Reg. All I read here is a pipe dream that's about as close as nuclear fusion is to 2030.
And "no security teams"? That's nuts. So many things wrong with that blanket statement, but that's already been shared in the comments.
CISA used to have teeth, at least for government systems. Flaws can and do get bubbled up to congressional oversight if the risk and the affected system are important enough. To wit: "if you don't fix this, we pull your funding." I say "used to...".